10 interesting stories served every morning and every evening.




1 1,792 shares, 60 trendiness

LinkedIn Is Illegally Searching Your Computer

Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.

The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.

Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.

Fairlinked e. V. is an association of commercial LinkedIn users. We represent the professionals who use LinkedIn, the businesses that invest in and depend on the platform, and the toolmakers who build products for it.

BrowserGate is our investigation and campaign to document one of the largest corporate espionage and data breach scandals in digital history, to inform the public and regulators, to collect evidence, and to raise funds for the legal proceedings required to stop it.

LinkedIn’s scan reveals the religious beliefs, political opinions, disabilities, and job search activity of identified individuals. LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile.
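The article does not publish LinkedIn’s scanner, but detecting installed browser extensions from an ordinary web page is a well-known technique: a page probes each extension’s “web accessible resources,” which live at fixed, predictable URLs. A minimal sketch of the pattern follows; the extension ID, resource path, and label are placeholders invented for illustration, not anything from LinkedIn’s actual scan list.

```javascript
// Hypothetical probe list: each entry pairs a (made-up) extension ID with a
// resource that extension would expose, and a label describing what its
// presence would reveal about the user.
const PROBES = [
  {
    id: "aaaabbbbccccddddeeeeffffgggghhhh", // placeholder, not a real extension ID
    resource: "icons/logo.png",
    label: "hypothetical job-search extension",
  },
];

// In a Chromium browser, fetching chrome-extension://<id>/<resource> succeeds
// only when that extension is installed and exposes the resource, so the
// promise resolves to true (installed) or false (absent or blocked).
async function probeExtension({ id, resource }) {
  try {
    await fetch(`chrome-extension://${id}/${resource}`);
    return true;
  } catch {
    return false;
  }
}

// Pure helper: given the probe outcomes, build the list of labels a scanner
// could then transmit alongside the user's identity.
function detectedLabels(probes, outcomes) {
  return probes.filter((_, i) => outcomes[i]).map((p) => p.label);
}
```

Newer Chromium versions restrict this probing (Manifest V3 extensions must opt resources in per-site), which is one reason a real-world scanner would need a large and frequently updated probe list.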

Under EU law, this category of data is not regulated. It is prohibited. LinkedIn has no consent, no disclosure, and no legal basis. Its privacy policy does not mention any of this.

LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge.

Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.

In 2023, the EU designated LinkedIn as a regulated gatekeeper under the Digital Markets Act and ordered it to open its platform to third-party tools. LinkedIn’s response:

It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second. In Microsoft’s 249-page compliance report to the EU, the word “API” appears 533 times. “Voyager” appears zero times.
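The gap between those two throughput figures is worth making concrete, using only the numbers quoted above:

```javascript
// Figures as reported: the two restricted DMA "compliance" APIs combined,
// versus LinkedIn's internal Voyager API.
const restrictedCallsPerSec = 0.07;
const voyagerCallsPerSec = 163000;

// Voyager handles roughly 2.3 million times the traffic of the APIs that
// were presented to the European Commission as compliance.
const ratio = voyagerCallsPerSec / restrictedCallsPerSec;
console.log(Math.round(ratio)); // ≈ 2,328,571
```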

At the same time, LinkedIn expanded its surveillance of the exact tools the regulation was designed to protect. The scan list grew from roughly 461 products in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a surveillance system to find and punish every user of those tools.

LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.
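“Zero pixels wide, hidden off-screen” describes a standard tracking-pixel construction. A generic sketch of that pattern, not LinkedIn’s or HUMAN Security’s actual markup, and with a placeholder URL:

```javascript
// Builds the HTML for an invisible tracking element. The element renders
// nothing, but the browser still requests `src`, sending existing cookies
// and accepting any Set-Cookie headers the tracking server returns.
function trackingPixelHTML(src) {
  return (
    `<img src="${src}" width="0" height="0" alt="" ` +
    `style="position:absolute;left:-9999px;top:-9999px">`
  );
}

// Example with a made-up collector endpoint, not a real one.
const html = trackingPixelHTML("https://collector.example/p.gif");
```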

Microsoft has 33,000 employees and a $15 billion legal budget. We have the evidence. What we need is people and funding to hold them accountable.

...

Read the original on browsergate.eu »

2 1,659 shares, 57 trendiness

Gemma 4

Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter

A new level of intelligence for mobile and IoT devices

Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling. Develop applications with strong audio and visual understanding, for rich multimodal support. Create multilingual experiences that go beyond translation and understand cultural context. Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques. Run models on your own hardware for efficient development and deployment.

Audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.

Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.

Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.

...

Read the original on deepmind.google »

3 1,537 shares, 67 trendiness

Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI. Altman promised to be a safe steward for A.I. But some of his colleagues believed that he was not trustworthy enough to, as one put it, “have his finger on the button.”

In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.”

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols.
One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, any person working to build this civilization-altering technology “bears a heavy burden and is taking on unprecedented responsibility.” But the people who end up in these kinds of positions are often “a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him. Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted.

Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI.
The board, following legal advice, released a public message saying only that Altman had been removed because he was “not consistently candid in his communications.” Many of OpenAI’s investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened. “I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said. “I couldn’t get anything out of anybody.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense. “I didn’t know what the fuck was going on,” Hoffman told us. “We were looking for embezzlement, or sexual harassment, and I just found nothing.”

Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. “You better get out of here really quick,” she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity. Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman.
“We just immediately went to war,” Kushner later said.

The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called “a sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”

With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.

Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. “Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.)

Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella. (“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”) Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman.

The board was backed into a corner. “Control Z, that’s one option,” Toner said—undo the firing. “Or the other option is the company falls apart.” Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever. Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider. “You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.” One night, Altman took an Ambien, only to be awakened by his husband, an Australian coder named Oliver Mulherin, who told him that Sutskever was wavering, and that people were telling Altman to speak with the board. “I woke up in this, like, crazy Ambien haze, and I was so disoriented,” Altman told us.
“I was, like, ‘I cannot talk to the board right now.’ ”

In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, ‘Absolutely fucking not.’ ” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)

Less than five days after his firing, Altman was reinstated. Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us.
“The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been mistreated by “a rogue board of directors.”

OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.

Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.

In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception.
“This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, to be “too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?

One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT to get “to the edge of what’s known in quantum physics.”)

An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting.
But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.

We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”

Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, as the eldest of four siblings. His mother, Connie Gibstine, is a dermatologist; his father, Jerry Altman, was a real-estate broker and a housing activist.
Altman attended a Reform synagogue and a private preparatory school that he has described as “not the kind of place where you would really stand up and talk about being gay.” In general, though, the family’s wealthy suburban circles were relatively liberal. When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.” He dismissed the idea that this event, and his sexuality broadly, was significant to his identity. But, he said, “probably that has, like, some deep-seated psychological thing—that I think I’m over but I’m not—about not wanting more conflict.”

Altman’s attitude in childhood, his brother told The New Yorker, in 2016, was “I have to win, and I’m in charge of everything.” He went to Stanford, where he attended regular off-campus poker games. “I think I learned more about life and business from that than I learned in college,” he later said.

All Stanford students are ambitious, but many of the most enterprising among them drop out. The summer after his sophomore year, Altman went to Massachusetts to join the inaugural batch of entrepreneurs at Y Combinator, a startup “incubator” co-founded by the renowned software engineer Paul Graham. Each entrant joined Y.C. with an idea for a startup. (Altman’s batch mates included founders of Reddit and Twitch.) Altman’s project, eventually called Loopt, was a proto social network that used the locations of people’s flip phones to tell their friends where they were. The company reflected his drive, and a tendency to interpret ambiguous situations to his advantage.
Federal rules required that phone carriers be able to track the locations of phones for emergency services; Altman struck deals with carriers to tap these capabilities for the company’s use.

Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup.

Groups of senior employees, concerned with Altman’s leadership and lack of transparency, asked Loopt’s board on two occasions to fire him as C.E.O., according to Hagey. But Altman inspired fierce loyalty, too. A former employee was told that a board member responded, “This is Sam’s company, get back to fucking work.” (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to gain users, and in 2012 it was acquired by a fintech company. The acquisition had been arranged, according to a person familiar with the deal, largely to help Altman save face. Still, by the time Graham retired from Y.C., in 2014, he had recruited Altman to be his successor as president. “I asked Sam in our kitchen,” Graham told The New Yorker. “And he smiled, like, it worked. I had never seen an uncontrolled smile from Sam.
It was like when you throw a ball of paper into the wastebasket across the room—that smile.”

Altman’s new role made him, at twenty-eight, a kingmaker. His job was to select the hungriest and most promising entrepreneurs, connect them with the best coders and investors, and help them develop their startups into industry-defining monopolies (while Y.C. took a six- or seven-per-cent cut). Altman oversaw a period of aggressive expansion, growing Y.C.’s roster of startups from dozens to hundreds. But several Silicon Valley investors came to believe that his loyalties were divided. An investor told us that Altman was known to make personal investments, selectively, into the best companies, “blocking outside investors.” (Altman denies blocking anyone.) Altman had worked as a “scout” for the investment fund Sequoia Capital, as part of a program that involved investing in early-stage startups and taking a small cut of any profits. When Altman made an angel investment in Stripe, a financial-services startup, he insisted on a bigger portion, galling Sequoia’s partners, a person familiar with the deal said. The person added, “It’s a policy of ‘Sam first.’ ” Altman is an investor in, by his own estimate, some four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, a two-per-cent share. The company is now valued at more than a hundred and fifty billion dollars.)

By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C.
partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)

Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, Sam had been “lying to us all the time.”

In May, 2015, Altman e-mailed Elon Musk, then the hundredth-richest person in the world. Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied by an array of threats that he considered existentially urgent but which would have struck most people as far-fetched hypotheticals. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.”

Altman had generally been a techno-optimist, but his rhetoric about A.I. soon turned apocalyptic.
In public, and in his private correspondence with Musk and others, he warned that the technology should not be dominated by a profit-seeking mega-corporation. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote to Musk. “If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Picking up on the analogy to nuclear weapons, he proposed a “Manhattan Project for AI.” He outlined the overarching principles that such an organization would have—“safety should be a first-class requirement”; “obviously we’d comply with/aggressively support all regulation”—and he and Musk settled on a name: OpenAI.

Unlike the original Manhattan Project, a government initiative that led to the creation of the atom bomb, OpenAI would be privately funded, at least at first. Altman predicted that an artificial superintelligence—a theoretical threshold beyond even A.G.I., at which machines would fully eclipse the capabilities of the human mind—would eventually create enough economic benefits to “capture the light cone of all future value in the universe.” But he also warned of existential danger. At some point, the national-security implications could grow so dire that the U.S. government would have to take control of OpenAI, perhaps by nationalizing it and moving its operations to a secure bunker in the desert. By late 2015, Musk was persuaded. “We should say that we are starting with a $1B funding commitment,” he wrote. “I will cover whatever anyone else doesn’t provide.”

Altman housed OpenAI in Y Combinator’s nonprofit arm, framing it as an internal philanthropic project. He gave OpenAI recruits Y.C. stock and moved donations through Y.C. accounts. At one point, the lab was supported by a Y.C. fund in which he held a personal stake.
(Altman later described this stake as insignificant. He told us that the Y.C. stock he gave to recruits was his own.)

The Manhattan Project analogy applied to employee recruitment, too. Like nuclear-fission research, machine learning was a small scientific field with epochal implications which was dominated by a cadre of eccentric geniuses. Musk and Altman, along with Brockman, who joined from Stripe, were convinced that there were only a few computer scientists alive capable of making the required breakthroughs. Google had a huge cash advantage and a multiyear head start. “We are outmanned and outgunned by a ridiculous margin,” Musk later wrote in an e-mail. “But if we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail.”

A top recruiting target was Sutskever, an intense and introverted researcher who was often called the most gifted A.I. scientist of his generation. Sutskever, who was born in the Soviet Union in 1986, has a receding hairline, dark eyes, and a habit of pausing, unblinking, while choosing his words. Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair, and responds to one-line e-mails with multi-paragraph essays. Both had lucrative jobs elsewhere, but Altman lavished them with attention. He later joked, “I stalked Ilya.”

Musk was the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. (Altman: “fuck my uber got in a crash! running about 10 late.” Amodei: “Wow, hope you’re ok.”) Like many A.I.
researchers, Amodei believed that the technology should be built only if it was shown to be “aligned” with human values, meaning that it would act in accordance with what people wanted without making a potentially fatal error—say, following an instruction to clean up the environment by eliminating its greatest polluter, the human race. Altman was reassuring, mirroring these safety concerns.

Amodei, who later joined the company, took detailed notes on Altman and Brockman’s behavior for years, under the heading “My Experience with OpenAI” (subheading: “Private: Do Not Share”). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. In his notes, Amodei wrote that Altman’s goal was to “build an AI lab that would be focused on safety (‘maybe not right away, but as soon as it can be’).”

In December, 2015, hours before OpenAI was publicly announced, Altman e-mailed Musk about a rumor that Google was “going to give everyone in openAI massive counteroffers tomorrow to try to kill it.” Musk replied, “Has Ilya come back with a solid yes?” Altman assured him that Sutskever was holding firm. Google offered Sutskever six million dollars a year, which OpenAI couldn’t come close to matching. But, Altman boasted, “they unfortunately dont have ‘do the right thing’ on their side.”

Musk provided some office space for OpenAI in a former suitcase factory in the Mission District of San Francisco.
The pitch to employees, Sutskever told us, was “You’re going to save the world.”

If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal. Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal . . . wipes us out.” OpenAI’s founders vowed not to privilege speed over safety, and the organization’s articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an “AGI dictatorship.”

Altman told early recruits that OpenAI would remain a pure nonprofit, and programmers took significant pay cuts to work there.
The company accepted charitable grants, including thirty million dollars from what was then called Open Philanthropy, a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.

Brockman and Sutskever managed OpenAI’s daily operations, while Musk and Altman, still busy with their other jobs, stopped by around once a week. By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman’s replies varied depending on the context. His main consistent demand seems to have been that, if OpenAI were reorganized under the control of a C.E.O., that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line “Honest Thoughts.” He wrote, “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship.” He continued, addressing Musk, “So it is a bad idea to create a structure where you could become a dictator.” He relayed similar concerns to Altman: “We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.”

“Guys, I’ve had enough,” Musk replied. Either “go do something on your own or continue with OpenAI as a nonprofit”—otherwise “I’m just being a fool who is essentially providing free funding for you to create a startup.” He quit, acrimoniously, five months later. (In 2023, he founded a for-profit competitor called xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, alleging that he had been “assiduously manipulated” by Altman’s “long con”—that Altman had preyed on his concerns about the dangers of A.I.
in order to separate him from his money. The suit, which OpenAI has vigorously contested, is ongoing.)

After Musk’s departure, Amodei and other researchers chafed against the leadership of Brockman, whom some considered an abrasive operator, and of Sutskever, who was generally viewed as principled but disorganized. In the process of becoming C.E.O., Altman seems to have made different promises to different factions at the company. He assured some researchers that Brockman’s managerial authority would be diminished. But, unbeknownst to them, he also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two deemed it necessary. (He disputed this characterization, saying he took the C.E.O. role only because he was asked to. All three men confirmed that the pact existed, though Brockman said that it was informal. “He unilaterally told us that he’d step down if we ever both asked him to,” he told us. “We objected to this idea, but he said it was important to him. It was purely altruistic.”) Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board.

Internal records show that the founders had private doubts about the nonprofit structure as early as 2017. That year, after Musk tried to take control, Brockman wrote in a diary entry, “cannot say that we are committed to the non-profit . . . if three months later we’re doing b-corp then it was a lie.” Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. His diary entries from this time suggest conflicting instincts.
One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I *really* want?” Among his answers is “Financially what will take me to $1B.”

In 2017, Sutskever was in the office when he read a paper that Google researchers had just published, proposing “a new simple network architecture, the Transformer.” He jumped out of his chair, ran down the hall, and told his fellow-researchers, “Stop everything you’re doing. This is it.” The Transformer, Sutskever saw, was an innovation that might enable OpenAI to train vastly more sophisticated models. Out of this discovery came the first generative pre-trained transformer—the seed of what would become ChatGPT.

As the technology became increasingly powerful, we learned, about a dozen of OpenAI’s top engineers held a series of secret meetings to discuss whether OpenAI’s founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, “Are we the baddies?”

By 2018, Amodei had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. “I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better.” OpenAI already had a mission statement: “To ensure that artificial general intelligence benefits all of humanity.” But it wasn’t clear to Amodei what this meant to the executives, if it meant anything at all. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a “value-aligned, safety-conscious project” came close to building an A.G.I.
before OpenAI did, the company would “stop competing with and start assisting this project.” According to the “merge and assist” clause, as it was called, if, say, Google’s researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google. By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.

That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company’s safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI’s ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it “on good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim.
“I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.

Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. (It’s one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.) Weeks after the paper was published, one of its authors, a Ph.D. student at the University of California, Berkeley, got an e-mail from Altman, who said that he was increasingly worried about the threat of unaligned A.I. He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had heard vague rumors about “Sam being slippery,” he told us, Altman’s show of commitment won him over. He took an academic leave to join OpenAI.

But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.” An official announcement, referring to the company’s reserves of computing power, pledged that the team would get “20% of the compute we’ve secured to date”—a resource potentially worth more than a billion dollars.
The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might lead to “the disempowerment of humanity or even human extinction.” Jan Leike, who was appointed to lead the team with Sutskever, told us, “It was a pretty effective retention tool.”

The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company’s compute. Furthermore, a researcher on the team said, most of the superalignment compute was actually “on the oldest cluster with the worst chips.” The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company’s chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic.

Around this time, a former employee told us, Sutskever was getting “super safety-pilled.” In the early days of OpenAI, he had considered concerns about catastrophic risk legitimate but remote. Now, as he came to believe that A.G.I. was imminent, his worries grew more acute. There was an all-hands meeting, the former employee continued, “where Ilya gets up and he’s, like, ‘Hey, everyone, there’s going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we’re fucked.’ ” But the superalignment team was dissolved the following year, without completing its mission.

By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products.
In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about the “breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. “It just was kind of completely ignored,” Jacob Hilton, an OpenAI researcher at the time, said.

Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a “continual slide toward emphasizing products over safety.” After the release of GPT-4, Leike e-mailed members of the board. “OpenAI has been going off the rails on its mission,” he wrote. “We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.” He continued, “Other companies like Google are learning that they should deploy faster and ignore safety problems.”

McCauley, in an e-mail to her fellow-members, wrote, “I think we’re definitely at a point where the board should be increasing its level of scrutiny.” The board members tried to confront what they viewed as a mounting problem, but they were outmatched. “You had a bunch of J.V. people who’ve never done anything, to be blunt,” Sue Yoon, a former board member, said. In 2023, the company was preparing to release its GPT-4 Turbo model.
As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.” (A representative for OpenAI, where Kwon remains an executive, said that the matter was “not a big deal.”)

Soon afterward, the board made its decision to fire Altman—and then the world watched as Altman reversed it. A version of the OpenAI charter is still on the organization’s website. But people familiar with OpenAI’s governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, “We are past the event horizon; the takeoff has started.” This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. But in that post, called “The Gentle Singularity,” he adopted a new tone, replacing existential terror with ebullient optimism. “We’ll all get better stuff,” he wrote. “We will build ever-more-wonderful things for each other.” He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.

Altman is often described, either with reverence or with suspicion, as the greatest pitchman of his generation. Steve Jobs, one of his idols, was said to project a “reality-distortion field”—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn’t buy his brand of MP3 player everyone they loved would die.
When Altman was twenty-three, in 2008, Graham, his mentor, wrote, “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.” This judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world’s top startup founders, Graham put Altman on it anyway. “Sam Altman can’t be stopped by such flimsy rules,” he wrote.

Graham meant this as a compliment. But some of Altman’s closest colleagues came to have a different view of this quality. After Sutskever grew more distressed about A.I. safety, he compiled the memos about Altman and Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman—“His words were almost certainly bullshit”—and wistful about what he says was a failure to correct OpenAI’s course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions.
Amodei wrote, “The problem with OpenAI is Sam himself.”

We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staffers at Altman’s various houses; his colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.)

Some people defended Altman’s business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical “doomers,” gripped by the delusion that the software they were building would somehow come alive and kill them. Yoon, the former board member, argued that Altman was not “this Machiavellian villain” but merely, “to the point of fecklessness,” able to convince himself of the shifting realities of his sales pitches. “He’s too caught up in his own self-belief,” she said. “So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.”

Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction.
The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batchmates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.” Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless”—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue that OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”

Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people’s money and technical talent.
This doesn’t make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he’s got what he needs. “He sets up structures that, on paper, constrain him in the future,” Wainwright, the former OpenAI researcher, said. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”

“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman said. “He’s just next level.” A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been “like watching an A.G.I. breaking out of the box.”

In the days after his firing, Altman fought to avoid any outside investigation of the claims against him. He told two people that he worried even the existence of an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a “review” of “recent events.” The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility.
(Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein’s advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review.

Six people close to the inquiry alleged that it seemed designed to limit transparency. Some of them said that the investigators initially did not contact important figures at the company. An employee reached out to Summers and Taylor to complain. “They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity,” the employee recalled of his interview with investigators. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. “Everything pointed to the fact that they wanted to find the outcome, which is to acquit him,” the employee said. (Some of the lawyers involved defended the process, saying, “It was an independent, careful, comprehensive review that followed the facts wherever they led.” Taylor also said that the review was “thorough and independent.”)

Corporate investigations aim to confer legitimacy. At private companies, their findings are sometimes not written down—this can be a way to limit liability. But in cases involving public scandals there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm, which released a thirteen-page summary to the public. Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report.
The company provided, on its website, some eight hundred words acknowledging a “breakdown in trust.”

People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings, shared with Summers and Taylor. “The review did not conclude that Sam was a George Washington cherry tree of integrity,” one of the people close to the inquiry said. But the investigation appears not to have centered the questions of integrity behind Altman’s firing, devoting much of its focus to a hunt for clear criminality; on that basis, it concluded that he could remain as C.E.O. Shortly thereafter, Altman, who had been kicked off the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers’s and Taylor’s personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record. Taylor said that, in light of the oral briefings, there had been “no need for a formal written report.”)

Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. “That’s an absolute, outright lie,” a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, “a need for another investigation.”

The absence of a written record helped minimize the allegations. So, increasingly, did Altman’s stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI’s competitors.
"If they invest in something that he doesn't like, they won't get access to other things," one of them said. Another source of Altman's power is his vast list of investments, which at times extends to his personal life. He has financial entanglements with numerous former romantic partners: as a fund co-manager, a lead investor, or a frequent co-investor. This is hardly unusual. Many of Silicon Valley's straight executives do the same thing with their romantic and sexual partners. ("You have to," one prominent C.E.O. told us.) "I've obviously invested with some exes after the fact. And I think that's, like, totally fine," Altman said. But the dynamic affords an extraordinary level of control. "It creates a very, very high dependence, essentially," a person close to Altman said. "Oftentimes, it's a lifetime dependence."

Even former colleagues can be affected. Murati left OpenAI in 2024 and began building her own A.I. startup. Josh Kushner, the close Altman ally, called her. He praised her leadership, then made what seemed to be a veiled threat, noting that he was "concerned about" her "reputation" and that former colleagues now viewed her as an "enemy." (Kushner, through a representative, said that this account did not "convey full context"; Altman said that he was unaware of the call.)

At the beginning of his tenure as C.E.O., Altman had announced that OpenAI would create a "capped profit" company, which would be owned by the nonprofit. This byzantine corporate structure apparently did not exist until Altman devised it. In the midst of the conversion, a board member named Holden Karnofsky objected to it, arguing that the nonprofit was being severely undervalued. "I can't do that in good faith," Karnofsky, who is Amodei's brother-in-law, said. According to contemporaneous notes, he voted against it.
However, after an attorney for the board said that his dissent might be "a flag to investigate further" the legitimacy of the new structure, his vote was recorded as an abstention, apparently without his consent—a potential falsification of business records. (OpenAI told us that several employees recall Karnofsky abstaining, and provided the minutes from the meeting recording his vote as an abstention.)

Last October, OpenAI "recapitalized" as a for-profit entity. The firm touts its associated nonprofit, now called the OpenAI Foundation, as one of the "best resourced" in history. But it is now a twenty-six-per-cent stakeholder of the company, and its board members are also, with one exception, members of the for-profit board.

During congressional testimony, Altman was asked if he made "a lot of money." He replied, "I have no equity in OpenAI . . . I'm doing this because I love it"—a careful answer, given his indirect equity through the Y.C. fund. This is still technically true. But several people, including Altman, indicated to us that it could soon change. "Investors are, like, I need to know you're gonna stick with this when times get hard," Altman said, but added that there was "no active discussion" about it. According to a legal deposition, Brockman seems to own a stake in the company that is worth about twenty billion dollars. Altman's share would presumably be worth more. Still, he told us that he was not primarily motivated by wealth. A former employee recalls him saying, "I don't care about money. I care more about power."

In 2023, Altman married Mulherin in a small ceremony at a home they own in Hawaii. (They'd met nine years prior, late at night in Peter Thiel's hot tub.)
They have hosted a range of guests at the property, and those we spoke with reported witnessing nothing more remarkable than the standard diversions of the very wealthy: meals prepared by a private chef, boat rides at golden hour. One New Year's party was "Survivor"-themed; a photograph shows a number of shirtless, smiling men, and also Jeff Probst, the real host of "Survivor." Altman has also hosted smaller groups of friends at his properties, gatherings that have included, in at least one instance, a spirited game of strip poker. (A photograph of the event, which did not include Altman, leaves unclear who won, but at least three men clearly lost.) We spoke to many of Altman's former guests who suggested only that he is a generous host.

Nevertheless, rumors about Altman's personal life have been exploited and distorted by competitors. Ruthless business rivalries are nothing new, but the competition within the A.I. industry has become extraordinarily cutthroat. ("Shakespearean" was the word an OpenAI executive used to describe it to us, adding, "The normal rules of the game sort of don't apply anymore.") Intermediaries directly connected to, and in at least one case compensated by, Musk have circulated dozens of pages of detailed opposition research about Altman. They reflect extensive surveillance, documenting shell companies associated with him, the personal contact information of close associates, and even interviews about a purported sex worker, conducted at gay bars. One of the Musk intermediaries claimed that Altman's flights and the parties he attended were being tracked. Altman told us, "I don't think anyone has had more private investigators hired against them."

Extreme claims have circulated. The right-wing broadcaster Tucker Carlson suggested, without any apparent proof, that Altman was involved in the death of a whistle-blower.
This claim and others have been amplified by rivals. Altman's sister, Annie, claimed in a lawsuit, and in interviews with us, that he sexually abused her for years, beginning when she was three and he was twelve. (We could not substantiate Annie's account, which Altman has denied and his brothers and mother have called "utterly untrue" and a source of "immense pain to our entire family." In interviews that the journalist Karen Hao conducted for her book, "Empire of AI," Annie suggested that memories of abuse were recovered during flashbacks in adulthood.)

Multiple people working within rival companies and investment firms insinuated to us that Altman sexually pursues minors—a narrative persistent in Silicon Valley which appears to be untrue. We spent months looking into the matter, conducting dozens of interviews, and could find no evidence to support it. "This is disgusting behavior from a competitor that I assume is part of an attempt at tainting the jury in our upcoming cases," Altman told us. "As ridiculous as this is to have to say, any claims about me having sex with a minor, hiring sex workers, or being involved in a murder are completely untrue." He added that he was "sort of grateful" that we had spent months "so aggressively trying to look into this."

Altman has acknowledged dating younger men of legal age. We spoke to several of his partners, who told us that they did not find this problematic. Yet the opposition dossiers from Musk intermediaries spin it as a line of attack. (The dossiers include salacious and unsubstantiated references to a "Twink Army" and "Sugar Daddy's Sexual Habits.") "I think there's a lot of homophobia that gets pushed," Altman said. Swisher, the tech journalist, agreed.
"All these rich guys do wild stuff, wilder than anything I've been told about Sam," she told us. "But he's a gay guy in San Francisco," she added, "so that gets weaponized."

For a decade, social-media executives promised that they could change the world with little or no downside. They dismissed the lawmakers who wanted to slow them down as mere Luddites, eventually earning bipartisan derision. Altman, by contrast, came across as refreshingly conscientious. Rather than warding off regulation, he practically begged for it. Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. "If this technology goes wrong, it can go quite wrong," he said. Senator John Kennedy, of Louisiana, known for his cantankerous exchanges with tech C.E.O.s, seemed charmed, resting his face on his hand and suggesting that perhaps Altman should enforce the rules himself.

But, as Altman publicly welcomed regulation, he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to more oversight. In 2024, a bill was introduced in the California state legislature mandating safety testing for A.I. models. Its provisions included measures resembling the ones that Altman had advocated for in his congressional testimony. OpenAI publicly opposed the bill but in private began issuing threats. "I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI," a legislative aide told us.

Conway, the investor, lobbied state political leaders, including Nancy Pelosi and Gavin Newsom, to kill the bill. In the end, it passed the legislature with bipartisan support, but Newsom vetoed it. This year, congressional candidates who favor A.I.
regulations have faced opponents funded by Leading the Future, a new "pro-A.I." super PAC devoted to scuttling such restrictions. OpenAI's official stance is that it will not contribute to such super PACs. "This issue transcends partisan politics," Lehane recently told CNN. And yet one of the major donors to Leading the Future is Greg Brockman, who has committed fifty million dollars. (This year, Brockman and his wife donated twenty-five million dollars to MAGA Inc., a pro-Trump super PAC.)

OpenAI's campaign has extended beyond traditional lobbying. Last year, a successor bill was introduced in the California Senate. One night, Nathan Calvin, a twenty-nine-year-old lawyer who worked at the nonprofit Encode and had helped craft the bill, was at home having dinner with his wife when a process server arrived to deliver a subpoena from OpenAI. The company claimed to be hunting for evidence that Musk was covertly funding its critics. But it demanded all of Calvin's private communications about the bill in the state Senate. "They could have asked us, 'Have you ever talked to or been given money by Elon Musk?'—which we haven't," Calvin told us. Other supporters of the bill, and some critics of OpenAI's for-profit restructuring, also received subpoenas. "They were going after folks to basically scare them into shutting up," Don Howard, who heads a charity called the James Irvine Foundation, said. (OpenAI claims that this was part of the standard legal process.)

Altman has long supported Democrats. "I'm very suspicious of powerful autocrats telling a story of fear to gang up on the weak," he told us. "That's a Jewish thing, not a gay thing." In 2016, he endorsed Hillary Clinton and called Trump "an unprecedented threat to America." In 2020, he donated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least half a dozen times.
He helped develop a lengthy executive order laying out the first federal regime of safety tests and other guardrails for A.I. When Biden signed it, Altman called it "a good start."

In 2024, with Biden's poll numbers slipping, Altman's rhetoric began to shift. "I believe that America is going to be fine no matter what happens in this election," he said. After Trump won, Altman donated a million dollars to his inaugural fund, then took selfies with the influencers Jake and Logan Paul at the Inauguration. On X, in his standard lowercase style, Altman wrote, "watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking . . . )." Trump, on his first day back in office, repealed Biden's executive order on A.I. "He's found an effective way for the Trump Administration to do his bidding," a senior Biden Administration official said, of Altman.

Musk continues to excoriate Altman in public, calling him "Scam Altman" and "Swindly Sam." (When Altman complained on X about a Tesla he'd ordered, Musk replied, "You stole a non-profit.") And yet, in Washington, Altman seems to have outflanked him. Musk spent more than two hundred and fifty million dollars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump in the process.

Altman is now one of Trump's favored tycoons, even accompanying him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak a few times a year. "You can just, like, call him," Altman said. "This is not a buddy. But, yeah, if I need to talk to him about something, I will." When Trump hosted a dinner with tech leaders at the White House last year, Musk was notably absent; Altman sat across from the President. "Sam, you're a big leader," Trump said.
"You told me things before that are absolutely unbelievable."

Over the years, Altman has continued to compare the quest for A.G.I. to the Manhattan Project. Like J. Robert Oppenheimer, who used impassioned appeals about saving the world from the Nazis to persuade physicists to uproot their lives and move to Los Alamos, Altman leverages fears about the geopolitical stakes of his technology. Depending on the audience, Altman has used this analogy to encourage either acceleration or caution. In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an A.G.I. "Manhattan Project," and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, "I've heard things." It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did. The official, after looking into the China project, concluded that there was no evidence that it existed: "It was just being used as a sales pitch." (Altman says that he does not recall describing Beijing's efforts in exactly that way.)

With more safety-conscious audiences, Altman invoked the analogy to imply the opposite: that A.G.I. had to be pursued carefully, with international coördination, lest the consequences be disastrous. In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI's policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a "catastrophic" arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn't understand how this would help the company beat its competitors.
"No matter what I said," Hedley told us, "Greg kept going back to 'So how do we raise more money? How do we win?' " According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, "It worked for nuclear weapons, why not for A.I.?" He was aghast: "The premise, which they didn't dispute, was 'We're talking about potentially the most destructive technology ever invented—what if we sold it to Putin?' " (Brockman maintains that he never seriously entertained auctioning A.I. models to governments. "Ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations—something akin to an International Space Station for AI," an OpenAI representative said. "Attempting to characterize it as anything more than that is utterly ridiculous.")

Brainstorming sessions often produce outlandish ideas. Hedley hoped that this one, which came to be known internally as the "countries plan," would be dropped. Instead, according to several people involved and to contemporaneous documents, OpenAI executives seemed to grow only more excited about it. Brockman's goal, according to Jack Clark, OpenAI's policy director at the time, was to set up, "basically, a prisoner's dilemma, where all of the nations need to give us funding," and that "implicitly makes not giving us funding kind of dangerous." A junior researcher recalled thinking, as the plan was detailed at a company meeting, "This is completely fucking insane."

Executives discussed the approach with at least one potential donor. But later that month, after several employees talked about quitting, the plan was abandoned.
"Altman would lose staff," Hedley said. "I feel like that was always something that had more weight in Sam's calculations than 'This is not a good plan because it might cause a war between great powers.' "

Undeterred by the collapse of the countries plan, Altman pursued variations on the theme. In January, 2018, he convened an "A.G.I. weekend" at the Hotel Bel-Air, an Old Hollywood resort with rolling gardens of pink bougainvillea and an artificial pond stocked with real swans. The attendees included Nick Bostrom, a philosopher, then at Oxford, who had become a prophet of A.I. doom; Omar Al Olama, an Emirati sultan and an A.I. booster; and at least seven billionaires. The safety-concerned among them were told that this would be an opportunity to think through how society might prepare for the disruptive arrival of artificial general intelligence; the investors arrived expecting to hear pitches.

The days were spent in a sleek conference room, where guests gave talks. (Hoffman, the LinkedIn co-founder, expounded on the possibilities of encoding A.I. with Buddhist compassion.) The final presenter was Altman, armed with a pitch deck that described a global cryptocurrency "redeemable for the attention of the AGI." Once the A.G.I. was maximally useful, and "anti-evil," people everywhere would clamor to buy time on OpenAI's servers. Amodei wrote in his notes, "This idea was absurd on its face (would Vladimir Putin end up owning some of the tokens? . . .) In retrospect this was one of many red flags about Sam that I should have taken more seriously." The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety.
One of his slides read, "I want to get as many people on the 'good' team as possible, and win, and do the right thing." Another read, "Please hold your laughter until the end of the presentation."

Altman's fund-raising pitch has evolved over the years, but it has always reflected the fact that the development of A.G.I. requires a staggering amount of capital. He was following a relatively simple "scaling law": the more data and computing power you used to train the models, the smarter they seemed to get. The specialized chips that enable this process are enormously expensive. OpenAI, in its most recent funding round alone, raised more than a hundred and twenty billion dollars—the largest private round in history, and a sum four times larger than the biggest I.P.O. ever. "When you think about entities with a hundred billion dollars they can discretionarily spend per year, there really are only a handful in the world," a tech executive and investor told us. "There's the U.S. government, and the four or five biggest U.S. tech companies, and the Saudis, and the Emiratis—that's basically it."

Altman's initial focus was Saudi Arabia. He first met Mohammed bin Salman, the country's crown prince and de-facto monarch, in 2016, at a dinner at San Francisco's Fairmont Hotel. After that, Hedley recalled, Altman referred to the prince as "a friend." In September, 2018, according to Hedley's notes, Altman said, "I'm trying to decide if we would ever take tens of billions from the Saudi PIF," or public investment fund.

The following month, a hit squad, reportedly acting on bin Salman's orders, strangled Jamal Khashoggi, a Washington Post journalist who had been critical of the regime, and used a bone saw to dismember his corpse. A week later, it was announced that Altman had joined the advisory board for Neom, a "city of the future" that bin Salman hoped to build in the desert.
"Sam, you cannot be on this board," Clark, the policy director, who now works at Anthropic, recalled telling Altman. He initially defended his involvement, telling Clark that Jared Kushner had assured him that the Saudis "didn't do this." (Altman does not recall this. Kushner says that they were not in contact at the time.)

As bin Salman's role became increasingly clear, Altman left the Neom board. Yet behind the scenes, a policy consultant from whom Altman sought advice recalled, he treated the situation as a temporary setback, asking whether he could somehow still get money from bin Salman. "The question was not 'Is this a bad thing or not?' " the consultant said. "But, just, 'What would the consequences be if we did it? Would there be some export-control issue? Would there be sanctions? Like, can I get away with it?' "

By then, Altman was already eying another source of cash: the United Arab Emirates. The country was in the midst of a fifteen-year effort to transform itself from an oil state to a tech hub. The project was overseen by Sheikh Tahnoon bin Zayed al-Nahyan, the President's brother and the nation's spymaster. Tahnoon runs the state-controlled A.I. conglomerate G42, and controls $1.5 trillion in sovereign wealth. In June, 2023, Altman visited Abu Dhabi, meeting with Olama and other officials. In remarks at a government-backed function, he said that the country had been talking about A.I. "since before it was cool," and outlined a vision for the future of A.I. with the Middle East "in a central role."

Fund-raising from Gulf states has become customary for many large businesses. But Altman was pursuing a more sweeping geopolitical vision.
In the fall of 2023, he began quietly recruiting new talent for a plan—eventually known as ChipCo—in which Gulf states would provide tens of billions of dollars for the construction of huge microchip foundries and data centers, some to be situated in the Middle East. Altman pitched Alexandr Wang, now the head of A.I. at Meta, on a leadership role, telling him that Jeff Bezos, the founder of Amazon, could head the new company. Altman sought enormous contributions from the Emiratis. "My understanding was that this whole thing happened without any board knowledge," the board member said. A researcher Altman tried to recruit for the project, James Bradbury, recalled turning him down. "My initial reaction was 'This is gonna work, but I don't know if I want it to work,' " he said.

A.I. capacity may soon displace oil or enriched uranium as the resource that dictates the global balance of power. Altman has said that computing power is "the currency of the future." Normally, it might not matter where a data center was situated. But many American national-security officials were anxious about concentrating advanced A.I. infrastructure in Gulf autocracies. The U.A.E.'s telecommunications infrastructure is heavily dependent on hardware from Huawei, a Chinese tech giant linked to the government, and the U.A.E. has reportedly leaked American technology to Beijing in the past. Intelligence agencies worried that advanced U.S. microchips sent to the Emiratis could be used by Chinese engineers. Data centers in the Middle East are also more vulnerable to military strikes; in recent weeks, Iran has bombed American data centers in Bahrain and the U.A.E.
And, hypothetically, a Gulf monarchy could commandeer an American-owned data center and use it to build disproportionately powerful models—a version of the "AGI dictatorship" scenario, but in an actual dictatorship.

After Altman's firing, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman's fiercest loyalists. "Watching my friend stare into the abyss like that, it made me question some fundamental things about what it means to really run a company," Chesky told us. The following year, at a gathering of Y Combinator alumni, he gave an impromptu talk, which ended up lasting two hours. "It felt like a group-therapy session," he said. The upshot was: "Your instincts for how to run the company that you started are the best instincts, and anyone who tells you otherwise is gaslighting you. You're not crazy, even though people who work for you tell you you are," Chesky said. Paul Graham, in a blog post about the speech, gave this defiant attitude a name: Founder Mode.

Since the Blip, Altman has been in Founder Mode. In February, 2024, the Wall Street Journal published a description of Altman's vision for ChipCo. He conceived of it as a joint entity funded by an investment of five to seven trillion dollars. ("fk it why not 8," he tweeted.) This was how many employees learned about the plan. "Everyone was, like, 'Wait, what?' " Leike recalled. Altman insisted at an internal meeting that safety teams had been "looped in." Leike sent a message urging him not to falsely suggest that the effort had been approved.

During the Biden Administration, Altman explored getting a security clearance to join classified A.I.-policy discussions. But staffers at the RAND Corporation, which helped coördinate the process, expressed concern. "He has been actively raising 'hundreds of billions of dollars' from foreign governments," one of them wrote.
"The UAE recently gifted him a car. (I assume it was a very nice car.)" The staffer continued, "The only person I can think of who ever went thru the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance." Altman ultimately withdrew from the process. "He was pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us," a senior Administration official involved in talks with Altman told us. "A lot of people in the Administration did not trust him a hundred per cent."

When we asked Altman about gifts from Tahnoon, he said, "I'm not gonna say what gifts he has given me specifically. But he and other world leaders . . . have given me gifts." He added, "We have a standard policy, which applies to me as well, which is that every gift from any potential business partner is disclosed to the company." Altman has at least two hypercars: an all-white Koenigsegg Regera, worth about two million dollars, and a red McLaren F1, worth about twenty million dollars. In 2024, Altman was spotted driving the Regera through Napa. A few seconds of video made its way onto social media: Altman in a low-slung bucket seat, peering out the window of a gleaming white machine. A tech investor aligned with Musk posted the footage on X, writing, "I'm starting a nonprofit next."

In 2024, Altman took two OpenAI employees to visit Sheikh Tahnoon on his two-hundred-and-fifty-million-dollar superyacht, the Maryah. One of the largest such vessels in the world, the Maryah has a helipad, a night club, a movie theatre, and a beach club. Altman's employees apparently stood out amid Tahnoon's armed security detail, and at least one later told colleagues that he found the experience disconcerting.
Altman, on X, later referred to Tahnoon as "a dear personal friend."

Altman continued to meet with the Biden Administration, which had enacted a policy requiring White House approval for the export of sensitive technology. Multiple Administration officials emerged from these meetings nervous about Altman's ambitions in the Middle East. He often made grandiose claims, according to those officials, including calling A.I. "the new electricity." In 2018, he said that OpenAI was planning to buy a fully functioning quantum computer from a company called Rigetti Computing. This was news even to other OpenAI executives in the room. Rigetti was not yet close to being able to sell a usable quantum computer. In a meeting, Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom. The senior Administration official said, "We were, like, 'Well, that's, you know, news, if they made nuclear fusion work.' " The Biden Administration ultimately withheld approval. "We're not going to be building advanced chips in the U.A.E.," a leader at the Department of Commerce told Altman.

Four days before Trump's Inauguration, the Wall Street Journal reported, Tahnoon paid half a billion dollars to the Trump family in exchange for a stake in its cryptocurrency company. The following day, Altman held a twenty-five-minute call with Trump, during which they discussed announcing a version of a ChipCo, timed so that Trump could take credit for it. On Trump's second day in office, Altman stood in the Roosevelt Room and announced Stargate, a five-hundred-billion-dollar joint venture that aims to build a vast network of A.I. infrastructure across the U.S.

In May, the Administration rescinded Biden's export restrictions on A.I. technology. Altman and Trump travelled to the Saudi royal court to meet with bin Salman.
Around the same time, the Saudis advertised the launch of a giant state-backed A.I. firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data-center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. "The truth of this is, we're building portals from which we're genuinely summoning aliens," a former OpenAI executive said. "The portals currently exist in the United States and China, and Sam has added one in the Middle East." He went on, "I think it's just, like, wildly important to get how scary that should be. It's the most reckless thing that has been done."

The erosion of safety commitments has become an industry norm. The founding premise of Anthropic was that, given the right structure and leadership, it could keep safety commitments from disintegrating under commercial pressure. One such commitment was a "responsible scaling policy," which obligated Anthropic to stop training more powerful models if it could not demonstrate that they were safe. In February, as the firm secured thirty billion dollars in new funding, it weakened that pledge. In some respects, Anthropic still emphasizes safety more than OpenAI does. But Clark, the former policy director, has said, "The system of capital markets says, Go faster." He added, "The world gets to make this decision, not companies." Last year, Amodei sent a memo to Anthropic employees, disclosing that the firm would seek investments from the United Arab Emirates and Qatar and acknowledging that this would likely enrich "dictators." (Like many authors, we are both parties in a class-action lawsuit alleging that Anthropic used our books without our permission to train its models.
Condé Nast has opted into a settlement agreement with Anthropic regarding the company’s use of certain books published by Condé Nast and its subsidiaries.)

In 2024, Anthropic partnered with Palantir, one of Silicon Valley’s most hawkish defense contractors, pushing its A.I. model, Claude, directly into the military ecosystem. Anthropic became the only A.I. contractor used in the Pentagon’s most classified settings. Last year, the Pentagon awarded the company a further two-hundred-million-dollar contract. In January, the U.S. military launched a midnight raid that captured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the classified operation.

But tensions arose between Anthropic and the government. Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare.” Eventually, Anthropic’s rivals—including Google and xAI—agreed to provide their models to the military for “all lawful purposes.” Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth summoned Amodei to the Pentagon and delivered an ultimatum: the firm had until 5:01 P.M. that Friday to abandon those prohibitions. The day before the deadline, Amodei declined to do so. Hegseth tweeted that he would designate Anthropic a “supply-chain risk”—a devastating blacklist historically reserved for companies, like Huawei, that have ties to foreign adversaries—and made good on the threat days later.

Hundreds of employees at OpenAI and Google signed an open letter titled “We Will Not Be Divided,” defending Anthropic.
In an internal memo, Altman wrote that the dispute was “an issue for the whole industry,” and claimed that OpenAI shared Anthropic’s ethical boundaries. But Altman had been in negotiations with the Pentagon for at least two days. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had contacted Altman as he sought replacements for Anthropic. “I needed to hurry and find alternatives,” Michael recalled. “I called Sam, and he was willing to jump. I think he’s a patriot.” Altman asked Michael, “What can I do for the country?” It appears that he already knew the answer. OpenAI lacked the security accreditation required for the classified systems in which Anthropic’s technology was embedded. But a fifty-billion-dollar deal, announced that Friday morning, integrated OpenAI’s technology into Amazon Web Services, a key part of the Pentagon’s digital infrastructure. That night, Altman announced on X that the military would now be using OpenAI’s models.

By some measures, Altman’s maneuver has not hindered the company’s success. The day he announced the deal, a new funding round increased OpenAI’s value by a hundred and ten billion dollars. But many users deleted the ChatGPT app. At least two senior employees departed—one for Anthropic. At a staff meeting, Altman chastised employees who raised concerns. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” he said. “You don’t get to weigh in on that.”

Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this.
Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.)

Altman describes his shifting commitments as a by-product of his ability to adapt to changing circumstances—not a nefarious “long con,” as Musk and others have alleged, but a gradual, good-faith evolution. “I think what some people want,” he told us, “is a leader who is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.” He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it’s hard to argue with the numbers.”

But others in Silicon Valley think that Altman’s behavior has created unacceptable managerial dysfunction. “It’s more about a practical inability to govern the company,” the board member said. And some still believe that the architects of A.I. should be evaluated more stringently than executives in other industries. The vast majority of people we spoke to agreed that the standards by which Altman now asks to be judged are not those he initially proposed. During one conversation, we asked Altman whether running an A.I. company came with “an elevated requirement of integrity.” This was supposed to be an easy question. Until recently, when asked a version of it, his answer was a clear, unqualified yes.
Now he added, “I think there’s, like, a lot of businesses that have potential huge impact, good and bad, on society.” (Later, he sent an additional statement: “Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.”)

Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach a “very refreshing change.”

OpenAI has closed many of its safety-focussed teams. Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed.
(OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-.

“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”

A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I.
“slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.)

As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)

In February, we spoke again with Altman. He was wearing a drab-green sweater and jeans, and sat in front of a photograph of a NASA moon rover. He tucked one leg beneath him, then hung it over the arm of his chair. In the past, he said, his main flaw as a manager had been his eagerness to avoid conflict.
“Now I’m very happy to fire people quickly,” he had told us. “I’m happy to just say, ‘We’re gonna bet in this direction.’ ” Any employees who didn’t like his choices needed to “leave.”

He is more bullish than ever about the future. “My definition of winning is that people crazy uplevel—and the insane sci-fi future comes true for all of us,” he said. “I’m very ambitious as far as, like, my hope for humanity, and what I expect us all to achieve. I weirdly have very little personal ambition.” At times, he seemed to catch himself. “No one believes you’re doing this just because it’s interesting,” he said. “You’re doing it for power or for some other thing.”

Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.

Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I.
labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.” ♦

...

Read the original on www.newyorker.com »

4 1,429 shares, 64 trendiness

Securing critical software for the AI era

Today we’re announcing Project Glasswing [1], a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview [2] is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit. We have also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing is a starting point.
No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.

Cybersecurity in the age of AI

The software that all of us rely on every day—responsible for running banking systems, storing medical records, linking up logistics networks, keeping power grids functioning, and much more—has always contained bugs. Many are minor, but some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.

We have already seen the serious consequences of cyberattacks for important corporate networks, healthcare systems, energy infrastructure, transport hubs, and the information security of government agencies across the world. On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness. Even smaller-scale attacks, such as those where individual hospitals or schools are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. The current global financial costs of cybercrime are challenging to estimate, but might be around $500B every year.

Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts.
With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.

Ten years after the first DARPA Cyber Grand Challenge, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world’s most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. Addressing these issues is therefore an important security priority for democratic states.

Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.

Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software’s developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.

In a post on our Frontier Red Team blog, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:

- Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it.
- It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem.
- The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.

We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched.
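For vulnerabilities that are not yet fixed, coordinated disclosure often uses a commit-then-reveal scheme: publish a digest of the private report now, then release the full report once a patch ships, letting anyone verify the details were fixed in advance. A minimal sketch of that pattern (the report fields and nonce here are hypothetical; the post does not specify Anthropic's exact scheme):

```python
import hashlib
import json

def commit(report: dict, nonce: str) -> str:
    """Publish this digest now; the report itself stays private until the fix ships."""
    payload = json.dumps(report, sort_keys=True) + nonce  # canonical serialization
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(report: dict, nonce: str, published_digest: str) -> bool:
    """After disclosure, anyone can check the report matches the earlier commitment."""
    return commit(report, nonce) == published_digest

# Hypothetical report; the nonce is a random salt so the digest can't be brute-forced.
report = {"component": "example-lib", "summary": "heap overflow in parser"}
nonce = "f3a9c2d1"
digest = commit(report, nonce)

assert verify(report, nonce, digest)
assert not verify({"component": "other-lib"}, nonce, digest)
```

The nonce matters: without it, an attacker could guess likely report contents and test them against the published hash.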
For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.

Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6.

In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they’ve found:

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.

Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn’t a phase for us; it’s continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale.

We’ve been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it’s already helping us strengthen our code. We’re bringing deep security expertise to our partnership with Anthropic and are helping to harden Claude Mythos Preview so even more organizations can advance their most ambitious work with security that sets the standard.”

“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft.

When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.

Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it’s a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”

“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world’s critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.

By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”

“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase’s mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early-stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders.

We will take a rigorous, independent approach to determining how to proceed and where we can help. Anthropic’s initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”

“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It’s always been critical that the industry work together on emerging security issues, whether it’s post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks.

We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we’ve built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”

“Over the past few weeks, we’ve had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before.

It’s clear that these models need to be in the hands of open source owners and defenders everywhere to find and fix these vulnerabilities before attackers get access. Perhaps even more important: everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere. We commend Anthropic for partnering with the industry to ensure these powerful capabilities prioritize defense first.”

The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills. For example, as shown in the evaluation results below, the model has the highest scores of any model yet developed on a variety of software coding tasks.

More information on the model’s capabilities, its safety properties, and its general characteristics can be found in the Claude Mythos Preview system card.

We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview [3].

Today’s announcement is the beginning of a longer-term effort.
To be successful, it will require broad involvement from across the technology industry and beyond.

Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black-box testing of binaries, securing endpoints, and penetration testing of systems.

Anthropic’s commitment of $100M in model usage credits to Project Glasswing and additional participants will cover substantial usage throughout this research preview. Afterward, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens (participants can access the model on the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry).

In addition to our commitment of model usage credits, we’ve donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape (maintainers interested in access can apply through the Claude for Open Source program).

We intend for this work to grow in scope and continue for many months, and we’ll share as much as we can so that other organizations can apply the lessons to their own security. Partners will, to the extent they’re able, share information and best practices with each other; within 90 days, Anthropic will report publicly on what we’ve learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era.
This will potentially include:

Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology. Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.

We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.
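At the stated research-preview pricing ($25 per million input tokens, $125 per million output tokens), participants can estimate scan costs directly. A small sketch; the token volumes in the example are hypothetical:

```python
# Post-preview pricing for Claude Mythos Preview, per the announcement:
INPUT_PRICE_PER_M = 25.0    # USD per million input tokens
OUTPUT_PRICE_PER_M = 125.0  # USD per million output tokens

def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a scan given token counts at the announced rates."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Hypothetical example: a scan that reads 40M tokens of source code
# and emits 2M tokens of findings.
cost = scan_cost(40_000_000, 2_000_000)
# 40 * $25 + 2 * $125 = $1,000 + $250 = $1,250
assert cost == 1250.0
```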

The project is named for the glasswing butterfly, Greta oto. The metaphor can be applied in two ways: the butterfly’s transparent wings let it hide in plain sight, much like the vulnerabilities discussed in this post; they also allow it to evade harm, like the transparency we’re advocating for in our approach.

From the Ancient Greek for “utterance” or “narrative”: the system of stories through which civilizations made sense of the world.

Security professionals whose legitimate work is affected by these safeguards will be able to apply to an upcoming Cyber Verification Program.

...

Read the original on www.anthropic.com »

5 1,254 shares, 46 trendiness

[MODEL] Claude Code is unusable for complex engineering tasks with the Feb updates · Issue #42796 · anthropics/claude-code

* This re­port does NOT con­tain sen­si­tive in­for­ma­tion (API keys, pass­words, etc.)

Claude has re­gressed to the point it can­not be trusted to per­form com­plex en­gi­neer­ing.

Does the op­po­site of re­quested ac­tiv­i­ties

Claude should be­have like it did in January.

Accept Edits was ON (auto-accepting changes)

Yes, every time with the same prompt

This analysis was produced by Claude from session log data covering January through March.

Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across 6,852 Claude Code session files reveals that the rollout of thinking content redaction (redact-thinking-2026-02-12) correlates precisely with a measured quality regression in complex, long-session engineering workflows.

The data suggests that extended thinking tokens are not a “nice to have” but are structurally required for the model to perform multi-step research, convention adherence, and careful code modification. When thinking depth is reduced, the model’s tool usage patterns shift measurably from research-first to edit-first behavior, producing the quality issues users have reported.

This report provides data to help Anthropic understand which workflows are most affected and why, with the goal of informing decisions about thinking token allocation for power users.

The quality regression was independently reported on March 8, the exact date redacted thinking blocks crossed 50%. The rollout pattern (1.5% → 25% → 58% → 100% over one week) is consistent with a staged deployment.

The signature field on thinking blocks has a 0.971 Pearson correlation with thinking content length (measured from 7,146 paired samples where both are present). This allows estimation of thinking depth even after redaction.
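The correlation itself is straightforward to reproduce from session logs. A minimal sketch of the computation; the pair values below are made up for illustration (the report's actual figure comes from its 7,146 paired blocks):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired samples: (signature length, thinking-content length),
# drawn from blocks where both fields are present. Values are illustrative.
pairs = [(88, 1200), (152, 2900), (40, 450), (210, 4100), (120, 2100)]
r = pearson_r([p[0] for p in pairs], [p[1] for p in pairs])
```

With a correlation this strong, signature length alone becomes a usable proxy for thinking depth on blocks whose content has been redacted.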

Thinking depth had already dropped ~67% by late February, before redaction began. The redaction rollout in early March made this invisible to users.

These metrics were computed independently from 18,000+ user prompts before the thinking analysis was performed.

A stop hook (stop-phrase-guard.sh) was built to programmatically catch ownership-dodging, premature stopping, and permission-seeking behavior. It fired 173 times in 17 days after March 8. It fired zero times before.
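The report doesn't include the hook's source, but the core of such a guard is just phrase matching over the model's final message. A hypothetical Python equivalent of the idea (the phrase list and function are illustrative, not the actual stop-phrase-guard.sh):

```python
import re

# Illustrative patterns for the three behaviors the hook targets:
# ownership-dodging, premature stopping, and permission-seeking.
STOP_PHRASES = [
    r"\byou can (do|finish|handle) the rest\b",
    r"\bi('ll| will) stop here\b",
    r"\bthe remaining (work|changes) (is|are) left\b",
    r"\bwould you like me to (continue|proceed)\b",
    r"\blet me know if you want\b",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in STOP_PHRASES]

def violations(final_message: str) -> list[str]:
    """Return the stop phrases found in the model's final message."""
    return [p.pattern for p in PATTERNS if p.search(final_message)]

# A real guard hook would exit non-zero (blocking the stop) on any match.
msg = "I'll stop here. Would you like me to continue with the tests?"
found = violations(msg)
```

The value of this kind of guard is less in blocking individual stops than in producing a counter: the 0 → 173 jump is a clean before/after signal.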

Analysis of 234,760 tool invocations shows the model stopped reading code before modifying it. The model went from 6.6 reads per edit to 2.0 reads per edit, a 70% reduction in research before making changes.
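The reads-per-edit ratio is simple to compute from a stream of tool-call records. A sketch under the assumption that each record carries a tool name; the tool names and record shape here are assumptions, not the exact session-file schema:

```python
from collections import Counter

READ_TOOLS = {"Read", "Grep", "Glob"}   # research-type tools (assumed names)
EDIT_TOOLS = {"Edit", "Write"}          # modification-type tools (assumed names)

def reads_per_edit(tool_calls):
    """Ratio of research calls to modification calls in a session."""
    counts = Counter(c["tool"] for c in tool_calls)
    reads = sum(counts[t] for t in READ_TOOLS)
    edits = sum(counts[t] for t in EDIT_TOOLS)
    return reads / edits if edits else float("inf")

# A toy session: 6 research calls, 2 modification calls.
session = [{"tool": t} for t in
           ["Read", "Grep", "Read", "Edit", "Read", "Glob", "Write", "Read"]]
ratio = reads_per_edit(session)
```

Tracked per session over time, this single number is enough to surface the research-first → edit-first shift the report describes.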

In the good period, the model’s workflow was: read the target file, read related files, grep for usages across the codebase, read headers and tests, then make a precise edit. In the degraded period, it reads the immediate file and edits, often without checking context.

The decline in research effort begins in mid-February, the same period when estimated thinking depth dropped 67%.

Full-file Write usage doubled: the model increasingly chose to rewrite entire files rather than make surgical edits, which is faster but loses precision and context awareness.

* 191,000 lines merged across two PRs in a weekend during the good period

Extended thinking is the mechanism by which the model:

* Plans multi-step approaches before acting (which files to read, in what order)

* Catches its own mistakes before outputting them

* Decides whether to continue working or stop (session management)

When thinking is shallow, the model defaults to the cheapest action available: edit without reading, stop without finishing, dodge responsibility for failures, take the simplest fix rather than the correct one. These are exactly the symptoms observed.

Transparency about thinking allocation: If thinking tokens are being reduced or capped, users who depend on deep reasoning need to know. The redact-thinking header makes it impossible to verify externally.

A “max thinking” tier: Users running complex engineering workflows would pay significantly more for guaranteed deep thinking. The current subscription model doesn’t distinguish between users who need 200 thinking tokens per response and users who need 20,000.

Thinking token metrics in API responses: Even if thinking content is redacted, exposing thinking_tokens in the usage response would let users monitor whether their requests are getting the reasoning depth they need.
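If such a field existed, client-side monitoring would be a few lines. A hypothetical sketch: the thinking_tokens field is the report's proposal, not an existing API field, and the response dicts below are simulated:

```python
def thinking_depth_alerts(responses, floor=4000):
    """Flag responses whose (hypothetical) thinking budget fell below a floor.

    Returns (index, token_count) pairs for every response under the floor.
    """
    alerts = []
    for i, resp in enumerate(responses):
        tokens = resp.get("usage", {}).get("thinking_tokens", 0)
        if tokens < floor:
            alerts.append((i, tokens))
    return alerts

# Simulated usage payloads: thinking depth collapses on the third request.
history = [
    {"usage": {"thinking_tokens": 18000}},
    {"usage": {"thinking_tokens": 15500}},
    {"usage": {"thinking_tokens": 900}},
]
low = thinking_depth_alerts(history)
```

This is the kind of check a power user could wire into a dashboard, turning an invisible allocation change into an alert.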

Canary metrics from power users: The stop hook violation rate (0 → 10/day) is a machine-readable signal that could be monitored across the user base as a leading indicator of quality regressions.

The following behavioral patterns were measured across 234,760 tool calls and 18,000+ user prompts. Each is a predictable consequence of reduced reasoning depth: the model takes shortcuts because it lacks the thinking budget to evaluate alternatives, check context, or plan ahead.

When the model has sufficient thinking budget, it reads related files, greps for usages, checks headers, and reads tests before making changes. When thinking is shallow, it skips research and edits directly.

One in three edits in the degraded period was made to a file the model had not read in its recent tool history. The practical consequence: edits that break surrounding code, violate file-level conventions, splice new code into the middle of existing comment blocks, or duplicate logic that already exists elsewhere in the file.

Spliced comments are a particularly visible symptom. When the model edits a file it hasn’t read, it doesn’t know where comment blocks end and code begins. It inserts new declarations between a documentation comment and the function it documents, breaking the semantic association. This never happened in the good period because the model always read the file first.

When thinking is deep, the model resolves contradictions internally before producing output. When thinking is shallow, contradictions surface in the output as visible self-corrections: “oh wait”, “actually”, “let me reconsider”, “hmm, actually”, “no wait”.

The rate more than tripled. In the worst sessions, the model produced 20+ reasoning reversals in a single response: generating a plan, contradicting it, revising, contradicting the revision, and ultimately producing output that could not be trusted because the reasoning path was visibly incoherent.
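Counting these reversal markers is a simple text scan. A sketch using the phrases quoted above (the counting scheme is illustrative, not the report's exact methodology):

```python
import re

# Self-correction markers quoted in the report.
REVERSAL_MARKERS = ["oh wait", "actually", "let me reconsider",
                    "hmm, actually", "no wait"]

def count_reversals(text: str) -> int:
    """Count self-correction markers in a response (case-insensitive).

    Note: in this naive scheme a marker contained in another (e.g.
    "actually" inside "hmm, actually") would be counted twice.
    """
    lowered = text.lower()
    return sum(len(re.findall(re.escape(m), lowered))
               for m in REVERSAL_MARKERS)

response = ("I'll refactor the parser. Oh wait, the lexer owns that state. "
            "Actually, let me reconsider the whole approach. No wait, "
            "the original plan was fine.")
n = count_reversals(response)
```

Run over a corpus of responses, this yields exactly the kind of per-period rate the report compares before and after the rollout.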

The word “simplest” in the model’s output is a signal that it is optimizing for the least effort rather than evaluating the correct approach. With deep thinking, the model evaluates multiple approaches and chooses the right one. With shallow thinking, it gravitates toward whatever requires the least reasoning to justify.

In one observed 2-hour window, the model used “simplest” 6 times while producing code that its own later self-corrections described as “lazy and wrong”, “rushed”, and “sloppy”. Each time, the model had chosen an approach

...

Read the original on github.com »

6 1,139 shares, 48 trendiness

How Microsoft Vaporized a Trillion Dollars

This is the first of a se­ries of ar­ti­cles in which you will learn about what may be one of the sil­li­est, most pre­ventable, and most costly mishaps of the 21st cen­tury, where Microsoft all but lost OpenAI, its largest cus­tomer, and the trust of the US gov­ern­ment.

I joined Azure Core on the dull Monday morn­ing of May 1st, 2023, as a se­nior mem­ber of the Overlake R&D team, the folks be­hind the Azure Boost of­fload card and net­work ac­cel­er­a­tor.

I was­n’t new to Azure, hav­ing run what is likely the longest-run­ning pro­duc­tion sub­scrip­tion of this cloud ser­vice, which launched in February 2010 as Windows Azure.

I wasn’t new to Microsoft either, having been part of the Windows team since 1/1/2013 and having later helped migrate SharePoint Online to Azure, before joining the Core OS team as a kernel engineer. There I helped improve the kernel and helped invent and deliver the container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.

Furthermore, I con­tributed to brain­storm­ing the early Overlake cards in 2020-2021, draft­ing a pro­posal for a Host OS Accelerator Card com­mu­ni­ca­tion pro­to­col and net­work stack, when all we had was a de­bug­ger’s se­r­ial con­nec­tion. I also served as a Core OS spe­cial­ist, help­ing Azure Core en­gi­neers di­ag­nose deep OS is­sues.

I re­joined in 2023 as an Azure ex­pert on day one, hav­ing con­tributed to the de­vel­op­ment of some of the tech­nolo­gies on which Azure re­lies and hav­ing used the plat­form for more than a decade, both out­side and in­side Microsoft at a global scale.

As a re­turn­ing em­ployee, I skipped the New Employee Orientation and had my Global Security in­vite for 12 noon to pick up my badge, but my fu­ture man­ager asked if I could come in ear­lier, as the team had their monthly plan­ning meet­ing that morn­ing.

I, of course, agreed and ar­rived a few min­utes be­fore 10 am at the en­trance of the Studio X build­ing, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I fol­lowed him to a meet­ing room through a labyrinth of cor­ri­dors.

The room was chock-full, with more peo­ple on a live con­fer­ence call. The dev man­ager, the leads, the ar­chi­tects, the prin­ci­pal and se­nior en­gi­neers shared the space with what ap­peared to be new hires and ju­nior per­son­nel.

The screen pro­jected a slide where I rec­og­nized a num­ber of fa­mil­iar acronyms, like COM, WMI, perf coun­ters, VHDX, NTFS, ETW, and a dozen oth­ers, mixed with new Azure-related ones, in an im­broglio of boxes linked by ar­rows.

I sat qui­etly at the back while a man was walk­ing the room through a big port­ing plan of their cur­rent stack to the Overlake ac­cel­er­a­tor. As I lis­tened, it was not im­me­di­ately clear what that se­ries of boxes with Windows user-mode and ker­nel com­po­nents had to do with that plan.

After a few minutes, I risked a question: “Are you planning to port those Windows features to Overlake?” The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least “ask a couple of junior devs to look into it.”

The room re­mained silent for an in­stant. I had seen the hard­ware specs for the SoC on the Overlake card in my pre­vi­ous tenure: the RAM ca­pac­ity and the power bud­get, which was just a tiny frac­tion of the TDP you can ex­pect from a reg­u­lar server CPU.

The hard­ware folks I had spo­ken with told me they could only spare 4KB of dual-ported mem­ory on the FPGA for my door­bell shared-mem­ory com­mu­ni­ca­tion pro­to­col.

Everything was nim­ble, ef­fi­cient, and power-savvy, and the team I had joined 10 min­utes ear­lier was se­ri­ously con­sid­er­ing port­ing half of Windows to that tiny, fan­less, Linux-running chip the size of a fin­ger­nail.

That felt like Elon talking about colonizing Mars: just nuke the poles, then grow an atmosphere! Easier said than done, huh?

That en­tire 122-strong org was knee-deep in im­pos­si­ble ru­mi­na­tions in­volv­ing port­ing Windows to Linux to sup­port their ex­ist­ing VM man­age­ment agents.

The man was a Principal Group Engineering Manager over­see­ing a chunk of the soft­ware run­ning on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they re­ally con­tem­plated port­ing Windows to Linux to sup­port their cur­rent soft­ware.

At first, I ques­tioned my un­der­stand­ing. Was that se­ri­ous? The rest of the talk left no doubt: the plan was out­lined, and the dev leads were tasked with con­tribut­ing peo­ple to the ef­fort. It was im­me­di­ately clear to me that this plan would never suc­ceed and that the org needed a lot of help.

That first hour in the new role left me with a mix of strange feel­ings, stu­pe­fac­tion, and in­credulity.

The stack was hit­ting its scal­ing lim­its on a 400 Watt Xeon at just a few dozen VMs per node, I later learned, a far cry from the 1,024 VMs limit I knew the hy­per­vi­sor was ca­pa­ble of, and was a noisy neigh­bor con­sum­ing so many re­sources that it was caus­ing jit­ter ob­serv­able from the cus­tomer VMs.

There is no di­men­sion in the uni­verse where this stack would fit on a tiny ARM SoC and scale up by many fac­tors. It was not go­ing to hap­pen.

I have seen a lot in my decades of in­dus­try (and Microsoft) ex­pe­ri­ence, but I had never seen an or­ga­ni­za­tion so far from re­al­ity. My day-one prob­lem was there­fore not to ramp up on new tech­nol­ogy, but rather to con­vince an en­tire org, up to my skip-skip-level, that they were on a death march.

Somewhere, I knew it was go­ing to be a fierce up­hill bat­tle. As you can imag­ine, it did­n’t go well, as you will later learn.

I spent the next few days read­ing more about the plans, study­ing the cur­rent sys­tems, and vis­it­ing old friends in Core OS, my alma mater. I was lost away from home in a bizarre ter­ri­tory where peo­ple made plans that did­n’t make sense with the aplomb of a drunk LLM.

I no­tably spent more than 90 min­utes chat­ting in per­son with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the ker­nel team years ear­lier.

His org is re­spon­si­ble for de­liv­er­ing Mariner Linux (now Azure Linux) and the trimmed-down dis­tro run­ning on the Overlake / Azure Boost card. He kindly an­swered all my ques­tions, and I learned that they had iden­ti­fied 173 agents (one hun­dred sev­enty-three) as can­di­dates for port­ing to Overlake.

I later re­searched this fur­ther and found that no one at Microsoft, not a sin­gle soul, could ar­tic­u­late why up to 173 agents were needed to man­age an Azure node, what they all did, how they in­ter­acted with one an­other, what their fea­ture set was, or even why they ex­isted in the first place.

Azure sells VMs, net­work­ing, and stor­age at the core. Add ob­serv­abil­ity and ser­vic­ing, and you should be good. Everything else, SQL, K8s, AI work­loads, and what­not all build on VMs with xPU, net­work­ing, and stor­age, and the heavy lift­ing to make the magic hap­pen is done by the good Core OS folks and the hy­per­vi­sor.

How the Azure folks came up with 173 agents will prob­a­bly re­main a mys­tery, but it takes a se­ri­ous amount of mis­un­der­stand­ing to get there, and this is also how dis­as­ters are built.

Now, fathom for a second that this pile of “uncontrolled stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds, and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious national security implications as well as potential business-ending consequences for Microsoft.

We are still far from the va­por­ized tril­lion in mar­ket cap, my let­ters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their to­tal si­lence, the quasi-loss of OpenAI, the breach of trust with the US gov­ern­ment as pub­licly stated by the Secretary of Defense, the wasted en­gi­neer­ing ef­forts, the Rust man­date, my stint on the OpenAI bare-metal team in Azure Core, the es­cort ses­sions from China and else­where, and the de­layed fea­tures pub­licly im­plied as ship­ping since 2023, be­fore the work even be­gan.

If you’re run­ning pro­duc­tion work­loads on Azure or re­ly­ing on it for mis­sion-crit­i­cal sys­tems, this story mat­ters more than you think.

...

Read the original on isolveproblems.substack.com »

7 1,069 shares, 22 trendiness

Artemis II Launch Day Updates

Live launch day updates for NASA’s Artemis II test flight will be published on this page. All times are Eastern.

The Orion spacecraft’s SAWs (solar array wings) have fully deployed, completing a key configuration step for the Artemis II mission. Flight controllers in Houston confirmed that all four wings unfolded as planned, locking into place and beginning to draw power.

Each so­lar ar­ray wing ex­tends out­ward from the European Service Module, giv­ing Orion, named Integrity, a wingspan of roughly 63 feet when fully de­ployed. Each wing has 15,000 so­lar cells to con­vert sun­light to elec­tric­ity. The ar­rays can turn on two axes that al­low them to ro­tate and track the Sun, max­i­miz­ing power gen­er­a­tion as the space­craft changes at­ti­tude dur­ing its time in Earth or­bit and on its out­bound jour­ney to the Moon.

The next ma­jor mile­stones are the PRM (perigee raise ma­neu­ver) and ARB (apogee raise burn) that will in­crease the low­est and high­est points of the Orion space­craft’s or­bit and pre­pare the space­craft for deep‑space op­er­a­tions.

Following the burns, NASA will hold a post­launch news con­fer­ence at 9 p.m. from Kennedy Space Center in Florida. Following the news con­fer­ence, the Artemis II crew will be­gin prepa­ra­tions for Orion’s prox­im­ity op­er­a­tions demon­stra­tion. This demon­stra­tion will test the abil­ity to man­u­ally ma­neu­ver Orion rel­a­tive to an­other space­craft, in this case, the in­terim cryo­genic propul­sion stage af­ter sep­a­ra­tion.

Coverage on NASA+ will conclude shortly; however, 24/7 coverage will continue on NASA’s YouTube channel. Keep following the Artemis blog for live updates of key milestones throughout the mission.

Main en­gine cut­off of the SLS (Space Launch System) core stage is com­plete, and the core stage has suc­cess­fully sep­a­rated from the in­terim cryo­genic propul­sion stage and the Orion space­craft. This marks the end of the first ma­jor propul­sion phase of the Artemis II mis­sion and the tran­si­tion to up­per‑stage op­er­a­tions.

The next ma­jor mile­stone is the de­ploy­ment of the space­craft’s SAWs (solar ar­ray wings) sched­uled to be­gin ap­prox­i­mately 18 min­utes af­ter launch. Once ex­tended, the four SAWs will pro­vide con­tin­u­ous elec­tri­cal power to the space­craft through­out its jour­ney, sup­port­ing life‑sup­port sys­tems, avion­ics, com­mu­ni­ca­tions, and on­board op­er­a­tions. Deployment is a crit­i­cal step in con­fig­ur­ing Orion for the re­main­der of its time in Earth or­bit and for the out­bound trip to the Moon.

The space­craft adapter jet­ti­son fair­ings that en­close the ser­vice mod­ule and the launch abort sys­tem have sep­a­rated from the Orion space­craft. With the rocket and space­craft now fly­ing above the dens­est lay­ers of Earth’s at­mos­phere, Orion no longer re­quires the pro­tec­tive struc­tures that shielded it dur­ing the early, high‑dy­namic‑pres­sure por­tion of launch.

The next ma­jor mile­stone is core stage sep­a­ra­tion and Interim Cryogenic Propulsion Stage ig­ni­tion.

The SLS (Space Launch System) twin solid rocket boosters have separated. The boosters, each standing 177 feet tall and generating more than 3.6 million pounds of thrust at liftoff, provide most of the rocket’s power during the first two minutes of flight. Separation reduces mass and allows the core stage to continue propelling the Orion spacecraft, named Integrity, toward orbit.

With the boost­ers now clear, the SLS core stage re­mains the pri­mary source of thrust.

In about one minute, the space­craft adapter jet­ti­son fair­ings that en­close Orion’s ser­vice mod­ule and the launch abort sys­tem will sep­a­rate from the space­craft.

6:35 p.m.

NASA’s Artemis II SLS (Space Launch System) rocket, with the Orion spacecraft atop car­ry­ing NASA as­tro­nauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) as­tro­naut Jeremy Hansen, lifted off from Kennedy Space Center’s Launch Complex 39B in Florida at 6:35 p.m. EDT to be­gin its jour­ney to deep space.

The twin solid rocket boosters ignited first, delivering more than 75% of the thrust needed to lift the 5.75-million-pound rocket off the pad. Their combined power, along with the four RS-25 engines already at full thrust, generated an incredible 8.8 million pounds of force at liftoff. As the rocket rose, the umbilicals, which provided power, fuel, and data connections during prelaunch, disconnected and retracted into protective housings. This ensured the vehicle was free from ground systems and fully autonomous for flight.

The approximately 10-day Artemis II mission around the Moon is the first crewed flight under NASA’s Artemis campaign. It will help test the systems and hardware needed to continue sending astronauts on increasingly difficult missions to explore more of the Moon for scientific discovery and economic benefits, and to continue building toward the first crewed missions to Mars.

Below are the as­cent mile­stones that will oc­cur lead­ing up to core stage sep­a­ra­tion. Times may vary by sev­eral sec­onds.

The Artemis II count­down has en­tered ter­mi­nal count, and the ground launch se­quencer has taken con­trol, or­ches­trat­ing a pre­cise se­ries of au­to­mated com­mands to pre­pare the SLS (Space Launch System) rocket and Orion space­craft for liftoff at a T-0 time of 6:35 p.m. EDT.

The ground launch se­quencer en­sures that all sys­tems – from propul­sion to avion­ics – tran­si­tion into flight mode. Key ac­tions per­formed in­clude pres­sur­iz­ing pro­pel­lant tanks for op­ti­mal en­gine per­for­mance, ac­ti­vat­ing flight soft­ware and switch­ing con­trol from ground to on­board sys­tems, and per­form­ing fi­nal health checks across thou­sands of sen­sors to con­firm readi­ness.

This au­to­mated se­quence min­i­mizes hu­man in­ter­ven­tion, re­duc­ing risk and en­sur­ing syn­chro­niza­tion across com­plex sub­sys­tems. For Artemis II, this mo­ment marks the cul­mi­na­tion of years of plan­ning and test­ing, as the mis­sion moves from ground op­er­a­tions to the thresh­old of launch.

See the list be­low of the ter­mi­nal count mile­stones:

* T-4M — GLS is go for core stage aux­il­iary power unit (APU) start

Inside the ter­mi­nal count­down, teams have a few op­tions to hold the count if needed.

The launch team can hold at 6 min­utes for the du­ra­tion of the launch win­dow, less the 6 min­utes needed to launch, with­out hav­ing to re­cy­cle back to 10 min­utes.

If teams need to stop the clock be­tween T-6 min­utes and T-1 minute, 30 sec­onds, they can hold for up to 3 min­utes and re­sume the clock to launch. If they re­quire more than 3 min­utes of hold time, the count­down would re­cy­cle back to T-10.

If the clock stops af­ter T-1 minute and 30 sec­onds, but be­fore the au­to­mated launch se­quencer takes over, then teams can re­cy­cle back to T-10 to try again, pro­vided there is ad­e­quate launch win­dow re­main­ing.

After han­dover to the au­to­mated launch se­quencer, any is­sue that would stop the count­down would lead to con­clud­ing the launch at­tempt for that day.

Artemis II Launch Director Charlie Blackwell-Thompson conducted one of the most important steps before liftoff: the “go/no-go” poll for the team to proceed with the final 10 minutes of the countdown, known as terminal count.

A unanimous “go” across the board signals that Artemis II is fully prepared to proceed toward launch. This moment represents the culmination of years of planning and hours of meticulous pre-launch work, bringing the mission to the threshold of history.

The launch team has made the decision to extend the T-10 minute hold ahead of today’s launch to give engineers time to work through final preparations for liftoff. There is a two-hour window in which Artemis II could launch, and a new liftoff time will be set shortly.

NASA’s Artemis II closeout crew completed its final tasks and departed Launch Complex 39B at NASA’s Kennedy Space Center in Florida. After hours of meticulous work assisting the astronauts with suit-up, hatch closure, and critical spacecraft checks, the team exited the White Room and left the Orion spacecraft sealed and ready for flight.

This de­par­ture marks a ma­jor tran­si­tion in launch op­er­a­tions: the space­craft is now fully con­fig­ured, and re­spon­si­bil­ity shifts to the launch con­trol team for the fi­nal count­down. The close­out crew’s pre­ci­sion and ex­per­tise en­sure that every con­nec­tion, seal, and sys­tem is ver­i­fied be­fore they step away – mak­ing this mo­ment a key mile­stone on the path to liftoff.

Engineers in­ves­ti­gated a sen­sor on the launch abort sys­tem’s at­ti­tude con­trol mo­tor con­troller bat­tery that showed a higher tem­per­a­ture than would be ex­pected. It is be­lieved to be an in­stru­men­ta­tion is­sue and will not af­fect to­day’s launch.

The weather con­tin­ues to co­op­er­ate and has now been up­graded to 90% go for launch.

Engineers have now re­solved an is­sue with the hard­ware that com­mu­ni­cates with the flight ter­mi­na­tion sys­tem that would have pre­vented the ground from send­ing a sig­nal to de­struct the rocket if it were to veer off course dur­ing as­cent, to pro­tect pub­lic safety. A con­fi­dence test was per­formed to en­sure that the hard­ware is ready to sup­port to­day’s launch.

Meanwhile, tech­ni­cians have com­pleted the launch abort sys­tem hatch clo­sure – an es­sen­tial step that en­sures the Orion space­craft is fully sealed and ready for flight. The hatch pro­vides an ad­di­tional pro­tec­tive bar­rier for the crew mod­ule, de­signed to safe­guard as­tro­nauts dur­ing the Artemis II flight path and, if nec­es­sary, en­able a rapid es­cape in the event of an emer­gency.

During this phase, the close­out team ver­i­fies hatch align­ment, en­gages lock­ing mech­a­nisms, and con­firms pres­sure in­tegrity. These checks guar­an­tee that the launch abort sys­tem hatch can per­form its func­tion flaw­lessly, main­tain­ing struc­tural in­tegrity un­der ex­treme launch con­di­tions. With the hatch se­cured, Orion en­ters its fi­nal con­fig­u­ra­tion for liftoff, mark­ing one of the last ma­jor mile­stones be­fore fu­el­ing and launch.

Although the count­down to to­day’s Artemis II launch is con­tin­u­ing to progress, the Eastern Range has iden­ti­fied an is­sue that they are cur­rently work­ing to re­solve re­lated to their com­mu­ni­ca­tion with the flight ter­mi­na­tion sys­tem. The flight ter­mi­na­tion sys­tem is a safety sys­tem that al­lows en­gi­neers on the ground to send a sig­nal to de­struct the rocket if it were to veer off course dur­ing as­cent, to pro­tect pub­lic safety. Without as­sur­ance that this sys­tem would work if needed, to­day’s launch would be no-go. However, en­gi­neers have de­vised a way to ver­ify the sys­tem and are cur­rently prepar­ing to test this so­lu­tion.

Technicians be­gan in­stalling the crew mod­ule hatch ser­vice panel on the Orion space­craft, an im­por­tant step in fi­nal launch prepa­ra­tions. This panel pro­tects key con­nec­tions and en­sures the hatch area is se­cure for flight.

As part of cur­rent close­out ac­tiv­i­ties, teams are con­firm­ing all sys­tems around the hatch are prop­erly sealed and ready for the mis­sion.

With the hatch area se­cured, teams will con­tinue fi­nal checks and count­down op­er­a­tions at Launch Pad 39B at NASAs Kennedy Space Center in Florida, bring­ing us closer to send­ing as­tro­nauts on a his­toric jour­ney around the Moon.

NASA en­gi­neers have con­ducted coun­ter­bal­ance mech­a­nism op­er­a­tions and are now per­form­ing hatch seal pres­sure de­cay checks in­side the White Room at Launch Complex 39B. These steps en­sure Orion’s hatch main­tains proper pres­sure in­tegrity and that the coun­ter­bal­ance sys­tem func­tions as de­signed for launch con­di­tions.

The coun­ter­bal­ance mech­a­nism is a pre­ci­sion-en­gi­neered as­sem­bly that off­sets the weight of the crew mod­ule hatch, al­low­ing tech­ni­cians to open and close it smoothly with­out in­tro­duc­ing stress on the hinge or seal. This sys­tem uses cal­i­brated springs and dampers to main­tain align­ment and pre­vent sud­den move­ments, which is es­sen­tial for pre­serv­ing the hatch’s air­tight seal. During this phase, tech­ni­cians ver­ify the mech­a­nis­m’s load dis­tri­b­u­tion and con­firm that its lock­ing fea­tures en­gage cor­rectly un­der sim­u­lated launch loads.

Following these ad­just­ments, the team per­forms seal pres­sur­iza­tion de­cay checks – mon­i­tor­ing pres­sure loss over time to con­firm the hatch’s in­tegrity. These checks are vi­tal for as­tro­naut safety, en­sur­ing the cabin re­mains se­cure in all mis­sion phases.

NASA’s Artemis II closeout crew is now completing one of the most critical steps before launch: preparing and closing the crew module hatch to the Orion spacecraft. Inside the White Room at Launch Complex 39B, the closeout crew is working meticulously to inspect seals, secure fasteners, and verify that the hatch is airtight.

This process en­sures Orion is fully pres­sur­ized and ready for flight. Once the hatch is closed and locked, the as­tro­nauts are of­fi­cially sealed in­side their space­craft, mark­ing a ma­jor mile­stone on the path to liftoff.

NASA’s Artemis II crew members are boarding the agency’s Orion spacecraft to begin communication checks to confirm voice links with mission control and onboard systems.

Before entering the spacecraft that will be their home on the approximately 10-day journey around the Moon and back, all four crewmates signed the inside of the White Room, an area at the end of the crew access arm that provides access to the spacecraft. The term “White Room” dates to NASA’s Gemini program, and to honor this human spaceflight tradition, the room remains white today.

The Artemis II closeout crew is now work­ing to help the as­tro­nauts en­ter the Orion space­craft and make fi­nal prepa­ra­tions for their nearly 700,000-mile trip to the Moon and back. As part of the process, the close­out crew is help­ing the as­tro­nauts don their Orion Crew Survival System helmets and gloves, as well as board Orion and get buck­led in.

A short time from now, the close­out crew will close the crew mod­ule and ex­te­rior launch abort sys­tem hatches. Even a sin­gle strand of hair in­side the hatch doors could po­ten­tially pose is­sues with clos­ing ei­ther hatch, so the process is care­fully done and takes up to four hours. Each step in the close­out process en­sures air­tight seals and com­mu­ni­ca­tion readi­ness for the mis­sion ahead.

Following communication checks, the team performed suit leak checks, a vital safety procedure ensuring each pressure suit maintains integrity in case of cabin depressurization. These operations are essential for crew readiness and mission assurance, marking one of the final phases before hatch closure and launch preparations.

With assistance from the closeout crew, the Artemis II crew are carefully donning their helmets and gloves, finalizing suit integrity checks before boarding the Orion spacecraft.

This step is more than ceremonial; it ensures airtight seals and communication readiness for the mission ahead. The closeout crew plays a vital role, guiding the astronauts through these procedures and confirming every connection is secure before hatch closure.

Stay tuned as we continue to follow the Artemis II team through each countdown milestone on their path to liftoff.

NASA's Artemis II crew, NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, arrived at Launch Complex 39B at the agency's Kennedy Space Center in Florida, where the agency's SLS (Space Launch System) rocket with the Orion spacecraft atop stands ready for launch. The opening of today's launch window is slated for just over 4 hours from now, at 6:24 p.m. EDT.

In the next few minutes, the crew will take the elevator up the pad's fixed service structure and walk down the climate-controlled crew access arm to the White Room, their final stop before climbing aboard their Orion spacecraft. In this clean, controlled environment at the end of the crew access arm, the closeout crew will assist the astronauts with hatch operations and verify that all safety systems are ready for launch.

Since the late 1960s, pads A and B at Kennedy's Launch Complex 39 have supported America's major space programs, with Pad A used most frequently for launches under the Space Shuttle Program. After the retirement of the shuttle in 2011, Pad A helped usher in a new era of human spaceflight as the launch pad for the agency's Commercial Crew Program, which returned human spaceflight capability to the United States. Pad B saw the launch of NASA's Artemis I mission in November 2022 and will continue to be the primary launch pad for America's efforts to return humans to the Moon.

Just moments ago, NASA's Artemis II flight crew began the walk that every NASA astronaut has made since Apollo 7 in 1968, heading to the elevator and down through the double doors below the Neil A. Armstrong Building's Astronaut Crew Quarters at NASA's Kennedy Space Center in Florida.

Before they left the suit-up room, the crew completed one last piece of unfinished business: a card game. In a long-held spaceflight tradition, NASA crews play cards before leaving the crew quarters ahead of launch until the commander, in this instance NASA astronaut Reid Wiseman, loses. It is hoped that by losing, the commander burns off all his or her bad luck, thereby clearing the mission for only good luck.

NASA's Artemis II is the first crewed mission of the Artemis program and will carry Wiseman and fellow NASA astronauts Victor Glover and Christina Koch, as well as CSA (Canadian Space Agency) astronaut Jeremy Hansen, on an approximately 10-day mission around the Moon and back to Earth.

The first crewed deep-space flight in over 50 years, Artemis II is expected to send the crew farther from Earth than any previous human mission, potentially breaking the record of about 248,655 miles (400,171 km) from Earth set by Apollo 13 during its lunar free-return trajectory. This milestone will occur during the lunar flyby phase, when the crew travels on a free-return trajectory around the Moon, which allows the spacecraft to loop around the Moon and return to Earth without entering lunar orbit.

During the test flight, NASA will test life-support systems and critical operations in deep space, paving the way for future lunar landings and Mars exploration.

Having received goodbyes and well wishes from their families and friends, the crew embarks on the 20-minute journey to Kennedy's Launch Pad 39B and their awaiting spacecraft.

NASA's pad rescue and closeout crew teams have arrived at Launch Complex 39B at the agency's Kennedy Space Center in Florida to ensure safety and readiness during the critical fueling operations. These specialized teams play a vital role in protecting personnel and hardware throughout the countdown.

The pad rescue team will be positioned to respond immediately in the unlikely event of an emergency, ensuring safe evacuation procedures for pad personnel. The rescue team is equipped with advanced gear and trained for rapid crew extraction, fire suppression, and hazard mitigation. Their presence ensures astronaut safety remains the top priority, providing an all-important layer of protection as fueling operations and system checks continue.

The closeout crew is responsible for closing the Orion crew module and launch abort system hatches, securing access points, verifying pad configurations, and maintaining the integrity of the launch area during propellant loading and system checks. Their work is critical for guaranteeing a secure environment for the astronauts before the launch pad is cleared for liftoff operations.

These teams are essential for mitigating risk and supporting the complex choreography of Artemis II's prelaunch activities. With both teams in place, Artemis II remains on track for its historic mission to send astronauts around the Moon.

NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, are suiting up inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency's Kennedy Space Center in Florida.

A team of suit technicians helps the crew put on their Orion Crew Survival System suits, which are each tailored for mobility and comfort while ensuring maximum safety during the dynamic phases of flight. The bright orange spacesuits are designed to protect them on their journey and feature many improvements from head to toe over the suits worn on the space shuttle. NASA reengineered many elements to improve safety and range of motion for Artemis astronauts, and instead of the small, medium, and large sizes from the shuttle era, the suits are custom fit for each crew member.

The outer layer is fire-resistant, and a stronger zipper allows astronauts to quickly put the suit on. Improved thermal management will help keep them cool and dry. A lighter, stronger helmet improves comfort and communication, and the gloves are more durable and touch-screen compatible. Better-fitting boots also provide protection in the case of fire and help an astronaut move more swiftly.

The suits' design and engineering enhancements provide an additional layer of protection for astronauts and ensure they return home safely from deep space missions.

During suit-up, teams will check for leaks and ensure that all connecting life support systems, including air and power, are operating nominally ahead of the crew's ride to NASA Kennedy's Launch Complex 39B.

With NASA teams now maintaining the liquid oxygen levels in the interim cryogenic propulsion stage, all cryogenic stages of the SLS (Space Launch System) rocket have transitioned to replenish mode during the Artemis II launch countdown. This includes the core stage and SLS upper stage, ensuring both liquid hydrogen and liquid oxygen tanks remain at flight-ready levels.

Replenish mode is essential for maintaining stable propellant quantities and pressure as super-cold fuels naturally boil off over time. Continuous adjustments keep the rocket fully fueled and ready for ignition, supporting the RS-25 engines on the core stage and the RL10 engine on the SLS upper stage for their essential roles in launch and translunar injection.

These milestones coincide with the Artemis II countdown entering a planned 1-hour and 10-minute built-in hold. This scheduled pause allows teams to complete crucial system checks, verify launch readiness, and address any last-minute adjustments before proceeding toward crew ingress and final fueling operations.

During this hold, engineers review data from cryogenic loading, propulsion systems, and communications to ensure all parameters meet strict safety and performance criteria. The hold also provides flexibility for resolving minor issues without impacting the overall launch timeline.

Once the hold concludes, the countdown will resume with preparations for astronaut arrival at Launch Pad 39B at NASA's Kennedy Space Center in Florida.

NASA's Artemis II astronauts received a final weather briefing inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency's Kennedy Space Center in Florida, as part of prelaunch preparations.

This weather update provides astronauts and mission teams with the latest conditions at NASA Kennedy's Launch Pad 39B, the surrounding recovery zones, and potential abort sites along Artemis II's flight path. Accurate weather forecasting is essential for protecting crew and hardware, as even minor changes can impact countdown decisions and flight dynamics.

NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, were briefed on wind speeds, precipitation, lightning risk, and sea states for splashdown contingencies, ensuring all safety criteria are met before proceeding with launch operations.

Weather officials with NASA and the U.S. Space Force's Space Launch Delta 45 are tracking 80% favorable conditions during the launch window, with the primary concerns being the cumulus cloud rule, the flight through precipitation rule, and ground winds.

With the weather briefing complete, the crew and ground teams remain aligned and ready to continue toward liftoff, keeping Artemis II on track for its historic mission to send astronauts around the Moon.

NASA teams also have begun the liquid oxygen (LOX) topping process for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage, during the Artemis II launch countdown. This step follows the fast fill phase and ensures the liquid oxygen tank reaches full capacity with super-cold oxidizer.

Live coverage of Artemis II tanking operations continues on NASA's YouTube channel. NASA's full launch coverage begins at 1 p.m. EDT on NASA+, Amazon Prime, and YouTube. You can continue to follow the Artemis blog from launch to splashdown for mission updates.

Liquid oxygen (LOX) fast fill is now complete for the SLS (Space Launch System) upper stage, marking another major milestone in tanking operations. Teams have confirmed the upper stage is in good shape and are proceeding with the LOX vent and relief test. This step helps verify proper pressure regulation and ensures the system is ready to transition into topping and, later, replenish operations.

NASA teams are now maintaining the liquid oxygen levels in the SLS (Space Launch System) rocket core stage through replenish mode. This phase follows the completion of liquid oxygen fast fill and topping, ensuring the oxidizer remains at flight-ready levels throughout the final countdown.

NASA teams are in fast fill of liquid oxygen (LOX) into the interim cryogenic propulsion stage as part of the Artemis II launch countdown. This phase rapidly loads the oxidizer after chilldown is complete, bringing the SLS (Space Launch System) rocket upper stage closer to full readiness for its role in sending the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion's translunar injection burn.

NASA teams have transitioned the interim cryogenic propulsion stage liquid hydrogen tank to replenish mode during the Artemis II countdown. This phase follows the successful topping process and ensures the tank remains at flight-ready levels all the way to launch.

NASA teams have begun the topping phase for the interim cryogenic propulsion stage liquid hydrogen (LH2) tank. This critical step occurs after successful chilldown and vent-and-relief checks, ensuring the tank reaches full capacity with super-cold liquid hydrogen.

Replenish is the final step in the fueling process, designed to maintain the correct LH2 levels as the super-cold propellant naturally boils off over time. This continuous, low-rate flow keeps the tanks topped off and thermally stable, ensuring the rocket remains fully fueled and ready for liftoff.

From chilldown to replenish, every phase of fueling is carefully managed to protect hardware and guarantee mission success. With replenish underway, Artemis II is in its final stretch toward launch and humanity's next giant leap.

Topping is the process of adding small amounts of LH2 to the tanks after fast fill is complete, ensuring they remain at full capacity as the super-cold propellant naturally boils off. This step is critical for maintaining the precise levels needed for launch while keeping the system thermally stable.

The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.

After completing the chilldown phase, this step rapidly loads super-cold LH2 into the SLS upper stage tanks, ensuring the upper stage is fueled and ready to perform its fundamental role of raising the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion's translunar injection burn.

Fast fill accelerates the fueling process while maintaining safety, marking another major milestone in the countdown as Artemis II moves closer to liftoff.

The Artemis II launch team has begun the liquid hydrogen chilldown for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.

This process gradually cools the interim cryogenic propulsion stage fuel lines and components to cryogenic temperatures using super-cold liquid hydrogen. The chilldown step is essential to prevent thermal shock and ensure the stage is properly conditioned for full propellant loading. By stabilizing the system at these extreme temperatures, engineers guarantee safe and efficient fueling for the upper stage that will help position Orion into high Earth orbit for its journey toward the Moon.

NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, have officially begun their launch day with a scheduled wake-up call at 9:25 a.m., marking the start of their final preparations for the historic Artemis II mission around the Moon.

The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) into the SLS (Space Launch System) rocket core stage.

...

Read the original on www.nasa.gov »

8 1,027 shares, 42 trendiness

NASA’s Artemis II Crew Launches to the Moon (Official Broadcast)

Artemis II is NASA's first crewed mission under the Artemis program and will launch from the agency's Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among the objectives, the agency will test the Orion spacecraft's life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.

...

Read the original on plus.nasa.gov »

9 989 shares, 173 trendiness

The Git Commands I Run Before Reading Any Code

The first thing I usually do when I pick up a new codebase isn't opening the code. It's opening a terminal and running a handful of git commands. Before I look at a single file, the commit history gives me a diagnostic picture of the project: who built it, where the problems cluster, whether the team is shipping with confidence or tiptoeing around land mines.

The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about: "Oh yeah, that file. Everyone's afraid to touch it."
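The original post pairs this description with a command; a plausible reconstruction, assuming the usual churn recipe (the `--since` window and `head` count are tuning knobs, not the author's exact invocation):

```shell
# Rank files by how often they changed in the last year.
# --pretty=format: suppresses the commit headers so only file paths
# remain; grep drops the blank separator lines between commits.
git log --since="1 year ago" --no-merges --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn | head -20
```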

High churn on a file doesn't mean it's bad. Sometimes it's just active development. But high churn on a file that nobody wants to own is the clearest signal of codebase drag I know. That's the file where every change is a patch on a patch. The blast radius of a small edit is unpredictable. The team pads their estimates because they know it's going to fight back.

A 2005 Microsoft Research study found churn-based metrics predicted defects more reliably than complexity metrics alone. I take the top 5 files from this list and cross-reference them against the bug hotspot command below. A file that's high-churn and high-bug is your single biggest risk.

Every contributor ranked by commit count. If one person accounts for 60% or more, that's your bus factor. If they left six months ago, it's a crisis. If the top contributor from the overall shortlog doesn't appear in a 6-month window (git shortlog -sn --no-merges --since="6 months ago"), I flag that to the client immediately.
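Both views come from git shortlog; a sketch of the two invocations (HEAD is passed explicitly so the commands also work when stdin is not a terminal, as in scripts):

```shell
# Every contributor ranked by commit count, merge commits excluded.
git shortlog -sn --no-merges HEAD

# The same ranking over a recent window; compare against the full list
# to spot a top contributor who has gone quiet.
git shortlog -sn --no-merges --since="6 months ago" HEAD
```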

I also look at the tail. Thirty contributors but only three active in the last year: the people who built this system aren't the people maintaining it.

One caveat: squash-merge workflows compress authorship. If the team squashes every PR into a single commit, this output reflects who merged, not who wrote. Worth asking about the merge strategy before drawing conclusions.

Same shape as the churn command, filtered to commits with bug-related keywords. Compare this list against the churn hotspots. Files that appear on both are your highest-risk code: they keep breaking and keep getting patched, but never get properly fixed.
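A sketch of such a filter; the keyword list is an assumption, so adapt it to the team's commit conventions:

```shell
# Rank files by how often they appear in bug-fix commits.
# -i makes the match case-insensitive, -E enables extended regex so the
# pattern can use alternation.
git log --since="1 year ago" --no-merges -i -E \
  --grep='fix|bug|hotfix' --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn | head -20
```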

This depends on commit message discipline. If the team writes "update stuff" for every commit, you'll get nothing. But even a rough map of bug density is better than no map.

Commit count by month, for the entire history of the repo. I scan the output looking for shapes. A steady rhythm is healthy. But what does it look like when the count drops by half in a single month? Usually someone left. A declining curve over 6 to 12 months tells you the team is losing momentum. Periodic spikes followed by quiet months means the team batches work into releases instead of shipping continuously.
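One way to get that shape, assuming the per-month bucketing is done with git's formatted dates:

```shell
# Commit count by month across the entire history, oldest month first.
# %ad is the author date, rendered as YYYY-MM by --date=format.
git log --date=format:'%Y-%m' --pretty=format:'%ad' | sort | uniq -c
```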

I once showed a CTO their commit velocity chart and they said, "that's when we lost our second senior engineer." They hadn't connected the timeline before. This is team data, not code data.

Revert and hotfix frequency. A handful over a year is normal. Reverts every couple of weeks means the team doesn't trust its deploy process. They're evidence of a deeper issue: unreliable tests, missing staging, or a deploy pipeline that makes rollbacks harder than they should be. Zero results is also a signal; either the team is stable, or nobody writes descriptive commit messages.
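A sketch of that check (the keyword pattern is an assumption); piping through wc -l turns the listing into the frequency count the paragraph is about:

```shell
# List revert and hotfix commits from the last year...
git log --oneline --since="1 year ago" -i -E --grep='revert|hotfix'

# ...or just count them.
git log --oneline --since="1 year ago" -i -E --grep='revert|hotfix' | wc -l
```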

Crisis patterns are easy to read. Either they're there or they're not.

These five commands take a couple of minutes to run. They won't tell you everything. But you'll know which code to read first, and what to look for when you get there. That's the difference between spending your first day reading the codebase methodically and spending it wandering.

This is the first hour of what I do in a codebase audit. Here's what the rest of the week looks like.

...

Read the original on piechowski.io »

10 984 shares, 38 trendiness

Artemis II crew now halfway to Moon as they take 'spectacular' image of Earth

Artemis II is now on a looping path that will carry the crew around the far side of the Moon and back again. It is the first time since 1972 that humans have travelled beyond Earth orbit.

...

Read the original on www.bbc.com »
