10 interesting stories served every morning and every evening.
Every time any of LinkedIn’s one billion users visits linkedin.com, hidden code searches their computer for installed software, collects the results, and transmits them to LinkedIn’s servers and to third-party companies including an American-Israeli cybersecurity firm.
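The article does not publish the scanning code itself, but the general technique is well documented: because the targets (as described later in this piece) are browser extensions, a page script can probe for them by fetching each extension's web-accessible resources at known chrome-extension:// URLs and recording which requests succeed. The sketch below is a generic illustration of that approach, not LinkedIn's actual implementation; the extension ID, resource path, and collection endpoint are hypothetical placeholders.

```typescript
// Illustrative sketch of a generic extension-probing technique, not LinkedIn's code.
// The extension ID, resource path, and collection endpoint are hypothetical placeholders.
type Probe = { name: string; id: string; resource: string };

const probes: Probe[] = [
  // An extension is detectable this way only if it declares the file as a
  // web-accessible resource in its manifest.
  { name: "ExampleJobSearchTool", id: "abcdefghijklmnopabcdefghijklmnop", resource: "icon.png" },
];

async function detectExtensions(): Promise<string[]> {
  const found: string[] = [];
  for (const p of probes) {
    try {
      // If the resource loads, the extension is installed in this browser profile.
      const res = await fetch(`chrome-extension://${p.id}/${p.resource}`);
      if (res.ok) found.push(p.name);
    } catch {
      // The request fails when the extension is absent.
    }
  }
  return found;
}

// A page could then transmit the results alongside the logged-in user's identity.
detectExtensions().then((names) => {
  navigator.sendBeacon("/collect", JSON.stringify({ extensions: names }));
});
```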
The user is never asked. Never told. LinkedIn’s privacy policy does not mention it.
Because LinkedIn knows each user’s real name, employer, and job title, it is not searching anonymous visitors. It is searching identified people at identified companies. Millions of companies. Every day. All over the world.
Fairlinked e. V. is an association of commercial LinkedIn users. We represent the professionals who use LinkedIn, the businesses that invest in and depend on the platform, and the toolmakers who build products for it.
BrowserGate is our investigation and campaign to document one of the largest corporate espionage and data breach scandals in digital history, to inform the public and regulators, to collect evidence, and to raise funds for the legal proceedings required to stop it.
LinkedIn’s scan reveals the religious beliefs, political opinions, disabilities, and job search activity of identified individuals. LinkedIn scans for extensions that identify practicing Muslims, extensions that reveal political orientation, extensions built for neurodivergent users, and 509 job search tools that expose who is secretly looking for work on the very platform where their current employer can see their profile.
Under EU law, this category of data is not regulated. It is prohibited. LinkedIn has no consent, no disclosure, and no legal basis. Its privacy policy does not mention any of this.
LinkedIn scans for over 200 products that directly compete with its own sales tools, including Apollo, Lusha, and ZoomInfo. Because LinkedIn knows each user’s employer, it can map which companies use which competitor products. It is extracting the customer lists of thousands of software companies from their users’ browsers without anyone’s knowledge.
Then it uses what it finds. LinkedIn has already sent enforcement threats to users of third-party tools, using data obtained through this covert scanning to identify its targets.
In 2023, the EU designated LinkedIn as a regulated gatekeeper under the Digital Markets Act and ordered it to open its platform to third-party tools. LinkedIn’s response:
It published two restricted APIs and presented them to the European Commission as compliance. Together, these APIs handle approximately 0.07 calls per second. Meanwhile, LinkedIn already operates an internal API called Voyager that powers every LinkedIn web and mobile product at 163,000 calls per second. In Microsoft’s 249-page compliance report to the EU, the word “API” appears 533 times. “Voyager” appears zero times.
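Taken at face value, the two throughput figures quoted above differ by more than six orders of magnitude:

$$\frac{163{,}000\ \text{calls/s}}{0.07\ \text{calls/s}} \approx 2.3 \times 10^{6}$$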
At the same time, LinkedIn expanded its surveillance of the exact tools the regulation was designed to protect. The scan list grew from roughly 461 products in 2024 to over 6,000 by February 2026. The EU told LinkedIn to let third-party tools in. LinkedIn built a surveillance system to find and punish every user of those tools.
LinkedIn loads an invisible tracking element from HUMAN Security (formerly PerimeterX), an American-Israeli cybersecurity firm, zero pixels wide, hidden off-screen, that sets cookies on your browser without your knowledge. A separate fingerprinting script runs from LinkedIn’s own servers. A third script from Google executes silently on every page load. All of it encrypted. None of it disclosed.
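As a rough sketch of the pattern described here, and only that (the URL below is a placeholder, not any real endpoint), an off-screen, zero-size third-party element can be injected by a page script like this:

```typescript
// Illustrative sketch of the general pattern described above; the URL is a
// placeholder, not LinkedIn's or HUMAN Security's actual endpoint.
function injectHiddenTracker(src: string): void {
  const frame = document.createElement("iframe");
  frame.src = src;                  // third-party page that can set its own cookies
  frame.width = "0";
  frame.height = "0";               // zero pixels wide and tall
  frame.style.position = "absolute";
  frame.style.left = "-9999px";     // parked off-screen
  frame.style.border = "none";
  frame.setAttribute("aria-hidden", "true");
  document.body.appendChild(frame); // loads silently on page load; nothing is visible to the user
}

injectHiddenTracker("https://third-party.example/fingerprint");
```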
Microsoft has 33,000 employees and a $15 billion legal budget. We have the evidence. What we need is people and funding to hold them accountable.
...
Read the original on browsergate.eu »
Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter
Build autonomous agents that plan, navigate apps, and complete tasks on your behalf, with native support for function calling. Develop applications with strong audio and visual understanding, for rich multimodal support. Create multilingual experiences that go beyond translation and understand cultural context. Improve performance for specific tasks by training Gemma using your preferred frameworks and techniques. Run models on your own hardware for efficient development and deployment.
A new level of intelligence for mobile and IoT devices. Audio and vision support for real-time edge processing. These models can run completely offline with near-zero latency on edge devices like phones, Raspberry Pi, and Jetson Nano.
Advanced reasoning for IDEs, coding assistants, and agentic workflows. These models are optimized for consumer GPUs — giving students, researchers, and developers the ability to turn workstations into local-first AI servers.
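As a minimal sketch of what running a model on your own hardware can look like in practice, the snippet below queries a Gemma model served by a local runtime exposing an Ollama-style HTTP API; the localhost endpoint and the "gemma4" model tag are assumptions for illustration, not names confirmed by this announcement.

```typescript
// Minimal sketch: querying a locally served Gemma model through an Ollama-style
// HTTP API. The "gemma4" model tag is a hypothetical placeholder; substitute
// whatever tag your local runtime actually provides.
async function generate(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "gemma4", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // the completed text, generated entirely on local hardware
}

generate("Summarize the benefits of on-device inference in two sentences.")
  .then(console.log);
```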
Gemma 4 models undergo the same rigorous infrastructure security protocols as our proprietary models. By choosing Gemma 4, enterprises and sovereign organizations gain a trusted, transparent foundation that delivers state-of-the-art capabilities while meeting the highest standards for security and reliability.
...
Read the original on deepmind.google »
Sam Altman May Control Our Future—Can He Be Trusted?

New interviews and closely guarded documents shed light on the persistent doubts about the head of OpenAI. Altman promised to be a safe steward for A.I. But some of his colleagues believed that he was not trustworthy enough to, as one put it, “have his finger on the button.”

In the fall of 2023, Ilya Sutskever, OpenAI’s chief scientist, sent secret memos to three fellow-members of the organization’s board of directors. For weeks, they’d been having furtive discussions about whether Sam Altman, OpenAI’s C.E.O., and Greg Brockman, his second-in-command, were fit to run the company. Sutskever had once counted both men as friends. In 2019, he’d officiated Brockman’s wedding, in a ceremony at OpenAI’s offices that included a ring bearer in the form of a robotic hand. But as he grew convinced that the company was nearing its long-term goal—creating an artificial intelligence that could rival or surpass the cognitive capabilities of human beings—his doubts about Altman increased. As Sutskever put it to another board member at the time, “I don’t think Sam is the guy who should have his finger on the button.”

At the behest of his fellow board members, Sutskever worked with like-minded colleagues to compile some seventy pages of Slack messages and H.R. documents, accompanied by explanatory text. The material included images taken with a cellphone, apparently to avoid detection on company devices. He sent the final memos to the other board members as disappearing messages, to insure that no one else would ever see them. “He was terrified,” a board member who received them recalled. The memos, which we reviewed, have not previously been disclosed in full. They allege that Altman misrepresented facts to executives and board members, and deceived them about internal safety protocols. One of the memos, about Altman, begins with a list headed “Sam exhibits a consistent pattern of . . .” The first item is “Lying.”

Many technology companies issue vague proclamations about improving the world, then go about maximizing revenue. But the founding premise of OpenAI was that it would have to be different. The founders, who included Altman, Sutskever, Brockman, and Elon Musk, asserted that artificial intelligence could be the most powerful, and potentially dangerous, invention in human history, and that perhaps, given the existential risk, an unusual corporate structure would be required. The firm was established as a nonprofit, whose board had a duty to prioritize the safety of humanity over the company’s success, or even its survival. The C.E.O. had to be a person of uncommon integrity. According to Sutskever, “any person working to build this civilization-altering technology bears a heavy burden and is taking on unprecedented responsibility.” But “the people who end up in these kinds of positions are often a certain kind of person, someone who is interested in power, a politician, someone who likes it.” In one of the memos, he seemed concerned with entrusting the technology to someone who “just tells people what they want to hear.” If OpenAI’s C.E.O. turned out not to be reliable, the board, which had six members, was empowered to fire him.
Some members, including Helen Toner, an A.I.-policy expert, and Tasha McCauley, an entrepreneur, received the memos as a confirmation of what they had already come to believe: Altman’s role entrusted him with the future of humanity, but he could not be trusted.Altman was in Las Vegas, attending a Formula 1 race, when Sutskever invited him to a video call with the board, then read a brief statement explaining that he was no longer an employee of OpenAI. The board, following legal advice, released a public message saying only that Altman had been removed because he “was not consistently candid in his communications.” Many of OpenAI’s investors and executives were shocked. Microsoft, which had invested some thirteen billion dollars in OpenAI, learned of the plan to fire Altman just moments before it happened. “I was very stunned,” Satya Nadella, Microsoft’s C.E.O., later said. “I couldn’t get anything out of anybody.” He spoke with the LinkedIn co-founder Reid Hoffman, an OpenAI investor and a Microsoft board member, who began calling around to determine whether Altman had committed a clear offense. “I didn’t know what the fuck was going on,” Hoffman told us. “We were looking for embezzlement, or sexual harassment, and I just found nothing.”Other business partners were similarly blindsided. When Altman called the investor Ron Conway to say that he’d been fired, Conway held up his phone to Representative Nancy Pelosi, with whom he was having lunch. “You better get out of here really quick,” she told Conway. OpenAI was on the verge of closing a large investment from Thrive, a venture-capital firm founded by Josh Kushner, Jared Kushner’s brother, whom Altman had known for years. The deal would value OpenAI at eighty-six billion dollars and allow many employees to cash out millions in equity. Kushner emerged from a meeting with Rick Rubin, the music producer, to a missed call from Altman. “We just immediately went to war,” Kushner later said.The day that Altman was fired, he flew back to his twenty-seven-million-dollar mansion in San Francisco, which has panoramic views of the bay and once featured a cantilevered infinity pool, and set up what he called a “sort of government-in-exile.” Conway, the Airbnb co-founder Brian Chesky, and the famously aggressive crisis-communications manager Chris Lehane joined, sometimes for hours a day, by video and phone. Some members of Altman’s executive team camped out in the hallways of the house. Lawyers set up in a home office next to his bedroom. During bouts of insomnia, Altman would wander by them in his pajamas. When we spoke with Altman recently, he described the aftermath of his firing as “just this weird fugue.”With the board silent, Altman’s advisers built a public case for his return. Lehane has insisted that the firing was a coup orchestrated by rogue “effective altruists”—adherents of a belief system that focusses on maximizing the well-being of humanity, who had come to see A.I. as an existential threat. (Hoffman told Nadella that the firing might be due to “effective-altruism craziness.”) Lehane—whose reported motto, after Mike Tyson, is “Everyone has a game plan until you punch them in the mouth”—urged Altman to wage an aggressive social-media campaign. Chesky stayed in contact with the tech journalist Kara Swisher, relaying criticism of the board.Altman interrupted his “war room” at six o’clock each evening with a round of Negronis. “You need to chill,” he recalls saying. 
“Whatever’s gonna happen is gonna happen.” But, he added, his phone records show that he was on calls for more than twelve hours a day. At one point, Altman conveyed to Mira Murati, who had given Sutskever material for his memos and was serving as the interim C.E.O. of OpenAI in that period, that his allies were “going all out” and “finding bad things” to damage her reputation, as well as those of others who had moved against him, according to someone with knowledge of the conversation. (Altman does not recall the exchange.)Within hours of the firing, Thrive had put its planned investment on hold and suggested that the deal would be consummated—and employees would thus receive payouts—only if Altman returned. Texts from this period show Altman coördinating closely with Nadella. (“how about: satya and my top priority remains to save openai,” Altman suggested, as the two worked on a statement. Nadella proposed an alternative: “to ensure OpenAI continues to thrive.”) Microsoft soon announced that it would create a competing initiative for Altman and any employees who left OpenAI. A public letter demanding his return circulated at the organization. Some people who hesitated to sign it received imploring calls and messages from colleagues. A majority of OpenAI employees ultimately threatened to leave with Altman.The board was backed into a corner. “Control Z, that’s one option,” Toner said—undo the firing. “Or the other option is the company falls apart.” Even Murati eventually signed the letter. Altman’s allies worked to win over Sutskever. Brockman’s wife, Anna, approached him at the office and pleaded with him to reconsider. “You’re a good person—you can fix this,” she said. Sutskever later explained, in a court deposition, “I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed.” One night, Altman took an Ambien, only to be awakened by his husband, an Australian coder named Oliver Mulherin, who told him that Sutskever was wavering, and that people were telling Altman to speak with the board. “I woke up in this, like, crazy Ambien haze, and I was so disoriented,” Altman told us. “I was, like, I cannot talk to the board right now.”In a series of increasingly tense calls, Altman demanded the resignations of board members who had moved to fire him. “I have to pick up the pieces of their mess while I’m in this crazy cloud of suspicion?” Altman recalled initially thinking, about his return. “I was just, like, Absolutely fucking not.” Eventually, Sutskever, Toner, and McCauley lost their board seats. Adam D’Angelo, a founder of Quora, was the sole original member who remained. As a condition of their exit, the departing members demanded that the allegations against Altman—including that he pitted executives against one another and concealed his financial entanglements—be investigated. They also pressed for a new board that could oversee the outside inquiry with independence. But the two new members, the former Harvard president Lawrence Summers and the former Facebook C.T.O. Bret Taylor, were selected after close conversations with Altman. “would you do this,” Altman texted Nadella. “bret, larry summers, adam as the board and me as ceo and then bret handles the investigation.” (McCauley later testified in a deposition that when Taylor was previously considered for a board seat she’d had concerns about his deference to Altman.)Less than five days after his firing, Altman was reinstated. 
Employees now call this moment “the Blip,” after an incident in the Marvel films in which characters disappear from existence and then return, unchanged, to a world profoundly altered by their absence. But the debate over Altman’s trustworthiness has moved beyond OpenAI’s boardroom. The colleagues who facilitated his ouster accuse him of a degree of deception that is untenable for any executive and dangerous for a leader of such a transformative technology. “We need institutions worthy of the power they wield,” Murati told us. “The board sought feedback, and I shared what I was seeing. Everything I shared was accurate, and I stand behind all of it.” Altman’s allies, on the other hand, have long dismissed the accusations. After the firing, Conway texted Chesky and Lehane demanding a public-relations offensive. “This is REPUTATIONAL TO SAM,” he wrote. He told the Washington Post that Altman had been “mistreated by a rogue board of directors.”OpenAI has since become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of A.I. infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how A.I. is used in immigration enforcement, domestic surveillance, and autonomous weaponry in war zones.Altman has promoted OpenAI’s growth by touting a vision in which, he wrote in a 2024 blog post, “astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” His rhetoric has helped sustain one of the fastest cash burns of any startup in history, relying on partners that have borrowed vast sums. The U.S. economy is increasingly dependent on a few highly leveraged A.I. companies, and many experts—at times including Altman—have warned that the industry is in a bubble. “Someone is going to lose a phenomenal amount of money,” he told reporters last year. If the bubble pops, economic catastrophe may follow. If his most bullish projections prove correct, he may become one of the wealthiest and most powerful people on the planet.In a tense call after Altman’s firing, the board pressed him to acknowledge a pattern of deception. “This is just so fucked up,” he said repeatedly, according to people on the call. “I can’t change my personality.” Altman says that he doesn’t recall the exchange. “It’s possible I meant something like ‘I do try to be a unifying force,’ ” he told us, adding that this trait had enabled him to lead an immensely successful company. He attributed the criticism to a tendency, especially early in his career, “to be too much of a conflict avoider.” But a board member offered a different interpretation of his statement: “What it meant was ‘I have this trait where I lie to people, and I’m not going to stop.’ ” Were the colleagues who fired Altman motivated by alarmism and personal animus, or were they right that he couldn’t be trusted?One morning this winter, we met Altman at OpenAI’s headquarters, in San Francisco, for one of more than a dozen conversations with him for this story. 
The company had recently moved into a pair of eleven-story glass towers, one of which had been occupied by Uber, another tech behemoth, whose co-founder and C.E.O., Travis Kalanick, seemed like an unstoppable prodigy—until he resigned, in 2017, under pressure from investors, who cited concerns about his ethics. (Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI’s ChatGPT “to get to the edge of what’s known in quantum physics.”)An employee gave us a tour of the office. In an airy space full of communal tables, there was an animated digital painting of the computer scientist Alan Turing; its eyes tracked us as we passed. The installation is a winking reference to the Turing test, the 1950 thought experiment about whether a machine can credibly imitate a person. (In a 2025 study, ChatGPT passed the test more reliably than actual humans did.) Typically, you can interact with the painting. But the sound had been disabled, our guide told us, because it wouldn’t stop eavesdropping on employees and then butting into their conversations. Elsewhere in the office, plaques, brochures, and merchandise displayed the words “Feel the AGI.” The phrase was originally associated with Sutskever, who used it to caution his colleagues about the risks of artificial general intelligence—the threshold at which machines match human cognitive capacities. After the Blip, it became a cheerful slogan hailing a superabundant future.We met Altman in a generic-looking conference room on the eighth floor. “People used to tell me about decision fatigue, and I didn’t get it,” Altman told us. “Now I wear a gray sweater and jeans every day, and even picking which gray sweater out of my closet—I’m, like, I wish I didn’t have to think about that.” Altman has a youthful appearance—he is slender, with wide-set blue eyes and tousled hair—but he is now forty, and he and Mulherin have a one-year-old son, delivered by a surrogate. “I’m sure, like, being President of the United States would be a much more stressful job, but of all the jobs that I think I could reasonably do, this is the most stressful one I can imagine,” he said, making eye contact with one of us, then with the other. “The way that I’ve explained this to my friends is: ‘This was the most fun job in the world until the day we launched ChatGPT.’ We were making these massive scientific discoveries—I think we did the most important piece of scientific discovery in, I don’t know, many decades.” He cast his eyes down. “And then, since the launch of ChatGPT, the decisions have gotten very difficult.”Altman grew up in Clayton, Missouri, an affluent suburb of St. Louis, as the eldest of four siblings. His mother, Connie Gibstine, is a dermatologist; his father, Jerry Altman, was a real-estate broker and a housing activist. Altman attended a Reform synagogue and a private preparatory school that he has described as “not the kind of place where you would really stand up and talk about being gay.” In general, though, the family’s wealthy suburban circles were relatively liberal. When Altman was sixteen or seventeen, he said, he was out late in a predominantly gay neighborhood in St. Louis and was subjected to a brutal physical attack and homophobic slurs. Altman did not report the incident, and he was reluctant to give us more details on the record, saying that a fuller telling would “make me look like I’m manipulative or playing for sympathy.” He dismissed the idea that this event, and his sexuality broadly, was significant to his identity. 
But, he said, “probably that has, like, some deep-seated psychological thing—that I think I’m over but I’m not—about not wanting more conflict.”Altman’s attitude in childhood, his brother told The New Yorker, in 2016, was “I have to win, and I’m in charge of everything.” He went to Stanford, where he attended regular off-campus poker games. “I think I learned more about life and business from that than I learned in college,” he later said.All Stanford students are ambitious, but many of the most enterprising among them drop out. The summer after his sophomore year, Altman went to Massachusetts to join the inaugural batch of entrepreneurs at Y Combinator, a “startup incubator” co-founded by the renowned software engineer Paul Graham. Each entrant joined Y.C. with an idea for a startup. (Altman’s batch mates included founders of Reddit and Twitch.) Altman’s project, eventually called Loopt, was a proto social network that used the locations of people’s flip phones to tell their friends where they were. The company reflected his drive, and a tendency to interpret ambiguous situations to his advantage. Federal rules required that phone carriers be able to track the locations of phones for emergency services; Altman struck deals with carriers to tap these capabilities for the company’s use.“These numbers indicate that somebody here has the soul of a poet.”Most of Altman’s employees at Loopt liked him, but some said that they were struck by his tendency to exaggerate, even about trivial things. One recalled Altman bragging widely that he was a champion Ping-Pong player—“like, Missouri high-school Ping-Pong champ”—and then proving to be one of the worst players in the office. (Altman says that he was probably joking.) As Mark Jacobstein, an older Loopt employee who was asked by investors to act as Altman’s “babysitter,” later told Keach Hagey, for “The Optimist,” a biography of Altman, “There’s a blurring between ‘I think I can maybe accomplish this thing’ and ‘I have already accomplished this thing’ that in its most toxic form leads to Theranos,” Elizabeth Holmes’s fraudulent startup.Groups of senior employees, concerned with Altman’s leadership and lack of transparency, asked Loopt’s board on two occasions to fire him as C.E.O., according to Hagey. But Altman inspired fierce loyalty, too. A former employee was told that a board member responded, “This is Sam’s company, get back to fucking work.” (A board member denied that the attempts to remove Altman as C.E.O. were serious.) Loopt struggled to gain users, and in 2012 it was acquired by a fintech company. The acquisition had been arranged, according to a person familiar with the deal, largely to help Altman save face. Still, by the time Graham retired from Y.C., in 2014, he had recruited Altman to be his successor as president. “I asked Sam in our kitchen,” Graham told The New Yorker. “And he smiled, like, it worked. I had never seen an uncontrolled smile from Sam. It was like when you throw a ball of paper into the wastebasket across the room—that smile.”Altman’s new role made him, at twenty-eight, a kingmaker. His job was to select the hungriest and most promising entrepreneurs, connect them with the best coders and investors, and help them develop their startups into industry-defining monopolies (while Y.C. took a six- or seven-per-cent cut). Altman oversaw a period of aggressive expansion, growing Y.C.’s roster of startups from dozens to hundreds. But several Silicon Valley investors came to believe that his loyalties were divided. 
An investor told us that Altman was known to “make personal investments, selectively, into the best companies, blocking outside investors.” (Altman denies blocking anyone.) Altman had worked as a “scout” for the investment fund Sequoia Capital, as part of a program that involved investing in early-stage startups and taking a small cut of any profits. When Altman made an angel investment in Stripe, a financial-services startup, he insisted on a bigger portion, galling Sequoia’s partners, a person familiar with the deal said. The person added, “It’s a policy of ‘Sam first.’ ” Altman is an investor in, by his own estimate, some four hundred other companies. (Altman denies this characterization of the Stripe deal. Around 2010, he made an initial investment of fifteen thousand dollars in Stripe, a two-per-cent share. The company is now valued at more than a hundred and fifty billion dollars.)By 2018, several Y.C. partners were so frustrated with Altman’s behavior that they approached Graham to complain. Graham and Jessica Livingston, his wife and a Y.C. founder, apparently had a frank conversation with Altman. Afterward, Graham started telling people that although Altman had agreed to leave the company, he was resisting in practice. Altman told some Y.C. partners that he would resign as president but become chairman instead. In May, 2019, a blog post announcing that Y.C. had a new president came with an asterisk: “Sam is transitioning to Chairman of YC.” A few months later, the post was edited to read “Sam Altman stepped away from any formal position at YC”; after that, the phrase was removed entirely. Nevertheless, as recently as 2021, a Securities and Exchange Commission filing listed Altman as the chairman of Y Combinator. (Altman says that he wasn’t aware of this until much later.)Altman has maintained over the years, both in public and in recent depositions, that he was never fired from Y.C., and he told us that he did not resist leaving. Graham has tweeted that “we didn’t want him to leave, just to choose” between Y.C. and OpenAI. In a statement, Graham told us, “We didn’t have the legal power to fire anyone. All we could do was apply moral pressure.” In private, though, he has been unambiguous that Altman was removed because of Y.C. partners’ mistrust. This account of Altman’s time at Y Combinator is based on discussions with several Y.C. founders and partners, in addition to contemporaneous materials, all of which indicate that the parting was not entirely mutual. On one occasion, Graham told Y.C. colleagues that, prior to his removal, “Sam had been lying to us all the time.”In May, 2015, Altman e-mailed Elon Musk, then the hundredth-richest person in the world. Like many prominent Silicon Valley entrepreneurs, Musk was preoccupied by an array of threats that he considered existentially urgent but which would have struck most people as far-fetched hypotheticals. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.”Altman had generally been a techno-optimist, but his rhetoric about A.I. soon turned apocalyptic. In public, and in his private correspondence with Musk and others, he warned that the technology should not be dominated by a profit-seeking mega-corporation. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote to Musk. 
“If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Picking up on the analogy to nuclear weapons, he proposed a “Manhattan Project for AI.” He outlined the overarching principles that such an organization would have—“safety should be a first-class requirement”; “obviously we’d comply with/aggressively support all regulation”—and he and Musk settled on a name: OpenAI.Unlike the original Manhattan Project, a government initiative that led to the creation of the atom bomb, OpenAI would be privately funded, at least at first. Altman predicted that an artificial superintelligence—a theoretical threshold beyond even A.G.I., at which machines would fully eclipse the capabilities of the human mind—would eventually create enough economic benefits to “capture the light cone of all future value in the universe.” But he also warned of existential danger. At some point, the national-security implications could grow so dire that the U.S. government would have to take control of OpenAI, perhaps by nationalizing it and moving its operations to a secure bunker in the desert. By late 2015, Musk was persuaded. “We should say that we are starting with a $1B funding commitment,” he wrote. “I will cover whatever anyone else doesn’t provide.”Altman housed OpenAI in Y Combinator’s nonprofit arm, framing it as an internal philanthropic project. He gave OpenAI recruits Y.C. stock and moved donations through Y.C. accounts. At one point, the lab was supported by a Y.C. fund in which he held a personal stake. (Altman later described this stake as insignificant. He told us that the Y.C. stock he gave to recruits was his own.)The Manhattan Project analogy applied to employee recruitment, too. Like nuclear-fission research, machine learning was a small scientific field with epochal implications which was dominated by a cadre of eccentric geniuses. Musk and Altman, along with Brockman, who joined from Stripe, were convinced that there were only a few computer scientists alive capable of making the required breakthroughs. Google had a huge cash advantage and a multiyear head start. “We are outmanned and outgunned by a ridiculous margin,” Musk later wrote in an e-mail. But “if we are able to attract the most talented people over time and our direction is correctly aligned, then OpenAI will prevail.”A top recruiting target was Sutskever, an intense and introverted researcher who was often called the most gifted A.I. scientist of his generation. Sutskever, who was born in the Soviet Union in 1986, has a receding hairline, dark eyes, and a habit of pausing, unblinking, while choosing his words. Another target was Dario Amodei, a biophysicist and a font of frenetic energy who has a tendency to nervously twist his black hair, and responds to one-line e-mails with multi-paragraph essays. Both had lucrative jobs elsewhere, but Altman lavished them with attention. He later joked, “I stalked Ilya.”Musk was the bigger name, but Altman had the smoother touch. He e-mailed Amodei, and they set up a one-on-one dinner at an Indian restaurant. (Altman: “fuck my uber got in a crash! running about 10 late.” Amodei: “Wow, hope you’re ok.”) Like many A.I. 
researchers, Amodei believed that the technology should be built only if it was shown to be “aligned” with human values, meaning that it would act in accordance with what people wanted without making a potentially fatal error—say, following an instruction to clean up the environment by eliminating its greatest polluter, the human race. Altman was reassuring, mirroring these safety concerns.Amodei, who later joined the company, took detailed notes on Altman and Brockman’s behavior for years, under the heading “My Experience with OpenAI” (subheading: “Private: Do Not Share”). A collection of more than two hundred pages of documents related to Amodei, including those notes and internal e-mails and memos, has been circulated by colleagues in Silicon Valley but never before disclosed publicly. In his notes, Amodei wrote that Altman’s goal was to build “an AI lab that would be focused on safety (‘maybe not right away, but as soon as it can be’).”In December, 2015, hours before OpenAI was publicly announced, Altman e-mailed Musk about a rumor that Google was “going to give everyone in openAI massive counteroffers tomorrow to try to kill it.” Musk replied, “Has Ilya come back with a solid yes?” Altman assured him that Sutskever was holding firm. Google offered Sutskever six million dollars a year, which OpenAI couldn’t come close to matching. But, Altman boasted, “they unfortunately dont have ‘do the right thing’ on their side.”“I’m just saying, if we tear up the pillows and rip up the mattress, it might make our place look more lived in.”Musk provided some office space for OpenAI in a former suitcase factory in the Mission District of San Francisco. The pitch to employees, Sutskever told us, was “You’re going to save the world.”If everything went right, the OpenAI founders believed, artificial intelligence could usher in a post-scarcity utopia, automating grunt work, curing cancer, and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an A.I. model could outmaneuver its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal. Not everyone believed this, to say the least, but Altman repeatedly affirmed that he did. He wrote on his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal . . . wipes us out.” OpenAI’s founders vowed not to privilege speed over safety, and the organization’s articles of incorporation made benefitting humanity a legally binding duty. If A.I. was going to be the most powerful technology in history, it followed that any individual with sole control over it stood to become uniquely powerful—a scenario that the founders referred to as an “AGI dictatorship.”Altman told early recruits that OpenAI would remain a pure nonprofit, and programmers took significant pay cuts to work there. 
The company accepted charitable grants, including thirty million dollars from what was then called Open Philanthropy, a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor.Brockman and Sutskever managed OpenAI’s daily operations, while Musk and Altman, still busy with their other jobs, stopped by around once a week. By September, 2017, though, Musk had grown impatient. During discussions about whether to reconstitute OpenAI as a for-profit company, he demanded majority control. Altman’s replies varied depending on the context. His main consistent demand seems to have been that if OpenAI were reorganized under the control of a C.E.O. that job should go to him. Sutskever seemed uncomfortable with this idea. He sent Musk and Altman a long, plaintive e-mail on behalf of himself and Brockman, with the subject line “Honest Thoughts.” He wrote, “The goal of OpenAI is to make the future good and to avoid an AGI dictatorship.” He continued, addressing Musk, “So it is a bad idea to create a structure where you could become a dictator.” He relayed similar concerns to Altman: “We don’t understand why the CEO title is so important to you. Your stated reasons have changed, and it’s hard to really understand what’s driving it.”“Guys, I’ve had enough,” Musk replied. “Either go do something on your own or continue with OpenAI as a nonprofit”—otherwise “I’m just being a fool who is essentially providing free funding for you to create a startup.” He quit, acrimoniously, five months later. (In 2023, he founded a for-profit competitor called xAI. The following year, he sued Altman and OpenAI for fraud and breach of charitable trust, alleging that he had been “assiduously manipulated” by “Altman’s long con”—that Altman had preyed on his concerns about the dangers of A.I. in order to separate him from his money. The suit, which OpenAI has vigorously contested, is ongoing.)After Musk’s departure, Amodei and other researchers chafed against the leadership of Brockman, whom some considered an abrasive operator, and of Sutskever, who was generally viewed as principled but disorganized. In the process of becoming C.E.O., Altman seems to have made different promises to different factions at the company. He assured some researchers that Brockman’s managerial authority would be diminished. But, unbeknownst to them, he also struck a secret handshake deal with Brockman and Sutskever: Altman would get the C.E.O. title; in exchange, he agreed to resign if the other two deemed it necessary. (He disputed this characterization, saying he took the C.E.O. role only because he was asked to. All three men confirmed that the pact existed, though Brockman said that it was informal. “He unilaterally told us that he’d step down if we ever both asked him to,” he told us. “We objected to this idea, but he said it was important to him. It was purely altruistic.”) Later, the board was alarmed to learn that its C.E.O. had essentially appointed his own shadow board.Internal records show that the founders had private doubts about the nonprofit structure as early as 2017. That year, after Musk tried to take control, Brockman wrote in a diary entry, “cannot say that we are committed to the non-profit . . . if three months later we’re doing b-corp then it was a lie.” Amodei, in one of his early notes, recalled pressing Brockman on his priorities and Brockman replying that he wanted “money and power.” Brockman disputes this. 
His diary entries from this time suggest conflicting instincts. One reads, “Happy to not become rich on this, so long as no one else is.” In another, he asks, “So what do I *really* want?” Among his answers is “Financially what will take me to $1B.”In 2017, Sutskever was in the office when he read a paper that Google researchers had just published, proposing “a new simple network architecture, the Transformer.” He jumped out of his chair, ran down the hall, and told his fellow-researchers, “Stop everything you’re doing. This is it.” The Transformer, Sutskever saw, was an innovation that might enable OpenAI to train vastly more sophisticated models. Out of this discovery came the first generative pre-trained transformer—the seed of what would become ChatGPT.As the technology became increasingly powerful, we learned, about a dozen of OpenAI’s top engineers held a series of secret meetings to discuss whether OpenAI’s founders, including Brockman and Altman, could be trusted. At one, an employee was reminded of a sketch by the British comedy duo Mitchell and Webb, in which a Nazi soldier on the Eastern Front, in a moment of clarity, asks, “Are we the baddies?”By 2018, Amodei had started questioning the founders’ motives more openly. “Everything was a rotating set of schemes to raise money,” he later wrote in his notes. “I felt like what OpenAI needed was a clear statement of what it would do, what it would not do, and how its existence would make the world better.” OpenAI already had a mission statement: “To ensure that artificial general intelligence benefits all of humanity.” But it wasn’t clear to Amodei what this meant to the executives, if it meant anything at all. In early 2018, Amodei has said, he started drafting a charter for the company and, in weeks of conversations with Altman and Brockman, advocated for its most radical clause: if a “value-aligned, safety-conscious project” came close to building an A.G.I. before OpenAI did, the company would “stop competing with and start assisting this project.” According to the “merge and assist” clause, as it was called, if, say, Google’s researchers figured out how to build a safe A.G.I. first, then OpenAI could wind itself down and donate its resources to Google. By any normal corporate logic, this was an insane thing to promise. But OpenAI was not supposed to be a normal company.That premise was tested in the spring of 2019, when OpenAI was negotiating a billion-dollar investment from Microsoft. Although Amodei, who was leading the company’s safety team, had helped to pitch the deal to Bill Gates, many people on the team were anxious about it, fearing that Microsoft would insert provisions that overrode OpenAI’s ethical commitments. Amodei presented Altman with a ranked list of safety demands, placing the preservation of the merge-and-assist clause at the very top. Altman agreed to that demand, but in June, as the deal was closing, Amodei discovered that a provision granting Microsoft the power to block OpenAI from any mergers had been added. “Eighty per cent of the charter was just betrayed,” Amodei recalled. He confronted Altman, who denied that the provision existed. Amodei read it aloud, pointing to the text, and ultimately forced another colleague to confirm its existence to Altman directly. (Altman doesn’t remember this.) 
Amodei’s notes describe escalating tense encounters, including one, months later, in which Altman summoned him and his sister, Daniela, who worked in safety and policy at the company, to tell them that he had it on “good authority” from a senior executive that they had been plotting a coup. Daniela, the notes continue, “lost it,” and brought in that executive, who denied having said anything. As one person briefed on the exchange recalled, Altman then denied having made the claim. “I didn’t even say that,” he said. “You just said that,” Daniela replied. (Altman said that this was not quite his recollection, and that he had accused the Amodeis only of “political behavior.”) In 2020, Amodei, Daniela, and other colleagues left to found Anthropic, which is now one of OpenAI’s chief rivals.Altman continued touting OpenAI’s commitment to safety, especially when potential recruits were within earshot. In late 2022, four computer scientists published a paper motivated in part by concerns about “deceptive alignment,” in which sufficiently advanced models might pretend to behave well during testing and then, once deployed, pursue their own goals. (It’s one of several A.I. scenarios that sound like science fiction—but, under certain experimental conditions, it’s already happening.) Weeks after the paper was published, one of its authors, a Ph.D. student at the University of California, Berkeley, got an e-mail from Altman, who said that he was increasingly worried about the threat of unaligned A.I. He added that he was thinking of committing a billion dollars to the issue, which many A.I. experts considered the most important unsolved problem in the world, potentially by endowing a prize to incentivize researchers around the world to study it. Although the graduate student had “heard vague rumors about Sam being slippery,” he told us, Altman’s show of commitment won him over. He took an academic leave to join OpenAI.But, in the course of several meetings in the spring of 2023, Altman seemed to waver. He stopped talking about endowing a prize. Instead, he advocated for establishing an in-house “superalignment team.” An official announcement, referring to the company’s reserves of computing power, pledged that the team would get “20% of the compute we’ve secured to date”—a resource potentially worth more than a billion dollars. The effort was necessary, according to the announcement, because, if alignment remained unsolved, A.G.I. might “lead to the disempowerment of humanity or even human extinction.” Jan Leike, who was appointed to lead the team with Sutskever, told us, “It was a pretty effective retention tool.”The twenty-per-cent commitment evaporated, however. Four people who worked on or closely with the team said that the actual resources were between one and two per cent of the company’s compute. Furthermore, a researcher on the team said, “most of the superalignment compute was actually on the oldest cluster with the worst chips.” The researchers believed that superior hardware was being reserved for profit-generating activities. (OpenAI disputes this.) Leike complained to Murati, then the company’s chief technology officer, but she told him to stop pressing the point—the commitment had never been realistic.“She skippidy-boop-bee-bop-doo-woppity-wopped right out of my life.”Around this time, a former employee told us, Sutskever “was getting super safety-pilled.” In the early days of OpenAI, he had considered concerns about catastrophic risk legitimate but remote. 
Now, as he came to believe that A.G.I. was imminent, his worries grew more acute. There was an all-hands meeting, the former employee continued, “where Ilya gets up and he’s, like, Hey, everyone, there’s going to be a point in the next few years where basically everyone at this company has to switch to working on safety, or else we’re fucked.” But the superalignment team was dissolved the following year, without completing its mission.By then, internal messages show, executives and board members had come to believe that Altman’s omissions and deceptions might have ramifications for the safety of OpenAI’s products. In a meeting in December, 2022, Altman assured board members that a variety of features in a forthcoming model, GPT-4, had been approved by a safety panel. Toner, the board member and A.I.-policy expert, requested documentation. She learned that the most controversial features—one that allowed users to “fine-tune” the model for specific tasks, and another that deployed it as a personal assistant—had not been approved. As McCauley, the board member and entrepreneur, left the meeting, an employee pulled her aside and asked if she knew about “the breach” in India. Altman, during many hours of briefing with the board, had neglected to mention that Microsoft had released an early version of ChatGPT in India without completing a required safety review. “It just was kind of completely ignored,” Jacob Hilton, an OpenAI researcher at the time, said.Although these lapses did not cause security crises, Carroll Wainwright, another researcher, said that they were part of a “continual slide toward emphasizing products over safety.” After the release of GPT-4, Leike e-mailed members of the board. “OpenAI has been going off the rails on its mission,” he wrote. “We are prioritizing the product and revenue above all else, followed by AI capabilities, research and scaling, with alignment and safety coming third.” He continued, “Other companies like Google are learning that they should deploy faster and ignore safety problems.”McCauley, in an e-mail to her fellow-members, wrote, “I think we’re definitely at a point where the board should be increasing its level of scrutiny.” The board members tried to confront what they viewed as a mounting problem, but they were outmatched. “You had a bunch of J.V. people who’ve never done anything, to be blunt,” Sue Yoon, a former board member, said. In 2023, the company was preparing to release its GPT-4 Turbo model. As Sutskever details in the memos, Altman apparently told Murati that the model didn’t need safety approval, citing the company’s general counsel, Jason Kwon. But when she asked Kwon, over Slack, he replied, “ugh . . . confused where sam got that impression.” (A representative for OpenAI, where Kwon remains an executive, said that the matter was “not a big deal.”)Soon afterward, the board made its decision to fire Altman—and then the world watched as Altman reversed it. A version of the OpenAI charter is still on the organization’s website. But people familiar with OpenAI’s governing documents said that it has been diluted to the point of meaninglessness. Last June, on his personal blog, Altman wrote, referring to artificial superintelligence, “We are past the event horizon; the takeoff has started.” This was, according to the charter, arguably the moment when OpenAI might stop competing with other companies and start working with them. 
But in that post, called “The Gentle Singularity,” he adopted a new tone, replacing existential terror with ebullient optimism. “We’ll all get better stuff,” he wrote. “We will build ever-more-wonderful things for each other.” He acknowledged that the alignment problem remained unsolved, but he redefined it—rather than being a deadly threat, it was an inconvenience, like the algorithms that tempt us to waste time scrolling on Instagram.Altman is often described, either with reverence or with suspicion, as the greatest pitchman of his generation. Steve Jobs, one of his idols, was said to project a “reality-distortion field”—an unassailable confidence that the world would conform to his vision. But even Jobs never told his customers that if they didn’t buy his brand of MP3 player everyone they loved would die. When Altman was twenty-three, in 2008, Graham, his mentor, wrote, “You could parachute him into an island full of cannibals and come back in 5 years and he’d be the king.” This judgment was based not on Altman’s track record, which was modest, but on his will to prevail, which Graham considered almost ungovernable. When advised not to include Y.C. alumni on a list of the world’s top startup founders, Graham put Altman on it anyway. “Sam Altman can’t be stopped by such flimsy rules,” he wrote.Graham meant this as a compliment. But some of Altman’s closest colleagues came to have a different view of this quality. After Sutskever grew more distressed about A.I. safety, he compiled the memos about Altman and Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman—“His words were almost certainly bullshit”—and wistful about what he says was a failure to correct OpenAI’s course.Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.”We have interviewed more than a hundred people with firsthand knowledge of how Altman conducts business: current and former OpenAI employees and board members; guests and staffers at Altman’s various houses; his colleagues and competitors; his friends and enemies and several people who, given the mercenary culture of Silicon Valley, have been both. (OpenAI has an agreement with Condé Nast, the owner of The New Yorker, which allows OpenAI to display its content in search results for a limited term.)Some people defended Altman’s business acumen and dismissed his rivals, especially Sutskever and Amodei, as failed aspirants to his throne. Others portrayed them as gullible, absent-minded scientists, or as hysterical “doomers,” gripped by the delusion that the software they were building would somehow come alive and kill them. 
Yoon, the former board member, argued that Altman was “not this Machiavellian villain” but merely, to the point of “fecklessness,” able to convince himself of the shifting realities of his sales pitches. “He’s too caught up in his own self-belief,” she said. “So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.”Yet most of the people we spoke to shared the judgment of Sutskever and Amodei: Altman has a relentless will to power that, even among industrialists who put their names on spaceships, sets him apart. “He’s unconstrained by truth,” the board member told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”The board member was not the only person who, unprompted, used the word “sociopathic.” One of Altman’s batch mates in the first Y Combinator cohort was Aaron Swartz, a brilliant but troubled coder who died by suicide in 2013 and is now remembered in many tech circles as something of a sage. Not long before his death, Swartz expressed concerns about Altman to several friends. “You need to understand that Sam can never be trusted,” he told one. “He is a sociopath. He would do anything.” Multiple senior executives at Microsoft said that, despite Nadella’s long-standing loyalty, the company’s relationship with Altman has become fraught. “He has misrepresented, distorted, renegotiated, reneged on agreements,” one said. Earlier this year, OpenAI reaffirmed Microsoft as the exclusive cloud provider for its “stateless”—or memoryless—models. That day, it announced a fifty-billion-dollar deal making Amazon the exclusive reseller of its enterprise platform for A.I. agents. While reselling is permitted, Microsoft executives argue OpenAI’s plan could collide with Microsoft’s exclusivity. (OpenAI maintains that the Amazon deal will not violate the earlier contract; a Microsoft representative said the company is “confident that OpenAI understands and respects” its legal obligations.) The senior executive at Microsoft said, of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”Altman is not a technical savant—according to many in his orbit, he lacks extensive expertise in coding or machine learning. Multiple engineers recalled him misusing or confusing basic technical terms. He built OpenAI, in large part, by harnessing other people’s money and technical talent. This doesn’t make him unique. It makes him a businessman. More remarkable is his ability to convince skittish engineers, investors, and a tech-skeptical public that their priorities, even when mutually exclusive, are also his priorities. When such people have tried to hinder his next move, he has often found the words to neutralize them, at least temporarily; usually, by the time they lose patience with him, he’s got what he needs. “He sets up structures that, on paper, constrain him in the future,” Wainwright, the former OpenAI researcher, said. “But then, when the future comes and it comes time to be constrained, he does away with whatever the structure was.”“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive who has worked with Altman said. 
“He’s just next level.” A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win, much the way a grandmaster will beat a child at chess. Watching Altman outmaneuver the people around him during the Blip, the executive continued, had been like watching “an A.G.I. breaking out of the box.”In the days after his firing, Altman fought to avoid any outside investigation of the claims against him. He told two people that he worried even the existence of an investigation would make him look guilty. (Altman denies this.) But, after the resigning board members made their departure conditional on there being an independent inquiry, Altman acceded to a “review” of “recent events.” The two new board members insisted that they control that review, according to people involved in the negotiations. Summers, with his network of political and Wall Street connections, seemed to lend it credibility. (Last November, after the disclosure of e-mails in which Summers sought Jeffrey Epstein’s advice while pursuing a romantic relationship with a young protégée, he resigned from the board.) OpenAI enlisted WilmerHale, the distinguished law firm responsible for the internal investigations of Enron and WorldCom, to conduct the review.Six people close to the inquiry alleged that it seemed designed to limit transparency. Some of them said that the investigators initially did not contact important figures at the company. An employee reached out to Summers and Taylor to complain. “They were just interested in the narrow range of what happened during the board drama, and not the history of his integrity,” the employee recalled of his interview with investigators. Others were uncomfortable sharing concerns about Altman because they felt there was not a sufficient effort to insure anonymity. “Everything pointed to the fact that they wanted to find the outcome, which is to acquit him,” the employee said. (Some of the lawyers involved defended the process, saying, “It was an independent, careful, comprehensive review that followed the facts wherever they led.” Taylor also said that the review was “thorough and independent.”)Corporate investigations aim to confer legitimacy. At private companies, their findings are sometimes not written down—this can be a way to limit liability. But in cases involving public scandals there is often a greater expectation of transparency. Before Kalanick left Uber, in 2017, its board hired an outside firm, which released a thirteen-page summary to the public. Given OpenAI’s 501(c)(3) status and the high-profile nature of the firing, many executives there expected to see extensive findings. In March, 2024, however, OpenAI announced that it would clear Altman but released no report. The company provided, on its website, some eight hundred words acknowledging a “breakdown in trust.”People involved in the investigation said that no report was released because none was written. Instead, the findings were limited to oral briefings, shared with Summers and Taylor. “The review did not conclude that Sam was a George Washington cherry tree of integrity,” one of the people close to the inquiry said. But the investigation appears not to have centered the questions of integrity behind Altman’s firing, devoting much of its focus to a hunt for clear criminality; on that basis, it concluded that he could remain as C.E.O. 
Shortly thereafter, Altman, who had been kicked off the board when he was fired, rejoined it. The decision not to put the report in writing was made in part on the advice of Summers’s and Taylor’s personal attorneys, the person close to the inquiry told us. (Summers declined to comment on the record. Taylor said that, in light of the oral briefings, there had been “no need for a formal written report.”)Many former and current OpenAI employees told us that they were shocked by the lack of disclosure. Altman said he believed that all the board members who joined in the aftermath of his reinstatement received the oral briefings. “That’s an absolute, outright lie,” a person with direct knowledge of the situation said. Some board members told us that ongoing questions about the integrity of the report could prompt, as one put it, “a need for another investigation.”The absence of a written record helped minimize the allegations. So, increasingly, did Altman’s stature in Silicon Valley. Multiple prominent investors who have worked with Altman told us that he has a reputation for freezing out investors if they back OpenAI’s competitors. “If they invest in something that he doesn’t like, they won’t get access to other things,” one of them said. Another source of Altman’s power is his vast list of investments, which at times extends to his personal life. He has financial entanglements with numerous former romantic partners: as a fund co-manager, a lead investor, or a frequent co-investor. This is hardly unusual. Many of Silicon Valley’s straight executives do the same thing with their romantic and sexual partners. (“You have to,” one prominent C.E.O. told us.) “I’ve obviously invested with some exes after the fact. And I think that’s, like, totally fine,” Altman said. But the dynamic affords an extraordinary level of control. “It creates a very, very high dependence, essentially,” a person close to Altman said. “Oftentimes, it’s a lifetime dependence.”Even former colleagues can be affected. Murati left OpenAI in 2024 and began building her own A.I. startup. Josh Kushner, the close Altman ally, called her. He praised her leadership, then made what seemed to be a veiled threat, noting that he was “concerned about” her “reputation” and that former colleagues now viewed her as an “enemy.” (Kushner, through a representative, said that this account did not “convey full context”; Altman said that he was unaware of the call.)At the beginning of his tenure as C.E.O., Altman had announced that OpenAI would create a “capped profit” company, which would be owned by the nonprofit. This byzantine corporate structure apparently did not exist until Altman devised it. In the midst of the conversion, a board member named Holden Karnofsky objected to it, arguing that the nonprofit was being severely undervalued. “I can’t do that in good faith,” Karnofsky, who is Amodei’s brother-in-law, said. According to contemporaneous notes, he voted against it. However, after an attorney for the board said that his dissent “might be a flag to investigate further” the legitimacy of the new structure, his vote was recorded as an abstention, apparently without his consent—a potential falsification of business records. (OpenAI told us that several employees recall Karnofsky abstaining, and provided the minutes from the meeting recording his vote as an abstention.)Last October, OpenAI “recapitalized” as a for-profit entity. 
The firm touts its associated nonprofit, now called the OpenAI Foundation, as one of the “best resourced” in history. But it is now a twenty-six-per-cent stakeholder of the company, and its board members are also, with one exception, members of the for-profit board.During congressional testimony, Altman was asked if he made “a lot of money.” He replied, “I have no equity in OpenAI . . . I’m doing this because I love it”—a careful answer, given his indirect equity through the Y.C. fund. This is still technically true. But several people, including Altman, indicated to us that it could soon change. “Investors are, like, I need to know you’re gonna stick with this when times get hard,” Altman said, but added that there was no “active discussion” about it. According to a legal deposition, Brockman seems to own a stake in the company that is worth about twenty billion dollars. Altman’s share would presumably be worth more. Still, he told us that he was not primarily motivated by wealth. A former employee recalls him saying, “I don’t care about money. I care more about power.”In 2023, Altman married Mulherin in a small ceremony at a home they own in Hawaii. (They’d met nine years prior, late at night in Peter Thiel’s hot tub.) They have hosted a range of guests at the property, and those we spoke with reported witnessing nothing more remarkable than the standard diversions of the very wealthy: meals prepared by a private chef, boat rides at golden hour. One New Year’s party was “Survivor”-themed; a photograph shows a number of shirtless, smiling men, and also Jeff Probst, the real host of “Survivor.” Altman has also hosted smaller groups of friends at his properties, gatherings that have included, in at least one instance, a spirited game of strip poker. (A photograph of the event, which did not include Altman, leaves unclear who won, but at least three men clearly lost.) We spoke to many of Altman’s former guests who suggested only that he is a generous host.Nevertheless, rumors about Altman’s personal life have been exploited and distorted by competitors. Ruthless business rivalries are nothing new, but the competition within the A.I. industry has become extraordinarily cutthroat. (“Shakespearean” was the word an OpenAI executive used to describe it to us, adding, “The normal rules of the game sort of don’t apply anymore.”) Intermediaries directly connected to, and in at least one case compensated by, Musk have circulated dozens of pages of detailed opposition research about Altman. They reflect extensive surveillance, documenting shell companies associated with him, the personal contact information of close associates, and even interviews about a purported sex worker, conducted at gay bars. One of the Musk intermediaries claimed that Altman’s flights and the parties he attended were being tracked. Altman told us, “I don’t think anyone has had more private investigators hired against them.”Extreme claims have circulated. The right-wing broadcaster Tucker Carlson suggested, without any apparent proof, that Altman was involved in the death of a whistle-blower. This claim and others have been amplified by rivals. Altman’s sister, Annie, claimed in a lawsuit, and in interviews with us, that he sexually abused her for years, beginning when she was three and he was twelve. 
(We could not substantiate Annie’s account, which Altman has denied and his brothers and mother have called “utterly untrue” and a source of “immense pain to our entire family.” In interviews that the journalist Karen Hao conducted for her book, “Empire of AI,” Annie suggested that memories of abuse were recovered during flashbacks in adulthood.)

Multiple people working within rival companies and investment firms insinuated to us that Altman sexually pursues minors—a narrative persistent in Silicon Valley which appears to be untrue. We spent months looking into the matter, conducting dozens of interviews, and could find no evidence to support it. “This is disgusting behavior from a competitor that I assume is part of an attempt at tainting the jury in our upcoming cases,” Altman told us. “As ridiculous as this is to have to say, any claims about me having sex with a minor, hiring sex workers, or being involved in a murder are completely untrue.” He added that he was “sort of grateful” that we had spent months “so aggressively trying to look into this.”

Altman has acknowledged dating younger men of legal age. We spoke to several of his partners, who told us that they did not find this problematic. Yet the opposition dossiers from Musk intermediaries spin it as a line of attack. (The dossiers include salacious and unsubstantiated references to a “Twink Army” and “Sugar Daddy’s Sexual Habits.”) “I think there’s a lot of homophobia that gets pushed,” Altman said. Swisher, the tech journalist, agreed. “All these rich guys do wild stuff, wilder than anything I’ve been told about Sam,” she told us. “But he’s a gay guy in San Francisco,” she added, “so that gets weaponized.”

For a decade, social-media executives promised that they could change the world with little or no downside. They dismissed the lawmakers who wanted to slow them down as mere Luddites, eventually earning bipartisan derision. Altman, by contrast, came across as refreshingly conscientious. Rather than warding off regulation, he practically begged for it. Testifying before the Senate Judiciary Committee in 2023, he proposed a new federal agency to oversee advanced A.I. models. “If this technology goes wrong, it can go quite wrong,” he said. Senator John Kennedy, of Louisiana, known for his cantankerous exchanges with tech C.E.O.s, seemed charmed, resting his face on his hand and suggesting that perhaps Altman should enforce the rules himself.

But, as Altman publicly welcomed regulation, he quietly lobbied against it. In 2022 and 2023, according to Time, OpenAI successfully pressed to dilute a European Union effort that would have subjected large A.I. companies to more oversight. In 2024, a bill was introduced in the California state legislature mandating safety testing for A.I. models. Its provisions included measures resembling the ones that Altman had advocated for in his congressional testimony. OpenAI publicly opposed the bill but in private began issuing threats. “I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI,” a legislative aide told us.

Conway, the investor, lobbied state political leaders, including Nancy Pelosi and Gavin Newsom, to kill the bill. In the end, it passed the legislature with bipartisan support, but Newsom vetoed it. This year, congressional candidates who favor A.I.
regulations have faced opponents funded by Leading the Future, a new “pro-A.I.” super PAC devoted to scuttling such restrictions. OpenAI’s official stance is that it will not contribute to such super PACs. “This issue transcends partisan politics,” Lehane recently told CNN. And yet one of the major donors to Leading the Future is Greg Brockman, who has committed fifty million dollars. (This year, Brockman and his wife donated twenty-five million dollars to MAGA Inc., a pro-Trump super PAC.)OpenAI’s campaign has extended beyond traditional lobbying. Last year, a successor bill was introduced in the California Senate. One night, Nathan Calvin, a twenty-nine-year-old lawyer who worked at the nonprofit Encode and had helped craft the bill, was at home having dinner with his wife when a process server arrived to deliver a subpoena from OpenAI. The company claimed to be hunting for evidence that Musk was covertly funding its critics. But it demanded all of Calvin’s private communications about the bill in the state Senate. “They could have asked us, ‘Have you ever talked to or been given money by Elon Musk?’—which we haven’t,” Calvin told us. Other supporters of the bill, and some critics of OpenAI’s for-profit restructuring, also received subpoenas. “They were going after folks to basically scare them into shutting up,” Don Howard, who heads a charity called the James Irvine Foundation, said. (OpenAI claims that this was part of the standard legal process.)Altman has long supported Democrats. “I’m very suspicious of powerful autocrats telling a story of fear to gang up on the weak,” he told us. “That’s a Jewish thing, not a gay thing.” In 2016, he endorsed Hillary Clinton and called Trump “an unprecedented threat to America.” In 2020, he donated to the Democratic Party and to the Biden Victory Fund. During the Biden Administration, Altman met with the White House at least half a dozen times. He helped develop a lengthy executive order laying out the first federal regime of safety tests and other guardrails for A.I. When Biden signed it, Altman called it a “good start.”In 2024, with Biden’s poll numbers slipping, Altman’s rhetoric began to shift. “I believe that America is going to be fine no matter what happens in this election,” he said. After Trump won, Altman donated a million dollars to his inaugural fund, then took selfies with the influencers Jake and Logan Paul at the Inauguration. On X, in his standard lowercase style, Altman wrote, “watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking . . . ).” Trump, on his first day back in office, repealed Biden’s executive order on A.I. “He’s found an effective way for the Trump Administration to do his bidding,” a senior Biden Administration official said, of Altman.Musk continues to excoriate Altman in public, calling him “Scam Altman” and “Swindly Sam.” (When Altman complained on X about a Tesla he’d ordered, Musk replied, “You stole a non-profit.”) And yet, in Washington, Altman seems to have outflanked him. Musk spent more than two hundred and fifty million dollars to help Trump get reëlected, and worked in the White House for months. Then Musk left Washington, damaging his relationship with Trump in the process.Altman is now one of Trump’s favored tycoons, even accompanying him on a trip to visit the British Royal Family at Windsor Castle. Altman and Trump speak a few times a year. “You can just, like, call him,” Altman said. “This is not a buddy. 
But, yeah, if I need to talk to him about something, I will.” When Trump hosted a dinner with tech leaders at the White House last year, Musk was notably absent; Altman sat across from the President. “Sam, you’re a big leader,” Trump said. “You told me things before that are absolutely unbelievable.”Over the years, Altman has continued to compare the quest for A.G.I. to the Manhattan Project. Like J. Robert Oppenheimer, who used impassioned appeals about saving the world from the Nazis to persuade physicists to uproot their lives and move to Los Alamos, Altman leverages fears about the geopolitical stakes of his technology. Depending on the audience, Altman has used this analogy to encourage either acceleration or caution. In a meeting with U.S. intelligence officials in the summer of 2017, he claimed that China had launched an “A.G.I. Manhattan Project,” and that OpenAI needed billions of dollars of government funding to keep pace. When pressed for evidence, Altman said, “I’ve heard things.” It was the first of several meetings in which he made the claim. After one of them, he told an intelligence official that he would follow up with evidence. He never did. The official, after looking into the China project, concluded that there was no evidence that it existed: “It was just being used as a sales pitch.” (Altman says that he does not recall describing Beijing’s efforts in exactly that way.)With more safety-conscious audiences, Altman invoked the analogy to imply the opposite: that A.G.I. had to be pursued carefully, with international coördination, lest the consequences be disastrous. In 2017, Amodei hired Page Hedley, a former public-interest lawyer, to be OpenAI’s policy and ethics adviser. In an early PowerPoint presentation to executives, Hedley outlined how OpenAI might avert a “catastrophic” arms race—perhaps by building a coalition of A.I. labs that would eventually coördinate with an international body akin to NATO, to insure that the technology was deployed safely. As Hedley recalled it, Brockman didn’t understand how this would help the company beat its competitors. “No matter what I said,” Hedley told us, “Greg kept going back to ‘So how do we raise more money? How do we win?’ ” According to several interviews and contemporaneous records, Brockman offered a counterproposal: OpenAI could enrich itself by playing world powers—including China and Russia—against one another, perhaps by starting a bidding war among them. According to Hedley, the thinking seemed to be, It worked for nuclear weapons, why not for A.I.?He was aghast: “The premise, which they didn’t dispute, was ‘We’re talking about potentially the most destructive technology ever invented—what if we sold it to Putin?’ ” (Brockman maintains that he never seriously entertained auctioning A.I. models to governments. “Ideas were batted around at a high level about what potential frameworks might look like to encourage cooperation between nations—something akin to an International Space Station for AI,” an OpenAI representative said. “Attempting to characterize it as anything more than that is utterly ridiculous.”)Brainstorming sessions often produce outlandish ideas. Hedley hoped that this one, which came to be known internally as the “countries plan,” would be dropped. Instead, according to several people involved and to contemporaneous documents, OpenAI executives seemed to grow only more excited about it. 
Brockman’s goal, according to Jack Clark, OpenAI’s policy director at the time, was to “set up, basically, a prisoner’s dilemma, where all of the nations need to give us funding,” and that “implicitly makes not giving us funding kind of dangerous.” A junior researcher recalled thinking, as the plan was detailed at a company meeting, “This is completely fucking insane.”

Executives discussed the approach with at least one potential donor. But later that month, after several employees talked about quitting, the plan was abandoned. Altman “would lose staff,” Hedley said. “I feel like that was always something that had more weight in Sam’s calculations than ‘This is not a good plan because it might cause a war between great powers.’ ”

Undeterred by the collapse of the countries plan, Altman pursued variations on the theme. In January, 2018, he convened an “A.G.I. weekend” at the Hotel Bel-Air, an Old Hollywood resort with rolling gardens of pink bougainvillea and an artificial pond stocked with real swans. The attendees included Nick Bostrom, a philosopher, then at Oxford, who had become a prophet of A.I. doom; Omar Al Olama, an Emirati sultan and an A.I. booster; and at least seven billionaires. The safety-concerned among them were told that this would be an opportunity to think through how society might prepare for the disruptive arrival of artificial general intelligence; the investors arrived expecting to hear pitches.

The days were spent in a sleek conference room, where guests gave talks. (Hoffman, the LinkedIn co-founder, expounded on the possibilities of encoding A.I. with Buddhist compassion.) The final presenter was Altman, armed with a pitch deck that described a global cryptocurrency “redeemable for the attention of the AGI.” Once the A.G.I. was maximally useful, and “anti-evil,” people everywhere would clamor to buy time on OpenAI’s servers. Amodei wrote in his notes, “This idea was absurd on its face (would Vladimir Putin end up owning some of the tokens? . . .) In retrospect this was one of many red flags about Sam that I should have taken more seriously.” The plan seemed like a cash grab, but Altman sold it as a boon for A.I. safety. One of his slides read, “I want to get as many people on the ‘good’ team as possible, and win, and do the right thing.” Another read, “Please hold your laughter until the end of the presentation.”

Altman’s fund-raising pitch has evolved over the years, but it has always reflected the fact that the development of A.G.I. requires a staggering amount of capital. He was following a relatively simple “scaling law”: the more data and computing power you used to train the models, the smarter they seemed to get. The specialized chips that enable this process are enormously expensive. OpenAI, in its most recent funding round alone, raised more than a hundred and twenty billion dollars—the largest private round in history, and a sum four times larger than the biggest I.P.O. ever. “When you think about entities with a hundred billion dollars they can discretionarily spend per year, there really are only a handful in the world,” a tech executive and investor told us. “There’s the U.S. government, and the four or five biggest U.S. tech companies, and the Saudis, and the Emiratis—that’s basically it.”

Altman’s initial focus was Saudi Arabia. He first met Mohammed bin Salman, the country’s crown prince and de-facto monarch, in 2016, at a dinner at San Francisco’s Fairmont Hotel.
After that, Hedley recalled, Altman referred to the prince as “a friend.” In September, 2018, according to Hedley’s notes, Altman said, “I’m trying to decide if we would ever take tens of billions from the Saudi PIF,” or public investment fund.The following month, a hit squad, reportedly acting on bin Salman’s orders, strangled Jamal Khashoggi, a Washington Post journalist who had been critical of the regime, and used a bone saw to dismember his corpse. A week later, it was announced that Altman had joined the advisory board for Neom, a “city of the future” that bin Salman hoped to build in the desert. “Sam, you cannot be on this board,” Clark, the policy director, who now works at Anthropic, recalled telling Altman. He initially defended his involvement, telling Clark that Jared Kushner had assured him that the Saudis “didn’t do this.” (Altman does not recall this. Kushner says that they were not in contact at the time.)As bin Salman’s role became increasingly clear, Altman left the Neom board. Yet behind the scenes, a policy consultant from whom Altman sought advice recalled, he treated the situation as a temporary setback, asking whether he could somehow still get money from bin Salman. “The question was not ‘Is this a bad thing or not?’ ” the consultant said. “But, just, ‘What would the consequences be if we did it? Would there be some export-control issue? Would there be sanctions? Like, can I get away with it?’ ”By then, Altman was already eying another source of cash: the United Arab Emirates. The country was in the midst of a fifteen-year effort to transform itself from an oil state to a tech hub. The project was overseen by Sheikh Tahnoon bin Zayed al-Nahyan, the President’s brother and the nation’s spymaster. Tahnoon runs the state-controlled A.I. conglomerate G42, and controls $1.5 trillion in sovereign wealth. In June, 2023, Altman visited Abu Dhabi, meeting with Olama and other officials. In remarks at a government-backed function, he said that the country had “been talking about A.I. since before it was cool,” and outlined a vision for the future of A.I. with the Middle East in a “central role.”Fund-raising from Gulf states has become customary for many large businesses. But Altman was pursuing a more sweeping geopolitical vision. In the fall of 2023, he began quietly recruiting new talent for a plan—eventually known as ChipCo—in which Gulf states would provide tens of billions of dollars for the construction of huge microchip foundries and data centers, some to be situated in the Middle East. Altman pitched Alexandr Wang, now the head of A.I. at Meta, on a leadership role, telling him that Jeff Bezos, the founder of Amazon, could head the new company. Altman sought enormous contributions from the Emiratis. “My understanding was that this whole thing happened without any board knowledge,” the board member said. A researcher Altman tried to recruit for the project, James Bradbury, recalled turning him down. “My initial reaction was ‘This is gonna work, but I don’t know if I want it to work,’ ” he said.A.I. capacity may soon displace oil or enriched uranium as the resource that dictates the global balance of power. Altman has said that computing power is “the currency of the future.” Normally, it might not matter where a data center was situated. But many American national-security officials were anxious about concentrating advanced A.I. infrastructure in Gulf autocracies. 
The U.A.E.’s telecommunications infrastructure is heavily dependent on hardware from Huawei, a Chinese tech giant linked to the government, and the U.A.E. has reportedly leaked American technology to Beijing in the past. Intelligence agencies worried that advanced U.S. microchips sent to the Emiratis could be used by Chinese engineers. Data centers in the Middle East are also more vulnerable to military strikes; in recent weeks, Iran has bombed American data centers in Bahrain and the U.A.E. And, hypothetically, a Gulf monarchy could commandeer an American-owned data center and use it to build disproportionately powerful models—a version of the “AGI dictatorship” scenario, but in an actual dictatorship.

After Altman’s firing, the person he relied on most was Chesky, the Airbnb co-founder and one of Altman’s fiercest loyalists. “Watching my friend stare into the abyss like that, it made me question some fundamental things about what it means to really run a company,” Chesky told us. The following year, at a gathering of Y Combinator alumni, he gave an impromptu talk, which ended up lasting two hours. “It felt like a group-therapy session,” he said. The upshot was: Your instincts for how to run the company that you started are the best instincts, and anyone who tells you otherwise is gaslighting you. “You’re not crazy, even though people who work for you tell you you are,” Chesky said. Paul Graham, in a blog post about the speech, gave this defiant attitude a name: Founder Mode.

Since the Blip, Altman has been in Founder Mode. In February, 2024, the Wall Street Journal published a description of Altman’s vision for ChipCo. He conceived of it as a joint entity funded by an investment of five to seven trillion dollars. (“fk it why not 8,” he tweeted.) This was how many employees learned about the plan. “Everyone was, like, ‘Wait, what?’ ” Leike recalled. Altman insisted at an internal meeting that safety teams had been “looped in.” Leike sent a message urging him not to falsely suggest that the effort had been approved.

During the Biden Administration, Altman explored getting a security clearance to join classified A.I.-policy discussions. But staffers at the RAND Corporation, which helped coördinate the process, expressed concern. “He has been actively raising ‘hundreds of billions of dollars’ from foreign governments,” one of them wrote. “The UAE recently gifted him a car. (I assume it was a very nice car.)” The staffer continued, “The only person I can think of who ever went thru the process with this magnitude of foreign financial ties is Jared Kushner, and the adjudicators recommended that he not be granted a clearance.” Altman ultimately withdrew from the process. “He was pushing these transactional relationships, primarily with the Emiratis, that raised a lot of red flags for some of us,” a senior Administration official involved in talks with Altman told us. “A lot of people in the Administration did not trust him a hundred per cent.”

When we asked Altman about gifts from Tahnoon, he said, “I’m not gonna say what gifts he has given me specifically. But he and other world leaders . . . have given me gifts.” He added, “We have a standard policy, which applies to me as well, which is that every gift from any potential business partner is disclosed to the company.” Altman has at least two hypercars: an all-white Koenigsegg Regera, worth about two million dollars, and a red McLaren F1, worth about twenty million dollars. In 2024, Altman was spotted driving the Regera through Napa.
A few seconds of video made its way onto social media: Altman in a low-slung bucket seat, peering out the window of a gleaming white machine. A tech investor aligned with Musk posted the footage on X, writing, “I’m starting a nonprofit next.”In 2024, Altman took two OpenAI employees to visit Sheikh Tahnoon on his two-hundred-and-fifty-million-dollar superyacht, the Maryah. One of the largest such vessels in the world, the Maryah has a helipad, a night club, a movie theatre, and a beach club. Altman’s employees apparently stood out amid Tahnoon’s armed security detail, and at least one later told colleagues that he found the experience disconcerting. Altman, on X, later referred to Tahnoon as a “dear personal friend.”Altman continued to meet with the Biden Administration, which had enacted a policy requiring White House approval for the export of sensitive technology. Multiple Administration officials emerged from these meetings nervous about Altman’s ambitions in the Middle East. He often made grandiose claims, according to those officials, including calling A.I. “the new electricity.” In 2018, he said that OpenAI was planning to buy a fully functioning quantum computer from a company called Rigetti Computing. This was news even to other OpenAI executives in the room. Rigetti was not yet close to being able to sell a usable quantum computer. In a meeting, Altman claimed that by 2026 an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom. The senior Administration official said, “We were, like, ‘Well, that’s, you know, news, if they made nuclear fusion work.’ ” The Biden Administration ultimately withheld approval. “We’re not going to be building advanced chips in the U.A.E.,” a leader at the Department of Commerce told Altman.Four days before Trump’s Inauguration, the Wall Street Journal reported, Tahnoon paid half a billion dollars to the Trump family in exchange for a stake in its cryptocurrency company. The following day, Altman held a twenty-five-minute call with Trump, during which they discussed announcing a version of a ChipCo, timed so that Trump could take credit for it. On Trump’s second day in office, Altman stood in the Roosevelt Room and announced Stargate, a five-hundred-billion-dollar joint venture that aims to build a vast network of A.I. infrastructure across the U.S.In May, the Administration rescinded Biden’s export restrictions on A.I. technology. Altman and Trump travelled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed A.I. firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the U.A.E. The company plans to build a data-center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. “The truth of this is, we’re building portals from which we’re genuinely summoning aliens,” a former OpenAI executive said. “The portals currently exist in the United States and China, and Sam has added one in the Middle East.” He went on, “I think it’s just, like, wildly important to get how scary that should be. It’s the most reckless thing that has been done.”The erosion of safety commitments has become an industry norm. The founding premise of Anthropic was that, given the right structure and leadership, it could keep safety commitments from disintegrating under commercial pressure. 
One such commitment was a “responsible scaling policy,” which obligated Anthropic to stop training more powerful models if it could not demonstrate that they were safe. In February, as the firm secured thirty billion dollars in new funding, it weakened that pledge. In some respects, Anthropic still emphasizes safety more than OpenAI does. But Clark, the former policy director, has said, “The system of capital markets says, Go faster.” He added, “The world gets to make this decision, not companies.” Last year, Amodei sent a memo to Anthropic employees, disclosing that the firm would seek investments from the United Arab Emirates and Qatar and acknowledging that this would likely enrich “dictators.” (Like many authors, we are both parties in a class-action lawsuit alleging that Anthropic used our books without our permission to train its models. Condé Nast has opted into a settlement agreement with Anthropic regarding the company’s use of certain books published by Condé Nast and its subsidiaries.)In 2024, Anthropic partnered with Palantir, one of Silicon Valley’s most hawkish defense contractors, pushing its A.I. model, Claude, directly into the military ecosystem. Anthropic became the only A.I. contractor used in the Pentagon’s most classified settings. Last year, the Pentagon awarded the company a further two-hundred-million-dollar contract. In January, the U.S. military launched a midnight raid that captured the Venezuelan President, Nicolás Maduro. According to the Wall Street Journal, Claude was used in the classified operation.But tensions arose between Anthropic and the government. Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare.” Eventually, Anthropic’s rivals—including Google and xAI—agreed to provide their models to the military for “all lawful purposes.” Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal. On a Tuesday in late February, Defense Secretary Pete Hegseth summoned Amodei to the Pentagon and delivered an ultimatum: the firm had until 5:01 P.M. that Friday to abandon those prohibitions. The day before the deadline, Amodei declined to do so. Hegseth tweeted that he would designate Anthropic a “supply-chain risk”—a devastating blacklist historically reserved for companies, like Huawei, that have ties to foreign adversaries—and made good on the threat days later.Hundreds of employees at OpenAI and Google signed an open letter titled “We Will Not Be Divided,” defending Anthropic. In an internal memo, Altman wrote that the dispute was “an issue for the whole industry,” and claimed that OpenAI shared Anthropic’s ethical boundaries. But Altman had been in negotiations with the Pentagon for at least two days. Emil Michael, the Under-Secretary of Defense for Research and Engineering, had contacted Altman as he sought replacements for Anthropic. “I needed to hurry and find alternatives,” Michael recalled. “I called Sam, and he was willing to jump. I think he’s a patriot.” Altman asked Michael, “What can I do for the country?” It appears that he already knew the answer. OpenAI lacked the security accreditation required for the classified systems in which Anthropic’s technology was embedded. But a fifty-billion-dollar deal, announced that Friday morning, integrated OpenAI’s technology into Amazon Web Services, a key part of the Pentagon’s digital infrastructure. 
That night, Altman announced on X that the military would now be using OpenAI’s models.By some measures, Altman’s maneuver has not hindered the company’s success. The day he announced the deal, a new funding round increased OpenAI’s value by a hundred and ten billion dollars. But many users deleted the ChatGPT app. At least two senior employees departed—one for Anthropic. At a staff meeting, Altman chastised employees who raised concerns. “So maybe you think the Iran strike was good and the Venezuela invasion was bad,” he said. “You don’t get to weigh in on that.”Several executives connected to OpenAI have expressed ongoing reservations about Altman’s leadership and floated Fidji Simo, who was formerly the C.E.O. of Instacart and now serves as OpenAI’s C.E.O. for AGI Deployment, as a successor. Simo herself has privately said that she believes Altman may eventually step down, a person briefed on a recent discussion told us. (Simo disputes this. Instacart recently reached a settlement with the F.T.C., in which it admitted no wrongdoing but agreed to pay a sixty-million-dollar fine for alleged deceptive practices under Simo’s leadership.)Altman describes his shifting commitments as a by-product of his ability to adapt to changing circumstances—not a nefarious “long con,” as Musk and others have alleged, but a gradual, good-faith evolution. “I think what some people want,” he told us, is a leader who “is going to be absolutely sure of what they think and stick with it, and it’s not going to change. And we are in a field, in an area, where things change extremely quickly.” He defended some of his actions as the practice of “normal competitive business.” Several investors we spoke to described Altman’s detractors as naïve to expect anything else. “There is a group of fatalistic extremists that has taken the safety pill almost to a science-fiction level,” Conway, the investor, told us. “His mission is measured by numbers. And, when you look at the success of OpenAI, it’s hard to argue with the numbers.”But others in Silicon Valley think that Altman’s behavior has created unacceptable managerial dysfunction. “It’s more about a practical inability to govern the company,” the board member said. And some still believe that the architects of A.I. should be evaluated more stringently than executives in other industries. The vast majority of people we spoke to agreed that the standards by which Altman now asks to be judged are not those he initially proposed. During one conversation, we asked Altman whether running an A.I. company came with “an elevated requirement of integrity.” This was supposed to be an easy question. Until recently, when asked a version of it, his answer was a clear, unqualified yes. Now he added, “I think there’s, like, a lot of businesses that have potential huge impact, good and bad, on society.” (Later, he sent an additional statement: “Yes, it demands a heightened level of integrity, and I feel the weight of the responsibility every day.”)Of all the promises made at OpenAI’s founding, arguably the most central was its pledge to bring A.I. into existence safely. But such concerns are now often derided in Silicon Valley and in Washington. Last year, J. D. Vance, the former venture capitalist who is now the Vice-President, addressed a conference in Paris called the A.I. Action Summit. (It was previously called the A.I. Safety Summit.) “The A.I. future is not going to be won by hand-wringing about safety,” he said. 
At Davos this year, David Sacks, a venture capitalist who was serving as the White House’s A.I. and crypto czar, dismissed safety concerns as a “self-inflicted injury” that could cost America the A.I. race. Altman now calls Trump’s deregulatory approach “a very refreshing change.”OpenAI has closed many of its safety-focussed teams. Around the time the superalignment team was dissolved, its leaders, Sutskever and Leike, resigned. (Sutskever co-founded a company called Safe Superintelligence.) On X, Leike wrote, “Safety culture and processes have taken a backseat to shiny products.” Soon afterward, the A.G.I.-readiness team, tasked with preparing society for the shock of advanced A.I., was also dissolved. When the company was asked on its most recent I.R.S. disclosure form to briefly describe its “most significant activities,” the concept of safety, present in its answers to such questions on previous forms, was not listed. (OpenAI said that its “mission did not change” and added, “We continue to invest in and evolve our work on safety, and will continue to make organizational changes.”) The Future of Life Institute, a think tank whose principles on safety Altman once endorsed, grades each major A.I. company on “existential safety”; on the most recent report card, OpenAI got an F. In fairness, so did every other major company except for Anthropic, which got a D, and Google DeepMind, which got a D-.“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”A.I. doomers have been pushed to the fringes, but some of their fears seem less fantastical with each passing month. In 2020, according to a U.N. report, an A.I. drone was used in the Libyan civil war to fire deadly munitions, possibly without oversight by a human operator. Since then, A.I. has only become more central to military operations around the world, including, reportedly, in the current U.S. campaign in Iran. In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents. And many more mundane harms are already coming to pass. We increasingly rely on A.I. to help us write, think, and navigate the world, accelerating what experts call “human enfeeblement”; the ubiquity of A.I. “slop” makes life easier for scammers and harder for people who simply want to know what’s real. A.I. “agents” are starting to act independently, with little or no human supervision. Days before the 2024 New Hampshire Democratic primary, thousands of voters received robocalls from an A.I.-generated deepfake of Joe Biden’s voice, telling them to save their votes for November and stay home—an act of voter suppression requiring virtually no technical expertise. OpenAI is now facing seven wrongful-death lawsuits, which allege that ChatGPT prompted several suicides and a murder. 
Chat logs in the murder case show that it encouraged a man’s paranoid delusion that his eighty-three-year-old mother was surveilling and trying to poison him. Soon afterward, he fatally beat and strangled her and stabbed himself. (OpenAI is fighting the lawsuits, and says that it’s continuing to improve its model’s safeguards.)As OpenAI prepares for its potential I.P.O., Altman has faced questions not only about the effect of A.I. on the economy—it could soon cause severe labor disruption, perhaps eliminating millions of jobs—but about the company’s own finances. Eric Ries, an expert on startup governance, derided “circular deals” in the industry—for example, OpenAI’s deals with Nvidia and other chip manufacturers—and said that in other eras some of the company’s accounting practices would have been considered “borderline fraudulent.” The board member told us, “The company levered up financially in a way that’s risky and scary right now.” (OpenAI disputes this.)In February, we spoke again with Altman. He was wearing a drab-green sweater and jeans, and sat in front of a photograph of a NASA moon rover. He tucked one leg beneath him, then hung it over the arm of his chair. In the past, he said, his main flaw as a manager had been his eagerness to avoid conflict. “Now I’m very happy to fire people quickly,” he had told us. “I’m happy to just say, ‘We’re gonna bet in this direction.’ ” Any employees who didn’t like his choices needed “to leave.”He is more bullish than ever about the future. “My definition of winning is that people crazy uplevel—and the insane sci-fi future comes true for all of us,” he said. “I’m very ambitious as far as, like, my hope for humanity, and what I expect us all to achieve. I weirdly have very little personal ambition.” At times, he seemed to catch himself. “No one believes you’re doing this just because it’s interesting,” he said. “You’re doing it for power or for some other thing.”Even people close to Altman find it difficult to know where his “hope for humanity” ends and his ambition begins. His greatest strength has always been his ability to convince disparate groups that what he wants and what they need are one and the same. He made use of a unique historical juncture, when the public was wary of tech-industry hype and most of the researchers capable of building A.G.I. were terrified of bringing it into existence. Altman responded with a move that no other pitchman had perfected: he used apocalyptic rhetoric to explain how A.G.I. could destroy us all—and why, therefore, he should be the one to build it. Maybe this was a premeditated masterstroke. Maybe he was fumbling for an advantage. Either way, it worked.Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. 
“But it won’t have the magic that people like so much.” ♦
...
Read the original on www.newyorker.com »
Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Mythos Preview has already found thousands of high-severity vulnerabilities, including some in every major operating system and web browser. Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout—for economies, public safety, and national security—could be severe. Project Glasswing is an urgent attempt to put these capabilities to work for defensive purposes.

As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit. We have also extended access to a group of over 40 additional organizations that build or maintain critical software infrastructure so they can use the model to scan and secure both first-party and open-source systems. Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations.

Project Glasswing is a starting point. No one organization can solve these cybersecurity problems alone: frontier AI developers, other software companies, security researchers, open-source maintainers, and governments across the world all have essential roles to play. The work of defending the world’s cyber infrastructure might take years; frontier AI capabilities are likely to advance substantially over just the next few months. For cyber defenders to come out ahead, we need to act now.

Cybersecurity in the age of AI

The software that all of us rely on every day—responsible for running banking systems, storing medical records, linking up logistics networks, keeping power grids functioning, and much more—has always contained bugs. Many are minor, but some are serious security flaws that, if discovered, could allow cyberattackers to hijack systems, disrupt operations, or steal data.

We have already seen the serious consequences of cyberattacks for important corporate networks, healthcare systems, energy infrastructure, transport hubs, and the information security of government agencies across the world. On the global stage, state-sponsored attacks from actors like China, Iran, North Korea, and Russia have threatened to compromise the infrastructure that underpins both civilian life and military readiness. Even smaller-scale attacks, such as those where individual hospitals or schools are targeted, can still inflict substantial economic damage, expose sensitive data, and even put lives at risk. The current global financial costs of cybercrime are challenging to estimate, but might be around $500B every year.

Many flaws in software go unnoticed for years because finding and exploiting them has required expertise held by only a few skilled security experts.
With the latest frontier AI models, the cost, effort, and level of expertise required to find and exploit software vulnerabilities have all dropped dramatically. Over the past year, AI models have become increasingly effective at reading and reasoning about code—in particular, they show a striking ability to spot vulnerabilities and work out ways to exploit them. Claude Mythos Preview demonstrates a leap in these cyber skills—the vulnerabilities it has spotted have in some cases survived decades of human review and millions of automated security tests, and the exploits it develops are increasingly sophisticated.

Ten years after the first DARPA Cyber Grand Challenge, frontier AI models are now becoming competitive with the best humans at finding and exploiting vulnerabilities. Without the necessary safeguards, these powerful cyber capabilities could be used to exploit the many existing flaws in the world’s most important software. This could make cyberattacks of all kinds much more frequent and destructive, and empower adversaries of the United States and its allies. Addressing these issues is therefore an important security priority for democratic states.

Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs. Project Glasswing is an important step toward giving defenders a durable advantage in the coming AI-driven era of cybersecurity.
Over the past few weeks, we have used Claude Mythos Preview to identify thousands of zero-day vulnerabilities (that is, flaws that were previously unknown to the software’s developers), many of them critical, in every major operating system and every major web browser, along with a range of other important pieces of software.

In a post on our Frontier Red Team blog, we provide technical details for a subset of these vulnerabilities that have already been patched and, in some cases, the ways that Mythos Preview found to exploit them. It was able to identify nearly all of these vulnerabilities—and develop many related exploits—entirely autonomously, without any human steering. The following are three examples:

Mythos Preview found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world and is used to run firewalls and other critical infrastructure. The vulnerability allowed an attacker to remotely crash any machine running the operating system just by connecting to it.

It also discovered a 16-year-old vulnerability in FFmpeg—which is used by innumerable pieces of software to encode and decode video—in a line of code that automated testing tools had hit five million times without ever catching the problem.

The model autonomously found and chained together several vulnerabilities in the Linux kernel—the software that runs most of the world’s servers—to allow an attacker to escalate from ordinary user access to complete control of the machine.

We have reported the above vulnerabilities to the maintainers of the relevant software, and they have all now been patched. For many other vulnerabilities, we are providing a cryptographic hash of the details today (see the Red Team blog), and we will reveal the specifics after a fix is in place.

Evaluation benchmarks such as CyberGym reinforce the substantial difference between Mythos Preview and our next-best model, Claude Opus 4.6.

In addition to our own work, many of our partners have already been using Claude Mythos Preview for several weeks. This is what they’ve found:

“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure from cyber threats, and there is no going back. Our foundational work with these models has shown we can identify and fix security vulnerabilities across hardware and software at a pace and scale previously impossible. That is a profound shift, and a clear signal that the old ways of hardening systems are no longer sufficient.
Providers of technology must aggressively adopt new approaches now, and customers need to be ready to deploy. That is why Cisco joined Project Glasswing—this work is too important and too urgent to do alone.”

“At AWS, we build defenses before threats emerge, from our custom silicon up through the technology stack. Security isn’t a phase for us; it’s continuous and embedded in everything we do. Our teams analyze over 400 trillion network flows every day for threats, and AI is central to our ability to defend at scale.

We’ve been testing Claude Mythos Preview in our own security operations, applying it to critical codebases, where it’s already helping us strengthen our code. We’re bringing deep security expertise to our partnership with Anthropic and are helping to harden Claude Mythos Preview so even more organizations can advance their most ambitious work with security that sets the standard.”

“As we enter a phase where cybersecurity is no longer bound by purely human capacity, the opportunity to use AI responsibly to improve security and reduce risk at scale is unprecedented. Joining Project Glasswing, with access to Claude Mythos Preview, allows us to identify and mitigate risk early and augment our security and development solutions so we can better protect customers and Microsoft.

When tested against CTI-REALM, our open-source security benchmark, Claude Mythos Preview showed substantial improvements compared to previous models. We look forward to partnering with Anthropic and the broader industry to improve security outcomes for all.”

“The window between a vulnerability being discovered and being exploited by an adversary has collapsed—what once took months now happens in minutes with AI.

Claude Mythos Preview demonstrates what is now possible for defenders at scale, and adversaries will inevitably look to exploit the same capabilities. That is not a reason to slow down; it’s a reason to move together, faster. If you want to deploy AI, you need security. That is why CrowdStrike is part of this effort from day one.”

“In the past, security expertise has been a luxury reserved for organizations with large security teams. Open source maintainers—whose software underpins much of the world’s critical infrastructure—have historically been left to figure out security on their own. Open source software constitutes the vast majority of code in modern systems, including the very systems AI agents use to write new software.

By giving the maintainers of these critical open source codebases access to a new generation of AI models that can proactively identify and fix vulnerabilities at scale, Project Glasswing offers a credible path to changing that equation. This is how AI-augmented security can become a trusted sidekick for every maintainer, not just those who can afford expensive security teams.”

“Promoting the cybersecurity and resiliency of the financial system is central to JPMorganChase’s mission, and we believe the industry is strongest when leading institutions work together on shared challenges. Project Glasswing provides a unique, early stage opportunity to evaluate next-generation AI tools for defensive cybersecurity across critical infrastructure both on our own terms and alongside respected technology leaders.

We will take a rigorous, independent approach to determining how to proceed and where we can help. Anthropic’s initiative reflects the kind of forward-looking, collaborative approach that this moment demands.”

“Google is pleased to see this cross-industry cybersecurity initiative coming together and to make Mythos Preview available to participants via Vertex AI. It’s always been critical that the industry work together on emerging security issues, whether it’s post-quantum cryptography, responsible zero-day disclosure, secure open source software, or defense against AI-based attacks.

We have long believed that AI poses new challenges and opens new opportunities in cyber defense, which is why we’ve built AI-powered tools—such as Big Sleep and CodeMender—to find and fix critical software flaws. We will continue investing in our leading cybersecurity platform and a culture focused on protecting users, customers, the ecosystem, and national security.”

“Over the past few weeks, we’ve had access to the Claude Mythos Preview model, using it to identify complex vulnerabilities that prior-generation models missed entirely. This is not only a game changer for finding previously hidden vulnerabilities, but it also signals a dangerous shift where attackers can soon find even more zero-day vulnerabilities and develop exploits faster than ever before.
It’s clear that these models need to be in the hands of open source owners and defenders everywhere to find and fix these vulnerabilities before attackers get access. Perhaps even more important: everyone needs to prepare for AI-assisted attackers. There will be more attacks, faster attacks, and more sophisticated attacks. Now is the time to modernize cybersecurity stacks everywhere. We commend Anthropic for partnering with the industry to ensure these powerful capabilities prioritize defense first.”The powerful cyber capabilities of Claude Mythos Preview are a result of its strong agentic coding and reasoning skills. For example, as shown in the evaluation results below, the model has the highest scores of any model yet developed on a variety of software coding tasks.More information on the model’s capabilities, its safety properties, and its general characteristics can be found in the Claude Mythos Preview system card.We do not plan to make Claude Mythos Preview generally available, but our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes, but also for the myriad other benefits that such highly capable models will bring. To do so, we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model’s most dangerous outputs. We plan to launch new safeguards with an upcoming Claude Opus model, allowing us to improve and refine them with a model that does not pose the same level of risk as Mythos Preview3.Today’s announcement is the beginning of a longer-term effort. To be successful, it will require broad involvement from across the technology industry and beyond.Project Glasswing partners will receive access to Claude Mythos Preview to find and fix vulnerabilities or weaknesses in their foundational systems—systems that represent a very large portion of the world’s shared cyberattack surface. We anticipate this work will focus on tasks like local vulnerability detection, black box testing of binaries, securing endpoints, and penetration testing of systems.Anthropic’s commitment of $100M in model usage credits to Project Glasswing and additional participants will cover substantial usage throughout this research preview. Afterward, Claude Mythos Preview will be available to participants at $25/$125 per million input/output tokens (participants can access the model on the Claude API, Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry).In addition to our commitment of model usage credits, we’ve donated $2.5M to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5M to the Apache Software Foundation to enable the maintainers of open-source software to respond to this changing landscape (maintainers interested in access can apply through the Claude for Open Source program).We intend for this work to grow in scope and continue for many months, and we’ll share as much as we can so that other organizations can apply the lessons to their own security. Partners will, to the extent they’re able, share information and best practices with each other; within 90 days, Anthropic will report publicly on what we’ve learned, as well as the vulnerabilities fixed and improvements made that can be disclosed. We will also collaborate with leading security organizations to produce a set of practical recommendations for how security practices should evolve in the AI era. 
This will potentially include:

Anthropic has also been in ongoing discussions with US government officials about Claude Mythos Preview and its offensive and defensive cyber capabilities. As we noted above, securing critical infrastructure is a top national security priority for democratic countries—the emergence of these cyber capabilities is another reason why the US and its allies must maintain a decisive lead in AI technology. Governments have an essential role to play in helping maintain that lead, and in both assessing and mitigating the national security risks associated with AI models. We are ready to work with local, state, and federal representatives to assist in these tasks.

We are hopeful that Project Glasswing can seed a larger effort across industry and the public sector, with all parties helping to address the biggest questions around the impact of powerful models on security. We invite other AI industry members to join us in helping to set the standards for the industry. In the medium term, an independent, third-party body—one that can bring together private- and public-sector organizations—might be the ideal home for continued work on these large-scale cybersecurity projects.
1. The project is named for the glasswing butterfly, Greta oto. The metaphor can be applied in two ways: the butterfly’s transparent wings let it hide in plain sight, much like the vulnerabilities discussed in this post; they also allow it to evade harm—like the transparency we’re advocating for in our approach.
2. From the Ancient Greek for “utterance” or “narrative”: the system of stories through which civilizations made sense of the world.
3. Security professionals whose legitimate work is affected by these safeguards will be able to apply to an upcoming Cyber Verification Program.
...
Read the original on www.anthropic.com »
* This report does NOT contain sensitive information (API keys, passwords, etc.)
Claude has regressed to the point it cannot be trusted to perform complex engineering.
Does the opposite of requested activities
Claude should behave like it did in January.
Accept Edits was ON (auto-accepting changes)
Yes, every time with the same prompt
This analysis was produced by Claude from session log data spanning January through March.
Quantitative analysis of 17,871 thinking blocks and 234,760 tool calls across
6,852 Claude Code session files reveals that the rollout of thinking content
redaction (redact-thinking-2026-02-12) correlates precisely with a measured
quality regression in complex, long-session engineering workflows.
The data suggests that extended thinking tokens are not a “nice to have” but
are structurally required for the model to perform multi-step research,
convention adherence, and careful code modification. When thinking depth is
reduced, the model’s tool usage patterns shift measurably from research-first
to edit-first behavior, producing the quality issues users have reported.
This report provides data to help Anthropic understand which workflows are
most affected and why, with the goal of informing decisions about thinking
token allocation for power users.
The quality regression was independently reported on March 8 — the exact date
redacted thinking blocks crossed 50%. The rollout pattern (1.5% → 25% → 58% →
100% over one week) is consistent with a staged deployment.
The signature field on thinking blocks has a 0.971 Pearson correlation
with thinking content length (measured from 7,146 paired samples where both
are present). This allows estimation of thinking depth even after redaction.
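A minimal sketch of how this estimate can be reproduced from session files is below. The JSONL layout and field names (message.content items of type "thinking" carrying a "thinking" text and a "signature") are assumptions about the session format, not details confirmed by this report.

```python
# Sketch: correlate signature length with thinking-content length.
# Assumed layout: one JSON record per line; each record may carry a
# message.content list whose blocks include type == "thinking".
import json
import math
from pathlib import Path

def iter_thinking_blocks(session_dir):
    for path in Path(session_dir).rglob("*.jsonl"):
        for line in path.read_text(errors="ignore").splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            msg = record.get("message")
            if not isinstance(msg, dict):
                continue
            content = msg.get("content")
            if not isinstance(content, list):
                continue
            for block in content:
                if isinstance(block, dict) and block.get("type") == "thinking":
                    yield block

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sig_lens, think_lens = [], []
for block in iter_thinking_blocks("sessions/"):
    sig, text = block.get("signature"), block.get("thinking")
    if sig and text:  # paired samples only
        sig_lens.append(len(sig))
        think_lens.append(len(text))

print(f"paired samples: {len(sig_lens)}")
if len(sig_lens) >= 2:
    print(f"pearson r (signature length vs thinking length): "
          f"{pearson(sig_lens, think_lens):.3f}")
```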
Thinking depth had already dropped ~67% by late February, before redaction
began. The redaction rollout in early March made this invisible to users.
These metrics were computed independently from 18,000+ user prompts before
the thinking analysis was performed.
A stop hook (stop-phrase-guard.sh) was built to programmatically catch
ownership-dodging, premature stopping, and permission-seeking behavior.
It fired 173 times in 17 days after March 8. It fired zero times before.
Analysis of 234,760 tool invocations shows the model stopped reading code
before modifying it.
The model went from 6.6 reads per edit to 2.0 reads per edit — a 70%
reduction in research before making changes.
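A sketch of how the reads-per-edit ratio can be computed from the same session files; the tool names below follow Claude Code's built-in tools, but both the name set and the record layout are assumptions:

```python
# Sketch: reads-per-edit ratio across all sessions in a directory.
import json
from collections import Counter
from pathlib import Path

READ_TOOLS = {"Read", "Grep", "Glob"}        # research-style tools (assumed names)
EDIT_TOOLS = {"Edit", "MultiEdit", "Write"}  # modification tools (assumed names)

counts = Counter()
for path in Path("sessions/").rglob("*.jsonl"):
    for line in path.read_text(errors="ignore").splitlines():
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue
        msg = record.get("message")
        if not isinstance(msg, dict):
            continue
        content = msg.get("content")
        if not isinstance(content, list):
            continue
        for block in content:
            if isinstance(block, dict) and block.get("type") == "tool_use":
                counts[block.get("name")] += 1

reads = sum(counts[t] for t in READ_TOOLS)
edits = sum(counts[t] for t in EDIT_TOOLS)
print(f"reads: {reads}, edits: {edits}, reads per edit: {reads / max(edits, 1):.1f}")
```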
In the good period, the model’s workflow was: read the target file, read
related files, grep for usages across the codebase, read headers and tests,
then make a precise edit. In the degraded period, it reads the immediate
file and edits, often without checking context.
The decline in research effort begins in mid-February — the same period when
estimated thinking depth dropped 67%.
Full-file Write usage doubled — the model increasingly chose to rewrite
entire files rather than make surgical edits, which is faster but loses
precision and context awareness.
* 191,000 lines merged across two PRs in a weekend during the good period
Extended thinking is the mechanism by which the model:
* Plans multi-step approaches before acting (which files to read, what order)
* Catches its own mistakes before outputting them
* Decides whether to continue working or stop (session management)
When thinking is shallow, the model defaults to the cheapest action available:
edit without reading, stop without finishing, dodge responsibility for failures,
take the simplest fix rather than the correct one. These are exactly the
symptoms observed.
* Transparency about thinking allocation: If thinking tokens are being reduced or capped, users who depend on deep reasoning need to know. The redact-thinking header makes it impossible to verify externally.
* A “max thinking” tier: Users running complex engineering workflows would pay significantly more for guaranteed deep thinking. The current subscription model doesn’t distinguish between users who need 200 thinking tokens per response and users who need 20,000.
* Thinking token metrics in API responses: Even if thinking content is redacted, exposing thinking_tokens in the usage response would let users monitor whether their requests are getting the reasoning depth they need.
* Canary metrics from power users: The stop hook violation rate (0 → 10/day) is a machine-readable signal that could be monitored across the user base as a leading indicator of quality regressions.
The following behavioral patterns were measured across 234,760 tool calls and
18,000+ user prompts. Each is a predictable consequence of reduced reasoning
depth: the model takes shortcuts because it lacks the thinking budget to
evaluate alternatives, check context, or plan ahead.
When the model has sufficient thinking budget, it reads related files, greps
for usages, checks headers, and reads tests before making changes. When
thinking is shallow, it skips research and edits directly.
One in three edits in the degraded period was made to a file the model had
not read in its recent tool history. The practical consequence: edits that
break surrounding code, violate file-level conventions, splice new code into
the middle of existing comment blocks, or duplicate logic that already exists
elsewhere in the file.
Spliced comments are a particularly visible symptom. When the model edits
a file it hasn’t read, it doesn’t know where comment blocks end and code
begins. It inserts new declarations between a documentation comment and the
function it documents, breaking the semantic association. This never happened
in the good period because the model always read the file first.
When thinking is deep, the model resolves contradictions internally before
producing output. When thinking is shallow, contradictions surface in the
output as visible self-corrections: “oh wait”, “actually,”, “let me
reconsider”, “hmm, actually”, “no wait.”
The rate more than tripled. In the worst sessions, the model produced 20+
reasoning reversals in a single response — generating a plan, contradicting
it, revising, contradicting the revision, and ultimately producing output
that could not be trusted because the reasoning path was visibly incoherent.
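A sketch of how the reversal rate can be measured; the phrase list is taken from this report, while the record layout (an assistant "type" field and text blocks under message.content) is assumed as in the sketches above:

```python
# Sketch: visible self-correction phrases per assistant turn.
import json
import re
from pathlib import Path

REVERSALS = re.compile(
    r"\boh wait\b|\bno wait\b|\bhmm, actually\b|\blet me reconsider\b|\bactually,",
    re.IGNORECASE,
)

turns = 0
hits = 0
for path in Path("sessions/").rglob("*.jsonl"):
    for line in path.read_text(errors="ignore").splitlines():
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue
        if record.get("type") != "assistant":  # assumed role field
            continue
        msg = record.get("message")
        if not isinstance(msg, dict):
            continue
        content = msg.get("content")
        if not isinstance(content, list):
            continue
        text = " ".join(b.get("text", "") for b in content
                        if isinstance(b, dict) and b.get("type") == "text")
        if text:
            turns += 1
            hits += len(REVERSALS.findall(text))

print(f"assistant turns: {turns}, reversals per turn: {hits / max(turns, 1):.2f}")
```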
The word “simplest” in the model’s output is a signal that it is optimizing
for the least effort rather than evaluating the correct approach. With deep
thinking, the model evaluates multiple approaches and chooses the right one.
With shallow thinking, it gravitates toward whatever requires the least
reasoning to justify.
In one observed 2-hour window, the model used “simplest” 6 times while
producing code that its own later self-corrections described as “lazy and
wrong”, “rushed”, and “sloppy.” Each time, the model had chosen an approach
...
Read the original on github.com »
This is the first of a series of articles in which you will learn about what may be one of the silliest, most preventable, and most costly mishaps of the 21st century, where Microsoft all but lost OpenAI, its largest customer, and the trust of the US government.
I joined Azure Core on the dull Monday morning of May 1st, 2023, as a senior member of the Overlake R&D team, the folks behind the Azure Boost offload card and network accelerator.
I wasn’t new to Azure, having run what is likely the longest-running production subscription of this cloud service, which launched in February 2010 as Windows Azure.
I wasn’t new to Microsoft either, having been part of the Windows team since 1/1/2013 and having later helped migrate SharePoint Online to Azure before joining the Core OS team as a kernel engineer. There, I helped improve the kernel and helped invent and deliver the Container platform that supports Docker, Azure Kubernetes, Azure Container Instances, Azure App Services, and Windows Sandbox, all shipping technologies that resulted in multiple granted patents.
Furthermore, I contributed to brainstorming the early Overlake cards in 2020-2021, drafting a proposal for a Host OS Accelerator Card communication protocol and network stack, when all we had was a debugger’s serial connection. I also served as a Core OS specialist, helping Azure Core engineers diagnose deep OS issues.
I rejoined in 2023 as an Azure expert on day one, having contributed to the development of some of the technologies on which Azure relies and having used the platform for more than a decade, both outside and inside Microsoft at a global scale.
As a returning employee, I skipped the New Employee Orientation and had my Global Security invite for 12 noon to pick up my badge, but my future manager asked if I could come in earlier, as the team had their monthly planning meeting that morning.
I, of course, agreed and arrived a few minutes before 10 am at the entrance of the Studio X building, not far from The Commons on the West Campus in Redmond. A man showed up in the lobby and opened the door for me. I followed him to a meeting room through a labyrinth of corridors.
The room was chock-full, with more people on a live conference call. The dev manager, the leads, the architects, the principal and senior engineers shared the space with what appeared to be new hires and junior personnel.
The screen projected a slide where I recognized a number of familiar acronyms, like COM, WMI, perf counters, VHDX, NTFS, ETW, and a dozen others, mixed with new Azure-related ones, in an imbroglio of boxes linked by arrows.
I sat quietly at the back while a man was walking the room through a big porting plan of their current stack to the Overlake accelerator. As I listened, it was not immediately clear what that series of boxes with Windows user-mode and kernel components had to do with that plan.
After a few minutes, I risked a question: Are you planning to port those Windows features to Overlake? The answer was yes, or at least they were looking into it. The dev manager showed some doubt, and the man replied that they could at least “ask a couple of junior devs to look into it.”
The room remained silent for an instant. I had seen the hardware specs for the SoC on the Overlake card in my previous tenure: the RAM capacity and the power budget, which was just a tiny fraction of the TDP you can expect from a regular server CPU.
The hardware folks I had spoken with told me they could only spare 4KB of dual-ported memory on the FPGA for my doorbell shared-memory communication protocol.
Everything was nimble, efficient, and power-savvy, and the team I had joined 10 minutes earlier was seriously considering porting half of Windows to that tiny, fanless, Linux-running chip the size of a fingernail.
That felt like Elon talking about colonizing Mars: just nuke the poles then grow an atmosphere! Easier said than done, uh?
That entire 122-strong org was knee-deep in impossible ruminations involving porting Windows to Linux to support their existing VM management agents.
The man was a Principal Group Engineering Manager overseeing a chunk of the software running on each Azure node; his boss, a Partner Engineering Manager, was in the room with us, and they really contemplated porting Windows to Linux to support their current software.
At first, I questioned my understanding. Was that serious? The rest of the talk left no doubt: the plan was outlined, and the dev leads were tasked with contributing people to the effort. It was immediately clear to me that this plan would never succeed and that the org needed a lot of help.
That first hour in the new role left me with a mix of strange feelings, stupefaction, and incredulity.
The stack, I later learned, was hitting its scaling limits on a 400-watt Xeon at just a few dozen VMs per node, a far cry from the 1,024-VM limit I knew the hypervisor was capable of, and it was a noisy neighbor consuming so many resources that it caused jitter observable from the customer VMs.
There is no dimension in the universe where this stack would fit on a tiny ARM SoC and scale up by many factors. It was not going to happen.
I have seen a lot in my decades of industry (and Microsoft) experience, but I had never seen an organization so far from reality. My day-one problem was therefore not to ramp up on new technology, but rather to convince an entire org, up to my skip-skip-level, that they were on a death march.
Deep down, I knew it was going to be a fierce uphill battle. As you can imagine, it didn’t go well, as you will later learn.
I spent the next few days reading more about the plans, studying the current systems, and visiting old friends in Core OS, my alma mater. I was lost away from home in a bizarre territory where people made plans that didn’t make sense with the aplomb of a drunk LLM.
I notably spent more than 90 minutes chatting in person with the head of the Linux System Group, a solid scholar with a PhD from INRIA, who was among the folks who hired me on the kernel team years earlier.
His org is responsible for delivering Mariner Linux (now Azure Linux) and the trimmed-down distro running on the Overlake / Azure Boost card. He kindly answered all my questions, and I learned that they had identified 173 agents (one hundred seventy-three) as candidates for porting to Overlake.
I later researched this further and found that no one at Microsoft, not a single soul, could articulate why up to 173 agents were needed to manage an Azure node, what they all did, how they interacted with one another, what their feature set was, or even why they existed in the first place.
Azure sells VMs, networking, and storage at the core. Add observability and servicing, and you should be good. Everything else, SQL, K8s, AI workloads, and whatnot all build on VMs with xPU, networking, and storage, and the heavy lifting to make the magic happen is done by the good Core OS folks and the hypervisor.
How the Azure folks came up with 173 agents will probably remain a mystery, but it takes a serious amount of misunderstanding to get there, and this is also how disasters are built.
Now, fathom for a second that this pile of uncontrolled “stuff” is orchestrating the VMs running Anthropic’s Claude, what’s left of OpenAI’s APIs on Azure, SharePoint Online, the government clouds and other mission-critical infrastructure, and you’ll be close to understanding how a grain of sand in that fragile pileup can cause a global collapse, with serious National Security implications as well as potential business-ending consequences for Microsoft.
We are still far from the vaporized trillion in market cap, my letters to the CEO, to the Microsoft Board of Directors, and to the Cloud + AI EVP and their total silence, the quasi-loss of OpenAI, the breach of trust with the US government as publicly stated by the Secretary of Defense, the wasted engineering efforts, the Rust mandate, my stint on the OpenAI bare-metal team in Azure Core, the escort sessions from China and elsewhere, and the delayed features publicly implied as shipping since 2023, before the work even began.
If you’re running production workloads on Azure or relying on it for mission-critical systems, this story matters more than you think.
...
Read the original on isolveproblems.substack.com »
Live launch day updates for NASA’s Artemis II test flight will be published on this page. All times are Eastern.
The Orion spacecraft’s SAWs (solar array wings) have fully deployed, completing a key configuration step for the Artemis II mission. Flight controllers in Houston confirmed that all four wings unfolded as planned, locking into place and beginning to draw power.
Each solar array wing extends outward from the European Service Module, giving Orion, named Integrity, a wingspan of roughly 63 feet when fully deployed. Each wing has 15,000 solar cells to convert sunlight to electricity. The arrays can turn on two axes that allow them to rotate and track the Sun, maximizing power generation as the spacecraft changes attitude during its time in Earth orbit and on its outbound journey to the Moon.
The next major milestones are the PRM (perigee raise maneuver) and ARB (apogee raise burn) that will increase the lowest and highest points of the Orion spacecraft’s orbit and prepare the spacecraft for deep‑space operations.
Following the burns, NASA will hold a postlaunch news conference at 9 p.m. from Kennedy Space Center in Florida. Following the news conference, the Artemis II crew will begin preparations for Orion’s proximity operations demonstration. This demonstration will test the ability to manually maneuver Orion relative to another spacecraft, in this case, the interim cryogenic propulsion stage after separation.
Coverage on NASA+ will soon conclude; however, 24/7 coverage will continue on NASA’s YouTube channel. Keep following the Artemis blog for live updates of key milestones throughout the mission.
Main engine cutoff of the SLS (Space Launch System) core stage is complete, and the core stage has successfully separated from the interim cryogenic propulsion stage and the Orion spacecraft. This marks the end of the first major propulsion phase of the Artemis II mission and the transition to upper‑stage operations.
The next major milestone is the deployment of the spacecraft’s SAWs (solar array wings) scheduled to begin approximately 18 minutes after launch. Once extended, the four SAWs will provide continuous electrical power to the spacecraft throughout its journey, supporting life‑support systems, avionics, communications, and onboard operations. Deployment is a critical step in configuring Orion for the remainder of its time in Earth orbit and for the outbound trip to the Moon.
The spacecraft adapter jettison fairings that enclose the service module and the launch abort system have separated from the Orion spacecraft. With the rocket and spacecraft now flying above the densest layers of Earth’s atmosphere, Orion no longer requires the protective structures that shielded it during the early, high‑dynamic‑pressure portion of launch.
The next major milestone is core stage separation and Interim Cryogenic Propulsion Stage ignition.
The SLS (Space Launch System) twin solid rocket boosters have separated. The boosters, each standing 177 feet tall and generating more than 3.6 million pounds of thrust at liftoff, provide most of the rocket’s power during the first two minutes of flight; their separation reduces mass and allows the core stage to continue propelling the Orion spacecraft, named Integrity, toward orbit.
With the boosters now clear, the SLS core stage remains the primary source of thrust.
In about one minute, the spacecraft adapter jettison fairings that enclose Orion’s service module and the launch abort system will separate from the spacecraft.
6:35 p.m.
NASA’s Artemis II SLS (Space Launch System) rocket, with the Orion spacecraft atop carrying NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, lifted off from Kennedy Space Center’s Launch Complex 39B in Florida at 6:35 p.m. EDT to begin its journey to deep space.
The twin solid rocket boosters ignited first, delivering more than 75% of the thrust needed to lift the 5.75-million-pound rocket off the pad. Their combined power, along with the four RS-25 engines already at full thrust, generated an incredible 8.8 million pounds of force at liftoff. As the rocket rose, the umbilicals — which provided power, fuel, and data connections during prelaunch — disconnected and retracted into protective housings, ensuring the vehicle was free of ground systems and fully autonomous for flight.
The approximately 10-day Artemis II mission around the Moon is the first crewed flight under NASA’s Artemis campaign. It will help test the systems and hardware needed to continue sending astronauts on increasingly difficult missions to explore more of the Moon for scientific discovery, economic benefits, and to continue building toward the first crewed missions to Mars.
Below are the ascent milestones that will occur leading up to core stage separation. Times may vary by several seconds.
The Artemis II countdown has entered terminal count, and the ground launch sequencer has taken control, orchestrating a precise series of automated commands to prepare the SLS (Space Launch System) rocket and Orion spacecraft for liftoff at a T-0 time of 6:35 p.m. EDT.
The ground launch sequencer ensures that all systems – from propulsion to avionics – transition into flight mode. Key actions performed include pressurizing propellant tanks for optimal engine performance, activating flight software and switching control from ground to onboard systems, and performing final health checks across thousands of sensors to confirm readiness.
This automated sequence minimizes human intervention, reducing risk and ensuring synchronization across complex subsystems. For Artemis II, this moment marks the culmination of years of planning and testing, as the mission moves from ground operations to the threshold of launch.
See the list below of the terminal count milestones:
* T-4M — GLS is go for core stage auxiliary power unit (APU) start
Inside the terminal countdown, teams have a few options to hold the count if needed.
The launch team can hold at T-6 minutes for the duration of the launch window, less the 6 minutes needed to launch, without having to recycle back to T-10 minutes.
If teams need to stop the clock between T-6 minutes and T-1 minute, 30 seconds, they can hold for up to 3 minutes and resume the clock to launch. If they require more than 3 minutes of hold time, the countdown would recycle back to T-10.
If the clock stops after T-1 minute and 30 seconds, but before the automated launch sequencer takes over, then teams can recycle back to T-10 to try again, provided there is adequate launch window remaining.
After handover to the automated launch sequencer, any issue that would stop the countdown would lead to concluding the launch attempt for that day.
Artemis II Launch Director Charlie Blackwell-Thompson conducted one of the most important steps before liftoff: the “go/no-go” poll for the team to proceed with the final 10 minutes of the countdown known as terminal count.
A unanimous “go” across the board signals that Artemis II is fully prepared to proceed toward launch. This moment represents the culmination of years of planning and hours of meticulous pre-launch work, bringing the mission to the threshold of history.
The launch team has made the decision to extend the T-10 minute hold ahead of today’s launch to give engineers time to work through final preparations for liftoff. There is a two-hour window in which Artemis II could launch, and a new liftoff time will be set shortly.
NASA’s Artemis II closeout crew completed its final tasks and departed Launch Complex 39B at NASA’s Kennedy Space Center in Florida. After hours of meticulous work assisting the astronauts with suit-up, hatch closure, and critical spacecraft checks, the team exited the White Room and left the Orion spacecraft sealed and ready for flight.
This departure marks a major transition in launch operations: the spacecraft is now fully configured, and responsibility shifts to the launch control team for the final countdown. The closeout crew’s precision and expertise ensure that every connection, seal, and system is verified before they step away – making this moment a key milestone on the path to liftoff.
Engineers investigated a sensor on the launch abort system’s attitude control motor controller battery that showed a higher temperature than would be expected. It is believed to be an instrumentation issue and will not affect today’s launch.
The weather continues to cooperate and has now been upgraded to 90% go for launch.
Engineers have now resolved an issue with the hardware that communicates with the flight termination system that would have prevented the ground from sending a signal to destruct the rocket if it were to veer off course during ascent, to protect public safety. A confidence test was performed to ensure that the hardware is ready to support today’s launch.
Meanwhile, technicians have completed the launch abort system hatch closure – an essential step that ensures the Orion spacecraft is fully sealed and ready for flight. The hatch provides an additional protective barrier for the crew module, designed to safeguard astronauts during the Artemis II flight path and, if necessary, enable a rapid escape in the event of an emergency.
During this phase, the closeout team verifies hatch alignment, engages locking mechanisms, and confirms pressure integrity. These checks guarantee that the launch abort system hatch can perform its function flawlessly, maintaining structural integrity under extreme launch conditions. With the hatch secured, Orion enters its final configuration for liftoff, marking one of the last major milestones before fueling and launch.
Although the countdown to today’s Artemis II launch is continuing to progress, the Eastern Range has identified an issue that they are currently working to resolve related to their communication with the flight termination system. The flight termination system is a safety system that allows engineers on the ground to send a signal to destruct the rocket if it were to veer off course during ascent, to protect public safety. Without assurance that this system would work if needed, today’s launch would be no-go. However, engineers have devised a way to verify the system and are currently preparing to test this solution.
Technicians began installing the crew module hatch service panel on the Orion spacecraft, an important step in final launch preparations. This panel protects key connections and ensures the hatch area is secure for flight.
As part of current closeout activities, teams are confirming all systems around the hatch are properly sealed and ready for the mission.
With the hatch area secured, teams will continue final checks and countdown operations at Launch Pad 39B at NASA’s Kennedy Space Center in Florida, bringing us closer to sending astronauts on a historic journey around the Moon.
NASA engineers have conducted counterbalance mechanism operations and are now performing hatch seal pressure decay checks inside the White Room at Launch Complex 39B. These steps ensure Orion’s hatch maintains proper pressure integrity and that the counterbalance system functions as designed for launch conditions.
The counterbalance mechanism is a precision-engineered assembly that offsets the weight of the crew module hatch, allowing technicians to open and close it smoothly without introducing stress on the hinge or seal. This system uses calibrated springs and dampers to maintain alignment and prevent sudden movements, which is essential for preserving the hatch’s airtight seal. During this phase, technicians verify the mechanism’s load distribution and confirm that its locking features engage correctly under simulated launch loads.
Following these adjustments, the team performs seal pressurization decay checks – monitoring pressure loss over time to confirm the hatch’s integrity. These checks are vital for astronaut safety, ensuring the cabin remains secure in all mission phases.
NASA’s Artemis II closeout crew is now completing one of the most critical steps before launch: preparing and closing the crew module hatch to the Orion spacecraft. Inside the White Room at Launch Complex 39B, the closeout crew is working meticulously to inspect seals, secure fasteners, and verify that the hatch is airtight.
This process ensures Orion is fully pressurized and ready for flight. Once the hatch is closed and locked, the astronauts are officially sealed inside their spacecraft, marking a major milestone on the path to liftoff.
NASA’s Artemis II crew members are boarding the agency’s Orion spacecraft to begin communication checks to confirm voice links with mission control and onboard systems.
Before entering the spacecraft that will be their home on the approximately 10-day journey around the Moon and back, all four crewmates signed the inside of the White Room, an area at the end of the crew access arm that provides access to the spacecraft. The term “White Room” dates to NASA’s Gemini program, and to honor this human spaceflight tradition, the room remains white today.
The Artemis II closeout crew is now working to help the astronauts enter the Orion spacecraft and make final preparations for their nearly 700,000-mile trip to the Moon and back. As part of the process, the closeout crew is helping the astronauts don their Orion Crew Survival System helmets and gloves, as well as board Orion and get buckled in.
A short time from now, the closeout crew will close the crew module and exterior launch abort system hatches. Even a single strand of hair inside the hatch doors could potentially pose issues with closing either hatch, so the process is carefully done and takes up to four hours. Each step in the closeout process ensures airtight seals and communication readiness for the mission ahead.
Following communication checks, the team performed suit leak checks – a vital safety procedure ensuring each pressure suit maintains integrity in case of cabin depressurization. These operations are essential for crew readiness and mission assurance, marking one of the final phases before hatch closure and launch preparations.
With assistance from the closeout crew, the Artemis II crew are carefully donning their helmets and gloves – finalizing suit integrity checks before boarding the Orion spacecraft.
This step is more than ceremonial; it ensures airtight seals and communication readiness for the mission ahead. The closeout crew plays a vital role, guiding the astronauts through these procedures and confirming every connection is secure before hatch closure.
Stay tuned as we continue to follow the Artemis II team through each countdown milestone on their path to liftoff.
NASA’s Artemis II crew, NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, arrived at Launch Complex 39B at the agency’s Kennedy Space Center in Florida, where the agency’s SLS (Space Launch System) rocket with Orion spacecraft atop stands ready for launch. The opening of today’s launch window is slated for just over 4 hours from now, at 6:24 p.m. EDT.
In the next few minutes, the crew will take the elevator up the pad’s fixed service structure and walk down the climate-controlled crew access arm to the White Room, their final stop before climbing aboard their Orion spacecraft. In this clean, controlled environment at the end of the crew access arm, the closeout crew will assist the astronauts with hatch operations and verify that all safety systems are ready for launch.
Since the late 1960s, pads A and B at Kennedy’s Launch Complex 39 have supported America’s major space programs, with Pad A used most frequently for launches under the Space Shuttle Program. After the retirement of the shuttle in 2011, Pad A helped usher in a new era of human spaceflight as the launch pad for the agency’s Commercial Crew Program, which returned human spaceflight capability to the United States. Pad B saw the launch of NASA’s Artemis I mission in November 2022 and will continue to be the primary launch pad for America’s efforts to return humans to the Moon.
Just moments ago, NASA’s Artemis II flight crew began the walk that every NASA astronaut has made since Apollo 7 in 1968, heading to the elevator and down through the double doors below the Neil A. Armstrong Building’s Astronaut Crew Quarters at NASA’s Kennedy Space Center in Florida.
Before they left the suit-up room, the crew completed one last piece of unfinished business — a card game. In a long-held spaceflight tradition, NASA crews play cards before leaving the crew quarters ahead of launch until the commander, in this instance NASA astronaut Reid Wiseman, loses. It is hoped that by losing, the commander burns off all his or her bad luck, thereby clearing the mission for only good luck.
NASA’s Artemis II is the first crewed mission of the Artemis program and will carry Wiseman and fellow NASA astronauts Victor Glover and Christina Koch, as well as CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day mission around the Moon and back to Earth.
The first crewed deep-space flight in over 50 years, Artemis II is expected to send the crew farther from Earth than any previous human mission, potentially breaking the record of about 248,655 miles (400,171 km) from Earth set by Apollo 13 during its lunar free-return trajectory. This milestone will occur during the lunar flyby phase, when the crew travels on a free-return trajectory around the Moon, which allows the spacecraft to loop around the Moon and return to Earth without entering lunar orbit.
During the test flight, NASA will test life-support systems and critical operations in deep space, paving the way for future lunar landings and Mars exploration.
Having received goodbyes and well wishes from their families and friends, the crew embarks on the 20-minute journey to Kennedy’s Launch Pad 39B and their awaiting spacecraft.
NASA’s pad rescue and closeout crew teams have arrived at Launch Complex 39B at the agency’s Kennedy Space Center in Florida to ensure safety and readiness during the critical fueling operations. These specialized teams play a vital role in protecting personnel and hardware throughout the countdown.
The pad rescue team will be positioned to respond immediately in the unlikely event of an emergency, ensuring safe evacuation procedures for pad personnel. The rescue team is equipped with advanced gear and trained for rapid crew extraction, fire suppression, and hazard mitigation. Their presence ensures astronaut safety remains the top priority, providing an all-important layer of protection as fueling operations and system checks continue.
The closeout crew is responsible for closing the Orion crew module and launch abort system hatches, securing access points, verifying pad configurations, and maintaining the integrity of the launch area during propellant loading and system checks. Their work is critical for guaranteeing a secure environment for the astronauts before the launch pad is cleared for liftoff operations.
These teams are essential for mitigating risk and supporting the complex choreography of Artemis II’s prelaunch activities. With both teams in place, Artemis II remains on track for its historic mission to send astronauts around the Moon.
NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, are suiting up inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency’s Kennedy Space Center in Florida.
A team of suit technicians helps the crew put on their Orion Crew Survival System suits, which are each tailored for mobility and comfort while ensuring maximum safety during the dynamic phases of flight. The bright orange spacesuits are designed to protect the astronauts on their journey and feature many improvements, from head to toe, over the suits worn on the space shuttle. NASA reengineered many elements to improve safety and range of motion for Artemis astronauts, and instead of the small, medium, and large sizes from the shuttle era, the suits are custom fit for each crew member.
The outer layer is fire-resistant, and a stronger zipper allows astronauts to quickly put the suit on. Improved thermal management will help keep them cool and dry. A lighter, stronger helmet improves comfort and communication, and the gloves are more durable and touch-screen compatible. Better-fitting boots also provide protection in the case of fire and help an astronaut move more swiftly.
The suits’ design and engineering enhancements provide an additional layer of protection for astronauts and ensure they return home safely from deep space missions.
During suit-up, teams will check for leaks and ensure that all connecting life support systems, including air and power, are operating nominally ahead of the crew’s ride to NASA Kennedy’s Launch Complex 39B.
With NASA teams now maintaining the liquid oxygen levels in the interim cryogenic propulsion stage, all cryogenic stages of the SLS (Space Launch System) rocket have transitioned to replenish mode during the Artemis II launch countdown. This includes the core stage and SLS upper stage, ensuring both liquid hydrogen and liquid oxygen tanks remain at flight-ready levels.
Replenish mode is essential for maintaining stable propellant quantities and pressure as super-cold fuels naturally boil off over time. Continuous adjustments keep the rocket fully fueled and ready for ignition, supporting the RS-25 engines on the core stage and the RL10 engine on the SLS upper stage for their essential roles in launch and translunar injection.
These milestones coincide with the Artemis II countdown entering a planned 1-hour and 10-minute built-in hold. This scheduled pause allows teams to complete crucial system checks, verify launch readiness, and address any last-minute adjustments before proceeding toward crew ingress and final fueling operations.
During this hold, engineers review data from cryogenic loading, propulsion systems, and communications to ensure all parameters meet strict safety and performance criteria. The hold also provides flexibility for resolving minor issues without impacting the overall launch timeline.
Once the hold concludes, the countdown will resume with preparations for astronaut arrival at Launch Pad 39B at NASA’s Kennedy Space Center in Florida.
NASA’s Artemis II astronauts received a final weather briefing inside the Astronaut Crew Quarters of the Neil A. Armstrong Operations and Checkout Building at the agency’s Kennedy Space Center in Florida, as part of prelaunch preparations.
This weather update provides astronauts and mission teams with the latest conditions at NASA Kennedy’s Launch Pad 39B, the surrounding recovery zones, and potential abort sites along Artemis II’s flight path. Accurate weather forecasting is essential for protecting crew and hardware, as even minor changes can impact countdown decisions and flight dynamics.
NASA astronauts Reid Wiseman, commander; Victor Glover, pilot; and Christina Koch, mission specialist; along with CSA (Canadian Space Agency) astronaut Jeremy Hansen, mission specialist, were briefed on wind speeds, precipitation, lightning risk, and sea states for splashdown contingencies, ensuring all safety criteria are met before proceeding with launch operations.
Weather officials with NASA and the U.S. Space Force’s Space Launch Delta 45 are tracking 80% favorable conditions during the launch window, with primary concerns being the cumulus cloud rule, flight through precipitation rule, and ground winds.
With the weather briefing complete, the crew and ground teams remain aligned and ready to continue toward liftoff, keeping Artemis II on track for its historic mission to send astronauts around the Moon.
NASA teams have also begun the liquid oxygen (LOX) topping process for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage, during the Artemis II launch countdown. This step follows the fast fill phase and ensures the liquid oxygen tank reaches full capacity with super-cold oxidizer.
Live coverage of Artemis II tanking operations continues on NASA’s YouTube channel. NASA’s full launch coverage begins at 1 p.m. EDT on NASA+, Amazon Prime, and YouTube. You can continue to follow the Artemis blog from launch to splashdown for mission updates.
Liquid oxygen (LOX) fast fill is now complete for the SLS (Space Launch System) upper stage, marking another major milestone in tanking operations. Teams have confirmed the upper stage is in good shape and are proceeding with the LOX vent and relief test. This step helps verify proper pressure regulation and ensures the system is ready to transition into topping and, later, replenish operations.
NASA teams are now maintaining the liquid oxygen levels in the SLS (Space Launch System) rocket core stage through replenish mode. This phase follows the completion of liquid oxygen fast fill and topping, ensuring the oxidizer remains at flight-ready levels throughout the final countdown.
NASA teams are in fast fill of liquid oxygen (LOX) into the interim cryogenic propulsion stage as part of the Artemis II launch countdown. This phase rapidly loads the oxidizer after chilldown is complete, bringing the SLS (Space Launch System) rocket upper stage closer to full readiness for its role in sending the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion’s translunar injection burn.
NASA teams have transitioned the interim cryogenic propulsion stage liquid hydrogen tank to replenish mode during the Artemis II countdown. This phase follows the successful topping process and ensures the tank remains at flight-ready levels all the way to launch.
NASA teams have begun the topping phase for the interim cryogenic propulsion stage liquid hydrogen (LH2) tank. This critical step occurs after successful chilldown and vent-and-relief checks, ensuring the tank reaches full capacity with super-cold liquid hydrogen.
Replenish is the final step in the fueling process, designed to maintain the correct LH2 levels as the super-cold propellant naturally boils off over time. This continuous, low-rate flow keeps the tanks topped off and thermally stable, ensuring the rocket remains fully fueled and ready for liftoff.
From chilldown to replenish, every phase of fueling is carefully managed to protect hardware and guarantee mission success. With replenish underway, Artemis II is in its final stretch toward launch and humanity’s next giant leap.
Topping is the process of adding small amounts of LH2 to the tanks after fast fill is complete, ensuring they remain at full capacity as the super-cold propellant naturally boils off. This step is critical for maintaining the precise levels needed for launch while keeping the system thermally stable.
The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.
After completing the chilldown phase, this step rapidly loads super-cold LH2 into the SLS upper stage tanks, ensuring the upper stage is fueled and ready to perform its fundamental role of raising the Orion spacecraft into a high Earth orbit ahead of a proximity operations demonstration test and Orion’s translunar injection burn.
Fast fill accelerates the fueling process while maintaining safety, marking another major milestone in the countdown as Artemis II moves closer to liftoff.
The Artemis II launch team has begun the liquid hydrogen chilldown for the interim cryogenic propulsion stage, or SLS (Space Launch System) rocket upper stage.
This process gradually cools the interim cryogenic propulsion stage fuel lines and components to cryogenic temperatures using super-cold liquid hydrogen. The chilldown step is essential to prevent thermal shock and ensure the stage is properly conditioned for full propellant loading. By stabilizing the system at these extreme temperatures, engineers guarantee safe and efficient fueling for the upper stage that will help position Orion into high Earth orbit for its journey toward the Moon.
NASA astronauts Reid Wiseman, Victor Glover, and Christina Koch, along with CSA (Canadian Space Agency) astronaut Jeremy Hansen have officially begun their launch day with a scheduled wake-up call at 9:25 a.m., marking the start of their final preparations for the historic Artemis II mission around the Moon.
The Artemis II launch team transitioned to the fast fill of liquid hydrogen (LH2) into the SLS (Space Launch System) rocket core stage.
...
Read the original on www.nasa.gov »
Artemis II is NASA’s first crewed mission under the Artemis program and will launch from the agency’s Kennedy Space Center in Florida. It will send NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen on an approximately 10-day journey around the Moon. Among its objectives, the agency will test the Orion spacecraft’s life support systems for the first time with people and lay the groundwork for future crewed Artemis missions.
...
Read the original on plus.nasa.gov »
The first thing I usually do when I pick up a new codebase isn’t opening the code. It’s opening a terminal and running a handful of git commands. Before I look at a single file, the commit history gives me a diagnostic picture of the project: who built it, where the problems cluster, whether the team is shipping with confidence or tiptoeing around land mines.
The 20 most-changed files in the last year. The file at the top is almost always the one people warn me about. “Oh yeah, that file. Everyone’s afraid to touch it.”
High churn on a file doesn’t mean it’s bad. Sometimes it’s just active development. But high churn on a file that nobody wants to own is the clearest signal of codebase drag I know. That’s the file where every change is a patch on a patch. The blast radius of a small edit is unpredictable. The team pads their estimates because they know it’s going to fight back.
A 2005 Microsoft Research study found churn-based metrics predicted defects more reliably than complexity metrics alone. I take the top 5 files from this list and cross-reference them against the bug hotspot command below. A file that’s high-churn and high-bug is your single biggest risk.
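The exact invocation isn't reproduced in this excerpt; a typical version of the churn query looks something like this (flags may differ from the author's):

```sh
# Top 20 most-changed files in the last year, merges excluded
git log --since="1 year ago" --no-merges --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn | head -20
```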
Every contributor ranked by commit count. If one person accounts for 60% or more, that’s your bus factor. If they left six months ago, it’s a crisis. If the top contributor from the overall shortlog doesn’t appear in a 6-month window (git shortlog -sn --no-merges --since="6 months ago"), I flag that to the client immediately.
I also look at the tail. Thirty contributors but only three active in the last year. The people who built this system aren’t the people maintaining it.
One caveat: squash-merge workflows compress authorship. If the team squashes every PR into a single commit, this output reflects who merged, not who wrote. Worth asking about the merge strategy before drawing conclusions.
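The shortlog invocations themselves are simple; a sketch, with the 6-month form being the one mentioned above and the all-time form an assumption:

```sh
# All-time contributor ranking by commit count, merges excluded
git shortlog -sn --no-merges

# Same ranking restricted to recent activity, to spot a departed top contributor
git shortlog -sn --no-merges --since="6 months ago"
```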
Same shape as the churn command, filtered to commits with bug-related keywords. Compare this list against the churn hotspots. Files that appear on both are your highest-risk code: they keep breaking and keep getting patched, but never get properly fixed.
This depends on commit message discipline. If the team writes “update stuff” for every commit, you’ll get nothing. But even a rough map of bug density is better than no map.
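One rough way to build that map, assuming fix commits tend to mention words like "fix", "bug", or "hotfix" (tune the keywords to the team's conventions):

```sh
# Files most often touched by bug-fix commits in the last year
git log --since="1 year ago" --no-merges -i \
  --grep="fix" --grep="bug" --grep="hotfix" \
  --name-only --pretty=format: \
  | grep -v '^$' | sort | uniq -c | sort -rn | head -20
```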
Commit count by month, for the entire history of the repo. I scan the output looking for shapes. A steady rhythm is healthy. But what does it look like when the count drops by half in a single month? Usually someone left. A declining curve over 6 to 12 months tells you the team is losing momentum. Periodic spikes followed by quiet months means the team batches work into releases instead of shipping continuously.
I once showed a CTO their commit velocity chart and they said “that’s when we lost our second senior engineer.” They hadn’t connected the timeline before. This is team data, not code data.
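A sketch of the monthly-count query; the date formatting assumes a git version that supports --date=format::

```sh
# Commits per month across the entire history, merges excluded
git log --no-merges --date=format:"%Y-%m" --pretty=format:"%ad" | sort | uniq -c
```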
Revert and hotfix frequency. A handful over a year is normal. Reverts every couple of weeks means the team doesn’t trust its deploy process. They’re evidence of a deeper issue: unreliable tests, missing staging, or a deploy pipeline that makes rollbacks harder than they should be. Zero results is also a signal; either the team is stable, or nobody writes descriptive commit messages.
Crisis patterns are easy to read. Either they’re there or they’re not.
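A sketch of the revert/hotfix query; as with the bug hotspots, the keywords are assumptions about how the team writes commit messages:

```sh
# Reverts and hotfixes over the last year
git log --since="1 year ago" --oneline -i --grep="revert" --grep="hotfix"

# Or just count them
git log --since="1 year ago" --oneline -i --grep="revert" --grep="hotfix" | wc -l
```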
These five commands take a couple minutes to run. They won’t tell you everything. But you’ll know which code to read first, and what to look for when you get there. That’s the difference between spending your first day reading the codebase methodically and spending it wandering.
This is the first hour of what I do in a codebase audit. Here’s what the rest of the week looks like.
...
Read the original on piechowski.io »
Artemis II is now on a looping path that will carry the crew around the far side of the Moon and back again. It is the first time since 1972 that humans have travelled beyond low Earth orbit.
...
Read the original on www.bbc.com »