In June I had a conversation with OpenAI chief scientist Ilya Sutskever at the company’s headquarters, while reporting WIRED’s October cover story. Among the topics we discussed was the unusual structure of the company.
OpenAI began as a nonprofit research lab whose mission was to develop artificial intelligence on par with or beyond human level—termed artificial general intelligence, or AGI—in a safe way. The company discovered a promising path in large language models that generate strikingly fluid text, but developing and deploying those models required huge amounts of computing infrastructure and mountains of cash. This led OpenAI to create a commercial entity to draw outside investors, and it netted a major partner: Microsoft. Virtually everyone in the company worked for this new for-profit arm. But limits were placed on the company’s commercial life. The profit delivered to investors was to be capped—for the first backers at 100 times what they put in—after which OpenAI would revert to a pure nonprofit. The whole shebang was governed by the original nonprofit’s board, which answered only to the goals of the original mission and maybe God.
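Mechanically, the cap is just a ceiling on investor payouts. A toy sketch of the idea (the 100x multiple comes from the reporting above; the function name and dollar figures are purely illustrative):

```python
def investor_payout(invested: float, gross_return: float,
                    cap_multiple: float = 100.0) -> float:
    """Payout under a capped-profit structure: any returns above the
    cap stay with the nonprofit instead of flowing to the investor."""
    return min(gross_return, invested * cap_multiple)

# A hypothetical first backer puts in $10 million. Even if that stake
# would otherwise be worth $5 billion, the cap limits the payout to
# $1 billion; the remaining $4 billion reverts to the nonprofit.
print(investor_payout(10e6, 5e9))  # 1000000000.0
```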
Sutskever did not appreciate it when I joked that the bizarre org chart that mapped out this relationship looked like something a future GPT might come up with when prompted to design a tax dodge. “We are the only company in the world which has a capped profit structure,” he admonished me. “Here is the reason it makes sense: If you believe, like we do, that if we succeed really well, then these GPUs are going to take my job and your job and everyone’s jobs, it seems nice if that company would not make truly unlimited amounts of returns.” In the meantime, to make sure that the profit-seeking part of the company doesn’t shirk its commitment to making sure that the AI doesn’t get out of control, there’s that board, keeping an eye on things.
This would-be guardian of humanity is the same board that fired Sam Altman last Friday, saying that it no longer had confidence in the CEO because “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” No examples of that alleged behavior were provided, and almost no one at the company knew about the firing until just before it was publicly announced. Microsoft CEO Satya Nadella and other investors got no advance notice. The four directors, representing a majority of the six-person board, also kicked OpenAI president and chairman Greg Brockman off the board. Brockman quickly resigned.
After speaking to someone familiar with the board’s thinking, it appears to me that in firing Altman the directors believed they were executing their mission of making sure the company develops powerful AI safely—the board’s sole reason for existing. Increasing profits or ChatGPT usage, maintaining workplace comity, and keeping Microsoft and other investors happy were not their concern. In the view of directors Adam D’Angelo, Helen Toner, and Tasha McCauley—and Sutskever—Altman didn’t deal straight with them. Bottom line: The board no longer trusted Altman to pursue OpenAI’s mission. If the board can’t trust the CEO, how can it protect or even monitor progress on the mission?
I can’t say whether Altman’s conduct truly endangered OpenAI’s mission, but I do know this: The board seems to have missed the possibility that a poorly explained execution of a beloved and charismatic leader might harm that mission. The directors appear to have thought that they would give Altman his walking papers and unfussily slot in a replacement. Instead, the consequences were immediate and volcanic. Altman, already something of a cult hero, became even more revered in this new narrative. He did little or nothing to dissuade the outcry that followed. To the board, Altman’s effort to reclaim his post, and the employee revolt of the past few days, is a kind of vindication that it was right to dismiss him. Clever Sam is still up to something! Meanwhile, all of Silicon Valley blew up, tarnishing OpenAI’s status, maybe permanently.
Altman’s fingerprints do not appear on the open letter released yesterday and signed by more than 95 percent of OpenAI’s roughly 770 employees that says the directors are “incapable of overseeing OpenAI.” It says that if the board members don’t reinstate Altman and resign, the workers who signed may quit and join a new advanced AI research division at Microsoft, formed by Altman and Brockman. This threat did not seem to dent the resolve of the directors, who apparently felt like they were being asked to negotiate with terrorists. Presumably one director feels differently—Sutskever, who now says he regrets his actions. His signature appears on the you-quit-or-we’ll-quit letter. Sutskever has apparently deleted his distrust of Altman, and the two have been sending love notes to each other on X, the platform owned by a fellow OpenAI cofounder, now estranged from the project.
Venture capitalists and employees could now get some return on the money or sweat that they invested in the company—but the nonprofit’s board still maintained ultimate say over the for-profit business through several new legal provisions, according to OpenAI.
The directors’ primary fiduciary duty remained upholding the nonprofit’s mission: the safe development of artificial general intelligence that benefits all of humanity. Only a minority of directors could hold financial stakes in the for-profit company, and the for-profit’s founding documents require that it give priority to public benefits over maximizing profits.
The revised structure unlocked a torrent of funding to OpenAI, in particular from Microsoft, ultimately allowing OpenAI to marshal the cloud computing power needed to create ChatGPT.
Among the new board members helming this unique structure was Shivon Zilis, a longtime associate of Elon Musk and later mother of twins with the entrepreneur, who joined in 2019 after serving as an adviser. Will Hurd, a former Republican congressman, signed up in 2021.
Concentration of Power
In 2023, OpenAI’s board started to shrink, narrowing its bench of experience and setting up the conditions for Altman’s ouster. LinkedIn cofounder Reid Hoffman left in January, according to his LinkedIn profile, and he later cited potential conflicts of interest with other AI investments. Zilis resigned in March, and Hurd in July to focus on an unsuccessful run for US president.
Those departures shrank OpenAI’s board to just six directors, one fewer than the maximum allowed in its original bylaws. With Brockman, Sutskever, and Altman still members, the group was evenly split between OpenAI executives and outsiders—no longer majority independent, as Altman had testified to US senators just weeks earlier.
The dramatic turn came Friday when, according to Brockman, chief scientist Sutskever informed him and Altman about their removals from the board shortly before a public announcement of the changes, which also included Altman’s firing as CEO because “he was not consistently candid in his communications with the board.” Brockman subsequently resigned from his role as OpenAI’s president. Sutskever reportedly had been concerned about his diminished role inside OpenAI and Altman’s fast-paced commercialization of its technologies.
The leadership upheaval threw OpenAI into crisis, but arguably the board functioned as intended—as an entity independent of the for-profit company and empowered to act as it sees necessary to accomplish the project’s overall mission. Sutskever and the three independent directors would form the majority needed to make changes without notice under the initial bylaws. Those rules allow for removals of any director, including the chair, at any time by fellow directors with or without cause.
Altman’s ouster shows an organization that was meant to align superintelligent AI with humanity failing to align the values of even its own board members and leadership. Adding a profit-seeking component to the nonprofit project turned it into an AI powerhouse. Launching products was supposed to provide not only profits but also opportunities to learn how to better control and develop beneficial AI. Now it’s unclear whether the current leadership thinks that can be done without breaching the project’s original promise to create AGI safely.
As interim CEO, Mira Murati faces the challenge of convincing OpenAI’s staff and backers that the company still has a workable philosophy for developing AI. She must also feed the company’s hunger for cash to operate the expansive infrastructure behind projects like ChatGPT. At the time he was pink-slipped, Altman was reportedly seeking billions in new investment, in a funding round to be led by Thrive Capital. The company is undoubtedly less attractive to funders than it was only 24 hours ago. (Thrive’s CEO, Joshua Kushner, did not respond to an email.)
In addition, anyone whose CEO nameplate includes the tag “interim” will face additional hurdles in anything they do. The sooner OpenAI appoints a permanent leader, the better.
Whoever OpenAI’s new leader turns out to be, they look set to inherit a team riven over whether to stand with the current leaders, Sutskever and Murati, or the departed bosses, Altman and Brockman. One of the three researchers reported to have quit over the putsch is director of research Jakub Pachocki, a coinventor of GPT-4—a crucial loss, and we can expect more to follow.
OpenAI may now be at a severe disadvantage in the fierce race for AI talent. Top researchers are being courted with multimillion-dollar compensation packages, but for the most passionate, money is secondary to the question of how more powerful AI should be developed and deployed. If OpenAI is seen as a place riddled with palace intrigue that distracts from deciding how best to create and disseminate humanity’s most consequential invention, top talent will be reluctant to commit. Elite researchers might instead look to Anthropic, an AI developer started by ex-OpenAI employees in 2021—or to whatever new project Altman and Brockman launch.
Altman’s trajectory until now has been a classic hero’s journey in the Joseph Campbell sense. From the moment I first met him, when he came to my Newsweek office in 2007 as CEO of a startup called Loopt, he exuded a burning passion to take on technology’s biggest challenges, along with a striking personal humility. When I accompanied him in London this year during his whirlwind tour to promote “human-positive” AI—and yet also recommend that it be regulated to prevent disaster—I saw him addressing crowds, posing for selfies, and even engaging a few protesters to hear out their concerns. But I also sensed that the task was stressful, possibly triggering one of his periodic migraine headaches, like the one he fought off when testifying before the Senate.
Just last week, Altman appeared to have mastered the prodigious challenges that came with his new power and prominence. At OpenAI’s developer day on November 6, he was confident and meticulously rehearsed as he introduced a raft of new products, laying claim to the technosphere’s ultimate peacock perch: a showman unveiling mind-bending advances in the mode of Steve Jobs. It seemed that Altman finally felt at home in the spotlight. But then the lights went out. Sam Altman will have to create AGI somewhere else. OpenAI may still be in the hunt—but only after it picks up the pieces.
Meta’s WhatsApp messaging service, as well as the encrypted platform Signal, threatened to leave the UK over the proposals.
Ofcom’s proposed rules say that public platforms—those that aren’t encrypted—should use “hash matching” to identify CSAM. That technology, which is already used by Google and others, compares images to a preexisting database of illegal images using cryptographic hashes—essentially, encrypted identity codes. Advocates of the technology, including child protection NGOs, have argued that this preserves users’ privacy as it doesn’t mean actively looking at their images, merely comparing hashes. Critics say that it’s not necessarily effective, as it’s relatively easy to deceive the system. “You only have to change one pixel and the hash changes completely,” Alan Woodward, professor of cybersecurity at Surrey University, told WIRED in September, before the act became law.
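Woodward’s point is easy to demonstrate with an ordinary cryptographic hash. A minimal sketch in Python (the byte strings below stand in for image files, and plain SHA-256 stands in for the hash; real deployments such as Google’s rely on more robust perceptual-hashing schemes):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw image bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Toy "database" holding only the hash of a known image, never the image.
known_hashes = {sha256_hex(b"original image bytes")}

def matches_known(data: bytes) -> bool:
    # Comparison happens on digests alone, preserving some privacy.
    return sha256_hex(data) in known_hashes

original = b"original image bytes"
altered = b"original image bytez"  # a single byte changed

print(matches_known(original))  # True: an exact copy matches
print(matches_known(altered))   # False: the digest changes completely
```

Because flipping even one bit of the input yields an unrelated digest, a trivially edited copy of a flagged image sails past an exact-hash check, which is why critics question the approach’s effectiveness.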
It is unlikely that the same technology could be used in private, end-to-end encrypted communications without undermining those protections.
In 2021, Apple said it was building a “privacy preserving” CSAM detection tool for iCloud, based on hash matching. In December last year, it abandoned the initiative, later saying that scanning users’ private iCloud data would create security risks and “inject the potential for a slippery slope of unintended consequences. Scanning for one type of content, for instance, opens the door for bulk surveillance and could create a desire to search other encrypted messaging systems across content types.”
Andy Yen, founder and CEO of Proton, which offers secure email, browsing, and other services, says that discussions about the use of hash matching are a positive step “compared to where the Online Safety [Act] started.”
“While we still need clarity on the exact requirements for where hash matching will be required, this is a victory for privacy,” Yen says. But, he adds, “hash matching is not the privacy-protecting silver bullet that some might claim it is and we are concerned about the potential impacts on file sharing and storage services…Hash matching would be a fudge that poses other risks.”
The hash-matching rule would apply only to public services, not private messengers, according to Ofcom’s Gill Whitehead. But “for those [encrypted] services, what we are saying is: ‘Your safety duties still apply,’” she says. These platforms will have to deploy or develop “accredited” technology to limit the spread of CSAM, and further consultations will take place next year.
“The framework enables a set of binding requirements for federal agencies to put in place safeguards for the use of AI so that we can harness the benefits and enable the public to trust the services the federal government provides,” says Jason Miller, OMB’s deputy director for management.
The draft memo highlights certain uses of AI where the technology can harm rights or safety, including health care, housing, and law enforcement—all situations where algorithms have in the past resulted in discrimination or denial of services.
Examples of potential safety risks mentioned in the OMB draft include automation for critical infrastructure like dams and self-driving vehicles like the Cruise robotaxis that were shut down last week in California and are under investigation by federal and state regulators after a pedestrian, struck by another car, was dragged 20 feet by one of the robotaxis. Examples of how AI could violate citizens’ rights in the draft memo include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant-screening algorithms, and systems that can impact immigration or child custody.
According to OMB, federal agencies currently use more than 700 algorithms, though inventories provided by federal agencies are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the weeks and months ahead, we’re going to improve agencies’ abilities to identify and report on their use cases,” he says.
Vice President Kamala Harris mentioned the OMB memo alongside other responsible AI initiatives in remarks today at the US Embassy in London, during a trip for the UK’s AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, like the role AI could someday play in cyberattacks or the creation of biological weapons, bias and misinformation are already being amplified by AI and affecting individuals and communities daily.
Merve Hickok, author of a forthcoming book about AI procurement policy and a researcher at the University of Michigan, welcomes how the OMB memo would require agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure AI doesn’t get hammered into every government program, she says.
But the provision of waivers could undermine those mechanisms, she fears. “I would be worried if we start seeing agencies use that waiver extensively, especially law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver it can be indefinite.”