On Friday, the board of directors of the nonprofit organization overseeing AI company OpenAI shocked the tech world by firing CEO Sam Altman and removing president Greg Brockman from the board. Brockman resigned hours later.
The move was even more of a surprise given the unusual nature of OpenAI’s corporate structure: per the company’s own description, directors hold no equity in OpenAI or other compensation; Altman himself held shares only indirectly, through a “small” investment made by Y Combinator, where he was previously president.
According to OpenAI’s corporate governance documents, directors’ key fiduciary duty is not to shareholders but to the company’s mission of creating a safe AGI, or artificial general intelligence, “that is broadly beneficial.” Profits, the company said, were secondary to that mission. OpenAI first began posting the names of its board of directors on its website in July, following the departures of Reid Hoffman, Shivon Zilis and Will Hurd earlier this year, according to an archived version of the site on the Wayback Machine.
One AI-focused venture capitalist noted that following the departure of Hoffman, OpenAI’s non-profit board lacked much traditional governance. “These are not the business or operating leaders you would want governing the most important private company in the world,” they said.
Here’s who made the decision to fire Altman and remove Brockman from the board of directors. Update: Altman didn’t get a vote, The Information has reported. Brockman posted his version of events to X, indicating that the board had acted without his knowledge as well.
OpenAI did not respond to a request for comment.
Adam D’Angelo, the CEO of answers site Quora, joined OpenAI’s board in April 2018. At the time, he wrote: “I continue to think that work toward general AI (with safety in mind) is both important and underappreciated.” In an interview with Forbes in January, D’Angelo argued that one of OpenAI’s strengths was its capped-profit business structure and nonprofit control. “There’s no outcome where this organization is one of the big five technology companies,” D’Angelo said. “This is something that’s fundamentally different, and my hope is that we can do a lot more good for the world than just become another corporation that gets that big.”
Tasha McCauley is an adjunct senior management scientist at RAND Corporation, a job she started earlier in 2023, according to her LinkedIn profile. She previously cofounded Fellow Robots, a startup she launched with a colleague from Singularity University, where she’d served as a director of an innovation lab, and then cofounded GeoSim Systems, a geospatial technology startup where she served as CEO until last year. With her husband Joseph Gordon-Levitt, she was a signer of the Asilomar AI Principles, a set of 23 AI governance principles published in 2017. (Altman, OpenAI cofounder Ilya Sutskever and former board director Elon Musk also signed.)
McCauley currently sits on the advisory board of the British-founded international Center for the Governance of AI alongside fellow OpenAI director Helen Toner. And she’s tied to the Effective Altruism movement through the Centre for Effective Altruism; McCauley sits on the U.K. board of the Effective Ventures Foundation, its parent organization.
Ilya Sutskever is now the sole remaining cofounder of OpenAI on its overseeing board of directors. He joined the company after receiving a computer science PhD at the University of Toronto, cofounding a startup called DNNResearch shortly afterward, and then serving as a research scientist at Google until the end of 2015. He was OpenAI’s initial research director and became chief scientist in 2018. Sutskever coauthored a landmark paper on neural networks with legendary AI academic Geoffrey Hinton in 2012 and helped lead the AlphaGo project, which used artificial intelligence to conquer the ancient and complex board game Go, a key milestone in modern AI research history.
In July, OpenAI announced that Sutskever was co-leading a team that would take 20% of OpenAI’s compute and focus it on “superalignment,” helping to develop technological solutions for how to supervise AI if it were to someday grow smarter than humans. Sutskever’s most recent post on X, the social media platform formerly known as Twitter, was on October 6, when he wrote: “If you value intelligence above all other human qualities, you’re gonna [sic] have a bad time.”
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, joined OpenAI’s board of directors in September 2021. Her role: to think about safety in a world where OpenAI’s creation had global influence. “I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” Brockman said in a statement at the time.
More recently, Toner has been making headlines as an expert on China’s AI landscape and the potential role of AI regulation in a geopolitical face-off with the Asian giant. Toner lived in Beijing between roles at Open Philanthropy and her current job at CSET, researching the country’s AI ecosystem, per her corporate biography. In June, she co-authored an essay for Foreign Affairs on “The Illusion of China’s AI Prowess” that argued, contrary to Altman’s U.S. Senate testimony, that regulation wouldn’t slow down the U.S. in a race between the two nations.
Former board directors (who were not involved in Altman’s firing)
Reid Hoffman was one of OpenAI’s first investors, but the former LinkedIn cofounder and billionaire invested out of his charitable foundation, not from his venture capital firm Greylock. (The first VC check into OpenAI was Vinod Khosla of Khosla Ventures, an outspoken OpenAI advocate but not a board director.) A longtime advocate for OpenAI even as he invested in a number of newer AI startups, Hoffman announced in a LinkedIn post in March that he was stepping down from its board to avoid potential conflicts of interest. The decision included “months of conversations with Sam, Greylock colleagues, and friends,” he wrote. “I remain an ally to OpenAI and on call for anything that I can do to help the organization and its mission of beneficial AI for humanity,” he added.
Will Hurd, a former Texas congressman, joined the board of OpenAI in May 2021 to provide public policy expertise. “He deeply understands both artificial intelligence as well as public policy, both of which are critical to a successful future for AI,” Altman wrote then. But Hurd resigned in July, the month after announcing a presidential campaign for the 2024 Republican nomination. By October, he’d dropped out of that race, too.
Holden Karnofsky, director of AI strategy at Open Philanthropy, joined OpenAI’s board of directors in 2017 following the nonprofit’s recommendation of a $30 million grant to the AI company over three years. At the time, Karnofsky was Open Philanthropy’s executive director (he briefly took a leave of absence in early 2023 and has since returned to lead its AI risk initiatives). Karnofsky is married to Anthropic cofounder Daniela Amodei, who was an executive at OpenAI when Open Philanthropy announced its grant decision. In a relationship disclosure, he noted that “OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.” Karnofsky stepped down from OpenAI’s board in 2021 when Amodei left the company to start Anthropic.
Elon Musk, who helms X, SpaceX, Tesla, Neuralink and the Boring Company, cofounded OpenAI in 2015 and resigned from its board in 2018 after having pledged $1 billion in funding. Corporate filings showed that only $15 million definitely came from Musk, TechCrunch reported. Musk has claimed that he is “the reason OpenAI exists,” and has openly criticized the company in the past. He left the board citing a conflict of interest with Tesla.
Shivon Zilis is director of operations and special projects at Elon Musk’s brain implant company, Neuralink. Zilis joined OpenAI as an advisor in 2016 and as a board member in 2020, but reportedly left her position in the wake of comments from Musk that criticized the company. (Zilis and Musk are the parents of twin toddlers, Strider and Azure.) “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” Musk tweeted. Zilis currently sits on the board of Shield AI.