Elon Musk may still hold OpenAI to account, but its board of directors cannot


The plaintiff is conflicted. The legal arguments appear weak. And, in places, the 35-page lawsuit Elon Musk filed last week in California Superior Court against OpenAI reads like a mash-up between a sci-fi movie script and a letter from a jilted lover. But Musk’s claim that OpenAI has violated its founding charter and endangers humanity by prioritizing profit over safety could yet turn into the most substantive move so far to scrutinize the company’s attempts to develop artificial general intelligence.

Since the stunning release of its ChatGPT chatbot in November 2022, OpenAI has emerged as the world’s hottest startup, with over 100 million weekly users. The FT reported last month that OpenAI had surpassed $2 billion in annualized revenue and had jumped to a private market valuation of more than $80 billion. The company has attracted $13 billion in investment from Microsoft, while other investors, including Singapore’s giant Temasek fund, clamor to come on board.

Yet OpenAI began as a far more modest outfit in 2015. As Musk’s lawsuit makes clear, OpenAI was founded as a nonprofit research lab with a mission to develop AGI, a generalizable form of AI that surpasses human capabilities in most domains, for the public good. Alarmed by Google’s dominance of artificial intelligence and the possible existential risks of AGI, Musk teamed up with Sam Altman, then president of Y Combinator, to create a different kind of research organization, “free from financial obligations.” “Our primary fiduciary duty is to humanity,” the company said. To that end, it promised to share its designs, models, and code.

According to the lawsuit, Musk provided much of OpenAI’s early funding, contributing more than $44 million between 2016 and 2020. But the nonprofit entity struggled to compete for talent with deep-pocketed Google DeepMind, which was also pursuing AGI. The extraordinary computing power needed to develop cutting-edge generative AI models has also drawn OpenAI into the orbit of cloud computing provider Microsoft.

That intense commercial pressure led OpenAI to create a for-profit entity and subsequently, according to the lawsuit, to tear up its “founding agreement” in 2023 by accepting Microsoft’s massive investment. OpenAI has been transformed into “a closed-source de facto subsidiary of the largest technology company in the world.” Its flagship model GPT-4 has also been incorporated into Microsoft’s services, primarily serving the giant’s proprietary business interests. The OpenAI board’s failed attempt to oust Altman as CEO last year reflected, at least in part, tensions between the company’s founding purpose and its newfound intent to make money.

Of course, OpenAI disputes Musk’s version of events and has moved to dismiss his legal claims. In a blog post, it said that Musk had supported OpenAI’s move to create a for-profit business entity and had even wanted to fold the company into his Tesla automotive business. Musk has since launched his own artificial intelligence company, xAI, to compete with OpenAI and has tried to poach some of its researchers. “It is possible that Musk is simply techwashing and creating chaos in the market,” the Center for AI Policy said.

But Musk has a strong moral, if not legal, rationale. If OpenAI can evolve from a sheltered nonprofit enjoying charitable status into a for-profit enterprise, then every startup will want to be structured this way. And, as the Altman firing and rehiring fiasco demonstrated, OpenAI’s board of directors cannot be counted on to provide robust oversight on its own.

Time to create effective governance regimes for powerful AI companies is quickly running out. This week, Anthropic, led by researchers who broke away from OpenAI in 2021, launched its Claude 3 model, which some users say outperforms GPT-4. “I think AGI is already here,” Blaise Agüera y Arcas, a top AI researcher at Google, told me last week. That outcome could generate great value but also pose significant risks, he argued in an essay co-authored with Peter Norvig.

Regulators are currently investigating the competitive implications of Microsoft’s tie-up with OpenAI. But the US administration’s promises to create an AI Safety Institute to monitor leading companies appear to be going nowhere. Some might dismiss the dispute between Musk and Altman as a tedious legal battle between billionaire tech bros. But whatever his motivations, Musk is performing a notable public service by imposing greater transparency and accountability on OpenAI.

john.thornhill@ft.com

