


OpenAI departures: Why can't former employees talk, but the new ChatGPT release can? – Vox.com

Posted: May 25, 2024 at 2:42 am

Editor's note, May 18, 2024, 7:30 pm ET: This story has been updated to reflect OpenAI CEO Sam Altman's tweet on Saturday afternoon that the company was in the process of changing its offboarding documents.

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

It has a cheery, slightly ingratiating feminine voice that sounds impressively non-robotic, and a bit familiar if you've seen a certain 2013 Spike Jonze film. "Her," tweeted OpenAI CEO Sam Altman, referencing the movie in which a man falls in love with an AI assistant voiced by Scarlett Johansson.

But the product release of ChatGPT 4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company's co-founder and chief scientist, Ilya Sutskever, who also led its superalignment team, as well as that of his co-team leader Jan Leike (whom we put on the Future Perfect 50 list last year).

The resignations didn't come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman's temporary firing last year, before the CEO quickly returned to his perch. Sutskever publicly regretted his actions and backed Altman's return, but he's been mostly absent from the company since, even as other members of OpenAI's policy, alignment, and safety teams have departed.

But what really stirred speculation was the radio silence from former employees. Sutskever posted a pretty typical resignation message, saying "I'm confident that OpenAI will build AGI that is both safe and beneficial ... I am excited for what comes next."

Leike ... didn't. His resignation message was simply: "I resigned." After several days of fervent speculation, he expanded on this on Friday morning, explaining that he was worried OpenAI had shifted away from a safety-focused culture.

Questions arose immediately: Were they forced out? Is this delayed fallout from Altman's brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there's a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI "due to losing confidence that it would behave responsibly around the time of AGI," has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.

While nondisclosure agreements aren't unusual in highly competitive Silicon Valley, putting an employee's already-vested equity at risk for declining or violating one is. For workers at startups like OpenAI, equity is a vital form of compensation, one that can dwarf the salary they make. Threatening that potentially life-changing money is a very effective way to keep former employees quiet.

OpenAI did not respond to a request for comment in time for initial publication. After publication, an OpenAI spokesperson sent me this statement: "We have never canceled any current or former employee's vested equity, nor will we if people do not sign a release or nondisparagement agreement when they exit."

Sources close to the company I spoke to told me that this represented a change in policy as they understood it. When I asked the OpenAI spokesperson if that statement represented a change, they replied, "This statement reflects reality."

On Saturday afternoon, a little more than a day after this article published, Altman acknowledged in a tweet that there had been a provision in the company's off-boarding documents about potential equity cancellation for departing employees, but said the company was in the process of changing that language.

All of this is highly ironic for a company that initially advertised itself as OpenAI, that is, as committed in its mission statements to building powerful systems in a transparent and accountable manner.

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.

OpenAI has spent a long time occupying an unusual position in tech and policy circles. Its releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: to ensure that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity. Many of its employees believe that this aim is within reach; that with perhaps one more decade (or even less) and a few trillion dollars, the company will succeed at developing AI systems that make most human labor obsolete.

Which, as the company itself has long said, is as risky as it is exciting.

"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world's most important problems," a recruitment page for Leike and Sutskever's team at OpenAI states. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade."

Naturally, if artificial superintelligence in our lifetimes is possible (and experts are divided), it would have enormous implications for humanity. OpenAI has historically positioned itself as a responsible actor trying to transcend mere commercial incentives and bring AGI about for the benefit of all. And they've said they are willing to do that even if it requires slowing down development, missing out on profit opportunities, or allowing external oversight.

"We don't think that AGI should be just a Silicon Valley thing," OpenAI co-founder Greg Brockman told me in 2019, in the much calmer pre-ChatGPT days. "We're talking about world-altering technology. And so how do you get the right representation and governance in there? This is actually a really important focus for us and something we really want broad input on."

OpenAI's unique corporate structure (a capped-profit company ultimately controlled by a nonprofit) was supposed to increase accountability. "No one person should be trusted here. I don't have super-voting shares. I don't want them," Altman assured Bloomberg's Emily Chang in 2023. "The board can fire me. I think that's important." (As the board found out last November, it could fire Altman, but it couldn't make the move stick. After his firing, Altman made a deal to effectively take the company to Microsoft, before being ultimately reinstated, with most of the board resigning.)

But there was no stronger sign of OpenAI's commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and an apparently genuine willingness to ask OpenAI to change course if needed. When I said to Brockman in that 2019 interview, "You guys are saying, 'We're going to build a general artificial intelligence,'" Sutskever cut in. "We're going to do everything that can be done in that direction while also making sure that we do it in a way that's safe," he told me.

Their departure doesn't herald a change in OpenAI's mission of building artificial general intelligence; that remains the goal. But it almost certainly heralds a change in OpenAI's interest in safety work; the company hasn't announced who, if anyone, will lead the superalignment team.

And it makes it clear that OpenAI's concern with external oversight and transparency couldn't have run all that deep. If you want external oversight and opportunities for the rest of the world to play a role in what you're doing, making former employees sign extremely restrictive NDAs doesn't exactly follow.

This contradiction is at the heart of what makes OpenAI profoundly frustrating for those of us who care deeply about ensuring that AI really does go well and benefits humanity. Is OpenAI a buzzy, if midsize, tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company's leadership says they want to transform the world, that they want to be accountable when they do so, and that they welcome the world's input into how to do it justly and wisely.

But when there's real money at stake, and there are astounding sums of real money at stake in the race to dominate AI, it becomes clear that they probably never intended for the world to get all that much input. Their process ensures that former employees, those who know the most about what's happening inside OpenAI, can't tell the rest of the world what's going on.

The website may have high-minded ideals, but its termination agreements are full of hard-nosed legalese. It's hard to exercise accountability over a company whose former employees are restricted to saying "I resigned."

ChatGPT's new cute voice may be charming, but I'm not feeling especially enamored.

Update, May 18, 7:30 pm ET: This story was published on May 17 and has been updated multiple times, most recently to include Sam Altman's response on social media.

A version of this story originally appeared in the Future Perfect newsletter.
