


OpenAI’s six-member board will decide ‘when we’ve attained AGI’ – VentureBeat

Posted: November 16, 2023 at 3:06 pm


According to OpenAI, the six members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Thanks to a for-profit arm that is legally bound to pursue the Nonprofit's mission, once the board decides AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which apply only to pre-AGI technology.

But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached for OpenAI, and therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?

The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its nonprofit status even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.

Smith said: "Meta is owned by shareholders. OpenAI is owned by a nonprofit. Which would you have more confidence in? Getting your technology from a nonprofit, or a for-profit company that is entirely controlled by one human being?"


In his thread, Kilpatrick quoted from the "Our structure" page on OpenAI's website, which offers details about OpenAI's complex nonprofit/capped-profit structure. According to the page, OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit (which is registered in Delaware). The for-profit subsidiary, OpenAI Global, LLC (which appears to have replaced the limited partnership OpenAI LP, announced in 2019, about three years after the founding of the original OpenAI nonprofit), is permitted to make and distribute profit, but it is subject to the nonprofit's mission.

It certainly sounds like once OpenAI achieves its stated mission of reaching AGI, Microsoft will be out of the loop, even though at last week's OpenAI DevDay, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella that "I think we have the best partnership in tech… I'm excited for us to build AGI together."

And in a new interview with the Financial Times, Altman said the OpenAI/Microsoft partnership was "working really well" and that he expected "to raise a lot more over time." Asked if Microsoft would keep investing further, Altman said: "I'd hope so… there's a long way to go, and a lot of compute to build out between here and AGI... training expenses are just huge."

"From the beginning," OpenAI's structure details say, "Microsoft accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity."

An OpenAI spokesperson told VentureBeat that "OpenAI's mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety."

Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever, and CEO Sam Altman, as well as non-employees Adam D'Angelo, Tasha McCauley, and Helen Toner.

D'Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement, which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its dangerous take on AI safety. And OpenAI has long had its own ties to EA: For example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists. And Jan Leike, who leads OpenAI's superalignment team, reportedly identifies with the EA movement.

The OpenAI spokesperson said that "none of our board members are effective altruists," adding that the non-employee board members' "interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group."

Suzy Fulton, who offers outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while in many circumstances it would be unusual to have a board make this AGI determination, OpenAI's nonprofit board owes its fiduciary duty to supporting its mission of providing safe AGI that is broadly beneficial.

"They believe the nonprofit board's beneficiary is humanity, whereas the for-profit one serves its investors," she explained. "Another safeguard that they are trying to build in is having the Board majority independent, where the majority of the members do not have equity in OpenAI."

"Was this the right way to set up an entity structure and a board to make this critical determination? We may not know the answer until their Board calls it," Fulton said.

Anthony Casey, a professor at The University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is unusual, but he did not think there is any legal impediment.

"It should be fine to specifically identify certain issues that must be decided at the Board level," he said. "Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue, particularly mission-critical issues."

Not all experts believe, however, that artificial general intelligence is coming anytime soon, while some question whether it is even possible.

According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a complaint with the FTC in March saying the agency should investigate OpenAI and order the company to halt the release of GPT models until necessary safeguards are established, OpenAI as an organization suffers from a lack of diversity of perspectives. Its focus on AGI, she explained, has ignored the current impact of AI models and tools.

However, she disagreed with any debate about the size or diversity of the OpenAI board in the context of who gets to determine whether or not OpenAI has attained AGI, saying it distracts from discussions about whether the underlying mission and claim are even legitimate.

"This would shift the focus, and de facto legitimize the claims that AGI is possible," she said.

But does OpenAI's lack of a clear definition of AGI, or of whether there will even be one AGI, skirt the issue? For example, an OpenAI blog post from February 2023 said "the first AGI will be just a point along the continuum of intelligence." And in a January 2023 LessWrong interview, CEO Sam Altman said that "the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful."

Still, it's hard to say what OpenAI's vague definition of AGI will really mean for Microsoft, especially without full details of the operating agreement between the two companies. For example, Casey said, OpenAI's structure and relationship with Microsoft could lead to "some big dispute" if OpenAI is sincere about its nonprofit mission.

There are a few nonprofits that own for-profits, he pointed out, the most notable being the Hershey Trust. "But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object," he explained. "But here Microsoft's for-profit interests could directly conflict with the non-profit interest of the controlling entity."

The cap on profits is easy to implement, he added, but the hard question is what to do if maximizing profit conflicts with the mission of the nonprofit. Casey said default rules would hold that hitting the profit target is the priority, and "the managers have to put that first (subject to broad discretion under the business judgment rule)."

Perhaps, he continued, Microsoft said: "Don't worry, we are good either way. You don't owe us any duties." But that just doesn't sound like the way Microsoft would negotiate.


