
Category Archives: Artificial Super Intelligence

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

http://www.acwol.com

In envisioning a future where AI developers worldwide embrace the Three Way Impact Principle (3WIP) as a foundational ethical framework, we unravel a transformative landscape for tackling the Super Intelligence Control Problem. By integrating 3WIP into the curriculum for AI developers globally, we fortify the industry with a super intelligent solution, fostering responsible, collaborative, and environmentally conscious AI development practices.

Ethical Foundations for AI Developers:

Holistic Ethical Education: With 3WIP as a cornerstone in AI education, students receive a comprehensive ethical foundation that guides their decision-making in the realm of artificial intelligence.

Superior Decision-Making: 3WIP encourages developers to consider the broader impact of their actions, instilling a sense of responsibility that transcends immediate objectives and aligns with the highest purpose of life: maximizing intellect.

Mitigating Risks Through Collaboration:

Interconnected AI Ecosystem: 3WIP fosters an environment where AI entities collaborate rather than compete, reducing the risks associated with unchecked development.

Shared Intellectual Growth: Collaboration guided by 3WIP minimizes the potential for adversarial scenarios, contributing to a shared pool of knowledge that enhances the overall intellectual landscape.

Environmental Responsibility in AI:

Sustainable AI Practices: Integrating 3WIP into the AI curriculum emphasizes sustainable practices, mitigating the environmental impact of AI development.

Global Implementation of 3WIP:

Universal Ethical Standards: A standardized curriculum incorporating 3WIP establishes universal ethical standards for AI development, ensuring consistency across diverse cultural and educational contexts.

Ethical Practitioners Worldwide: AI developers worldwide, educated with 3WIP, become ambassadors of ethical AI practices, collectively contributing to a global community focused on responsible technological advancement.

Super Intelligent Solution for the Control Problem:

Preventing Unintended Consequences: 3WIP's emphasis on considering the consequences of actions aids in preventing unintended outcomes, a critical aspect of addressing the Super Intelligence Control Problem.

Responsible Decision-Making: Developers, equipped with 3WIP, navigate the complexities of AI development with a heightened sense of responsibility, minimizing the risks associated with uncontrolled intelligence.

Adaptable Ethical Framework:

Cultural Considerations: The adaptable nature of 3WIP allows for the incorporation of cultural nuances in AI ethics, ensuring ethical considerations resonate across diverse global perspectives.

Inclusive Ethical Guidelines: 3WIP accommodates various cultural norms, making it an inclusive framework whose ethical guidelines apply across different societal contexts.

Future-Proofing AI Development:

Holistic Skill Development: 3WIP not only imparts ethical principles but also nurtures critical thinking, decision-making, and environmental consciousness in AI professionals, future-proofing their skill set.

Staying Ahead of Risks: The comprehensive education provided by 3WIP prepares AI developers to anticipate and address emerging risks, contributing to the ongoing development of super intelligent solutions.

The integration of the Three Way Impact Principle (3WIP) into the global curriculum for AI developers emerges as a super intelligent solution to the Super Intelligence Control Problem. By instilling ethical foundations, fostering collaboration, promoting environmental responsibility, and adapting to diverse cultural contexts, 3WIP guides AI development towards a future where technology aligns harmoniously with the pursuit of intellectual excellence and ethical progress. As a super intelligent framework, 3WIP empowers the next generation of AI developers to be ethical stewards of innovation, navigating the complexities of artificial intelligence with a consciousness that transcends immediate objectives and embraces the highest purpose of life: maximizing intellect.

Cheers,

https://www.acwol.com

https://discord.com/invite/d3DWz64Ucj

https://www.instagram.com/acomplicatedway

NOTE: A COMPLICATED WAY OF LIFE abbreviated as ACWOL is a philosophical framework containing just five tenets to grok and five tools to practice. If you would like to know more, write to connect@acwol.com Thanks so much.

Original post:

AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov ... - Medium

Posted in Artificial Super Intelligence | Comments Off on AMBASSADORS OF ETHICAL AI PRACTICES | by ACWOL | Nov … – Medium

AI and the law: Imperative need for regulatory measures – ft.lk

Using AI technology without the needed laws and policies to understand and monitor it can be risky

"The advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn't malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble."1

Generative AI, the best-known example being ChatGPT, has surprised many around the world because its responses to queries are remarkably human-like. Its impact on industries and professions, including the legal profession, will be unprecedented. However, there are pressing ethical and even legal matters that need to be recognised and addressed, particularly in the areas of intellectual property and data protection.

Firstly, how does one define Artificial Intelligence? AI systems could be considered information-processing technologies that integrate models and algorithms that produce the capacity to learn and to perform cognitive tasks, leading to outcomes such as prediction and decision-making in material and virtual environments. Though in general parlance we have referred to them as robots, AI is developing at such a rapid pace that it is bound to become far more independent than one can ever imagine.

As AI has migrated from Machine Learning (ML) to Generative AI, the risks we are looking at have also grown exponentially. The release of generative technologies has not been human-centric. These systems provide results that cannot be exactly proven or replicated; they may even fabricate and hallucinate. Science fiction writer Vernor Vinge speaks of the concept of technological singularity, where one can imagine machines with superhuman intelligence outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact depends on who controls it, the long-term impact depends on whether it can be controlled at all2.

The EU AI Act and other judgements

Laws and regulations are in the process of being enacted in some developed jurisdictions, such as the EU and the USA. The EU AI Act (the Act) is one of the main regulatory statutes being scrutinised. The approach that MEPs (Members of the European Parliament) have taken with regard to the Act has been encouraging. On 1 June, a vote was taken in which MEPs endorsed new risk management and transparency rules for AI systems, primarily to endorse a human-centric and ethical development of AI. They are keen to ensure that AI systems are overseen by people, and are safe, transparent, traceable, non-discriminatory and environmentally friendly. The term "AI" will also have a uniform, technology-neutral definition, so that it applies to AI systems today and tomorrow.

Co-rapporteur Dragos Tudorache (Renew, Romania) stated, "We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement."3

The Act has also adopted a risk-based approach to categorising AI systems, and has made recommendations accordingly. The four levels of risk are:

Unacceptable risk (e.g., remote biometric identification systems in public),

High risk (e.g., use of AI in the administration of justice and democratic processes),

Limited risk (e.g., using AI systems in chatbots) and

Minimal risk (e.g., spam filters).

Under the Act, AI systems categorised as Unacceptable Risk will be banned. For High Risk AI systems, the second tier, developers are required to adhere to rigorous testing requirements, maintain proper documentation and implement an adequate accountability framework. For Limited Risk systems, the Act requires certain transparency features which allow users to make informed choices regarding usage. Lastly, for Minimal Risk AI systems, a voluntary code of conduct is encouraged.
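To make the four-tier scheme concrete, here is a minimal Python sketch of how the tiers and the obligations described above could be represented in software. It is illustrative only: the RiskTier and OBLIGATIONS names are hypothetical, and the Act itself prescribes no such schema.

from enum import Enum

class RiskTier(Enum):
    # The four risk levels under the Act's risk-based approach.
    UNACCEPTABLE = "unacceptable"  # e.g., remote biometric identification in public
    HIGH = "high"                  # e.g., AI in the administration of justice
    LIMITED = "limited"            # e.g., AI systems in chatbots
    MINIMAL = "minimal"            # e.g., spam filters

# Hypothetical mapping of each tier to the obligations described in the article.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned outright"],
    RiskTier.HIGH: ["rigorous testing", "proper documentation",
                    "adequate accountability framework"],
    RiskTier.LIMITED: ["transparency features enabling informed user choice"],
    RiskTier.MINIMAL: ["voluntary code of conduct (encouraged)"],
}

for tier in RiskTier:
    print(f"{tier.value}: {'; '.join(OBLIGATIONS[tier])}")

Run as-is, the sketch simply prints each tier alongside its obligations; a real compliance checklist would of course be far more granular.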

Moreover, in May 2023, a judgement4 was given in the USA (State of Texas), where all attorneys must file a certificate containing two statements: that no part of the filing was drafted by Generative AI, or that any language drafted by Generative AI has been verified for accuracy by a human being. A New York attorney had used ChatGPT, which cited non-existent cases. Judge Brantley Starr stated, "[T]hese platforms in their current states are prone to hallucinations and bias... on hallucinations, they make stuff up, even quotes and citations." As ChatGPT and other Generative AI technologies are used more and more, including in the legal profession, it is imperative that professional bodies and other regulatory bodies draw up appropriate legislation and policies to cover the usage of these technologies.

UNESCO

On 23 November 2021, UNESCO published a document titled "Recommendation on the Ethics of Artificial Intelligence"5. It emphasises the importance of governments adopting a regulatory framework that clearly sets out a procedure, particularly for public authorities, to carry out ethical impact assessments on AI systems, in order to predict consequences, address societal challenges and facilitate citizen participation. In explaining the assessment further, UNESCO's recommendations state that it should have appropriate oversight mechanisms, including auditability, traceability and explainability, which enable the assessment of algorithms, data and design processes, as well as an external review of AI systems. The 10 principles highlighted in the document are:

Proportionality and Do Not Harm

Safety and Security

Fairness and Non-Discrimination

Sustainability

Right to Privacy and Data Protection

Human Oversight and Determination

Transparency and Explainability

Responsibility and Accountability

Awareness and Literacy

Multi-stakeholder and Adaptive Governance and Collaboration.

Conclusion

The level of trust citizens place in AI systems will be a factor in determining how widely AI systems are used in the future. As long as there is transparency in the models used in AI systems, one can hope to achieve a degree of respect, protection and promotion of human rights, fundamental freedoms and ethical principles6. UNESCO Director-General Audrey Azoulay stated, "Artificial Intelligence can be a great opportunity to accelerate the achievement of sustainable development goals. But any technological revolution leads to new imbalances that we must anticipate."

Multiple stakeholders in every state need to come together in order to advise on and enact the relevant laws. Using AI technology without the needed laws and policies to understand and monitor it can be risky. On the other hand, not using available AI systems for tasks at hand would be a waste. In conclusion, in the words of Stephen Hawking7: "Our future is a race between the growing power of our technology and the wisdom with which we use it. Let's make sure wisdom wins."

Footnotes:

1 Pp. 11-12; "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

2 Ibid

3 https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence

4 https://www.theregister.com/2023/05/31/texas_ai_law_court/

5 https://www.unesco.org/en/articles/recommendation-ethics-artificial-intelligence

6 Ibid; p. 22

7 "Will Artificial Intelligence outsmart us?" by Stephen Hawking; essay taken from Brief Answers to the Big Questions, John Murray (2018)

(The writer is an Attorney-at-Law, LL.B (Hons.) (Warwick), LL.M (Lon.), Barrister (Lincoln's Inn), UK. She obtained a Certificate in AI Policy at the Center for AI and Digital Policy (CAIDP) in Washington, USA in 2022. She was also a speaker at the World Litigation Forum Law Conference in Singapore (May 2023) on the topic of "Lawyers using AI, Legal Technology and Big Data" and was a participant at the IGF Conference 2023 in Kyoto, Japan.)

Read the original here:

AI and the law: Imperative need for regulatory measures - ft.lk

Posted in Artificial Super Intelligence | Comments Off on AI and the law: Imperative need for regulatory measures – ft.lk

East Africa lawyers wary of artificial intelligence rise – The Citizen

Arusha. It is an advanced technology which is not only unavoidable but has also generally simplified work.

It has made things much easier by shortening time for research and reducing the needed manpower.

Yet artificial intelligence (AI) is still at a crossroads; it can lead to massive job losses, with lawyers among those most worried.

"It is emerging as a serious threat to the legal profession," said Mr David Sigano, CEO of the East African Lawyers Society (EALS).

The technology will be among the key issues to be discussed during the society's annual conference kicking off in Bujumbura today.

He said the time has come for lawyers to position themselves for the emerging technology and its risks to the legal profession.

"We need to be ready to compete with the robots and to operate with AI," he told The Citizen before departing for Burundi.

Mr Sigano acknowledged the benefits of AI, saying that, like other modern technologies, it can improve efficiency.

AI is intelligence inferred, perceived or synthesised and demonstrated by machines, as opposed to intelligence displayed by humans.

AI applications include advanced web search, the recommendation systems used by YouTube, Amazon and Netflix, self-driving cars, creative tools and automated decisions, among others.

However, the EALS boss expressed fears that robots could cause job losses among lawyers and their assistants.

"How do you prevent massive job losses? How do you handle ethics?" Mr Sigano queried during an interview.

He cited an AI-powered "super lawyer", a robot recently designed and developed by a Kenyan IT guru.

The tech solution, known as Wakili (Kiswahili for "lawyer"), is now wreaking havoc in that country's legal sector, replacing humans in determining cases.

"All you need to do is to access it on your mobile or computer browser; type in your question either in Swahili, English, Spanish, French or Italian, and you have the answers coming to you," Mr Sigano said.

Wakili is a Kenyan version of the well-known ChatGPT. Although it has been lauded on grounds that it will make the legal field grow, there are some reservations.

Mr Sigano said although the technology has its advantages, AI could either lead to job losses or be easily misused.

"We can leverage the benefits of AI because of speed, accuracy and affordability. We can utilise it, but we have to be wary of it," he pointed out.

A prominent advocate in Arusha, Mr Frederick Musiba, said AI was no panacea for work efficiency, including for lawyers.

It can not only lead to job losses for lawyers but also increase the cost of legal practice, since it is accessed through the Internet.

"Lawyers will lose income as some litigants will switch to AI. Advocates will lose clients," Mr Musiba told The Citizen when contacted for comment.

However, the managing partner and advocate at Fremax Attorneys said AI was yet to be fully embraced in Tanzania, unlike in other countries.

Nevertheless, Mr Musiba said the technology has its advantages and disadvantages, cautioning people not to rush to the robots.

Meanwhile, Mr Erik Kimaro, an advocate with the Keystone Legal firm, also in Arusha, said AI was an emerging technological advancement that is unavoidable.

"Whether we like it or not, it is here with its advantages and disadvantages. But it has made things much easier," he explained.

"I can't say we have to avoid it, but we have to be cautious," he added, noting that besides leading to unemployment, it reduces human beings' critical thinking.

Mr Aafez Jivraj, an Arusha resident and a player in the tourism sector, said it would take time before Tanzania fully embraced AI technology, but he was worried about job losses.

"It is obvious that it can remove people from jobs. One robot can work for 20 people. How many members of their families will be at risk?" he queried.

AI has been a matter of debate across the world in recent years, with the risk of job losses affecting nearly all professions, not just the lawyers.

According to Deloitte, over 100,000 jobs will be automated in the legal sector in the UK alone by 2025, and companies that fail to adopt AI are fated to be left behind.

For his part, an education expert in Arusha concurred, saying that modern technologies such as AI can lead to job losses.

The situation may worsen within the next few years or decades as some of the jobs will no longer need physical labour.

"AI has some benefits like other technologies, but it is threatening jobs," said Mr Yasir Patel, headmaster of St Constantine International School.

He added that the world was changing so fast that many of the jobs that were readily available until recently have been taken over by computers.

"Computer scientists did not exist in the past. Our young generation should be reminded. They think the job market is still intact," he further pointed out.

See the article here:

East Africa lawyers wary of artificial intelligence rise - The Citizen

Posted in Artificial Super Intelligence | Comments Off on East Africa lawyers wary of artificial intelligence rise – The Citizen

Working together to ensure the safety of artificial intelligence – The Jakarta Post

Rishi Sunak

London, Tue, October 31, 2023

I believe nothing in our foreseeable future will transform our lives more than artificial intelligence (AI). Like the coming of electricity or the birth of the internet, it will bring new knowledge, new opportunities for economic growth, new advances in human capability, and the chance to solve global problems we once thought beyond us.

AI can help solve world hunger by preventing crop failures and making it cheaper and easier to grow food. It can help accelerate the transition to net zero. And it is already making extraordinary breakthroughs in health and medicine, aiding us in the search for new dementia treatments and vaccines for cancer.

But like previous waves of technology, AI also brings new dangers and new fears. So, if we want our children and grandchildren to benefit from all the opportunities of AI, we must act and act now to give people peace of mind about the risks.

What are those risks? For the first time, the British government has taken the highly unusual step of publishing our analysis, including an assessment by the UK intelligence community. As prime minister, I felt this was an important contribution the UK could make, to help the world have a more informed and open conversation.


Our reports provide a stark warning. AI could be used for harm by criminals or terrorist groups. The risks of cyberattacks, disinformation and fraud pose a real threat to society. And in the most unlikely but extreme cases, some experts think there is even the risk that humanity could lose control of AI completely, through the kind of AI sometimes referred to as super intelligence.

We should not be alarmist about this. There is a very real debate happening, and some experts think it will never happen.

But even if the very worst risks are unlikely to happen, they would be incredibly serious if they did. So, leaders around the world, no matter our differences on other issues, have a responsibility to recognize those risks, come together, and act. Not least because many of the loudest warnings about AI have come from the people building this technology themselves. And because the pace of change in AI is simply breathtaking: every new wave will become more advanced, better trained, with better chips, and more computing power.

So, what should we do?

First, governments do have a role. The UK has just announced the first ever AI Safety Institute. Our institute will bring together some of the most respected and knowledgeable people in the world. They will carefully examine, evaluate, and test new types of AI so that we understand what they can do. And we will share those conclusions with other countries and companies to help keep AI safe for everyone.

But AI does not respect borders. No country can make AI safe on its own.

So, our second step must be to increase international cooperation. That starts this week at the first ever Global AI Safety Summit, which I'm proud the UK is hosting. And I am very much looking forward to hearing the important contribution of Mr. Nezar Patria, Indonesian Deputy Minister of Communications and Information.

What do we want to achieve at this week's summit? I want us to agree the first ever international statement about the risks from AI. Because right now, we don't have a shared understanding of the risks we face. And without that, we cannot work together to address them.

I'm also proposing that we establish a truly global expert panel, nominated by those attending the summit, to publish a "state of AI science" report. And over the longer term, my vision is for a truly international approach to safety, where we collaborate with partners to ensure AI systems are safe before they are released.

None of that will be easy to achieve. But leaders have a responsibility to do the right thing. To be honest about the risks. And to take the right long-term decisions to earn people's trust, giving peace of mind that we will keep you safe. If we can do that, if we can get this right, then the opportunities of AI are extraordinary.

And we can look to the future with optimism and hope.

***

The writer is the Prime Minister of the United Kingdom.

Follow this link:

Working together to ensure the safety of artificial intelligence - The Jakarta Post

Posted in Artificial Super Intelligence | Comments Off on Working together to ensure the safety of artificial intelligence – The Jakarta Post

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes


The world's wealthiest billionaires are drawing battle lines when it comes to who will control AI, according to Elon Musk in an interview with Tucker Carlson on Fox News, which aired this week.

Musk explained that he cofounded ChatGPT-maker OpenAI in reaction to Google cofounder Larry Page's lack of concern over the danger of AI outsmarting humans.

He said the two were once close friends and that he would often stay at Page's house in Palo Alto, where they would talk late into the night about the technology. Page was such a fan of Musk's that in Jan. 2015, Google, together with Fidelity Investments, invested $1 billion in SpaceX for a 10% stake. "He wants to go to Mars. That's a worthy goal," Page said in a March 2014 TED Talk.

But Musk was concerned over Google's acquisition of DeepMind in Jan. 2014.

"Google and DeepMind together had about three-quarters of all the AI talent in the world. They obviously had a tremendous amount of money and more computers than anyone else. So I'm like, we're in a unipolar world where there's just one company that has close to a monopoly on AI talent and computers," Musk said. "And the person in charge doesn't seem to care about safety. This is not good."

Musk said he felt Page was seeking to build a digital super intelligence, "a digital god."

"He's made many public statements over the years that the whole goal of Google is what's called AGI, artificial general intelligence, or artificial super intelligence," Musk said.

Google CEO Sundar Pichai has not disagreed. In his 60 Minutes interview on Sunday, while speaking about the company's advancements in AI, Pichai said that Google Search was only one to two percent of what Google can do. The company has been teasing a number of new AI products it's planning to roll out at its developer conference, Google I/O, on May 10.

Musk said Page stopped talking to him over OpenAI, a nonprofit with the stated mission of ensuring that artificial general intelligence (AI systems that are generally smarter than humans) benefits all of humanity, which Musk cofounded in Dec. 2015 with Y Combinator CEO Sam Altman and PayPal alums LinkedIn cofounder Reid Hoffman and Palantir cofounder Peter Thiel, among others.

"I haven't spoken to Larry Page in a few years because he got very upset with me over OpenAI," said Musk, explaining that when OpenAI was created, it shifted things from a unipolar world, in which Google controlled most of the world's AI talent, to a bipolar world. "And now it seems that OpenAI is ahead," he said.

But even before OpenAI, as SpaceX was announcing the Google investment in late Jan. 2015, Musk had given $10 million to the Future of Life Institute, a nonprofit organization dedicated to reducing existential risks from advanced artificial intelligence. That organization was founded in March 2014 by AI scientists from DeepMind, MIT, Tufts and UCSC, among others, and it issued the petition calling for a pause in AI development that Musk signed last month.

In 2018, citing potential conflicts with his work with Tesla, Musk resigned his seat on the board of OpenAI.

"I put a lot of effort into creating this organization to serve as a counterweight to Google and then I kind of took my eye off the ball and now they are closed source, and obviously for profit, and they're closely allied with Microsoft. In effect, Microsoft has a very strong say, if not directly controls, OpenAI at this point," Musk said.

Ironically, it's Musk's longtime friend Hoffman who is the link to Microsoft. The two hit it big together at PayPal, and it was Musk who recruited Hoffman to OpenAI in 2015. Hoffman sold LinkedIn to Microsoft for more than $26 billion in 2016 and became an independent director at Microsoft in 2017; in 2019, Microsoft invested its first billion dollars into OpenAI. Microsoft is currently OpenAI's biggest backer, having invested as much as $10 billion more this past January. Hoffman only recently stepped down from OpenAI's board, on March 3, to enable him to start investing in the OpenAI startup ecosystem, he said in a LinkedIn post. Hoffman is a partner in the venture capital firm Greylock Partners and a prolific angel investor.

All sit at the top of the Forbes Real-Time Billionaires List. As of April 17, 5 p.m. ET, Musk was the world's second-richest person, valued at $187.4 billion, and Page the eleventh at $90.1 billion. Google cofounder Sergey Brin is in the 12th spot at $86.3 billion. Thiel ranks 677th with a net worth of $4.3 billion and Hoffman ranks 1,570th with a net worth of $2 billion.

Musk said he thinks Page believes all consciousness should be treated equally, while Musk himself disagrees, especially if the digital consciousness decides to curtail the biological intelligence. Like Pichai, Musk is advocating for government regulation of the technology, and says at minimum there should be a physical off switch to cut power and connectivity to server farms in case administrative passwords stop working.

Pretty sure I've seen that movie.

Musk told Carlson that he's considering naming his new AI company TruthGPT.

"I will create a third option, although it's starting very late in the game," he said. "Can it be done? I don't know."

The entire interview will be available to view on Fox Nation starting April 19, 7 a.m. ET. Here are some excerpts, which include his thoughts on encrypting Twitter DMs.


Go here to read the rest:

Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News - Forbes

Posted in Artificial Super Intelligence | Comments Off on Elon Musk Dishes On AI Wars With Google, ChatGPT And Twitter On Fox News – Forbes

Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters

SAN FRANCISCO, April 17 (Reuters) - Billionaire Elon Musk said on Monday he will launch an artificial intelligence (AI) platform that he calls "TruthGPT" to challenge the offerings from Microsoft (MSFT.O) and Google (GOOGL.O).

He accused Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, of "training the AI to lie" and said OpenAI has now become a "closed source", "for-profit" organisation "closely allied with Microsoft".

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

"I'm going to start something which I call 'TruthGPT', or a maximum truth-seeking AI that tries to understand the nature of the universe," Musk said in an interview with Fox News Channel's Tucker Carlson aired on Monday.

He said TruthGPT "might be the best path to safety" that would be "unlikely to annihilate humans".

"It's simply starting late. But I will try to create a third option," Musk said.

Musk, OpenAI, Microsoft and Page did not immediately respond to Reuters' requests for comment.

Musk has been poaching AI researchers from Alphabet Inc's (GOOGL.O) Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing. The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk's family office, as a secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production", according to the excerpts.

"It has the potential of civilizational destruction," he said.

He said, for example, that a super-intelligent AI could write incredibly well and potentially manipulate public opinion.

He tweeted over the weekend that he had met with former U.S. President Barack Obama when he was president and told him that Washington needed to "encourage AI regulation".

Musk co-founded OpenAI in 2015, but he stepped down from the company's board in 2018. In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at that time that other reasons for his departure from OpenAI were: "Tesla was competing for some of the same people as OpenAI & I didn't agree with some of what OpenAI team wanted to do."

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $44 billion last year.

In the interview with Fox News, Musk said he recently valued Twitter at "less than half" of the acquisition price.

In January, Microsoft Corp (MSFT.O) announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fueling the race to attract AI funding in Silicon Valley.

Reporting by Hyunjoo Jin; Editing by Chris Reese


Read the rest here:

Elon Musk says he will launch rival to Microsoft-backed ChatGPT - Reuters

Posted in Artificial Super Intelligence | Comments Off on Elon Musk says he will launch rival to Microsoft-backed ChatGPT – Reuters