

Category Archives: Artificial General Intelligence

Forget Dystopian Scenarios – AI Is Pervasive Today, and the Risks Are Often Hidden – The Good Men Project

By Anjana Susarla, Michigan State University

The turmoil at ChatGPT-maker OpenAI, bookended by the board of directors firing high-profile CEO Sam Altman on Nov. 17, 2023, and rehiring him just four days later, has put a spotlight on artificial intelligence safety and concerns about the rapid development of artificial general intelligence, or AGI. AGI is loosely defined as human-level intelligence across a range of tasks.

The OpenAI board stated that Altman's termination was for lack of candor, but speculation has centered on a rift between Altman and members of the board over concerns that OpenAI's remarkable growth – products such as ChatGPT and Dall-E have acquired hundreds of millions of users worldwide – has hindered the company's ability to focus on catastrophic risks posed by AGI.

OpenAI's goal of developing AGI has become entwined with the idea of AI acquiring superintelligent capabilities and the need to safeguard against the technology being misused or going rogue. But for now, AGI and its attendant risks are speculative. Task-specific forms of AI, meanwhile, are very real, have become widespread and often fly under the radar.

As a researcher of information systems and responsible AI, I study how these everyday algorithms work and how they can harm people.

AI plays a visible part in many people's daily lives, from face recognition unlocking your phone to speech recognition powering your digital assistant. It also plays roles you might be vaguely aware of – for example, shaping your social media and online shopping sessions, guiding your video-watching choices and matching you with a driver in a ride-sharing service.

AI also affects your life in ways that might completely escape your notice. If you're applying for a job, many employers use AI in the hiring process. Your bosses might be using it to identify employees who are likely to quit. If you're applying for a loan, odds are your bank is using AI to decide whether to grant it. If you're being treated for a medical condition, your health care providers might use it to assess your medical images. And if you know someone caught up in the criminal justice system, AI could well play a role in determining the course of their life.

Many of the AI systems that fly under the radar have biases that can cause harm. For example, machine learning methods use inductive logic, which starts with a set of premises, to generalize patterns from training data. A machine learning-based resume screening tool was found to be biased against women because the training data reflected past practices when most resumes were submitted by men.

The use of predictive methods in areas ranging from health care to child welfare could exhibit biases such as cohort bias that lead to unequal risk assessments across different groups in society. Even when legal practices prohibit discrimination based on attributes such as race and gender – for example, in consumer lending – proxy discrimination can still occur. This happens when algorithmic decision-making models do not use characteristics that are legally protected, such as race, and instead use characteristics that are highly correlated or connected with the legally protected characteristic, like neighborhood. Studies have found that risk-equivalent Black and Latino borrowers pay significantly higher interest rates on government-sponsored enterprise securitized and Federal Housing Authority insured loans than white borrowers.
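
To make the mechanism concrete, here is a minimal sketch with synthetic data and hypothetical feature names – an illustration of proxy discrimination in general, not a reconstruction of any model from the studies cited above. A classifier is trained without the protected attribute, yet it reproduces the historical disparity because a correlated proxy stands in for it.

```python
# Minimal sketch (synthetic data, hypothetical feature names). The protected
# attribute "group" is never given to the model, but the correlated proxy
# "neighborhood" lets the classifier reproduce the historical disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                        # protected attribute (not a model input)
neighborhood = (group + (rng.random(n) < 0.2)) % 2   # proxy: agrees with group 80% of the time
income = rng.normal(50 + 10 * group, 10, n)          # legitimate-looking feature

# Historical decisions were biased against group 0, beyond what income explains.
approved = (income + 15 * group + rng.normal(0, 5, n)) > 60

X = np.column_stack([income, neighborhood])          # note: "group" is NOT a feature
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
print("predicted approval rate, group 0:", pred[group == 0].mean())
print("predicted approval rate, group 1:", pred[group == 1].mean())
# The gap persists even though the model never sees "group" directly.
```

Dropping the protected column is therefore not enough on its own; auditing outcomes across groups, as the studies above do, is what reveals the disparity.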

Another form of bias occurs when decision-makers use an algorithm differently from how the algorithm's designers intended. In a well-known example, a neural network learned to associate asthma with a lower risk of death from pneumonia. This was because asthmatics with pneumonia are traditionally given more aggressive treatment that lowers their mortality risk compared to the overall population. However, if the outcome from such a neural network is used in hospital bed allocation, then those with asthma and admitted with pneumonia would be dangerously deprioritized.

Biases from algorithms can also result from complex societal feedback loops. For example, when predicting recidivism, authorities attempt to predict which people convicted of crimes are likely to commit crimes again. But the data used to train predictive algorithms is actually about who is likely to get re-arrested.

The Biden administration's recent executive order and enforcement efforts by federal agencies such as the Federal Trade Commission are the first steps in recognizing and safeguarding against algorithmic harms.

And though large language models, such as GPT-3 that powers ChatGPT, and multimodal large language models, such as GPT-4, are steps on the road toward artificial general intelligence, they are also algorithms people are increasingly using in school, work and daily life. It's important to consider the biases that result from widespread use of large language models.

For example, these models could exhibit biases resulting from negative stereotyping involving gender, race or religion, as well as biases in representation of minorities and disabled people. As these models demonstrate the ability to outperform humans on tests such as the bar exam, I believe that they require greater scrutiny to ensure that AI-augmented work conforms to standards of transparency, accuracy and source crediting, and that stakeholders have the authority to enforce such standards.

Ultimately, who wins and loses from large-scale deployment of AI may not be about rogue superintelligence, but about understanding who is vulnerable when algorithmic decision-making is ubiquitous.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

***



The Most Important AI Innovations of 2024 | by AI News | Dec, 2023 – DataDrivenInvestor


In the fast-paced realm of artificial intelligence (AI), 2024 will be a transformative year, marking a profound shift in our understanding of AI capabilities and its real-world applications. While some developments have been a culmination of years of progress, others have emerged as groundbreaking innovations. In this article, we'll explore the most important AI innovations that will define 2024.

The term multimodality may sound technical, but its implications are revolutionary. In essence, it refers to an AI system's ability to process diverse types of data, extending beyond text to include images, video, audio, and more. In 2023, the public witnessed the debut of powerful multimodal AI models, with OpenAI's GPT-4 leading the way. This model allows users to upload not only text but also images, enabling the AI to see and interpret visual content.

Google DeepMind's Gemini, unveiled in December, further advanced multimodality, showcasing the model's capacity to work with images and audio. This breakthrough opens doors to endless possibilities, such as seeking dinner suggestions based on a photo of your fridge contents. According to Shane Legg, co-founder of Google DeepMind, the shift towards fully multimodal AI marks a significant landmark, indicating a more grounded understanding of the world.

The promise of multimodality extends beyond mere utility; it enables models to be trained on diverse data sets, including images, video, and audio. This wealth of information enhances the models' capabilities, propelling them towards the ultimate goal of artificial general intelligence that matches human intellect.


OpenAI’s six-member board will decide ‘when we’ve attained AGI’ – VentureBeat


According to OpenAI, the six members of its nonprofit board of directors will determine when the company has attained AGI – which it defines as a highly autonomous system that outperforms humans at most economically valuable work. Thanks to a for-profit arm that is legally bound to pursue the Nonprofit's mission, once the board decides AGI, or artificial general intelligence, has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.

But as the very definition of artificial general intelligence is far from agreed-upon, what does it mean to have a half-dozen people deciding on whether or not AGI has been reached for OpenAI, and therefore, the world? And what will the timing and context of that possible future decision mean for its biggest investor, Microsoft?

The information was included in a thread on X over the weekend by OpenAI developer advocate Logan Kilpatrick. Kilpatrick was responding to a comment by Microsoft president Brad Smith, who at a recent panel with Meta chief scientist Yann LeCun tried to frame OpenAI as more trustworthy because of its nonprofit status – even though the Wall Street Journal recently reported that OpenAI is seeking a new valuation of up to $90 billion in a sale of existing shares.

Smith said: "Meta is owned by shareholders. OpenAI is owned by a non-profit. Which would you have more confidence in? Getting your technology from a non-profit or a for-profit company that is entirely controlled by one human being?"


In his thread, Kilpatrick quoted from the "Our structure" page on OpenAI's website, which offers details about OpenAI's complex nonprofit/capped-profit structure. According to the page, OpenAI's for-profit subsidiary is fully controlled by the OpenAI nonprofit (which is registered in Delaware). While the for-profit subsidiary, OpenAI Global, LLC – which appears to have shifted from the limited partnership OpenAI LP, announced in 2019, about three years after the original OpenAI nonprofit was founded – is permitted to make and distribute profit, it is subject to the nonprofit's mission.

It certainly sounds like once OpenAI achieves their stated mission of reaching AGI, Microsoft will be out of the loop – even though at last week's OpenAI Dev Day, OpenAI CEO Sam Altman told Microsoft CEO Satya Nadella that "I think we have the best partnership in tech ... I'm excited for us to build AGI together."

And in a new interview with the Financial Times, Altman said the OpenAI/Microsoft partnership was working really well and that he expected to raise a lot more over time. Asked if Microsoft would keep investing further, Altman said: "I'd hope so ... there's a long way to go, and a lot of compute to build out between here and AGI ... training expenses are just huge."

From the beginning, OpenAI's structure details say, Microsoft "accepted our capped equity offer and our request to leave AGI technologies and governance for the Nonprofit and the rest of humanity."

An OpenAI spokesperson told VentureBeat that "OpenAI's mission is to build AGI that is safe and beneficial for everyone. Our board governs the company and consults diverse perspectives from outside experts and stakeholders to help inform its thinking and decisions. We nominate and appoint board members based on their skills, experience and perspective on AI technology, policy and safety."

Currently, the OpenAI nonprofit board of directors is made up of chairman and president Greg Brockman, chief scientist Ilya Sutskever, and CEO Sam Altman, as well as non-employees Adam D'Angelo, Tasha McCauley, and Helen Toner.

D'Angelo, who is CEO of Quora, as well as tech entrepreneur McCauley and Toner, who is director of strategy for the Center for Security and Emerging Technology at Georgetown University, have all been tied to the Effective Altruism movement – which came under fire earlier this year for its ties to Sam Bankman-Fried and FTX, as well as its dangerous take on AI safety. And OpenAI has long had its own ties to EA: for example, in March 2017, OpenAI received a grant of $30 million from Open Philanthropy, which is funded by Effective Altruists. And Jan Leike, who leads OpenAI's superalignment team, reportedly identifies with the EA movement.

The OpenAI spokesperson said that "None of our board members are effective altruists," adding that non-employee board members "are not effective altruists; their interactions with the EA community are focused on topics related to AI safety or to offer the perspective of someone not closely involved in the group."

Suzy Fulton, who offers outsourced general counsel and legal services to startups and emerging companies in the tech sector, told VentureBeat that while, in many circumstances, it would be unusual to have a board make this AGI determination, OpenAI's nonprofit board owes its fiduciary duty to supporting its mission of providing safe AGI that is broadly beneficial.

"They believe the nonprofit board's beneficiary is humanity, whereas the for-profit one serves its investors," she explained. "Another safeguard that they are trying to build in is having the Board majority independent, where the majority of the members do not have equity in OpenAI."

"Was this the right way to set up an entity structure and a board to make this critical determination? We may not know the answer until their Board calls it," Fulton said.

Anthony Casey, a professor at The University of Chicago Law School, agreed that having the board decide something as operationally specific as AGI is unusual, but he did not think there is any legal impediment.

"It should be fine to specifically identify certain issues that must be made at the Board level," he said. "Indeed, if an issue is important enough, corporate law generally imposes a duty on the directors to exercise oversight on that issue, particularly mission-critical issues."

Not all experts believe, however, that artificial general intelligence is coming anytime soon, while some question whether it is even possible.

According to Merve Hickok, president of the Center for AI and Digital Policy, which filed a claim with the FTC in March saying the agency should investigate OpenAI and order the company to halt the release of GPT models until necessary safeguards are established, OpenAI, as an organization, suffers from a lack of diversity of perspectives. Their focus on AGI, she explained, has ignored the current impact of AI models and tools.

However, she disagreed with any debate about the size or diversity of the OpenAI board in the context of who gets to determine whether or not OpenAI has attained AGI – saying it distracts from discussions about whether their underlying mission and claim is even legitimate.

"This would shift the focus, and de facto legitimize the claims that AGI is possible," she said.

But does OpenAI's lack of a clear definition of AGI – or whether there will even be one AGI – skirt the issue? For example, an OpenAI blog post from February 2023 said the first AGI "will be just a point along the continuum of intelligence." And in a January 2023 LessWrong interview, CEO Sam Altman said that "the future I would like to see is where access to AI is super democratized, where there are several AGIs in the world that can help allow for multiple viewpoints and not have anyone get too powerful."

Still, it's hard to say what OpenAI's vague definition of AGI will really mean for Microsoft – especially without having full details about the operating agreement between the two companies. For example, Casey said, OpenAI's structure and relationship with Microsoft could lead to some big dispute if OpenAI is sincere about its non-profit mission.

"There are a few nonprofits that own for-profits," he pointed out – the most notable being the Hershey Trust. "But they wholly own the for-profit. In that case, it is easy because there is no minority shareholder to object," he explained. But here Microsoft's for-profit interests could directly conflict with the non-profit interest of the controlling entity.

The cap on profits is easy to implement, he added, but the hard thing is: what to do if meeting the maximum profit conflicts with the mission of the non-profit? Casey added that default rules would say that hitting the profit is the priority and the managers have to put that first (subject to broad discretion under the business judgment rule).

"Perhaps," he continued, "Microsoft said, 'Don't worry, we are good either way. You don't owe us any duties.' That just doesn't sound like the way Microsoft would negotiate."


Game-playing DeepMind AI can beat top humans at chess, Go and poker – New Scientist


A single artificial intelligence can beat human players in chess, Go, poker and other games that require a variety of strategies to win. The AI, called Student of Games, was created by Google DeepMind, which says it is a step towards an artificial general intelligence capable of carrying out any task with superhuman performance.

Martin Schmid, who worked at DeepMind on the AI but who is now at a start-up called EquiLibre Technologies, says that the Student of Games (SoG) model can trace its lineage back to two projects. One was DeepStack, the AI created by a team including Schmid at the University of Alberta in Canada and which was the first to beat human professional players at poker. The other was DeepMind's AlphaZero, which has beaten the best human players at games like chess and Go.

The difference between those two models is that one focused on imperfect-knowledge games – those where players don't know the state of all other players, such as their hands in poker – and one focused on perfect-knowledge games like chess, where both players can see the position of all pieces at all times. The two require fundamentally different approaches. DeepMind hired the whole DeepStack team with the aim of building a model that could generalise across both types of game, which led to the creation of SoG.

Schmid says that SoG begins as a blueprint for how to learn games, and then improves at them through practice. This starter model can then be set loose on different games and teach itself how to play against another version of itself, learning new strategies and gradually becoming more capable. But while DeepMind's previous AlphaZero could adapt only to perfect-knowledge games, SoG can adapt to both perfect and imperfect-knowledge games, making it far more generalisable.

The researchers tested SoG on chess, Go, Texas hold'em poker and a board game called Scotland Yard, as well as Leduc hold'em poker and a custom-made version of Scotland Yard with a different board, and found that it could beat several existing AI models and human players. Schmid says it should be able to learn to play other games as well. "There's many games that you can just throw at it and it would be really, really good at it."

This wide-ranging ability comes at a slight cost in performance compared with DeepMind's more specialised algorithms, but SoG can nonetheless easily beat even the best human players at most games it learns. Schmid says that SoG learns to play against itself in order to improve at games, but also to explore the range of possible scenarios from the present state of a game – even if it is playing an imperfect-knowledge one.

"When you're in a game like poker, it's so much harder to figure out: how the hell am I going to search [for the best strategic next move in a game] if I don't know what cards the opponent holds?" says Schmid. "So there was some set of ideas coming from AlphaZero, and some set of ideas coming from DeepStack into this big mix of ideas, which is Student of Games."

Michael Rovatsos at the University of Edinburgh, UK, who wasn't involved in the research, says that while impressive, there is still a very long way to go before an AI can be thought of as generally intelligent, because games are settings in which all rules and behaviours are clearly defined, unlike the real world.

"The important thing to highlight here is that it's a controlled, self-contained, artificial environment where what everything means, and what the outcome of every action is, is crystal clear," he says. "The problem is a toy problem because, while it may be very complicated, it's not real."


Sam Altman Seems to Imply That OpenAI Is Building God – Futurism

Ever since becoming CEO of OpenAI in 2019, cofounder Sam Altman has made the company's number one mission to build an "artificial general intelligence" (AGI) that is both "safe" and can benefit "all of humanity."

And while we haven't really come to an agreement on what would actually count as AGI, Altman's own vision remains as lofty as it is vague.

Take this new interview with the Financial Times where Altman dished on the upcoming GPT-5 and described AGI as a "magic intelligence in the sky," which sounds an awful lot like he's implying his company is building a God-like entity.

OpenAI's own definition of AGI is a "system that outperforms humans at most economically valuable work," a far more down-to-earth description of what amounts to an omnipotent "superintelligence" for Altman.

In an interview with The Atlantic earlier this year, Altman painted a rosy and speculative vision of an AGI-powered future, describing a utopian society in which "robots that use solar power for energy can go and mine and refine all of the minerals that they need," all without requiring the input of "human labor."

And Altman isn't the only one invoking the language of a God-like AI in the sky.

"Were creating God," an AI engineer working on large language models told Vanity Fair in September. "We're creating conscious machines."

In April, Tesla CEO and OpenAI cofounder Elon Musk – who recently launched his own AI chatbot called Grok, despite warning for many years about the possibility of an evil AI outsmarting humans and taking over the world – told Fox News that Google founder Larry Page "wanted a sort of digital super-intelligence" which would eventually become "basically a digital god, if you will, as soon as possible."

"The reason Open AI exists at all is that Larry Page and I used to be close friends and I would stay at his house in Palo Alto and I would talk to him late in the night about AI safety," Musk added. "At least my perception was that Larry was not taking AI safety seriously enough."

Musk ragequit OpenAI in 2018 over disagreements with the company's direction, a year before Altman was appointed CEO.

For someone so dead-set on AGI, the only trouble is that Altman still sometimes sounds very hazy on the details.

"The vision is to make AGI, figure out how to make it safe...and figure out the benefits," he told the FT,in a vague statement that lacks the degree of specificity you'd expect from the head of a company talking about its number one goal.

But to keep the ball rolling in the meantime, Altman told the newspaper that OpenAI will likely ask Microsoft for even more money, following a $10 billion investment by the tech giant earlier this year.

"Theres a long way to go, and a lot of compute to build out between here and AGI," he told the FT, arguing that "training expenses are just huge."

OpenAI is also conveniently allowing its own board to decide when we've reached AGI, according to the company's website, suggesting there's clearly plenty of wriggle room when it comes to an already hard-to-pin-down topic.

Whether we'll all be witness to a divine ascension of technology or, heck, a robot that can help middle schoolers with their homework remains unclear at best.

Even Altman seemingly has yet to figure out what the "magic intelligence in the sky" will mean for modern society.

But one thing is for certain: it'll be an extremely expensive endeavor, and he's looking for more investment.

More on AGI: Google AI Chief Says There's a 50% Chance We'll Hit AGI in Just 5 Years


AI 2023: risks, regulation & an ‘existential threat to humanity’ – RTE.ie

Opinion: AI's quickening pace of development has led to a plethora of coverage and concern over what might come next

These days the public is inundated with news stories about the rise of artificial intelligence and the ever quickening pace of development in the field. The last year has been particularly noteworthy in this regard, and the most significant story came as ChatGPT was introduced to the world in November 2022.

This is one of many Generative AI systems which can almost instantaneously create text on any topic, in any style, of any length, and at a human level of performance. Of course, the text might not be factual, nor might it make sense, but it almost always does.

ChatGPT is a "large language model". It's large in that it has been trained on enormous amounts of text almost everything that is available in a computer-readable form and it produces extremely sophisticated output of a level of competence we would expect of a human. This can be seen as a big sibling to the predictive text system on your smartphone that helps by predicting the next word you might want to type.
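
To make the predictive-text analogy concrete, here is a toy word-level sketch – a simple bigram counter, nothing like the neural networks behind ChatGPT in scale or sophistication, but it shows the basic "predict the next word" loop the article describes.

```python
# A toy "predictive text" model: count which word tends to follow which, then
# repeatedly emit the most likely next word. Large language models do this job
# with a neural network over tokens, at vastly greater scale and over whole
# passages rather than single words.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1           # count observed next words

def predict_next(word):
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

word, sentence = "the", ["the"]
for _ in range(5):                          # greedily generate five more words
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))                   # e.g. "the cat sat on the cat"
```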


From RTÉ 2fm's Dave Fanning Show, Prof Barry O'Sullivan on the rise of AI

But ChatGPT doesn't do this just at word level, but at the level of entire passages of text. It can also compose answers to complex queries from the user. For example, ChatGPT takes the prompt "how can I make something that flies from cardboard?" and answers with clear instructions, explains the principles of flight that can be utilised and how to incorporate them into your design.

The most powerful AI systems, those using machine learning, are built using huge amounts of data. Arthur C. Clarke said that "any sufficiently advanced technology is indistinguishable from magic". For many years now, there has been growing evidence that the manner in which these systems are created can have considerable negative consequences. For example, AI systems have been shown to replicate and magnify human biases. Some AI systems have been shown to amplify gender and racial biases, often due to hidden biases in the data used to train them. They have also been shown to be brittle in the sense that they can be easily fooled by carefully formulated or manipulated queries.

AI systems have also been built to perform tasks that raise considerable ethical questions such as, for example, predicting the sexual orientation of individuals. There is growing concern about the impact of AI on employment and the future of work. Will AI automate so many tasks that entire jobs will disappear and will this lead to an unemployment crisis? These risks are often referred to as the "short-term" risks of AI. On the back of issues like these, there is a considerable focus on the ethics of AI, how AI can be made trustworthy and safe and the many international initiatives related to the regulation of AI.


From RTÉ Radio 1's Morning Ireland, Prof Barry O'Sullivan discusses an open letter signed by key figures in artificial intelligence who want powerful AI systems to be suspended amid fears of a threat to humanity.

We have recently also seen a considerable focus on the "long-term" risks of AI, which tend to be far more dystopian. Some believe that general purpose AI and, ultimately, artificial general intelligence are on the horizon. Today's AI systems, often referred to as "narrow AI systems", tend to be capable of performing one task well, such as, for example, navigation, movie recommendation, production scheduling and medical diagnosis.

On the other hand, general purpose AI systems can perform many different tasks at a human-level of performance. Take a step further and artificial general intelligence systems would be able to perform all the tasks that a human can and with far greater reliability.

Whether we will ever get to that point, or even if we really would want to, is a matter of debate in the AI community and beyond. However, these systems will introduce a variety of risks, including the extreme situation where AI systems will be so advanced that they would pose an existential threat to humanity. Those who argue that we should be concerned about these risks sometimes compare artificial general intelligence to an alien race: the existence of this extraordinarily advanced technology would be tantamount to us living with an advanced race of super-human aliens.


From RTÉ Radio 1's This Week: the fear of AI becoming too powerful and endangering humans has been a regular sci-fi theme in film and TV for decades, but could it become a reality?

While I strongly believe that we need to address both short-term and long-term risks associated with AI, we should not let the dystopian elements distract our focus from the very real issues raised by AI today. In terms of existential threat to humanity, the clear and present danger comes from climate change rather than artificial general intelligence. We already see the impacts of climate change across the globe and throughout society. Flooding, impacts on food production and the risks to human wellbeing are real and immediate concerns.

Just like the role AI played in the discovery of the Covid-19 vaccines, the technology has a lot to offer in dealing with climate change. For almost two decades the field of computational sustainability has applied the methods of artificial intelligence, data science, mathematics and computer science to the challenges of balancing societal, economic and environmental resources to secure the future well-being of humanity, very much addressing the Sustainable Development Goals agenda.

AI has been used to design sustainable and climate-friendly policies. It has been used to efficiently manage fisheries and plan and monitor natural resources and industrial production. Rather than being seen as an existential threat to humanity, AI should be seen as a tool to help with the greatest threat there exists to humanity today: climate change.

Of course, we cannot let AI develop without guardrails and without proper oversight. I am confident that, given the active debate about the risks of AI and the regulatory frameworks being put in place internationally, we will tame the genie that is AI.

Prof Barry O'Sullivan appears on Game Changer: AI & You, which airs on RTÉ One at 10:15pm tonight

The views expressed here are those of the author and do not represent or reflect the views of RTÉ
