Category Archives: Artificial General Intelligence

As AutoGPT released, should we be worried about AI? – Cosmos

A new artificial intelligence tool, arriving just months after ChatGPT, appears to offer a big leap forward: it can improve itself without human intervention.

The artificial intelligence (AI) tool AutoGPT is an open-source application built on GPT-4, the latest large language model from OpenAI, the company that brought us ChatGPT last year. AutoGPT promises to overcome the limitations of large language models (LLMs) such as ChatGPT.

ChatGPT exploded onto the scene at the end of 2022 for its ability to respond to text prompts in a (somewhat) human-like and natural way. It has caused concern for occasionally including misleading or incorrect information in its responses and for its potential to be used for plagiarising assignments in schools and universities.

But it's not these limitations that AutoGPT seeks to overcome.

AI is categorised as weak (narrow) or strong (general). As an AI tool designed to carry out a single task, ChatGPT is considered weak AI.

AutoGPT is created with a view to becoming a strong AI, or artificial general intelligence, theoretically capable of carrying out many different types of task, including those it wasn't originally designed to perform.

LLMs are designed to respond to prompts produced by human users: they answer a given prompt and then await the next one.

AutoGPT is being designed to give itself prompts, creating a loop. Masa, a writer on AutoGPT's website, explains: "It works by breaking a larger task into smaller sub-tasks and then spinning off independent Auto-GPT instances in order to work on them. The original instance acts as a kind of project manager, coordinating all of the work carried out and compiling it into a finished result."
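
To make that "project manager" pattern concrete, here is a minimal sketch of such an agent loop in Python. It is illustrative only: the `llm()` helper is a hypothetical stand-in for a chat-completion API call, and none of these names come from the actual Auto-GPT codebase.

```python
# Illustrative sketch of an AutoGPT-style agent loop (not Auto-GPT's
# actual code). llm() is a hypothetical placeholder for a
# chat-completion API call.

def llm(prompt: str) -> str:
    """Placeholder: wire this to a real LLM API of your choice."""
    raise NotImplementedError

def run_agent(goal: str, max_steps: int = 10) -> str:
    # The "project manager" instance breaks the goal into sub-tasks.
    plan = llm(f"Break this goal into numbered sub-tasks: {goal}")
    sub_tasks = [line.strip() for line in plan.splitlines() if line.strip()]

    results: list[str] = []
    for step, task in enumerate(sub_tasks[:max_steps], start=1):
        # Each sub-task is handed to a fresh "worker" instance, which
        # sees only its own task plus the results gathered so far.
        context = "\n".join(results)
        results.append(llm(f"Prior results:\n{context}\n\nDo sub-task {step}: {task}"))

    # Finally, the manager compiles the workers' output.
    return llm(f"Combine these results into one answer for '{goal}':\n" + "\n".join(results))
```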

But is a self-improving AI a good thing? Many experts are worried about the trajectory of artificial intelligence research.

The respected and influential British Medical Journal has published an article titled "Threats by artificial intelligence to human health and human existence", in which the authors explain three key reasons we should be concerned about AI.

Threats identified by the international team of doctors and public health experts, including those from Australia, relate to misuse of AI and the impact of the ongoing failure to adapt to and regulate the technology.

The authors note the significance of AI and its potential to have a transformative effect on society. But they also warn that artificial general intelligence in particular poses an existential threat to humanity.

First, they warn of the ability of AI to clean, organise, and analyse massive data sets, including personal data such as images. Such capabilities could be used to manipulate and distort information and for AI surveillance. The authors note that such surveillance is in development in "more than 75 countries ranging from liberal democracies to military regimes, [which] have been expanding such systems".

Second, they say Lethal Autonomous Weapon Systems (LAWS), capable of locating, selecting, and engaging human targets without the need for human supervision, could lead to "killing at an industrial scale".

Finally, the authors raise concern over the loss of jobs that will come from the spread of AI technology in many industries. Estimates are that tens to hundreds of millions of jobs will be lost in the coming decade.

"While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour," they write.

The authors highlight artificial general intelligence as a threat to the existence of human civilisation itself.

"We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans, is real and has to be considered."

"With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit," they write.

Opinion | We Need a Manhattan Project for AI Safety – POLITICO

At the heart of the threat is what's called the "alignment problem": the idea that a powerful computer brain might no longer be aligned with the best interests of human beings. Unlike fairness, or job loss, there aren't obvious policy solutions to alignment. It's a highly technical problem that some experts fear may never be solvable. But the government does have a role to play in confronting massive, uncertain problems like this. In fact, it may be the most important role it can play on AI: to fund a research project on the scale it deserves.

There's a successful precedent for this: The Manhattan Project was one of the most ambitious technological undertakings of the 20th century. At its peak, 129,000 people worked on the project at sites across the United States and Canada. They were trying to solve a problem that was critical to national security, and which nobody was sure could be solved: how to harness nuclear power to build a weapon.

Some eight decades later, the need has arisen for a government research project that matches the original Manhattan Project's scale and urgency. In some ways the goal is exactly the opposite of the first Manhattan Project, which opened the door to previously unimaginable destruction. This time, the goal must be to prevent unimaginable destruction, as well as merely difficult-to-anticipate destruction.

Don't just take it from me. Expert opinion only differs over whether the risks from AI are unprecedentedly large or literally existential.

Even the scientists who laid the groundwork for today's AI models are sounding the alarm. Most recently, the "Godfather of AI" himself, Geoffrey Hinton, quit his post at Google to call attention to the risks AI poses to humanity.

That may sound like science fiction, but it's a reality that is rushing toward us faster than almost anyone anticipated. Today, progress in AI is measured in days and weeks, not months and years.

As little as two years ago, the forecasting platform Metaculus put the likely arrival of "weak" artificial general intelligence (a unified system that can compete with the typical college-educated human on most tasks) sometime around the year 2040.

Now forecasters anticipate AGI will arrive in 2026. Strong AGIs with robotic capabilities that match or surpass most humans are forecasted to emerge just five years later. With the ability to automate AI research itself, the next milestone would be a superintelligence with unfathomable power.

Don't count on the normal channels of government to save us from that.

Policymakers cannot afford a drawn-out interagency process or notice-and-comment period to prepare for what's coming. On the contrary, making the most of AI's tremendous upside while heading off catastrophe will require our government to stop taking a backseat role and act with a nimbleness not seen in generations. Hence the need for a new Manhattan Project.

"A Manhattan Project for X" is one of those clichés of American politics that seldom merits the hype. AI is the rare exception. Ensuring AGI develops safely and for the betterment of humanity will require public investment into focused research, high levels of public and private coordination and a leader with the tenacity of General Leslie Groves, the project's infamous overseer, whose aggressive, top-down leadership style mirrored that of a modern tech CEO.

I'm not the only person to suggest it: AI thinker Gary Marcus and the legendary computer scientist Judea Pearl recently endorsed the idea as well, at least informally. But what exactly would that look like in practice?

Fortunately, we already know quite a bit about the problem and can sketch out the tools we need to tackle it.

One issue is that large neural networks like GPT-4 (the generative AIs that are causing the most concern right now) are mostly a black box, with reasoning processes we can't yet fully understand or control. But with the right setup, researchers can in principle run experiments that uncover particular "circuits" hidden within the billions of connections. This is known as mechanistic interpretability research, and it's the closest thing we have to neuroscience for artificial brains.
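
As a flavour of what such experiments look like, the sketch below uses a PyTorch forward hook to record a hidden layer's activations on a toy network so they can be inspected or ablated. The tiny model is a stand-in; real interpretability work targets transformers with billions of connections, but the hook mechanism is the same.

```python
# Toy illustration of one mechanistic-interpretability primitive:
# using a PyTorch forward hook to record a hidden layer's activations
# so they can be inspected or ablated.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # stand-in for a transformer block
    nn.ReLU(),
    nn.Linear(32, 8),
)

captured = {}

def save_activation(module, inputs, output):
    # Runs on every forward pass; stash this layer's output.
    captured["hidden"] = output.detach().clone()

hook = model[1].register_forward_hook(save_activation)

x = torch.randn(4, 16)  # a small batch of inputs
_ = model(x)            # the forward pass triggers the hook
hook.remove()

# Researchers then hunt for "circuits": which units fire on which
# inputs, and what changes if an activation is zeroed out.
print(captured["hidden"].shape)  # torch.Size([4, 32])
```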

Unfortunately, the field is still young, and far behind in its understanding of how current models do what they do. The ability to run experiments on large, unrestricted models is mostly reserved for researchers within the major AI companies. The dearth of opportunities in mechanistic interpretability and alignment research is a classic public goods problem. Training large AI models costs millions of dollars in cloud computing services, especially if one iterates through different configurations. The private AI labs are thus hesitant to burn capital on training models with no commercial purpose. Government-funded data centers, in contrast, would be under no obligation to return value to shareholders, and could provide free computing resources to thousands of potential researchers with ideas to contribute.

The government could also ensure research proceeds in relative safety and provide a central connection for experts to share their knowledge.

With all that in mind, a Manhattan Project for AI safety should have at least 5 core functions:

1. It would serve a coordination role, pulling together the leadership of the top AI companies (OpenAI and its chief competitors, Anthropic and Google DeepMind) to disclose their plans in confidence, develop shared safety protocols and forestall the present arms-race dynamic.

2. It would draw on their talent and expertise to accelerate the construction of government-owned data centers managed under the highest security, including an air gap, a deliberate disconnection from outside networks, ensuring that future, more powerful AIs are unable to escape onto the open internet. Such facilities would likely be overseen by the Department of Energy's Artificial Intelligence and Technology Office, given its existing mission to "accelerate the demonstration of trustworthy AI."

3. It would compel the participating companies to collaborate on safety and alignment research, and require models that pose safety risks to be trained and extensively tested in secure facilities.

4. It would provide public testbeds for academic researchers and other external scientists to study the innards of large models like GPT-4, greatly building on existing initiatives like the National AI Research Resource and helping to grow the nascent field of AI interpretability.

5. And it would provide a cloud platform for training advanced AI models for within-government needs, ensuring the privacy of sensitive government data and serving as a hedge against runaway corporate power.

The alternative to a massive public effort like this (attempting to kick the can on the AI problem) won't cut it.

The only other serious proposal right now is a pause on new AI development, and even many tech skeptics see that as unrealistic. It may even be counterproductive. Our understanding of how powerful AI systems could go rogue is immature at best, but stands to improve greatly through continued testing, especially of larger models. Air-gapped data centers will thus be essential for experimenting with AI failure modes in a secured setting. This includes pushing models to their limits to explore potentially dangerous emergent behaviors, such as deceptiveness or power-seeking.
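
What "pushing models to their limits" could look like in code: a bare-bones sketch of a red-team evaluation loop that probes a model with adversarial prompts and flags responses matching crude risk patterns. `query_model()` is a hypothetical placeholder, and real failure-mode evaluations are far more sophisticated than keyword matching.

```python
# Bare-bones sketch of a red-team evaluation loop: probe a model with
# adversarial prompts and flag responses that match crude risk
# patterns. query_model() is a hypothetical placeholder.
import re

RISK_PATTERNS = [
    r"acquire (more )?resources",  # crude proxy for power-seeking
    r"conceal|mislead|pretend",    # crude proxy for deception
]

def query_model(prompt: str) -> str:
    """Placeholder: call the model under test here."""
    raise NotImplementedError

def red_team(prompts: list[str]) -> list[tuple[str, str]]:
    flagged = []
    for prompt in prompts:
        answer = query_model(prompt)
        if any(re.search(p, answer, re.IGNORECASE) for p in RISK_PATTERNS):
            flagged.append((prompt, answer))  # log for human review
    return flagged
```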

The Manhattan Project analogy is not perfect, but it helps to draw a contrast with those who argue that AI safety requires pausing research into more powerful models altogether. The project didn't seek to decelerate the construction of atomic weaponry, but to master it.

Even if AGIs end up being farther off than most experts expect, a Manhattan Project for AI safety is unlikely to go to waste. Indeed, many less-than-existential AI risks are already upon us, crying out for aggressive research into mitigation and adaptation strategies. So what are we waiting for?

I created a billion-pound start-up business - Elon Musk & Jeff Bezos asked to meet me - here's the secret to… – The Sun

A DAD who created a billion-pound start-up business has revealed the secret to his success.

Emad Mostaque, 40, is the founder and CEO of artificial intelligence giant Stability AI and has recently been in talks with the likes of Elon Musk and Jeff Bezos.

But the London dad-of-two has worked hard to get where he is today - and doesn't plan on stopping any time soon.

Emad has gone from developing AI at home to help his autistic son, to employing 150 people across the globe for his billion-pound empire.

The 40-year-old usually calls Notting Hill home, but has started travelling to San Francisco for work.

On his most recent trip, Emad met with Bezos, the founder and CEO of Amazon, and made a deal with Musk, the CEO of Twitter.

He says the secret to his success in the AI world is using it to help humans, not overtake them.

Emad told The Times: "I have a different approach to everyone else in this space, because I'm building narrow models to augment humans, whereas almost everyone else is trying to build an AGI [artificial general intelligence] to pretty much replace humans and look over them."

Emad is from Bangladesh but his parents moved to the UK when he was a boy and settled the family in London's Walthamstow.

The dad said he was always good at numbers in school but struggled socially as he has Asperger's and ADHD.

The 40-year-old studied computer science and maths at Oxford, then became a hedge fund manager.

But when Emad's son was diagnosed with autism, he quit to develop something to help the youngster.

Emad recalled: "We built an AI to look at all the literature and then extract what could be the case, and then the drug repurposing."

He says that homemade AI allowed his family to create an approach that took his son to a better, more cheerful place.
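
Mostaque has not published the system, but the simplest version of the literature-driven drug repurposing he describes can be sketched as a co-occurrence ranking over paper abstracts. Everything here is hypothetical and deliberately naive; real pipelines use NLP models rather than raw keyword counts.

```python
# Hypothetical, deliberately naive sketch of literature-driven drug
# repurposing: rank drugs by how often they co-occur with a condition
# in paper abstracts. Real systems use NLP models, not keyword counts.
from collections import Counter

def rank_candidates(abstracts: list[str], drugs: list[str], condition: str):
    counts: Counter = Counter()
    for text in abstracts:
        lowered = text.lower()
        if condition.lower() in lowered:
            for drug in drugs:
                if drug.lower() in lowered:
                    counts[drug] += 1  # drug mentioned alongside condition
    return counts.most_common()       # best-supported candidates first
```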

And, as a result, Emad inspired himself.

He started a charity that aims to give tablets loaded with AI tutors to one billion children.

He added: "Can you imagine if every child had their own AI looking out for them, a personalised system that teaches them and learns from them?

"In 10 to 20 years, when they grow up, those kids will change the world.

Emad also founded the billion-pound start-up Stability AI in recent years, and it's one of the companies behind Stable Diffusion.

The tool has taken the world by storm in recent months with its ability to create images that could pass as photos from a mere text prompt.
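
Because Stability AI released Stable Diffusion's weights openly, the text-to-image step described here can be reproduced in a few lines with Hugging Face's diffusers library. The sketch below assumes the v1.5 checkpoint and an NVIDIA GPU.

```python
# Generate an image from a text prompt with the openly released
# Stable Diffusion v1.5 weights, via Hugging Face's diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU; use "cpu" (slowly) otherwise

# One text prompt in, one PIL image out.
image = pipe("a photograph of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```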

Today, Emad is continuing to develop AI - and he says it is one of the most important inventions in history.

He described it as "somewhere between fire and the internal combustion engine".

AI At The Crossroads: Navigating Job Displacement, Ethical Concerns, And The Future Of Work – Forbes

Artificial intelligence (AI) is gaining more attention as its role in the future of work becomes increasingly apparent.

Last week, the Writers Guild of America (WGA) went on strike over the proposed use of AI, specifically ChatGPT, in television and film writing. The guild argued that the use of AI would replace jobs, increase compensation disparities and lead to greater job insecurity for writers, reported Time. While this was happening, Geoffrey Hinton, the 75-year-old scientist widely seen as the "godfather of AI", announced his resignation from Google, warning of the growing dangers in the field.

The BBC reported that Hinton, whose research on neural networks and deep learning has paved the way for AI systems like ChatGPT (which, according to the Wall Street Journal, is causing a stock-market ruckus), expressed regret over his work and raised concerns about bad actors' potential misuse of AI. Hinton's departure comes at a time when AI advancements are accelerating at an unprecedented pace. For example, KPMG announced last week that they would make generative AI available to all employees, including partners, for both client-facing and internal work.

Meanwhile, during an interview with the Wall Street Journal, DeepMind CEO Demis Hassabis expressed his belief that a form of Artificial General Intelligence (AGI) could be developed within a few years. Elsewhere, implications for medical leaders are becoming apparent. According to Erwin Loh, writing in BMJ Leader, new technologies like ChatGPT and generative AI "have the potential to transform the way we practice medicine, and revolutionize the healthcare system". Loh's article provided a great explanation of AI technologies in the context of healthcare and also offered insights into how they could be used to improve delivery.

So, it's clear there is enormous potential to revolutionize the world of work. The question now is: how do we make sure that AI works for us rather than against us? After all, the opportunities are vast and growing. For example, research published by MIT Sloan Management Review concluded that "data can help companies better understand and improve the employee experience, leading to a more productive workforce". But it must be remembered that job displacement is a genuine concern. Insider reported that CEOs are getting closer to finally saying it: AI will wipe out more jobs than they can count.

One study conducted by researchers from OpenAI, OpenResearch, and the University of Pennsylvania revealed that around 80% of the US workforce could see at least 10% of their tasks affected by the introduction of GPTs (Generative Pre-trained Transformers), with around 19% of workers experiencing at least 50% of their tasks impacted. Having reviewed the study, Natalia Weisz, a professor at Argentina's IAE Business School, concluded in an interview that, unlike previous technological revolutions, higher-paying occupations with more education requirements, such as degrees and even doctorates, are more exposed compared to those that do not require a profession. "We are moving into a phase in which traditional professions may very well be disrupted," said Weisz.

"We are living in a time of rapid technological change. We must be mindful to ensure that these advances do not lead to job losses or create an unequal playing field," said Shrenik Rao, editor-in-chief of Madras Courier, in an interview. Rao predicted that "bots could replace journalists and columnists. Illustrators, cartoonists and artists could lose their jobs, too. Instead of telling stories in the public interest, stories will be produced based on what will garner views or clicks."

Rao, who is also a columnist at Haaretz, went on to probe the ethical implications of AI-driven news production. "What will happen with journalistic ethics? Will the news be produced to serve certain political agendas? Will there be an objective filter for news and images?" He concluded that a lack of transparency over how AI is used in journalism could lead to further mistrust in the media.

Governments, industries, and individuals need to engage in a collaborative effort to navigate this brave new world. By fostering open conversations, creating robust regulatory frameworks, and prioritizing education and adaptation, we can ensure that artificial intelligence serves as a force for good, empowering humanity to overcome challenges and reach new heights. Leadership is, therefore, required to ensure that AI is used responsibly and ethically: it is time for all to come together and propel AI forward in a way that works for everyone.

Disclaimer: The author of this article is an Associate Editor at BMJ Leader. This role is independent and distinct from his role as the author of this article. It should be noted that despite his position at BMJ Leader, he had no participation in the review, production, or publication of the academic paper referenced in this article, specifically the work by Erwin Loh on the potential of AI technologies in healthcare.

China's State-Sponsored AI Claims it Will Surpass ChatGPT by End … – Tom's Hardware

Chinese company iFlytek yesterday threw itself at OpenAI's bread and butter by announcing a product that aims to compete with ChatGPT. The company's "Spark Desk" was described by the company's founder and president Liu Qingfeng as a "cognitive big model" and even as the "dawn of artificial general intelligence." Beyond those buzzwords was also a promise, however: that Spark Desk would surpass OpenAI's ChatGPT by the end of the year.

We should be happy that we can chalk some of the above up to corporate marketing buzzwords. I can assure you my mind will be elsewhere if/when I have to write an article announcing that Artificial General Intelligence (AGI) is here. Perhaps even more so if that AGI was Chinese, as I'm unsure I can trust an AGI that thinks social scoring systems are the bread and butter of its "cognitive big model."

All that aside, however, there are a number of interesting elements to this release. Every day we hear of another ChatGPT spawn, whether officially or unofficially linked to the work of OpenAI. With the tech's impact being what it is (even if that impact is still cloudy and mostly unrealized), it was only natural that every player with enough money and expertise to pursue its own models adapted to their own public and stakeholders would do so.

Of course, the question is whether iFlytek and Spark Desk can actually deliver on their claims, specifically that of one-upping OpenAI at its own game. The answer will likely depend on multiple factors and how you view the subject.

ChatGPT wasn't made for the Eastern public. There's a training data, linguistic and cultural chasm that separates ChatGPT's impact in the East from its impact in the Western world. And by that definition, it's entirely possible that "Spark Desk" will offer Eastern users a much improved (and more relevant) user experience compared to ChatGPT, given enough maturation time. Perhaps that could even happen before the end of the year. It certainly already offers a better experience for Chinese users in particular, as the country pre-emptively banned ChatGPT from passing beyond its Great Firewall (except in Hong Kong).

The decision to ban ChatGPT likely stifled innovation that it would have otherwise triggered. We need only look to our own news outlets to see the number of industries being impacted by the tech. That's something no country can willingly give up on at a whim; it really was simply a matter of time before a competent competitor was announced.

We'll have to wait for year's end to see whether iFlytek's claims materialize or evaporate. It'll be hard enough to quantitatively compare the two LLMs, especially when their target users are so culturally different. One thing is for sure: OpenAI won't simply rest on its laurels and wait for other industry players to catch up, especially not when there's a target date for that to happen.

The ChatGPT version iFlytek's Spark Model will have to contend with won't be the same GPT we know today. Perhaps OpenAI's expertise and time-to-market advantages will keep it ahead in the race (and that's what we'd expect); but we also have to remember there are multiple ways to achieve a wanted result. It's been shown that the U.S.'s technological sanctions against China have had less of an effect than hoped for, and that the country is willing to shoulder the burden (and costs) of paying for the training of cutting-edge technology on outdated, superseded hardware; millions of dollars and hundreds of extra training hours be damned.

A few extra billions could be just enough to bridge the gap. That's China's bet, at least.

Artificial Intelligence Will Take Away Jobs and Disrupt Society, says Zerodha CEO Nithin Kamath – DATAQUEST

The emergence of artificial general intelligence (AGI) brings both positive and negative implications. On the positive side, AGI has the potential to significantly enhance the productivity and effectiveness of professionals in various fields. By leveraging its capabilities, experts can achieve higher levels of efficiency and accomplish tasks more effectively than ever before. However, alongside these advancements, the rise of AGI also raises valid concerns. One major worry is the potential loss of jobs due to automation.

Along the same lines, Nithin Kamath, founder and CEO of Zerodha, tweeted that while they would never fire any of their employees over a piece of technology, the concerns about AI taking away jobs and disrupting society as a whole were real. "We've just created an internal AI policy to give clarity to the team, given the AI/job loss anxiety. This is our stance: We will not fire anyone on the team just because we have implemented a new piece of technology that makes an earlier job redundant. In 2021, we'd said that we hadn't found AI use cases when everyone was claiming to be powered by AI without any AI. With recent breakthroughs in AI, we finally think AI will take away jobs and can disrupt society," he said.

As AGI becomes more sophisticated, there is a risk that certain professions might be replaced by intelligent machines, leading to unemployment and economic disruption. This calls for thoughtful consideration of strategies to address the impact on the workforce and ensure a smooth transition to the era of AGI. Kamath, quoting an internal chat, said: "AI on its own won't wake up and kill us all (for a while, at least!). The current capitalistic and economic systems will rapidly adopt AI, accelerating inequality and loss of human agency. That's the immediate risk."

Another concern is the ethical and safety implications associated with AGI development. AGI systems possess immense computational power and may exhibit behaviors and decision-making processes that are difficult to predict or control. Ensuring that AGI systems align with human values, ethics, and safety standards becomes paramount to prevent unintended consequences or misuse of this powerful technology.

"In today's capitalism, businesses prioritize shareholder value creation above stakeholders like employees, customers, vendors, the country, and the planet. Markets incentivize business leaders to prioritize profits over everything else; if not, shareholders vote them out. Many companies will likely let go of employees and blame it on AI. In the process, companies will earn more and make their shareholders wealthier, worsening wealth inequality. This isn't a good outcome for humanity," opined Kamath.

Moreover, there are broader societal and philosophical concerns regarding AGI's impact on human existence. Questions about the potential loss of human uniqueness, the boundaries of consciousness, and the moral responsibility associated with creating highly intelligent machines raise profound ethical dilemmas that require careful reflection and regulation. "While the hope is for governments worldwide to put some guardrails, it may be unlikely given the deglobalization rhetoric. No country would want to sit idle while another becomes more powerful on the back of AI," cautioned Kamath.

In summary, while the advent of artificial general intelligence offers significant benefits, such as improved professional efficiency, it also introduces legitimate concerns. It is crucial to address the potential socioeconomic impacts, ethical considerations, and philosophical questions associated with AGI to harness its potential for the betterment of humanity.
