The Future of AI: What to Expect in the Next 5 Years – TechTarget

Posted: January 28, 2024 at 2:35 am

For the first half of the 20th century, the concept of artificial intelligence held meaning almost exclusively for science fiction fans. In literature and cinema, androids, sentient machines and other forms of AI sat at the center of many of science fiction's high-water marks -- from Metropolis to I, Robot. In the second half of the last century, scientists and technologists began earnestly attempting to realize AI.

At the 1956 Dartmouth Summer Research Project on Artificial Intelligence, co-host John McCarthy introduced the phrase artificial intelligence and helped incubate an organized community of AI researchers.

Often AI hype outpaced the actual capacities of anything those researchers could create. But in the last moments of the 20th century, significant AI advances started to rattle society at large. When IBM's Deep Blue defeated grandmaster Garry Kasparov, the game's reigning world champion, the event seemed to signal not only a historic defeat in chess history -- the first time a computer had beaten a reigning world champion in a match -- but also that a threshold had been crossed. Thinking machines had left the realm of sci-fi and entered the real world.

The era of big data and the exponential growth of computational power in accord with Moore's Law have subsequently enabled AI to sift through gargantuan amounts of data and learn to perform tasks that previously only humans could do.

The effects of this machine renaissance have permeated society: Voice assistants such as Alexa, recommendation engines like the one Netflix uses to suggest your next movie based on your viewing history, and the modest steps taken by driverless cars and other autonomous vehicles are emblematic. But the next five years of AI development will likely lead to major societal changes that go well beyond what we've seen to date.

Speed of life. The most obvious change many people will feel is an increase in the tempo of their engagements with large institutions. Any organization that regularly engages with large numbers of users -- businesses, government units, nonprofits -- will be compelled to implement AI in its decision-making processes and in its public- and consumer-facing activities. AI will allow these organizations to make most decisions much more quickly. As a result, we will all feel life speeding up.

End of privacy. Society will also see its ethical commitments tested by powerful AI systems, especially its commitment to privacy. AI systems will likely become much more knowledgeable about each of us than we are about ourselves. Our commitment to protecting privacy has already been severely tested by emerging technologies over the last 50 years. As the cost of peering deeply into our personal data drops and powerful algorithms capable of assessing massive amounts of data become more widespread, we will probably find that it was a technological barrier, more than an ethical commitment, that led society to enshrine privacy.

Thicket of AI law. We can also expect the regulatory environment to become much trickier for organizations using AI. All across the planet, governments at every level -- local, national and transnational -- are seeking to regulate the deployment of AI. In the U.S. alone, we can expect an AI law thicket as city, state and federal government units draft, implement and begin to enforce new AI laws. And the European Union will almost certainly implement its long-awaited AI regulation within the next six to 12 business quarters. The legal complexity of doing business will grow considerably over the next five years as a result.

Human-AI teaming. Much of society will expect businesses and government to use AI as an augmentation of human intelligence and expertise -- as a partner to one or more humans working toward a goal -- rather than as a replacement for human workers. Because artificial intelligence was born as an idea in century-old science fiction tales, the tropes of the genre, chief among them dramatic depictions of AI as an existential threat to humans, are buried deep in our collective psyche. Human-AI teaming, or keeping humans in any process that is substantially influenced by artificial intelligence, will be key to managing the fear of AI that permeates society.


The notion that AI poses an existential risk to humans has existed almost as long as the concept of AI itself. But in the last two years, as generative AI has become a hot topic of public discussion and debate, fear of AI has taken on new undertones.

Arguably the most realistic form of this AI anxiety is a fear of human societies losing control to AI-enabled systems. We can already see this happening voluntarily in use cases such as algorithmic trading in the finance industry. The whole point of such implementations is to exploit the capacities of synthetic minds to operate at speeds that outpace the quickest human brains by many orders of magnitude.

However, the existential threats posited by Elon Musk, Geoffrey Hinton and others seem at best like science fiction, and far less hopeful than much of the AI fiction created a century ago.

The more likely long-term cost of today's AI anxiety is missed opportunity. To the extent that organizations take these claims seriously and underinvest out of such fears, human societies will miss out on significant efficiency gains, innovations that flow from human-AI teaming, and possibly even new forms of technological innovation, scientific knowledge production and other societal advances that powerful AI systems can indirectly catalyze.

Michael Bennett is director of educational curriculum and business lead for responsible AI in The Institute for Experiential Artificial Intelligence at Northeastern University in Boston. Previously, he served as Discovery Partners Institute's director of student experiential immersion learning programs at the University of Illinois. He holds a J.D. from Harvard Law School.

