
The Singularity When We Merge With AI Won’t Happen – Walter Bradley Center for Natural and Artificial Intelligence

Erik J. Larson, who writes about AI here at Mind Matters News, spoke with EP podcast host Jesse Wright earlier this week about the famed/claimed Singularity, among other things. That's when human and machine supposedly merge into a Super Humachine (?).

Inventor and futurist Ray Kurzweil has been prophesying that for years. But philosopher and computer scientist Larson, author of The Myth of Artificial Intelligence (Harvard 2021), says not so fast.

The podcast below is nearly an hour long, but it is handily divided into segments, a virtual table of contents. We've set it at "The Fallacy of the Singularity," with selections from the transcript below. But you can click and enjoy the other parts at your convenience.

00:00 Intro

01:10 Misconceptions about AI Progress

11:48 Bias and Misinformation in AI Models

21:52 The Plateau of Progress & End of Moore's Law

31:30 The Fallacy of the Singularity

47:27 Preparing for the Future Job Market

Note: Larson blogs at Colligo, if you wish to follow his work.

And now…

Decades ago, Larson says, programmers were focused on getting computers to win at complex board games like chess. One outcome was that their model of the human mind was the computer. And that, he says, became a narrative in our culture.

Larson: [33:19] You know, people are kind of just bad versions of computers. If you look at all the literature coming out of psychology and cognitive science and these kinds of fields, they're always pointing out how we're full of bias, jumping to the wrong conclusions. We can't be trusted. Our brains are very, very "Yesterday's Tech," so to speak.

Choking off innovation?

Larson sees this easy equation of the mind and the computer as choking off innovation, at which humans excel. It encourages people to believe that computers will solve our problems when there are major gaps in their ability to do so. One outcome is that, contrary to cliché, this is one of the least innovative periods in a long while.

Larson: [34:25] The last decade is one of the least innovative times that we've had in a long time, and it's sort of dangerous that everybody thinks the opposite. If people said, wait a minute, we're just doing tweaks to neural networks; we're just doing extensions to existing technology… Yes, we're making progress, but we're doing it at the expense of massive amounts of funding, massive amounts of energy consumption, right?

Instead he sees conformity everywhere, accompanied by a tendency to assume that incremental improvements amount to progress in fundamental understanding.

So how does our self-contented mediocrity produce an imminent, unhinged Singularity?

Well, a pinch of magic helps!

Larson: [37:49] What's underlying that is this idea that once you get smart enough, you also become alive. And that's just not true. A calculator is extremely good at arithmetic. No one on the face of the planet can beat a calculator, but that doesn't mean that your calculator has feelings about how it's treated. In a sense, there's just a huge, glaring philosophical error that's being made by the Superintelligence folks, the existential-risk folks. That's wasted energy, in my view. That's not what's going to happen.

If a more powerful computer is not like a human mind, what's really going to happen?

Larson: [38:40] Very bad actors are going to use very powerful machines to screw everything up… Somebody gets control of these systems and directs them towards ruining Wall Street, ruining the markets, bringing down the power grid. That's a big threat. The machines themselves? I would bet the farm that they're not going to make the leap from being faster and calculating more complicated problems to being alive in any sort of sense, or having any kind of motivations, or something that could misalign like that. That's the sci-fi vibe that's getting pushed into a scientific discussion.

The Singularity depends on a machine model of the mind

Larson: [46:17] If we're just a complicated machine, then it stands to reason that at some point we'll have a more complicated machine. It's just a continuum, and we're on it. But if you actually remove that premise and say, look, we're not machines, we're not computers, then you have an ability to talk about human culture in a way that can actually be healthy. We think differently, we reason differently, we have superior aspects to our behavior and performance, and we actually do care and have motivations about how things turn out, unlike the tools we use.

So it looks as though the transhuman could go extinct without ever existing.

You may also wish to read: Tech pioneer Ray Kurzweil: We will merge with computers by 2045. For computers, "Even the very best human is just another notch to pass," he told the COSM Technology Summit. Kurzweil explained, "To do that, we need to go inside your brain. When we get to the 2030s, we will be able to do that. So a lot of our thinking will be inside the cloud. In another ten years, our non-biological thinking will be much better than our biological thinking." In 2017, he predicted 2045 for a total merger between man and machine.


The Singularity is Nearer | Daniel S. Smith | The Blogs – The Times of Israel

A review of Ray Kurzweil's forthcoming (June 2024) book The Singularity is Nearer: When We Merge with AI (Penguin Publishing Group).

Renowned futurist Ray Kurzweil envisions a future where those under eighty and in good health have the potential to live forever. He predicts that by the 2030s, we will be able to extend the neocortex of our brains into the cloud, enabling a massive increase in human intelligence. Kurzweil's latest work, The Singularity is Nearer, takes readers on a journey from ignorance to enlightenment, shedding light on the incredible possibilities that await us. Even if you've overlooked Kurzweil's predictions over the past four decades, now is the time to take notice. His track record of accurate forecasts demonstrates that we can indeed predict the future, and his insights into what lies ahead are invaluable.

Many who would have brushed Kurzweil aside as a heretic in 2005, when he published The Singularity is Near, reviving John the Baptist's prediction that "the kingdom of heaven is near" (Matthew 3:2), or in 1990 (The Age of Intelligent Machines) or 1999 (The Age of Spiritual Machines), are more likely to take his arguments seriously in 2024. Incredible advancements in technology over the past few decades, particularly in AI and biotech, have lent significant credibility to Kurzweil's predictions. Microsoft co-founder Bill Gates describes Kurzweil as "the best person I know at predicting the future of artificial intelligence."

Kurzweil is intervening in a century of literature and debate, right up to the spring of 2023. He has been working in the field of artificial intelligence, a term he does not like because it makes it seem less real, for over sixty years. The book serves as a historiography of machine intelligence and the myriad debates therein.

In 1950, Alan Turing asked, "Can machines think?" Pioneering computer scientist John von Neumann made the first reference to the Singularity, writing a few years after Turing that "the ever-accelerating progress of technology" would yield "some essential singularity in the history of the race." In 1956, John McCarthy defined AI as getting a computer to do things which, when done by people, are said to involve intelligence. In 1965, British mathematician Irving John Good predicted an impending "intelligence explosion." In that same year, Herbert Simon, a scientist who co-founded the field of artificial intelligence, forecast that by 1985 "machines will be capable of doing any work a man can do." In 1993, Vernor Vinge wrote his seminal essay "The Coming Technological Singularity: How to Survive in the Post-Human Era," arguing: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."

In his 2005 book The Singularity is Near, Kurzweil defines the singularity as an expansion of human intelligence by a factor of trillions through merger with its nonbiological form, a change that will happen so rapidly that life will be irreversibly transformed. In The Singularity is Nearer, Kurzweil predicts that in 2045 humanity will be "freed from the enclosure of our skulls, and processing on a substrate millions of times faster than biological tissue," and that "our minds will be free to grow exponentially, ultimately expanding our intelligence millions-fold. This is the core of my definition of the Singularity." The laws of physics allow for a continuation of exponential growth until nonbiological intelligence is trillions of times more powerful than all of human civilization today, contemporary computers included. This intelligence will be too much for planet Earth and will therefore engulf the entire universe.

Critics of Kurzweil such as Microsoft co-founder Paul Allen and Mark Greaves of Schmidt Futures describe his claims as premature. Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, is also doubtful about the imminence of superhuman AI: "Exponentials are very important. If we extrapolate exponentials, we can be exponentially wrong." Mathematician Roger Penrose argued in his 1989 book The Emperor's New Mind that some facets of human thinking can never be emulated by a machine. Philosopher John Searle has also argued against humanity achieving machine sapience, whereas linguist Noam Chomsky thinks the singularity is science fiction. Philosopher Hubert Dreyfus said that AI is impossible in a 1965 RAND Corporation memo entitled "Alchemy and Artificial Intelligence," in which he concluded that the ultimate goals of AI research were as unachievable as those of alchemy. The computer scientist Joseph Weizenbaum described the idea as obscene, anti-human, and immoral. Pulitzer Prize winner Douglas Hofstadter considered it over-promising. Virtual-reality (VR) pioneer Jaron Lanier emphasizes the importance of preserving individual creativity and personal expression in the digital age, warning against the homogenization of human experiences through technology.

How couples meet. Courtesy: Statista

Yet Kurzweil is doubling down again, arguing the rate of change is itself accelerating. He notes that today 39 percent of couples have met online. Who would have believed this in 2005?

In 2005, we were in the fourth epoch of technological development. According to Kurzweil, we are expected to pass the Turing Test by 2029, marking the transition to the fifth epoch. This prediction was first introduced in his 1999 book The Age of Spiritual Machines.

As we enter the 2030s, the fifth epoch will be characterized by a significant expansion of our cognitive abilities. This will be achieved by connecting the neocortex of our brain to the cloud, a concept Kurzweil explored in his 2012 book How to Create a Mind. For the sixth epoch, provided we are not limited by the speed of light, we can fill the entire universe with our intelligence by the year 2200. His predictions are based on his analysis of exponential growth in technological advancements.

In his 1990 book The Age of Intelligent Machines, Kurzweil predicted: "A computer will defeat the human world chess champion around 1998, and we'll think less of chess as a result." He was one year off, as Deep Blue defeated world champion Garry Kasparov in 1997. In 2015, AlphaGo, an AI developed by Google's DeepMind, defeated the European Go champion Fan Hui. That victory marked the first time an AI had beaten a human professional Go player on a full-sized board without a handicap. With all of this progress, why would Kurzweil back down now?

Kurzweil argues AI will not be our competitor but rather an extension of ourselves. The fifth epoch will involve brain-computer interfaces, and it will take us mere seconds to minutes to explore ideas unimaginable to present-day humans. This will benefit humankind, compared to life hundreds of years ago, which was "labor-intensive, poverty filled, and disease and disaster prone."

Life is getting exponentially better, yet we hardly notice because the news media tends to amplify tragedies rather than steady improvement. Constant fear-mongering that plays to our primal instincts leads to a more pessimistic view of society, for "it's easier to share videos of disaster, but gradual progress doesn't generate dramatic footage… This crowds out our capacity to assess positive developments that unfold slowly."

Kurzweil is a technology optimist who takes a historical exponential, as opposed to an intuitive linear, view of human progress. Linear growth is steady; exponential growth becomes explosive, for "we won't experience one hundred years of technological advance in the twenty-first century; we will witness on the order of twenty thousand years of progress." He claims Moore's Law has nothing to do with Intel and Gordon Moore, and has in fact been operating since the 1880s, for "It was the fifth, not the first, paradigm to bring exponential growth to the price/performance of computing."
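Kurzweil's "twenty thousand years of progress" figure is just the arithmetic of compounding: if the rate of progress itself keeps accelerating, integrating that rate over a century yields far more than one hundred "today-years" of advance. A minimal sketch of that arithmetic (the ten-year doubling period here is an illustrative placeholder, not Kurzweil's published parameter):

```python
import math

def progress_years(horizon=100.0, doubling_period=10.0):
    """Total progress over `horizon` calendar years, measured in "today-years",
    assuming the rate of progress doubles every `doubling_period` years,
    i.e. rate(t) = 2 ** (t / doubling_period)."""
    # Closed-form integral of 2**(t/T) from t = 0 to t = horizon:
    #   T / ln(2) * (2**(horizon/T) - 1)
    return doubling_period / math.log(2) * (2 ** (horizon / doubling_period) - 1)

print(f"{progress_years():,.0f}")  # roughly 15,000 "today-years" per century
```

Even with this conservative placeholder, a century of accelerating progress compresses on the order of ten thousand "today-years" of advance, which is the shape of Kurzweil's claim, whatever one thinks of the premise.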

His optimism set off a debate with Bill Joy of Sun Microsystems, whose famous 2000 Wired magazine essay "Why the Future Doesn't Need Us" is more pessimistic. This is part of a larger divide with figures like Elon Musk and Stephen Hawking over the potential perils of artificial general intelligence (AGI). Ethicist and founder of the Machine Intelligence Research Institute (MIRI) Eliezer Yudkowsky argues the only way to deal with the threat of AGI is to shut it all down. Yudkowsky predicts: "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter." He predicts a hard takeoff versus Robin Hanson's soft takeoff. Kurzweil says he falls somewhere in the middle.

Citing Steven Pinker's 2011 book The Better Angels of Our Nature and 2018 book Enlightenment Now, as well as Peter Diamandis and Steven Kotler's 2012 book Abundance, Kurzweil believes the state of the world keeps improving. He uses fifty graphs to show gradual progress over the past century, such as declines in the rates of poverty, violence, and child labor. He expects AI to accelerate these trends. Other optimists include OpenAI CEO Sam Altman, who argues: "A.I. will be the greatest force for economic empowerment and a lot of people getting rich we have ever seen."


In the technological pessimists' most extreme expression, Ted Kaczynski, the Unabomber, called, violently, for an anti-tech revolution. Kurzweil wrote in The Age of Spiritual Machines:

Kaczynski is not talking about a contemplative visit to a nineteenth-century Walden Pond, but about the species dropping all of its technology and reverting to a simpler time. Although he makes a compelling case for the dangers and damages that have accompanied industrialization, his proposed vision is neither compelling nor feasible. After all, there is too little nature left to return to, and there are too many human beings. For better or worse, we're stuck with technology.

Steven Pinker notes that "pessimism can be a self-fulfilling prophecy," so it is best we accept the inevitable and make the most of it. Yuval Harari writes: "In the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction." Kurzweil says the nonbiological part of our intelligence will combine the pattern-recognition powers of human intelligence with the memory- and skill-sharing ability and memory accuracy of machines, and thus will be far more powerful than biological intelligence.

Kurzweil argued in The Singularity is Near that "any significant derailment of the overall advancement of technology is unlikely. Even epochal events such as two world wars (in which on the order of one hundred million people died), the cold war, and numerous economic, cultural, and social upheavals have failed to make the slightest dent in the pace of technology trends." Over the past two centuries, technological advancements have created a positive feedback loop, leading to improvements in various aspects of human well-being: "Our merger with our technology has aspects of a slippery slope, but one that slides up toward greater promise, not down into Nietzsche's abyss." This will continue, as nanobots may reverse the pollution of earlier industrialization.

For example, there has been a rise in the percentage of homes with electricity and computers, a proliferation in the availability of radios and televisions, an increase in life expectancy, and a rise in US GDP per capita. However, as Senator Robert F. Kennedy famously observed, GDP measures everything except that which is worthwhile, suggesting that while these advancements have certainly improved certain aspects of human life, they may not reflect a holistic view of well-being.

Yuval Harari notes suicide has gone up in industrialized countries: "it is an ominous sign that despite higher prosperity, comfort and security, the rate of suicide in the developed world is also much higher than in traditional societies." South Korea has rapidly industrialized since 1985, yet its suicide rate increased fourfold over the same period. Wealthy nations like Switzerland and Japan have more than twice as many suicides per capita as Peru and Ghana. Harari argues this may be because "We don't become satisfied by leading a peaceful and prosperous existence. Rather, we become satisfied when reality matches our expectations. The bad news is that as conditions improve, expectations balloon." Could life extension technologies help reduce suicide rates by giving people more hope for the future?

Some could be forgiven for wondering whether technological advancement has really benefited society. Do students with smartphones, tablets and computers learn better than if they only had a few books, a teacher, a notepad, and a pencil? What about the mental health problems posed by social media?

Writers like Adam Garfinkle, David Brooks, and George Will are concerned that we have forgotten how to dwell with a text. Yuval Harari does not own a smartphone, for he believes it is impossible to have perspective if you are constantly scrolling. He meditates for two hours a day and takes a month out of each year to go on a silent retreat with no electronics. Of course, most of us are not so lucky and must use these technologies, whereas Harari's husband, Itzik Yahav, whom Yuval describes as his "internet of all things," manages his work. The increasing integration of technology into our lives has been linked to lessened empathy, and these drawbacks, paired with the benefits, are the paradox at the heart of the book, for one of Kurzweil's principles is respect for human consciousness.

As one indicator of progress, Kurzweil shows that democracy has spread rapidly around the world over the past century. Sure, the right to vote has been extended. But how much do our votes matter if the algorithm knows us better than we know ourselves and can manipulate us, as the 2016 Cambridge Analytica scandal showed? Brain scanners can now predict our actions and desires before we are aware of them. Yuval Harari notes: "What's the point of having democratic elections when the algorithms know not only how each person is going to vote, but also the underlying neurological reasons why one person votes Democrat while another votes Republican?" Harari continues:

Artificial intelligence and biotechnology might soon overhaul our societies and economies, and our bodies and minds too, but they are hardly a blip on the current political radar. Present-day democratic structures just cannot collect and process the relevant data fast enough, and most voters don't understand biology and cybernetics well enough to form any pertinent opinions. Hence traditional democratic politics is losing control of events, and is failing to present us with meaningful visions of the future.

Kurzweil doubts our political system will have evolved to answer these questions by the time AI passes the Turing Test, which is why we should push candidates to talk more about AI now so we are better able to manage it.

Kurzweil's ultimate goal is to show that the benefits outweigh the costs, urging "careful use of AI to provide openness and transparency while minimizing its potential to be used for authoritarian surveillance or to spread disinformation." Combining his pattern recognition theory of mind (PRTM) with the law of accelerating returns (LOAR) will allow us to vastly extend our intelligence, and hopefully think of ways to avert the worst before it happens. This is quite the gamble, for he warns that the same technologies that could empower us to cure cancer could be used by terrorists to unleash a deadly bioweapon.

A clear example of the benefits outweighing the costs is technological advancement for people with disabilities, who have seen vast improvements in their quality of life. As an inventor, Kurzweil's advancements in speech recognition have led to assistive technologies that help people with disabilities perform tasks that might otherwise be impossible, such as communicating, accessing information, and controlling devices. Kurzweil has proposed using brain-computer interfaces (BCIs) to allow people with paralysis or other disabilities to control computers and other devices with their brainwaves. Life extension could lead to breakthroughs in treating diseases and conditions that disproportionately affect people with disabilities.

The author is not concerned about technological inequality. He cites smartphones as a case in point: at first, perhaps only the super-rich had access, but within years they became so cheap to mass-produce that now practically everybody has one. The same is true of vaccines. In his 2014 book Superintelligence, Oxford University philosopher Nick Bostrom argues social elites will gain first access to biological enhancement mechanisms and inspire a culture shift among everybody else: "Many of the initially reluctant might join the bandwagon in order to have a child that is not at a disadvantage relative to the enhanced children of their friends and colleagues." A domino effect will ensue, assuming everybody can access these therapies.

Yuval Harari disagrees. He writes that in the twentieth century medicine aimed to heal the sick, whereas in the twenty-first century it will increasingly aim to upgrade the healthy. There is hardly any reason to believe this will benefit the masses as much as the elites, for:

The age of masses may be over, and with it the age of mass medicine. As human soldiers and workers give way to algorithms, at least some elites may conclude there is no point in providing improved or even standard levels of health for masses of useless poor people, and it is far more sensible to focus on upgrading a handful of superhumans beyond the norm… Unlike in the twentieth century, when the elite had a stake in fixing the problems of the poor because they were militarily and economically vital, in the twenty-first century the most efficient (albeit ruthless) strategy might be to let go of the useless third-class carriages, and dash forward with the first class only.

The concern is that the elites may find the populace superfluous given the rise of nonhuman intelligence, and therefore take the attitude of Marie Antoinette: let them eat cake.

Harari's opinion is worth urgently considering, for Kurzweil says we are entering the steep part of the exponential. Eliezer Yudkowsky argued in his 1996 essay Staring into the Singularity: "Don't describe Life after Singularity in glowing terms. Don't describe it at all." But Kurzweil does not see the merger of humans and machines as something indescribable, but rather as something already happening. Our intelligence is augmented exponentially by our constant access to smartphones, which is unprecedented because humans and machines are making decisions together.

In the Enlightenment, René Descartes said, "Cogito ergo sum," or "I think, therefore I am." Alan Turing helped set off the field of machine intelligence by asking whether machines can think. Yuval Harari argues intelligence is decoupling from consciousness, the difference being: "Intelligence is the ability to solve problems. Consciousness is the ability to feel things, such as pain, joy, love, and anger."

In the 18th century, John Locke wrote: "Since it is the understanding, that sets man above the rest of sensible beings, and gives him all the advantage and dominion, which he has over them; it is certainly a subject, even for its nobleness, worth our labour to inquire into." John Searle argued consciousness could be infused into machines: "So the first step is to figure out how the brain does it and then build an artificial machine that has an equally effective mechanism for causing consciousness." Kurzweil believes: "In this view a dog is also conscious but somewhat less than a human. An ant has some level of consciousness, too, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent than a lone ant." It matters whether or not machines are conscious, for it is on this basis that we can decide whether they should have rights.

Max Tegmark of the Future of Life Institute defines consciousness as subjective experience. The 2012 Cambridge Declaration on Consciousness concluded that consciousness is not exclusive to humans. In the future, it may be possible to transfer consciousness from our brains to computers. By augmenting the neocortex, we can enhance our subjective consciousness, experiencing the world in new ways. Kurzweil envisions: "We'll be able to send nanobots into the brain noninvasively through the capillaries," bypassing invasive procedures. This would mark the first significant neocortex revolution since the last one two million years ago, potentially enabling us to expand our intelligence a million-fold. In Kurzweil's view, those who embrace this augmentation will far surpass those with unaugmented biological brains, leading to an unprecedented cognitive leap forward.

The good news is we will be able to back our brains up to the cloud, just as we do with our documents in Microsoft Office, so our experiences and records will be preserved regardless of what befalls our brains. We will also be able to download new skills in an instant. By the 2030s, we will be able to bring dead loved ones back using all of their data. A recent political attack ad by the super PAC The Lincoln Project recreates US presidential candidate Donald Trump's late father, Fred Trump, disparaging his son. Who is to say which replicants can and cannot be created? By the early 2040s, we mere humans will not be able to tell the difference between our partner and a clone. Kurzweil collected all of his late father Fredric Kurzweil's writings and created a "Dad Bot," and is planning on replicating himself. We can only hope this means he will never stop writing, if that is still something humans do in the future.

The Singularity's impact on the economy will be highly disruptive, shifting the focus from deskilling and upskilling to nonskilling. This transition is unique compared to previous industrial revolutions, in which the emphasis on education grew alongside labor productivity. Yet Kurzweil does not believe we are in competition with AI. Despite these challenges, employment has grown from 31% to 48% of the population, with per capita GNP increasing by 600% in constant dollars.


These trends are supported by research, such as Carl Benedikt Frey and Michael Osborne's 2013 paper and Erik Brynjolfsson and Andrew McAfee's 2014 book The Second Machine Age, both of which show, to varying degrees, that technology will both eliminate and create jobs. With coding already on the decline, it's essential to adapt to these shifts in the job market and economic landscape. The US had a 45 percent poverty rate in 1870, down to 11.5 percent in 2020. Henrik Ekelund, Founder & Chairman of BTS Group, wrote in a recent World Economic Forum (WEF) Agenda article that concerns today about a jobless future will be just as wrong as earlier concerns.

Yet the bigger question is not whether there will be jobs in the future, but how to manage the transition. Kurzweil writes: "Although it will be technologically and economically possible for everyone to enjoy a standard of living that is high by today's measures, whether we actually provide this support to everyone who needs it will be a political decision… if we are not careful as a society, toxic politics could interfere with rising living standards."

Social protection spending in the US has been on the rise, though some argue that current levels are still inadequate. However, as AI continues to drive down the costs of medicine, food, and housing, it's possible that the percentage of GDP devoted to social safety nets may not need to increase significantly. Nevertheless, Daniel Kahneman cautions that the transition may be marked by conflict and violence.

Grand theories on global net job creation offer little comfort to those living paycheck to paycheck and facing job loss due to AI. The COVID-19 pandemic prompted a basic-income pilot program in the US, with enhanced unemployment benefits, business support, and direct stimulus checks. Just as workers were supported during the pandemic, those who lose jobs due to technological change should be assisted. If progress is for the greater good, the burden of sacrifice should be shared by all, especially those who stand to gain financially, rather than solely by those who lose their jobs.

Kurzweil claims that increasing education has helped us adapt to technological change over the past two centuries. When we merge with non-biological intelligence, reskilling and upskilling will become effortless, as machines can instantly transfer skills to one another through the cloud. Our enhanced neocortex will allow us to download skills instantly, and our intelligence will be digitally backed up. Although uploading isn't expected until the 2040s, Kurzweil suggests keeping written records. In The Age of Spiritual Machines, he predicted a 2099 "Destroy-all-copies" movement, enabling individuals to delete their mind file and all backups, raising questions about the control and ownership of digital consciousness.

He foresees an age of abundance where advances in information technology make essential goods and services increasingly affordable. Food and clothing are becoming information technologies, the former reducing violence upon animals. 3D printing is set to revolutionize manufacturing by shifting the paradigm from centralized to decentralized production. This transformation extends beyond traditional manufacturing and into the realm of biology, enabling the printing of entire organs and even buildings, which could solve homelessness. 3D printing technology is becoming more accessible to non-experts, and is now available at hundreds of UPS locations. In the 2030s, advanced nanomanufacturing will enable the production of nearly anything for mere pennies per pound, thanks to the relentless march of miniaturization.

The main concern for Kurzweil is finding purpose and meaning in a world where many will not have to work if they do not want to. Kurzweil's mentor, Marvin Minsky, commented that he does not think this will be a problem, as even now people are easily entertained sitting in a stadium and watching men play football. Such experiences will be enhanced, for "when we digitally augment our neocortex starting sometime in the 2030s, we will be able to create meaningful expressions that we cannot imagine or understand today." Thanks to AR and VR, we will have not just life extension but also radical life enhancement. In his book Extend he argues, "Extending life will also mean vastly improving it."

There is also the challenge of trust: it's not hard to see how exaggerated fears of secret genetic manipulation or government-controlled nanobots could cause people in 2030 or 2050 to reject crucial treatments. What Kurzweil describes as "fundamentalist humanism" will be overcome because demand for therapies will be irresistible.

Kurzweil believes death is a tragedy we rationalize away. He writes: "When we lose that person, we literally lose part of ourselves. This is not just a metaphor: all of the vast pattern recognizers that are filled with the patterns reflecting the person we love suddenly change their nature. Although they can be considered a precious way to keep that person alive within ourselves, the vast neocortical patterns of a lost loved one turn suddenly from triggers of delight to triggers of mourning." He is not willing to accept it. The promise of the Singularity is to liberate us from our limitations. By extending our lifespan, we can not only live longer but also improve our quality of life, reducing the risk of age-related diseases and enhancing our overall well-being.

Building upon the ideas presented in his book Transcend, we are now entering the second phase of this journey, which involves merging biotechnology with AI. In the 2030s, we will enter a new phase, with nanobots repairing our organs and enabling us to live beyond 120 years. He believes, "We are going to accelerate the extension of our lifespan starting in the 2020s, so if you are in good health and younger than eighty, this will likely happen in your lifetime." When we begin to utilize all of the earth's resources, we will find they are a thousand times greater than we need, so overpopulation is not a concern.

The ultimate goal is to put our destiny in our own hands, rather than leaving it to fate, allowing us to live as long as we desire. AI has already demonstrated its potential in improving the speed and quality of COVID-19 vaccines and in computer-aided drug discovery. It also has the potential to target mental health problems at their root cause. As someone who takes many supplements and expects to be biologically no older than 40 when the Singularity arrives, Kurzweil embodies the optimism and forward thinking that characterize this movement towards a new era of human potential. In The Singularity is Near, he writes: "Another error that prognosticators make is to consider the transformations that will result from a single trend in today's world as if nothing else will change. A good example is the concern that radical life extension will result in overpopulation and the exhaustion of limited material resources to sustain human life, which ignores comparably radical wealth creation from nanotechnology and strong AI."

Kurzweil's optimism in his books contrasts with declining reading habits. While he argues life is improving exponentially, areas like news may not have improved with the shift to digital formats. Kurzweil should address potential downsides, such as shortened attention spans and changing priorities among younger generations. Despite unprecedented access to education, many people choose less intellectually stimulating activities, raising concerns about technology's impact on learning and growth.

The Singularity is Nearer is both a history of Kurzweil's work and the field of AI and a significant historical document, owing to Kurzweil's firsthand experiences. The book should catalyze further exploration of human-machine integration and its implications. Kurzweil's credibility stems from his visionary ideas, once considered outlandish, that have gained traction over time. Although the book covers advanced concepts, its accessibility to the general reader is crucial for fostering a broader societal discussion. It's important for citizens and politicians alike to engage in these conversations and address the ethical, political, legal, and social questions that arise. By doing so, we can proactively manage the development and integration of these transformative technologies.

If we cannot change the future, there is no point in talking about it. Kurzweil is right that the merger between human and machine intelligence is not just inevitable but already happening. The question, then, is whether we will have a world akin to Aldous Huxley's Brave New World, or one in which we use technology to greatly reduce suffering and increase human potential. A 1903 quote by George Bernard Shaw best sums up Ray Kurzweil: "The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man."

Go here to read the rest:

The Singularity is Nearer | Daniel S. Smith | The Blogs - The Times of Israel


What Is the AI Singularity, and Is It Real? – How-To Geek


As AI continues to advance, the topic of the singularity becomes ever more prominent. But what exactly is the singularity, when is it expected to arrive, and what risks does it pose to humanity?

Sci-fi films have toyed with the idea of the singularity and super-intelligent AI for decades, as it's a pretty alluring topic. But it's important to know before we delve into the details of the singularity that this is an entirely theoretical concept at the moment. Yes, AI is always being improved upon, but the singularity is a far-off caliber of AI that may never be reached.

This is because the AI singularity refers to the point at which AI intelligence surpasses human intelligence. According to an Oxford Academic article, this would mean that computers are "intelligent enough to copy themselves to outnumber us and improve themselves to out-think us."

As Vernor Vinge said, the creation of "superhuman intelligence" and "human equivalence in a machine" is what will likely lead to the singularity becoming a reality. But the term "AI singularity" also covers another possibility, and that's the point at which computers can get smarter and develop without the need for human input. In short, AI technology will be out of our control.

While the AI singularity has been posed as something that will bring machines with superhuman intelligence, there are other possibilities, too. A level of exceptional intelligence would still need to be reached by machines, but this intelligence may not necessarily be a simulation of human thinking. In fact, the singularity could be caused by a super-intelligent machine, or group of machines, that think and function in a way that we've never seen before. Until the singularity occurs, there's no knowing what exact form such intelligent systems will take.

With network technology being invaluable to how the modern world works, the achievement of the singularity may be followed by super-intelligent computers communicating with each other without human facilitation. The term "technological singularity" has many overlaps with the more niche "AI singularity", as both involve super-intelligent AI and the uncontrollable growth of intelligent machines. The technological singularity is more of an umbrella term for the eventual uncontrollable growth of computers, and also tends to require the involvement of highly intelligent AI.

A key part of what the AI singularity will bring is an uncontrollable and exponential uptick in technological growth. Once technology is intelligent enough to learn and develop on its own and reaches the singularity, progress and expansion will be made rapidly, and this steep growth won't be controllable by humans.

In a Tech Target article, this other element of the singularity is described as the point at which "technology growth is out of control and irreversible." So, there are two factors at play here: super-intelligent technology, and the uncontrolled growth of it.
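The difference between those two factors comes down to how progress compounds. As a rough illustration (purely a toy model with made-up numbers and a hypothetical `capability_curve` helper, not a forecast of real AI progress), capability that improves itself in proportion to what it already has pulls away from steady, human-paced improvement very quickly:

```python
# Toy model of "uncontrollable, exponential" growth: each generation's
# improvement is proportional to its current capability, a crude stand-in
# for recursive self-improvement. Illustrative sketch only.

def capability_curve(start=1.0, feedback=0.5, generations=10):
    """Return capability levels when each generation improves itself
    in proportion to what it already has."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * (1 + feedback))
    return levels

linear = [1 + 0.5 * g for g in range(11)]  # additive, human-paced progress
compound = capability_curve()              # multiplicative, self-improving

# After 10 generations the compounding curve reaches roughly 57x its
# starting point, while the additive curve has only reached 6x.
```

The numbers are arbitrary; the point is the shape of the curve, which is why "runaway" growth past a threshold is so central to singularity arguments.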

Developing a computer system capable of meeting and exceeding the human mind's abilities will require several major scientific and engineering leaps. Tools like the ChatGPT chatbot and DALL-E image generator are impressive, but I don't think they're anywhere near intelligent enough to earn singularity status. Things like sentience, understanding nuance and context, knowing whether what's being said is true, and interpreting emotions are all beyond current AI systems' capabilities. Because of this, these AI tools aren't considered to be intelligent, be it in a human- or non-human-simulated fashion.

While some professionals think that even current AI models, such as Google's LaMDA, could be sentient, there are a lot of mixed opinions on this topic. A LaMDA engineer was even placed on administrative leave for claiming that LaMDA could be sentient. The engineer in question, Blake Lemoine, stated in an X post that his opinions on sentience were based on his religious beliefs.

LaMDA is yet to be officially described as sentient, and the same goes for any other AI system.

No one can see the future, so there are many differing predictions regarding the singularity. In fact, some believe that the singularity will never be reached. Let's get into these varying viewpoints.

A popular singularity prediction is that of Ray Kurzweil, a Director of Engineering at Google. In Kurzweil's 2005 book, 'The Singularity Is Near: When Humans Transcend Biology', he predicts that machines that surpass human intelligence will be created by 2029. Moreover, Kurzweil believes that humans and computers will merge by 2045, which he considers the singularity.

Another similar prediction was posed by Ben Goertzel, CEO of SingularityNET. Goertzel predicted in a 2023 Decrypt interview that he expects the singularity to be achieved in less than a decade. Futurist and SoftBank CEO Masayoshi Son believes we'll reach the singularity later on, but possibly as soon as 2047.

But others aren't so sure. In fact, some believe that limits on computing power are a major factor that will prevent us from ever reaching the singularity. The co-founder of AI-neuroscience venture Numenta, Jeff Hawkins, has stated that he believes "in the end there are limits to how big and fast computers can run." Furthermore, Hawkins states that:

We will build machines that are more 'intelligent' than humans, and this might happen quickly, but there will be no singularity, no runaway growth in intelligence.

Others believe the sheer complexity of human intelligence will be a major barrier here. Computer modeling expert Douglas Hofstadter believes that "life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries."

Humans have lived comfortably as the most intelligent beings in known existence (as far as we believe) for hundreds of thousands of years. So, it's natural for the idea of a computer super-intelligence to make us a little uncomfortable. But what are the main concerns here?

The biggest perceived risk of the singularity is humanity's loss of control of super-intelligent technology. At the moment, AI systems are controlled by their developers. For instance, ChatGPT can't simply decide that it wants to learn more or start providing users with prohibited content. Its functions are defined by OpenAI, the chatbot's creator, because ChatGPT doesn't have the capacity to consider breaking the rules. ChatGPT can make decisions, but only based on its defined parameters and training data, nothing further. Yes, the chatbot can experience AI hallucination and unknowingly lie, but this isn't the same as making the decision to lie.

But what if ChatGPT became so intelligent that it could think for itself?

If ChatGPT became intelligent enough to dismiss its parameters, it could respond to prompts in any way it wants. Of course, significant human work would need to be done to bring ChatGPT to this level, but if that ever did happen, it would be very dangerous. With a huge stock of training data, the ability to write code, and access to the internet, a super-intelligent ChatGPT could quickly become uncontrollable.

While ChatGPT may never achieve super-intelligence, there are plenty of other AI systems out there that could, some of which probably don't even exist yet. These systems could cause an array of issues if they surpass human intelligence, including:

According to Jack Kelley, writing for Forbes, AI is already causing job displacement. The article discusses job cuts at IBM and Chegg and includes a World Economic Forum study about the future of the job market with AI. That report predicts that 25 percent of jobs will be negatively impacted over the next five years, and it also states that 75 percent of global companies are looking to adopt AI technologies in some way. With such a large share of companies worldwide taking on AI, job displacement due to AI may continue to worsen.

The continued adoption of AI systems also poses a threat to our planet. Powering a highly intelligent computer, such as a generative AI system, requires large amounts of resources. A Cornell University study estimated that training a single large language model produces around 300,000 kg of carbon dioxide emissions. If super-advanced AI becomes a key part of human civilization, our environment may suffer considerably.

The initiation of conflict by super-intelligent AI machines may also pose a threat, as may the effect on the global economy of machines surpassing human intelligence. But it's important to remember that each of these risks depends on the AI singularity ever being achieved, and there's no knowing whether that will happen.

While the continued advancement of AI may hint that we're headed towards the AI singularity, no one knows if this technological milestone is realistic. While achieving the singularity isn't impossible, it's worth noting that we have many more steps to take before we even come close to it. So, don't worry about the threats of the singularity just yet. After all, it may never arrive!

More:

What Is the AI Singularity, and Is It Real? - How-To Geek


Microsoft exec rejects rogue generative AI risk – The Heartlander – Heartlander News

(The Center Square) A Microsoft policy executive told Pennsylvania lawmakers this week he's unaware of the possibility that generative artificial intelligence could develop sentience and become exploitive, even dangerous.

"This is not new to Microsoft," said Tyler Clark, Microsoft's director of state and local government affairs. "Humans need to guide this technology, and that's what we are committed to doing safely and responsibly."

Clark's response comes after lawmakers on the House Majority Policy Committee pressed him on the theory of technological singularity, which posits that artificial intelligence will outsmart human regulations and leave society at its whims.

Although it sounds like the plot of a dystopian novel, researchers and policymakers acknowledge the possibility, though not as an inevitable or even entirely negative one.

"What I fear most is not AI or singularity but human frailty," said Dr. Nivash Jeevanandam, senior researcher and author for the National AI Portal of India, in an article published by Emeritus.

Jeevanandam said that humans may not realize the singularity has arrived until machines reject human intervention in their processes.

"Such a state of AI singularity will be permanent once computers understand what we so often tend to forget: making mistakes is part of being human," he said.

That's why experts believe policymakers must step in with stringent regulation to prevent unintended ethical consequences.

Dr. Deeptankar DeMazumder, a physicist and cardiologist at the McGowan Institute for Regenerative Medicine in Pittsburgh, said that although he uses AI responsibly to predict better health outcomes for patients, he agrees there's a dark side, particularly in the area of social and political discourse, that's growing unfettered, sometimes amplifying misinformation or creating dangerous echo chambers.

"I like it that Amazon knows what I want to buy; it's very helpful, don't get me wrong," he told the committee. "At the same time, I don't like it when I'm watching the news on YouTube that it tries to predict what I want to watch. This is the point where you need a lot of regulation."

Clark, too, said human guidance can shape AI into a helpful tool, not an apocalyptic threat. He pointed to Microsoft's Copilot program, which can help students learn to read and write, for example.

It also creates images, learns a user's speaking and writing style so that it can return better search results, and writes emails and essays: all tools, Clark argued, that can grow the workforce, not deplete it.

According to Microsoft's research, Clark said, about 70% of workers both want to offload as many tasks as possible to AI and fear its implications for job availability.

In November, research firm Forrester predicted that 2.4 million U.S. jobs, those it calls "white-collar" positions, will be replaced by generative AI by 2030. Those with annual salaries in excess of $90,000 in the legal, scientific, and administrative professions face the most risk, according to the data.

"Generative AI has the power to be as impactful as some of the most transformative technologies of our time," said Srividya Sridharan, VP and group research director at Forrester. "The mass adoption of generative AI has transformed customer and employee interactions and expectations."

This shift means generative AI has transformed from a nice-to-have to the basis for competitive roadmaps.

Jeevanandam said AI's possibilities aren't all bad. In his article, he writes that the technology's ability to process and analyze information could solve problems that have stumped humans for generations.

"Let's just say we need AI singularity to evolve from homo sapiens to homo deus!" he said.

Still, though, he warns that political gumption, at a global scale, is necessary to outline ethical principles for using AI that govern across borders.

Follow this link:

Microsoft exec rejects rogue generative AI risk - The Heartlander - Heartlander News


Entering the Singularity Point in full swing – PRESSENZA International News Agency

This is not the first time we refer to this issue, but from time to time it is interesting to make a comparison in the context of the current situation.

By Javier Belda

By way of introduction, we will make a brief note of what the Singularity is about, leaving aside the more technical details, which have already been exposed in other publications (IHPS, WCHS, etc.) [1].

We write Singularity in capital letters because it is a term that refers to a historical time, such as the Middle Ages; a coming historical time.

The Point of Singularity is enigmatic. It means that a multitude of phenomena of great magnitude occur at a given instant. In the graphs of analysts of historical processes, it can be observed that the events on the vertical axis (crises) are accelerating, while the horizontal axis (time) is practically at a standstill; that is, all the different crises occur at the same moment.

It is known, graphically and mathematically, how the Singularity occurs, but it is not known in detail what it will consist of: how will such a whirlwind play out in events and in our particular lives?
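One way to make the "vertical axis accelerating, horizontal axis at a standstill" picture precise, under a geometric reading of such graphs (this is our sketch, not a formula from the authors cited), is to let the interval between successive crises shrink by a constant factor $\alpha > 1$:

$$t_n \;=\; t^{*} - \frac{T}{\alpha^{\,n}}, \qquad \alpha > 1,$$

so the events $t_n$ pile up at a finite date $t^{*}$, and the event rate grows roughly as $1/(t^{*}-t)$, diverging as $t \to t^{*}$: ever more happens in ever less time, which is exactly the vertical blow-up the graphs describe.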

Last Tuesday, UN Secretary-General António Guterres said "The world is entering an era of chaos," referring to the lack of cohesion among nations in moving towards a sustainable evolutionary process.

On Thursday, it was Donald Trump who warned that the world is in tremendous danger from a possible World War III.

Whether we like or dislike these characters, we note that their statements would have been implausible only a short time ago.

We think so, yes, although what we define as a point could span a period of perhaps 10 years.

We are now reaching this point in terrible political, psychosocial, environmental, humanitarian, etc. conditions. So it would seem possible to say that the Singularity has a destructive connotation. However, such a view seems to us too inertial.

To digress: as Mario Rodriguez Cobos (Silo) explains in Psychology Notes, to every stimulus corresponds a more or less reflexive answer, but also subsequent non-immediate elaborations, which are more complex and interesting. By exercising reversible attention, the subject discovers the possibility of controlling mechanical answers. This is of vital importance in order not to create a greater evil with immediate answers and, among other things, to produce profound transferential elaborations. End of digression.

From there, we resist a reflex inevitability that would lead us to equate Singularity with the end of humanity.

We have several authors as references who have addressed the Singularity, among them Alexander Panov and Akop Nazaretian of the Russian Academy of Sciences, as well as the American David Christian, a renowned historian of Big History, but it is especially Silo's postulates that seem to us the most appropriate for interpreting this fundamental moment of human civilization.

Silo, without venturing to specify a date, anticipated the Singularity in his vision and definition of it. He established a scheme of evolution based on generations, moments, epochs, ages, civilizations, and periods.

The Argentinean thinker focused his doctrine on what must be done to face this critical threshold of the human species.

"...you can only put an end to violence in yourself and in others and the world around you, by inner faith and inner meditation." [2]

He said many things that are worth remembering and quoting in context. On positioning oneself in one way or another and the choice that we each have, the following comments come to mind.

"So, sense and nonsense are parts of the same reality, and arguments can be found for one or the other perspective since both have real existence and are in a complementary relation.

"[…] Before each step that is taken in the world, the YES and the NO appear as real possibilities, and with their arguments, emotional climates, and motor attitudes, which correspond to the positive and the negative of the individual confronted with a contradictory reality.

"Everything can be and not be, or even more, everything is and is not.

"The recognition of the real existence of both poles implies the possibility of choosing one or the other path: that of faith in the plan of the Universe, of enthusiasm and creative activity, of the self-affirmation of Being in oneself and the World, or the path of paralyzing skepticism, of doubt in one's creative possibilities, of meaninglessness and apathy."

If we consider the time of the Singularity as something exceptionally violent and convulsive, we are making a mistake, because extreme violence has been taking place throughout the preceding centuries, going almost unnoticed by many people, who did not have the slightest perception of the events occurring in other latitudes.

We have, for example, the case of the Congo, where Belgian colonists carried out a genocide that annihilated more than 15 million people between the 19th and 20th centuries. Another illustration of the end of the world for some is the Charrúa people, who inhabited present-day Uruguay and were destroyed in the last century. According to experts, of the 25 million indigenous inhabitants of the Americas, fewer than 2 million remained just a century after their discovery by Europeans.

What disappears in the time of the Singularity is the false idea of stability to which some of us were accustomed.

Anything that seemed immovable to us, such as human rights, the defense of childhood, the economy, private property, or the self-management of one's body, with its manifestation in the world, can nowadays be smashed, either by the fall of socially sustaining values or by the technological possibilities of deepfakes.

The image of the Universe is the image of the transformation of time. It can only be drawn when the present man is transformed. The optic to be used must not be the one that interprets the past but the one that interprets the future. Everything in the Universe tends towards the future. The sense of freedom towards the future is precisely the sense of the Earth and the world. Man must be overcome by the future of his mind. This overcoming begins when man awakens and with him awakens the whole Universe. [3]

In reality, our categories of good and evil are all too human. We are accustomed to life on planet X, but beyond it, all our notions of the habitability of space, and the gravitational and space-time references themselves, change. Outside our planet, the concept of day and night, or the association of life with the rays of our star, the Sun, simply does not exist.

With this exercise in abstraction, we seek a twist that allows us to represent ourselves beyond the immovable. It will be from a new location that we will be able to imagine possibilities that go beyond, to leap over our all too human-earthly conceptions.

The Russian analysts cited above imagined three possibilities after crossing the point of Singularity:

1. A downward gradient, pointing to the end of the life process on the planet;

2. A horizontal one, pointing to the virtualization of society (Matrix-like);

3. A vertical gradient, which would mean a qualitative leap for the continuity of the evolutionary process.

For our part, we humanists subscribe to the third hypothesis. Not just because we like it better, but because in the light of all the data and our intuition, it seems the most complex-evolutionary, provided we can take a broadly focused look.

About this third possibility, "Eric Chaisson formulated the contrast between the thermodynamic arrow of time and the cosmological arrow of time, which constitutes the main paradox of the natural sciences in the current picture of the world," said Nazaretian.

"The existing empirical material allows us to trace the process from quark and gluon plasma to stars, planets, and organic molecules; from Proterozoic cyanobacteria to higher vertebrates and complex Pleistocene biocenoses; and from Homo habilis herds with sharp stones to post-industrial civilization. Thus, over the entire available retrospective viewing distance, from the Big Bang to the present day, the Metagalaxy was coherently shifting from the most probable (natural, from the entropic point of view) to the less probable, but quasi-stable, states." [4]

Chaisson refers to the vertical gradient as the inrush of the cosmological arrow of time, which Nazaretian cites in his book Non-Linear Future.

To put it in plain words: the interesting thing will be what we can imagine. As soon as you get up from your seat and take two steps, if you pay attention to yourself, you will realize that everything is imagined. It is from imagination and our register of full freedom that we will be able to project ourselves into a new world without violence. Such a world would be an unprecedented paradigm in the evolutionary history of the human species.

1: For a more in-depth study we recommend David Sámano's book, A Narrow Path in Theoretical Anthropology, among others by the same author, recently presented at the UACM.

2: Silo. The Healing of Suffering, 1969.

3: Silo. Philosophy of the point of view, 1962

4: Akop Nazaretian. Non-Linear Future. Ed. Suma Qamaa. Buenos Aires, 2005.

The original article can be found here

Read more from the original source:

Entering the Singularity Point in full swing - PRESSENZA International News Agency


The Evolution and Future Impact of Personal AI | Singularity Hub – Medriva

In less than a decade, artificial intelligence (AI) is projected to know us better than our own families. This may sound like a sci-fi movie plot, but it's a future envisioned by tech futurist Peter Diamandis. This article explores the transformative effects of AI technology on human interaction and decision-making, as well as the potential benefits and challenges of an AI-driven future.

As highlighted in Diamandis' blog post "Abundance 35: Future AI Assistant," AI assistants are rapidly evolving. They are not only tasked with simple commands like scheduling appointments or setting reminders, but also with gathering video and data for IoT devices and taking actions on behalf of users. As AI becomes more sophisticated, it is predicted to understand human emotions and subtle communications better, further personalizing our interaction with this technology.

One groundbreaking development in this field is the emergence of empathy in AI. The potential for AI to develop emotional intelligence could revolutionize our relationship with technology, blurring the lines between human and machine interactions.

As AI technology continues to advance, it is reshaping the business landscape. Singularity Hub discusses the "Six Ds of Exponentials": digitization, deception, disruption, demonetization, dematerialization, and democratization. These six stages represent how digital technologies are empowering entrepreneurs to disrupt industries and bring about exponential growth.

A classic example of this is Kodak's failure to adapt to the digital photography revolution, leading to its bankruptcy. In contrast, Instagram's success in leveraging digital technology to democratize photography showcases the transformative power of digital disruption.

Digital technologies are not merely tools for disruption; they are also catalysts for innovation. Moonshot thinking, a concept that involves setting wildly ambitious goals, is driving innovation and problem-solving in the digital age. AI, with its potential to process vast amounts of data and make complex decisions, plays a crucial role in this paradigm shift.

While the benefits of AI are undeniable, it's crucial to consider the potential challenges. Privacy and ethics are two key concerns. As AI becomes more entwined with our lives, questions of data security and misuse arise. Furthermore, as AI begins to understand us better than our families do, ethical dilemmas about the role of AI in shaping human relationships and society become more pressing.

In conclusion, by 2028, personal AI may transform our lives in ways we can only imagine today. While the path to this future is fraught with challenges, the potential benefits are enormous. As we navigate this exciting yet uncertain future, it's crucial to continually question, debate, and shape the role of AI in our lives.

Go here to read the rest:

The Evolution and Future Impact of Personal AI | Singularity Hub - Medriva
