Category Archives: Singularity

GPT-4, AGI, and the Hunt for Superintelligence – IEEE Spectrum

For decades, the most exalted goal of artificial intelligence has been the creation of an artificial general intelligence, or AGI, capable of matching or even outperforming human beings on any intellectual task. It's an ambitious goal long regarded with a mixture of awe and apprehension, because of the massive social disruption any such AGI would undoubtedly cause. For years, though, such discussions were theoretical. Specific predictions forecasting AGI's arrival were hard to come by.

But now, thanks to the latest large language models from the AI research firm OpenAI, the concept of an artificial general intelligence suddenly seems much less speculative. OpenAI's latest LLMs (GPT-3.5, GPT-4, and the chatbot/interface ChatGPT) have made believers out of many previous skeptics. However, as spectacular tech advances often do, they seem also to have unleashed a torrent of misinformation, wild assertions, and misguided dread. Speculation has erupted recently about the end of the World Wide Web as we know it, end-runs around GPT guardrails, and AI chaos agents doing their worst (the latter of which seems to be little more than clickbait sensationalism). There were scattered musings that GPT-4 is a step toward machine consciousness and, more ridiculously, that GPT-4 is itself slightly conscious. There were also assertions that GPT-5, which OpenAI's CEO Sam Altman said last week is not currently being trained, will itself be an AGI.

"The number of people who argue that we won't get to AGI is becoming smaller and smaller." (Christof Koch, Allen Institute)

To provide some clarity, IEEE Spectrum contacted Christof Koch, chief scientist of the Mindscope Program at Seattle's Allen Institute. Koch has a background in both AI and neuroscience and is the author of three books on consciousness as well as hundreds of articles on the subject, including features for IEEE Spectrum and Scientific American.


What would be the important characteristics of an artificial general intelligence, as far as you're concerned? How would it go beyond what we have now?

Christof Koch: AGI is ill-defined because we don't know how to define intelligence. Because we don't understand it. Intelligence, most broadly defined, is sort of the ability to behave in complex environments that have multitudes of different events occurring at a multitude of different time scales, and to successfully learn and thrive in such environments.

I'm more interested in this idea of an artificial general intelligence. And I agree that even if you're talking about AGI, it's somewhat nebulous. People have different opinions.

Koch: Well, by one definition, it would be like an intelligent human, but vastly quicker. So you can ask it (like ChatGPT) any question, and you immediately get an answer, and the answer is deep. It's totally researched. It's articulated, and you can ask it to explain why. I mean, this is the remarkable thing now about ChatGPT, right? It can give you its train of thought. In fact, you can ask it to write code, and then you can ask it, please explain it to me. And it can go through the program, line by line, or module by module, and explain what it does. It's a train-of-thought type of reasoning that's really quite remarkable.

You know, that's one of the things that has emerged out of these large language models. Most people think about AGI in terms of human intelligence, but with infinite memory and with totally rational abilities to think, unlike us. We have all these biases. We're swayed by all sorts of things that we like or dislike, given our upbringing and culture, etcetera, and supposedly AGI would be less amenable to that. And it may be able to think vastly faster, right? Because if it just depends on the underlying hardware, and the hardware keeps on speeding up, and you can go into the cloud, then of course you could be like a human, except a hundred times faster. And that's what Nick Bostrom called a superintelligence.

"What GPT-4 shows, very clearly, is that there are different routes to intelligence." (Christof Koch, Allen Institute)

You've touched on this idea of superintelligence. I'm not sure what this would be, except something that would be virtually indistinguishable from a human (a very, very smart human) except for its enormous speed. And, presumably, accuracy. Is this something you believe?

Koch: That's one way to think about it. It's just like very smart people. But it can take those very smart people, like Albert Einstein, years to complete their insights and finish their work. Or it may take us, say, half an hour to think and reason through something, while an AGI may be able to do this in one second. So if that's the case, and its reasoning is effective, it may as well be superintelligent.

So this is basically the singularity idea, except for the self-creation and self-perpetuation.

Koch: Well, yeah. I mean, the singularity, I'd like to stay away from that, because that's yet another, even more nebulous idea: that machines will be able to design themselves, each successive generation better than the one before, and then they just take off and totally escape our control. I don't find that useful to think about in the real world. But if you return to where we are today, we have amazing networks, amazing algorithms, that anyone can log on to and use, and that already have emergent abilities that are unpredictable. They have become so large that they can do things that they weren't directly trained for.

Let's go back to the basic way these networks are trained. You give them a string of text or tokens. Let's call it text. And then the algorithm predicts the next word, and the next word, and the next word, ad infinitum. And everything we see now comes just out of this very simple thing applied to vast reams of human-generated writing. You feed it all the text that people have written. It's read all of Wikipedia. It's read all of, I don't know, the Reddits and subreddits and many thousands of books from Project Gutenberg and all of that stuff. It has ingested what people have written over the last century. And then it mimics that. And so, who would have thought that that leads to something that could be called intelligent? But it seems that it does. It has this emergent, unpredictable behavior.
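To make that predict-append-repeat loop concrete, here is a minimal, self-contained sketch using a toy character-level bigram model. The corpus string and greedy decoding are illustrative assumptions only; real LLMs are transformers that sample from learned distributions over tens of thousands of subword tokens.

```python
from collections import Counter, defaultdict

# Toy "training": count which character tends to follow which.
# (An illustrative stand-in for gradient training on vast text corpora.)
corpus = "the cat sat on the mat. the dog sat on the log."
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(seed: str, length: int = 20) -> str:
    """The loop Koch describes: predict the next token, append it, repeat."""
    text = seed
    for _ in range(length):
        text += counts[text[-1]].most_common(1)[0][0]  # greedy next-token pick
    return text

print(generate("th"))  # quickly settles into "the the ..." on this tiny corpus
```

On a corpus this small, greedy decoding collapses into a repetitive loop; the surprise of GPT-scale models is how much structure emerges when the same next-token objective is applied to essentially everything people have written.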

For instance, although it wasn't trained to write love letters, it can write love letters. It can do limericks. It can generate jokes. I just asked it to generate some trivia questions. You can ask it to generate computer code. It was also trained on code, on GitHub. It speaks many languages; I tested it in German.

So you just mentioned that it can write jokes. But it has no concept of humor. So it doesn't know why a joke works. Does that matter? Or will it matter?

Koch: It may not matter. I think what it shows, very clearly, is that there are different routes to intelligence. One way you get to intelligence is human intelligence: you take a baby, you expose this baby to its family, its environment, the child goes to school, it reads, etc. And then it understands, in some sense, right?

"In the long term, I think everything is on the table. And yes, I think we need to worry about existential threats." (Christof Koch, Allen Institute)

Although many people, if you ask them why a joke is funny, can't really tell you, either. The ability of many people to understand things is quite limited. If you ask people, well, why is this joke funny? Or how does that work? Many people have no idea. And so [GPT-4] may not be that different from many people. These large language models demonstrate quite clearly that you do not have to have a human-level type of understanding in order to compose text that, to all appearances, was written by somebody who has had a secondary or tertiary education.

ChatGPT reminds me of a widely read, smart undergraduate student who has an answer for everything, but who's also overly confident of his answers, and, quite often, his answers are wrong. I mean, that's a thing with ChatGPT. You can't really trust it. You always have to check, because very often it gets the answer right, but you can ask other questions, for example about math, or attributing a quote, or a reasoning problem, and the answer is plainly wrong.

This is a well-known weakness you're referring to, a tendency to hallucinate, or make assertions that seem semantically and syntactically correct but are actually completely incorrect.

Koch: People do this constantly. They make all sorts of claims, and often they're simply not true. So again, this is not that different from humans. But I grant you, for practical applications right now, you cannot depend on it. You always have to check other sources (Wikipedia, or your own knowledge, etc.). But that's going to change.

The elephant in the room, it seems to me, that all of us are kind of dancing around, is consciousness. You and Francis Crick, 25 years ago, speculated, among other things, that planning for the future and dealing with the unexpected may be part of the function of consciousness. And it just so happens that that's exactly what GPT-4 has trouble with.

Koch: So, consciousness and intelligence. Let's think a little bit about them. They're quite different. Intelligence ultimately is about behaviors, about acting in the world. If you're intelligent, you're going to do certain behaviors and you're not going to do some other behaviors. Consciousness is very different. Consciousness is more a state of being. You're happy, you're sad, you see something, you smell something, you dread something, you dream something, you fear something, you imagine something. Those are all different conscious states.

Now, it is true that with evolution, we see in humans and other animals and maybe even squids and birds, etc., that they have some amount of intelligence and that goes hand in hand with consciousness. So at least in biological creatures, consciousness and intelligence seem to go hand in hand. But for engineered artifacts like computers, that does not have to be at all the case. They can be intelligent, maybe even superintelligent, without feeling like anything.

"It's not consciousness that we need to be concerned about. It's their motivation and high intelligence that we need to be concerned with." (Christof Koch, Allen Institute)

And certainly one of the two dominant theories of consciousness, the Integrated Information Theory of consciousness, says you can never simulate consciousness. It can't be computed, can't be simulated. It has to be built into the hardware. Yes, you will be able to build a computer that simulates a human brain and the way people think, but that doesn't mean it's conscious. We have computer programs that simulate the gravity of the black hole at the center of our galaxy, but, funnily enough, no one is concerned that the astrophysicist who runs the computer simulation on a laptop is going to be sucked into the laptop, because the laptop doesn't have the causal power of a black hole. And it's the same thing with consciousness. Just because you can simulate the behavior associated with consciousness, including speech, including speaking about it, doesn't mean that you actually have the causal power to instantiate consciousness. So that theory would say that these computers, while they might be as intelligent or even more intelligent than humans, will never be conscious. They will never feel.

Which you don't really need, by the way, for anything practical. If you want to build machines that help us and serve our goals by providing text and predicting the weather or the stock market, writing code, or fighting wars, you don't really care about consciousness. You care about reasoning and motivation. The machine needs to be able to predict and then, based on that prediction, do certain things. And even for the doomsday scenarios, it's not consciousness that we need to be concerned about. It's their motivation and high intelligence that we need to be concerned with. And that can be independent of consciousness.

Why do we need to be concerned about those?

Koch: Look, we're the dominant species on the planet, for better or worse, because we are the most intelligent and the most aggressive. Now we are building creatures that are clearly getting better and better at mimicking one of our unique hallmarks: intelligence. Of course, some people (the military, independent state actors, terrorist groups) will want to marry that advanced intelligent machine technology to warfighting capability. It's going to happen sooner or later. And then you have machines that might be semiautonomous or even fully autonomous and that are very intelligent and also very aggressive. And that's not something that we want to do without very, very careful thinking about it.

But that kind of mayhem would require both the ability to plan and also mobility, in the sense of being embodied in something, a mobile form.

Koch: Correct, but that's already happening. Think about a car, like a Tesla. Fast-forward another ten years. You can put the capability of something like a GPT into a drone. Look what drone attacks are doing right now, the Iranian drones that the Russians are buying and launching into Ukraine. Now imagine that those drones can tap into the cloud and gain superior, intelligent abilities.

There's a recent paper by a team of authors at Microsoft, and they theorize about whether GPT-4 has a theory of mind.

Koch: Think about a novel. Any novel is about what the protagonist thinks, and then about what he or she imputes that others think. Much of modern literature is about what people think, believe, fear, or desire. So it's not surprising that GPT-4 can answer such questions.

Is that really human-level understanding? That's a much more difficult question to grok. "Does it matter?" is a more relevant question. If these machines behave like they understand us, yeah, I think it's a further step on the road to artificial general intelligence, because then they begin to understand our motivation, including maybe not just generic human motivations, but the motivation of a specific individual in a specific situation, and what that implies.

"When people say in the long term this is dangerous, that doesn't mean, well, maybe in 200 years. This could mean maybe in three years, this could be dangerous." (Christof Koch, Allen Institute)

Another risk, which also gets a lot of attention, is the idea that these models could be used to produce disinformation on a staggering scale and with staggering flexibility.

Koch: Totally. You see it already. There were already some deepfakes around the Donald Trump arrest, right?

So it would seem that this is going to usher in some kind of new era, really. I mean, into a society that is already reeling from disinformation spread by social media. Or amplified by social media, I should say.

Koch: I agree. That's why I was one of the early signatories of the proposal that was circulating from the Future of Life Institute, which calls on the tech industry to pause for at least half a year before releasing the next, more powerful large language model. This isn't a plea to stop the development of ever more powerful models. We're just saying, let's just hit pause here, in order to try to understand and safeguard. Because it's changing so very rapidly. The basic invention that made this possible is transformer networks, right? And they were only published in 2017, in a paper by Google Brain, "Attention Is All You Need." And then GPT, the original GPT, was born the next year, in 2018; GPT-2 in 2019, I think; and last year, GPT-3 and ChatGPT. And now GPT-4. So where are we going to be ten years from now?
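For readers curious what that 2017 invention boils down to, the following is a minimal NumPy sketch of scaled dot-product attention, the core operation of "Attention Is All You Need." The toy shapes and random inputs are illustrative assumptions; a real transformer adds learned query/key/value projections, multiple heads, and many stacked layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key; outputs are value vectors mixed by those scores."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])           # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))  # 3 tokens, 4-dimensional embeddings (toy sizes)
print(scaled_dot_product_attention(x, x, x).shape)    # self-attention: (3, 4)
```

The key property is that every token can attend to every other token in a single step, which is part of what lets these models scale so effectively compared with earlier recurrent architectures.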

Do you think the upsides are going to outweigh whatever risks we will face in the shorter term? In other words, will it ultimately pay off?

Koch: Well, it depends what your long-term view on this is. If it's existential risk, if there's a possibility of extinction, then, of course, nothing can justify it. I can't read the future, of course. There's no question that these methods (I mean, I see it already in my own work), these large language models, make people more powerful programmers. You can more quickly gain new knowledge, or take existing knowledge and manipulate it. They are certainly force multipliers for people who have knowledge or skills.

Ten years ago, this wasn't even imaginable. I remember even six or seven years ago people arguing, well, these large language models are very quickly going to saturate. If you scale them up, you can't really get much farther this way. But that turned out to be wrong. Even the inventors themselves have been surprised, particularly by the emergence of these new capabilities, like the ability to tell jokes, explain a program, and carry out a particular task without having been trained on that task.

Well, that's not very reassuring. The tech industry is releasing these very powerful models, and the very people who program them say, we can't predict what new behaviors are going to emerge from these very large models. Well, gee, that makes me worry even more. So in the long term, I think everything is on the table. And yes, I think we need to worry about existential threats. Unfortunately, when you talk to AI people at AI companies, they typically say, oh, that's all just laughable. That's all hysterics. Let's talk about the practical things right now. Well, of course they would say that, because they're being paid to advance this technology, and they're being paid extraordinarily well. So, of course, they're always going to push it.

I sense that the consensus has really swung, because of GPT-3.5 and GPT-4, toward the view that it's only a matter of time before we have an AGI. Would you agree with that?

Koch: Yes. I would put it differently, though: the number of people who argue that we won't get to AGI is becoming smaller and smaller. It's a rear-guard action, fought by people mostly in the humanities: "Well, but they still can't do this. They still can't write Death in Venice." Which is true. Right now, none of these GPTs has produced a novel, you know, a 100,000-word novel. But I suspect it's also just going to be a question of time before they can do that.

If you had to guess, how much time would you say that that's going to be?

Koch: I don't know. I've given up. It's very difficult to predict. It really depends on the available training material you have. Writing a novel requires long-term character development. If you think about War and Peace or Lord of the Rings, you have characters developing over a thousand pages. So the question is, when can AI get these sorts of narratives? Certainly it's going to be faster than we think.

So as I said, when people say in the long term this is dangerous, that doesn't mean, well, maybe in 200 years. This could mean maybe in three years, this could be dangerous. When will we see the first application of GPT to warlike endeavors? That could happen by the end of this year.

But the only thing I can think of that could happen in 2023 using a large language model is some sort of concerted propaganda campaign or disinformation. I mean, I don't see it controlling a lethal robot, for example.

Koch: Not right now, no. But again, we have these drones, and drones are getting very good. And all you need is a computer that has access to the cloud and can access these models in real time. So that's just a question of assembling the right hardware. And I'm sure this is what militaries, either conventional militaries or terrorist organizations, are thinking about, and they will surprise us one day with such an attack. Right now, what could happen? You could get all sorts of nasty deepfakes: people declaring war, or an imminent nuclear attack. I mean, whatever your dark fantasy gives rise to. It's the world we now live in.

Well, what are your best-case scenarios? What are you hopeful about?

Koch: We'll muddle through, like we've always muddled through. But the cat's out of the bag. If you extrapolate these current trends three or five years from now, given this very steep exponential rise in the power of these large language models, yes, all sorts of unpredictable things could happen. And some of them will happen. We just don't know which ones.


Why the Brains Connections to the Body Are Crisscrossed – Quanta Magazine

Dazzling intricacies of brain structure are revealed every day, but one of the most obvious aspects of brain wiring eludes neuroscientists. The nervous system is cross-wired, so that the left side of the brain controls the right half of the body and vice versa. Every doctor relies upon this fact in performing neurological exams, but when I asked my doctor last week why this should be, all I got was a shrug. So I asked Catherine Carr, a neuroscientist at the University of Maryland, College Park. "No good answer," she replied. I was surprised: such a fundamental aspect of how our brain and body are wired together, and no one knew why?

Nothing that we know of stops the right side of the brain from connecting with the right side of the body. That wiring scheme would seem much simpler and less prone to errors. In the embryonic brain, the crossing of the wires over the midline (an imaginary line dividing the right and left halves of the body) requires a kind of molecular traffic cop to somehow direct the growing nerve fibers to the right spot on the opposite side of the body. Far simpler just to keep things on the same side.

Yet this neural cross-wiring is ubiquitous in the animal kingdom; even the neural connections in lowly nematode worms are wired with left-right reversal across the animal's midline. And many of the traffic-cop molecules that direct the growth of neurons in these worms do the same in humans. For evolution to have conserved this arrangement so doggedly, surely there's some benefit to it, but biologists still aren't certain what it is. An intriguing answer, however, has come from the world of mathematics.

The key to that solution lies in exactly how neural circuits are laid out within brain tissue. Neurons that make connections between the brain and the body are organized to create a virtual map in the cerebral cortex. If a neuroscientist sticks an electrode into the brain and finds that neurons there receive input from the thumb, for example, then neurons next to them in the cerebral cortex will connect to the index finger. This mapping phenomenon is called somatotopy, Greek for "body mapping," but it's not limited to the physical body. The 3D external world we perceive through vision and our other senses is mapped onto the brain in the same way.

Creating an internal map of neural connections that accurately reflects spatial relations in the world makes sense. Consider how complicated it would be to wire neural circuits if the neurons were scattered willy-nilly throughout the brain. But while this internal neural mapping of connections solves a biological problem, it raises a geometric one: the topological challenge of projecting 3D space onto a 2D surface. Odd things happen when we do this. On a 2D map, an airplane taking the most direct path between two cities appears to travel in an arc, and satellites orbiting the globe appear to oscillate in a sinusoidal path.

And mapping 3D space onto a 2D plane in the brain seems to explain why our nervous system is cross-wired: Counterintuitive as it may seem, directing nerve fibers across the midline is the topologically simplest way to avoid errors, according to work from the biomedical engineer Troy Shinbrot and his neuroscientist colleague Wise Young, both at Rutgers University. They showed that this is true for any system where a central control mechanism interacts with a 3D environment. If the connections were wired without crossing, a geometric singularity confounding left/right and up/down information would arise.

Before getting into the details of why this happens, we need to recognize another fundamental property that is so ingrained in us, it's easy to forget. That is the very concept of a midline, of left and right. It exists in certain symmetrical objects, and it arises from a geometric frame of reference centered on our own bilaterally symmetrical body. A radially symmetrical jellyfish swirling in the current has no left or right. Spotting a jellyfish in the current, we might say, "Look, the jellyfish is drifting to the right." But if you're speaking face to face with someone across the water, that becomes "my right, your left." It can take humans years of dancing the hokeypokey to learn this difficult concept, and some never quite manage it.

Since left and right depend on a frame of reference, people frequently confuse the letters d and b, and p and q, but they rarely confuse q and d. In the first two cases, the identical shapes are flipped along the vertical axis (swapping left and right); in the last case, they are flipped along the horizontal axis (swapping up and down). As bilaterally symmetrical creatures, we never mistake up and down, because those directions are always the same regardless of viewpoint, but left and right are relative to an object.

Similarly, when we look in a mirror, we perceive ourselves as an image that appears to swap left and right, turning letters on our T-shirts backward. But what is really happening is a front-back transformation. Photons travel in straight lines to and from the mirror. They show your face as it is seen by the mirror, not according to the mental perception you form from the inside looking out. Both your real right hand extended out to the side and the mirror image of that hand point in the same compass direction. Letters on your T-shirt appear reversed for the same reason that the name Quanta would appear flipped, as atnauQ, if you wrote it with your finger on a frosty window and then went outside to look at it.
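That front-back account can be checked with a little linear algebra. In this sketch (my illustration of the point, not taken from the article), the viewer faces the mirror along the y axis, with x running left-right and z up-down:

```python
import numpy as np

# A mirror in the xz-plane negates only the front-back (y) coordinate.
mirror = np.diag([1, -1, 1])

# To compare yourself with your image, you mentally turn it to face you:
# a 180-degree rotation about the vertical z axis, which negates x and y.
turn_to_face = np.diag([-1, -1, 1])

print(turn_to_face @ mirror)  # diag(-1, 1, 1): reads as a left-right flip
```

The mirror itself only flips front and back; the apparent left-right swap comes from the mental rotation we apply when we imagine the image as another person facing us.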

Now imagine that the windowpane is the 2D surface of your skin. The neural map in your brain for the touch receptors on your skin will similarly flip the orientation of the writing pressed against your skin from the outside. The point is that mapping from different perspectives, and especially from 3D onto a bilaterally symmetrical plane, sets up some significant topological problems.

To better understand this, let's imagine the brain and body as two parallel planes. One could stitch a thread directly from a point on the body plane to the corresponding point on the brain plane. Likewise, a second thread could be run directly between a second pair of points without crossing the first thread. But in real life, the brain is a three-dimensional structure, with organic shapes and a highly folded cerebral cortex; our body is similarly three-dimensional.

That third dimension changes everything. The simplest way to introduce three-dimensionality into the 2D maps in our brain is to fold in the edges of the body plane 90 degrees, representing (for example) the skin of your chest folded around the sides of your rib cage. Folds in the cerebral cortex introduce a third dimension there too. Now, since the fibers must pass through the midline, because that's where the central nervous system in our body runs, the two fibers become crossed.

What would happen if the two-dimensional planes representing brain and body were folded in opposite directions as mirror images and connected point to point without crossed pathways?

The horizontal x and y axes for both the body and the brain's body map would retain the same orientation, but they would have opposite directions for their vertical z axes. The folds in the maps create what in mathematical terms are called geometric singularities: places where a property diverges or becomes ill-defined.

This alteration in the geometry of the perceived world means that the price of keeping our neural connections uncrossed would be steep. Picture an ant crawling across your body. To make sense of the sensation as the ant crawls up your chest and then traverses over to your shoulder, your brain would have to switch from one somatotopic map to another one with the opposite z-axis orientation. Your perception of 3D space would be inverted. A central control or sensation network would be confounded by the need to change orientations this way.

This abstraction may be difficult to visualize, so let's try a more concrete example. Imagine two small panes of glass set at right angles, one upright and one flat, that have the labels "front" and "bottom" etched on them. The pad of your finger is pressed against the back of the front one so it can feel the etched lettering. We can imagine how to represent the brain's map of the perceptions through the finger: if the connections between the fingertips and the brain don't cross, then the perceived "front" label will be flipped top to bottom, for the reasons described above.

Now imagine that your finger rotates downward to press against the bottom pane instead. The physical environment of the finger hasn't changed at all, but the map of the perceptions has: now the "bottom" label is flipped and the "front" label isn't.

Look more closely, though, at the two perception maps. You can't simply rotate one to turn it into the other, even though a small physical rotation is all that the finger did. What this shows is that for the nervous system to maintain uncrossed connections, the brain would need to keep flipping one axis of its body maps as the body parts moved, which would be impossibly complex.
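There is a compact way to see why no rotation can reconcile two maps that differ by a flipped axis. In this sketch (my illustration of the underlying linear algebra, not the researchers' model), a flip of one axis is a reflection, and reflections reverse orientation while rotations preserve it:

```python
import numpy as np

theta = 0.3  # any rotation angle at all
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
flip_one_axis = np.diag([1.0, -1.0])  # a map with one axis inverted

print(np.linalg.det(rotation))       # +1.0: every rotation preserves orientation
print(np.linalg.det(flip_one_axis))  # -1.0: a reflection reverses it
```

Since every rotation has determinant +1 and every single-axis flip has determinant -1, no sequence of rotations can turn one perception map into the other; the brain would have to perform an outright reflection each time the body part moved.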

While there are lots of solutions to this wiring problem, the most elegant is to have two bilaterally symmetrical systems of wiring between the brain and the body, with the connections from each side of the body crossing the midline.

Now, this all makes sense mathematically, but it's important to note that we don't know for certain that this is truly why our brains and bodies are connected the way they are. There is very little biological research on this intriguing question. The convenient dodge often heard is that the scientific method tells us what, not why. But whether or not this explanation is correct, it's an example of how we can sometimes solve enduring biological puzzles by changing our frame of reference.
