

Category Archives: Artificial General Intelligence

Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com

Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy

Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming, and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to emergence, a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases mirages: artefacts arising from how the systems are tested. It suggests that these abilities instead build more gradually.

"I think they did a good job of saying 'nothing magical has happened'," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."

The work was presented last week at the NeurIPS machine-learning conference in New Orleans.

Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.


The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance jumped from nearly 0% to nearly 100% between the third and fourth sizes of model. But this trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in that case, the smaller models answer correctly some of the time.
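
As a purely illustrative sketch (not the study's code or data), here is how the choice of metric alone can change the shape of the curve: scored by exact match, hypothetical outputs from four model sizes look like a sudden jump, while per-digit accuracy on the same outputs rises gradually.

```python
# Illustrative only: hypothetical answers from four model "sizes" on 4-digit addition.
targets = ["4622", "9151", "3804"]
outputs_by_size = {
    "small":  ["4000", "9000", "3000"],   # roughly one digit right per answer
    "medium": ["4600", "9100", "3800"],   # first couple of digits right
    "large":  ["4620", "9150", "3804"],   # most digits right
    "xl":     ["4622", "9151", "3804"],   # fully correct
}

def exact_match(preds, targets):
    # Discrete metric: an answer counts only if every digit is correct.
    return sum(p == t for p, t in zip(preds, targets)) / len(targets)

def per_digit(preds, targets):
    # Finer-grained metric: credit for each correctly predicted digit.
    hits = sum(sum(a == b for a, b in zip(p, t)) for p, t in zip(preds, targets))
    return hits / sum(len(t) for t in targets)

for size, preds in outputs_by_size.items():
    print(f"{size:<7} exact={exact_match(preds, targets):.2f}  digits={per_digit(preds, targets):.2f}")
```

On these made-up outputs, exact-match accuracy sits at zero and then leaps to 100%, while per-digit accuracy climbs smoothly from about a third to all of the digits, which is the general pattern the researchers describe.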

Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.

Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.
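
The thresholding effect is easy to reproduce on toy numbers. The sketch below is an invented illustration of the general idea rather than the paper's experiment: a reconstruction-quality score that improves smoothly with scale looks like a sudden emergent ability once a strict pass/fail threshold is applied.

```python
import math

model_scales = [1, 2, 4, 8, 16, 32, 64]                        # arbitrary size units
recon_quality = [1 - math.exp(-0.05 * s) for s in model_scales]  # smooth, gradual improvement

strict_threshold = 0.8                                           # "correct" only above 0.8

for scale, quality in zip(model_scales, recon_quality):
    verdict = "pass" if quality > strict_threshold else "fail"
    print(f"scale={scale:<3} quality={quality:.2f} thresholded={verdict}")

# The continuous column rises gradually; the thresholded column flips from fail to
# pass at a single scale, which can read like a sudden emergent capability.
```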

Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable."

Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?

The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."

Originally posted here:

Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com


AI Technologies Set to Revolutionize Multiple Industries in Near Future – Game Is Hard

According to Nvidia CEO Jensen Huang, the world is on the brink of a transformative era in artificial intelligence (AI) that will see it rival human intelligence within the next five years. While AI is already making significant strides, Huang believes that the true breakthrough will come in the realm of artificial general intelligence (AGI), which aims to replicate the range of human cognitive abilities.

Nvidia, a prominent player in the tech industry known for its high-performance graphics processing units (GPUs), has experienced a surge in business as a result of the growing demand for its GPUs in training AI models and handling complex workloads across various sectors. In fact, the company's fiscal third-quarter revenue roughly tripled year over year, reaching an impressive $18.12 billion.

An important milestone for Nvidia was the recent delivery of the world's first AI supercomputer to OpenAI, an AI research lab co-founded by Elon Musk. This partnership with Musk, who has shown great interest in AI technology, signifies the immense potential of AI advancements. Huang expressed confidence in the stability of OpenAI, despite recent upheavals, emphasizing the critical role of effective corporate governance in such ventures.

Looking ahead, Huang envisions a future where the competitive landscape of the AI industry will foster the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology. While current limitations exist, including the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Nvidia's success in 2023 has exceeded expectations, as the company consistently surpassed earnings projections and witnessed its stock rise by approximately 240%. The impressive third-quarter revenue of $18.12 billion further solidifies investor confidence in the promising AI market. Analysts maintain a positive outlook on Nvidia's long-term potential in the AI and semiconductor sectors, despite concerns about sustainability. The future of AI is undoubtedly bright, with transformative applications expected across various industries in the near future.

FAQ:

Q: What is the transformative era in artificial intelligence (AI) that Nvidia CEO Jensen Huang mentions? A: According to Huang, the transformative era in AI will see it rival human intelligence within the next five years, particularly in the realm of artificial general intelligence (AGI).

Q: Why has Nvidia experienced a surge in business? A: Nvidia's high-performance graphics processing units (GPUs) are in high demand for training AI models and handling complex workloads across various sectors, leading to a significant increase in the company's revenue.

Q: What is the significance of Nvidia delivering the world's first AI supercomputer to OpenAI? A: Nvidia's partnership with OpenAI and the delivery of the AI supercomputer highlights the immense potential of AI advancements, as well as the confidence in OpenAI's stability and the critical role of effective corporate governance in such ventures.

Q: What is Nvidia's vision for the future of the AI industry? A: Nvidia envisions a future where the competitive landscape of the AI industry will lead to the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology.

Q: What are the current limitations and future capabilities of AI technologies according to Huang? A: While there are still limitations, such as the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.

Key Terms:

Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence.

Artificial general intelligence (AGI): AI that can perform any intellectual task that a human being can do.

Graphics processing unit (GPU): A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.

Suggested Related Links:

Nvidia website

OpenAI website

Artificial intelligence on Wikipedia

Continued here:

AI Technologies Set to Revolutionize Multiple Industries in Near Future - Game Is Hard


AI consciousness: scientists say we urgently need answers – Nature.com

A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty

Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.

In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?

Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.

"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."

The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.

It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. "Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress," says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.


Such concerns are no longer just science fiction. Companies such as OpenAI, the firm that created the chatbot ChatGPT, are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5 to 20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that, to his knowledge, there has not been a single grant offer in 2023 to study the topic.

The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material: documents that inform its recommendations about global oversight of AI systems.

Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems for society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.

But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.


Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.

And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.

Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.

To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.

"There's lots of potential for progress," Mason says.

See the article here:

AI consciousness: scientists say we urgently need answers - Nature.com


What Is Artificial Intelligence? From Software to Hardware, What You Need to Know – ExtremeTech

To many, AI is just a horrible Steven Spielberg movie. To others, it's the next generation of learning computers. But what is artificial intelligence, exactly? The answer depends on who you ask.

Broadly, artificial intelligence (AI) is the combination of mathematical algorithms, computer software, hardware, and robust datasets deployed to solve some kind of problem. In one sense, artificial intelligence is sophisticated information processing by a powerful program or algorithm. In another, an AI connotes the same information processing but also refers to the program or algorithm itself.

Many definitions of artificial intelligence include a comparison to the human mind or brain, whether in form or function. Alan Turing wrote in 1950 about thinking machines that could respond to a problem using human-like reasoning. His eponymous Turing test is still a benchmark for natural language processing. Later, however, Stuart Russell and Peter Norvig observed that humans are intelligent but not always rational.

As defined by John McCarthy in 2004, artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

Russell and Norvig saw two classes of artificial intelligence: systems that think and act rationally versus those that think and act like a human being. But there are places where that line begins to blur. AI and the brain use a hierarchical, profoundly parallel network structure to organize the information they receive. Whether or not an AI has been programmed to act like a human, on a very low level, AIs process data in a way common to not just the human brain but many other forms of biological information processing.

What distinguishes a neural net from conventional software? Its structure. A neural net's code is written to emulate some aspect of the architecture of neurons or the brain.

The difference between a neural net and an AI is often a matter of philosophy more than capabilities or design. A robust neural net's performance can equal or outclass a narrow AI. Many "AI-powered" systems are neural nets under the hood. But an AI isn't just several neural nets smashed together, any more than Charizard is three Charmanders in a trench coat. All these different types of artificial intelligence overlap along a spectrum of complexity. For example, OpenAI's powerful GPT-4 AI is a type of neural net called a transformer (more on these below).

There is much overlap between neural nets and artificial intelligence, but the capacity for machine learning can be the dividing line. An AI that never learns isn't very intelligent at all.

IBM explains, "[M]achine learning is a subfield of artificial intelligence. Deep learning is a subfield of machine learning, and neural networks make up the backbone of deep learning algorithms. In fact, it is the number of node layers, or depth, of neural networks that distinguishes a single neural network from a deep learning algorithm, which must have more than three [layers]."

AGI stands for artificial general intelligence. An AGI is like the turbo-charged version of an individual AI. Today's AIs often require specific input parameters, so they are limited in their capacity to do anything but what they were built to do. But in theory, an AGI can figure out how to "think" for itself to solve problems it hasn't been trained to do. Some researchers are concerned about what might happen if an AGI were to start drawing conclusions we didn't expect.

In pop culture, when an AI makes a heel turn, the ones that menace humans often fit the definition of an AGI. For example, Disney/Pixar's WALL-E followed a plucky little trashbot who contends with a rogue AI named AUTO. Before WALL-E's time, HAL and Skynet were AGIs complex enough to resent their makers and powerful enough to threaten humanity.

Conceptually: An AI's logical structure has three fundamental parts. First, there's the decision process: usually an equation, a model, or just some code. Second, there's an error function: some way for the AI to check its work. And third, if the AI will learn from experience, it needs some way to optimize its model. Many neural networks do this with a system of weighted nodes, where each node has a value and a relationship to its network neighbors. Values change over time; stronger relationships have a higher weight in the error function.
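
To make those three parts concrete, here is a minimal sketch in plain NumPy with invented toy data: the forward pass is the decision process, mean squared error is the error function, and the gradient step is the optimization that adjusts the weights. It is an illustration of the idea, not production training code.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))             # toy inputs
y = X @ np.array([1.5, -2.0, 0.5]) + 1.0  # toy targets the model should learn

w = np.zeros(3)
b = 0.0
learning_rate = 0.05

for step in range(200):
    pred = X @ w + b                      # decision process: the forward pass
    error = np.mean((pred - y) ** 2)      # error function: how wrong were we?
    grad_w = 2 * X.T @ (pred - y) / len(y)
    grad_b = 2 * np.mean(pred - y)
    w -= learning_rate * grad_w           # optimization: adjust the weighted connections
    b -= learning_rate * grad_b

print("learned weights:", np.round(w, 2), "bias:", round(b, 2), "final error:", round(error, 4))
```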

Physically: Typically, an AI is "just" software. Neural nets consist of equations or commands written in things like Python or Common Lisp. They run comparisons, perform transformations, and suss out patterns from the data. Commercial AI applications have typically been run on server-side hardware, but that's beginning to change. AMD launched the first on-die NPU (Neural Processing Unit) in early 2023 with its Ryzen 7040 mobile chips. Intel followed suit with the dedicated silicon baked into Meteor Lake. Dedicated hardware neural nets run on a special type of "neuromorphic" ASIC, as opposed to a CPU, GPU, or NPU.

A neural net is software, and a neuromorphic chip is a type of hardware called an ASIC (application-specific integrated circuit). Not all ASICs are neuromorphic designs, but neuromorphic chips are all ASICs. Neuromorphic design fundamentally differs from CPUs and only nominally overlaps with a GPU's multi-core architecture. But it's not some exotic new transistor type, nor any strange and eldritch kind of data structure. It's all about tensors. Tensors describe the relationships between things; they're a kind of mathematical object that can have metadata, just like a digital photo has EXIF data.

Tensors figure prominently in the physics and lighting engines of many modern games, so it may come as little surprise that GPUs do a lot of work with tensors. Modern Nvidia RTX GPUs have a huge number of tensor cores. That makes sense if you're drawing moving polygons, each with some properties or effects that apply to it. Tensors can handle more than just spatial data, and GPUs excel at organizing many different threads at once.

But no matter how elegant your data organization might be, it must filter through multiple layers of software abstraction before it becomes binary. Intel's neuromorphic chip, Loihi 2, affords a very different approach.

Loihi 2 is a neuromorphic chip that comes as a package deal with a compute framework named Lava. Loihi's physical architecture invites, almost requires, the use of weighting and an error function, both defining features of AI and neural nets. The chip's biomimetic design extends to its electrical signaling. Instead of ones and zeroes, on or off, Loihi "fires" in spikes with an integer value capable of carrying much more data. Loihi 2 is designed to excel in workloads that don't necessarily map well to the strengths of existing CPUs and GPUs. Lava provides a common software stack that can target neuromorphic and non-neuromorphic hardware. The Lava framework is explicitly designed to be hardware-agnostic rather than locked to Intel's neuromorphic processors.
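
As a rough illustration of what "firing in spikes" means, here is a generic leaky integrate-and-fire neuron in NumPy. It is a textbook-style sketch with invented constants; it does not use the Lava framework and does not model Loihi 2's graded integer spikes exactly.

```python
import numpy as np

steps = 100
input_current = np.where(np.arange(steps) > 20, 1.2, 0.0)  # input switched on at t=20

v = 0.0            # membrane potential
leak = 0.9         # how quickly the potential decays each step
threshold = 5.0    # potential at which the neuron fires
spikes = []

for t in range(steps):
    v = leak * v + input_current[t]   # integrate the input, with leak
    if v >= threshold:
        spikes.append(t)              # emit a spike...
        v = 0.0                       # ...and reset the potential

print("spike times:", spikes)
```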

Machine learning models using Lava can fully exploit Loihi 2's unique physical design. Together, they offer a hybrid hardware-software neural net that can process relationships between multiple entire multi-dimensional datasets, like an acrobat spinning plates. According to Intel, the performance and efficiency gains are largest outside the common feed-forward networks typically run on CPUs and GPUs today, with the biggest wins coming in what Intel calls "recurrent neural networks with novel bio-inspired properties."

Intel hasn't announced Loihi 3, but the company regularly updates the Lava framework. Unlike conventional GPUs, CPUs, and NPUs, neuromorphic chips like Loihi 1/2 are more explicitly aimed at research. The strength of neuromorphic design is that it allows silicon to perform a type of biomimicry. Brains are extremely cheap, in terms of power use per unit throughput. The hope is that Loihi and other neuromorphic systems can mimic that power efficiency to break out of the Iron Triangle and deliver all three: good, fast, and cheap.

IBM's NorthPole processor is distinct from Intel's Loihi in what it does and how it does it. Unlike Loihi or IBM's earlier TrueNorth effort in 2014, NorthPole is not a neuromorphic processor. NorthPole relies on conventional calculation rather than a spiking neural model, focusing on inference workloads rather than model training. What makes NorthPole special is the way it combines processing capability and memory. Unlike CPUs and GPUs, which burn enormous power just moving data from Point A to Point B, NorthPole integrates its memory and compute elements side by side.

"Architecturally, NorthPole blurs the boundary between compute and memory," said Dharmendra Modha of IBM Research. "At the level of individual cores, NorthPole appears as memory-near-compute and from outside the chip, at the level of input-output, it appears as an active memory." IBM doesn't use the phrase, but this sounds similar to the processor-in-memory technology Samsung was talking about a few years back.

Credit: IBM. IBM's NorthPole AI processor.

NorthPole is optimized for low-precision data types (2-bit to 8-bit) as opposed to the higher-precision FP16/bfloat16 standards often used for AI workloads, and it eschews speculative branch execution. This wouldn't fly in an AI training processor, but NorthPole is designed for inference workloads, not model training. Using 2-bit precision and eliminating speculative branches allows the chip to keep enormous parallel calculations flowing across the entire chip. Against an Nvidia GPU manufactured on the same 12nm process, IBM reports NorthPole was roughly 25x more energy efficient.
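
The appeal of low-precision inference is easy to see with a generic quantization sketch. The example below uses simple symmetric rounding, which is an assumption for illustration only; it is not NorthPole's actual numeric format or pipeline, but it shows how FP32 weights can be squeezed into 8-, 4-, or 2-bit integers at the cost of some rounding error.

```python
import numpy as np

def quantize(weights, bits):
    levels = 2 ** (bits - 1) - 1                 # e.g. 127 for 8-bit, 1 for 2-bit
    scale = np.max(np.abs(weights)) / levels     # map the largest weight to the top level
    q = np.clip(np.round(weights / scale), -levels, levels).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.5, size=8).astype(np.float32)

for bits in (8, 4, 2):
    q, scale = quantize(w, bits)
    err = np.mean(np.abs(w - dequantize(q, scale)))
    print(f"{bits}-bit quantization, mean absolute error: {err:.4f}")
```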

NorthPole is still a prototype, and IBM has yet to say if it intends to commercialize the design. The chip doesn't fit neatly into any of the other buckets we use to subdivide different types of AI processing engine. Still, it's an interesting example of companies trying radically different approaches to building a more efficient AI processor.

When an AI learns, it's different than just saving a file after making edits. To an AI, getting smarter involves machine learning.

Machine learning takes advantage of a feedback channel called "back-propagation." A neural net is typically a "feed-forward" process because data only moves in one direction through the network. It's efficient but also a kind of ballistic (unguided) process. In back-propagation, however, later nodes in the process get to pass information back to earlier nodes.

Not all neural nets perform back-propagation, but for those that do, the effect is like changing the coefficients in front of the variables in an equation. It changes the lay of the land. This is important because many AI applications rely on a mathematical tactic known as gradient descent. In an x vs. y problem, gradient descent introduces a z dimension, making a simple graph look like a topographical map. The terrain on that map forms a landscape of probabilities. Roll a marble down these slopes, and where it lands determines the neural net's output. But if you change that landscape, where the marble ends up can change.
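
Here is a small, self-contained illustration of gradient descent on an invented two-variable error landscape: the gradient points uphill, so repeatedly stepping against it walks the "marble" down into the valley at (3, -1).

```python
import numpy as np

def loss(x, y):
    # A simple bowl-shaped landscape with its lowest point at (3, -1).
    return (x - 3) ** 2 + (y + 1) ** 2

def gradient(x, y):
    # The slope of the landscape: which way is uphill from (x, y).
    return np.array([2 * (x - 3), 2 * (y + 1)])

position = np.array([0.0, 0.0])   # starting point of the marble
learning_rate = 0.1

for step in range(50):
    position -= learning_rate * gradient(*position)   # step downhill

print("final position:", np.round(position, 3), "loss:", round(loss(*position), 6))
```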

We also divide neural nets into two classes, depending on the problems they can solve. In supervised learning, a neural net checks its work against a labeled training set or an overwatch; in most cases, that overwatch is a human. For example, SwiftKey learns how you text and adjusts its autocorrect to match. Pandora uses listeners' input to classify music to build specifically tailored playlists. 3blue1brown has an excellent explainer series on neural nets, where he discusses a neural net using supervised learning to perform handwriting recognition.

Supervised learning is great for fine accuracy on an unchanging set of parameters, like alphabets. Unsupervised learning, however, can wrangle data with changing numbers of dimensions. (An equation with x, y, and z terms is a three-dimensional equation.) Unsupervised learning tends to win with small datasets. It's also good at noticing subtle things we might not even know to look for. Ask an unsupervised neural net to find trends in a dataset, and it may return patterns we had no idea existed.
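
A compact way to see the difference is to hand the same toy data to both styles of learner. The sketch below assumes scikit-learn is installed and uses invented two-cluster data: the supervised model checks its work against provided labels, while the unsupervised model never sees them and still recovers two groups.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
points = np.vstack([group_a, group_b])
labels = np.array([0] * 50 + [1] * 50)

# Supervised: learns from the labels and can be checked against them.
clf = LogisticRegression().fit(points, labels)
print("supervised prediction for (2.8, 3.1):", clf.predict([[2.8, 3.1]])[0])

# Unsupervised: never sees the labels, but still finds two clusters on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("unsupervised cluster sizes:", np.bincount(km.labels_))
```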

Transformers are a special, versatile kind of AI capable of unsupervised learning. They can integrate many different data streams, each with its own changing parameters. Because of this, they're excellent at handling tensors. Tensors, in turn, are great for keeping all that data organized. With the combined powers of tensors and transformers, we can handle more complex datasets.
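
At the core of a transformer is scaled dot-product attention, which is itself just a few tensor operations. The sketch below is a bare-bones NumPy version run on random token vectors; real transformers add learned projections, multiple heads, and many stacked layers on top of this.

```python
import numpy as np

def attention(queries, keys, values):
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # how much each token attends to the others
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ values                         # blend the value vectors accordingly

rng = np.random.default_rng(0)
sequence = rng.normal(size=(4, 8))                  # 4 tokens, each an 8-dimensional vector
output = attention(sequence, sequence, sequence)    # self-attention: Q, K, V from the same tokens
print(output.shape)                                 # (4, 8): one updated vector per token
```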

Video upscaling and motion smoothing are great applications for AI transformers. Likewise, tensors, which describe changes, are crucial to detecting deepfakes and alterations. With deepfake tools reproducing in the wild, it's a digital arms race.

Credit: Nvidia. The person in this image does not exist; it is a deepfake image created by StyleGAN, Nvidia's generative adversarial neural network.

Video signal has high dimensionality, or bit depth. It's made of a series of images, which are themselves composed of a series of coordinates and color values. Mathematically and in computer code, we represent those quantities as matrices or n-dimensional arrays. Helpfully, tensors are great for matrix and array wrangling. DaVinci Resolve, for example, uses tensor processing in its (Nvidia RTX) hardware-accelerated Neural Engine facial recognition utility. Hand those tensors to a transformer, and its powers of unsupervised learning do a great job picking out the curves of motion on-screen, and in real life.
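
As a small illustration of that dimensionality, a short clip can be held as a single (frames, height, width, channels) array, and differencing consecutive frames is a crude way of describing change over time. The clip below is random data, purely to show the shape and bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(30, 64, 64, 3), dtype=np.uint8)  # 30 frames of 64x64 RGB

frame_deltas = np.diff(video.astype(np.int16), axis=0)   # change between consecutive frames
motion_per_frame = np.abs(frame_deltas).mean(axis=(1, 2, 3))

print("video tensor shape:", video.shape)
print("largest frame-to-frame change after frame:", int(motion_per_frame.argmax()) + 1)
```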

That ability to track multiple curves against one another is why the tensor-transformer dream team has taken so well to natural language processing. And the approach can generalize. Convolutional transformers, a hybrid of a convolutional neural net and a transformer, excel at image recognition in near real-time. This tech is used today for things like robot search and rescue or assistive image and text recognition, as well as the much more controversial practice of dragnet facial recognition, à la Hong Kong.

The ability to handle a changing mass of data is great for consumer and assistive tech, but it's also clutch for things like mapping the genome and improving drug design. The list goes on. Transformers can also handle different kinds of dimensions, more than just the spatial, which is useful for managing an array of devices or embedded sensors, like weather tracking, traffic routing, or industrial control systems. That's what makes AI so useful for data processing "at the edge." AI can find patterns in data and then respond to them on the fly.

Not only does everyone have a cell phone, there are embedded systems in everything. This proliferation of devices gives rise to an ad hoc global network called the Internet of Things (IoT). In the parlance of embedded systems, the "edge" represents the outermost fringe of end nodes within the collective IoT network.

Edge intelligence takes two primary forms: AI on edge and AI for edge. The distinction is where the processing happens. "AI on edge" refers to network end nodes (everything from consumer devices to cars and industrial control systems) that employ AI to crunch data locally. "AI for the edge" enables edge intelligence by offloading some of the compute demand to the cloud.

In practice, the main differences between the two are latency and horsepower. Local processing is always going to be faster than a data pipeline beholden to ping times. The tradeoff is the computing power available server-side.
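
A hypothetical sketch of that tradeoff: route a request to a small local model when the latency budget is tight, and to a bigger cloud model when a round trip is affordable. The function names and numbers here are invented for illustration, not any particular product's API.

```python
LOCAL_INFERENCE_MS = 15        # assumed on-device latency
CLOUD_ROUND_TRIP_MS = 120      # assumed network + server latency

def run_local_model(frame):
    return {"label": "person", "confidence": 0.81}   # placeholder result

def run_cloud_model(frame):
    return {"label": "person", "confidence": 0.97}   # placeholder result

def classify(frame, latency_budget_ms):
    if latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return run_local_model(frame)    # "AI on edge": fast, less powerful
    return run_cloud_model(frame)        # "AI for the edge": slower, more horsepower

print(classify(frame=None, latency_budget_ms=50))    # forced to stay local
print(classify(frame=None, latency_budget_ms=500))   # can afford the cloud
```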

Embedded systems, consumer devices, industrial control systems, and other end nodes in the IoT all add up to a monumental volume of information that needs processing. Some phone home, some have to process data in near real-time, and some have to check and correct their work on the fly. Operating in the wild, these physical systems act just like the nodes in a neural net. Their collective throughput is so complex that, in a sense, the IoT has become the AIoT: the artificial intelligence of things.

As devices get cheaper, even the tiny slips of silicon that run low-end embedded systems have surprising computing power. But having a computer in a thing doesn't necessarily make it smarter. Everything's got Wi-Fi or Bluetooth now. Some of it is really cool. Some of it is made of bees. If I forget to leave the door open on my front-loading washing machine, I can tell it to run a cleaning cycle from my phone. But the IoT is already a well-known security nightmare. Parasitic global botnets exist that live in consumer routers. Hardware failures can cascade, like the Great Northeast Blackout of the summer of 2003 or when Texas froze solid in 2021. We also live in a timeline where a faulty firmware update can brick your shoes.

There's a common pipeline (hypeline?) in tech innovation. When some Silicon Valley startup invents a widget, it goes from idea to hype train to widgets-as-a-service to disappointment, before finally figuring out what the widget's good for.

This is why we lampoon the IoT with loving names like the Internet of Shitty Things and the Internet of Stings. (Internet of Stings devices communicate over TCBee-IP.) But the AIoT isn't something anyone can sell. It's more than the sum of its parts. The AIoT is a set of emergent properties that we have to manage if we're going to avoid an explosion of splinternets, and keep the world operating in real time.

In a nutshell, artificial intelligence is often the same as a neural net capable of machine learning. They're both software that can run on whatever CPU or GPU is available and powerful enough. Neural nets often have the power to perform machine learning via back-propagation.

There's also a kind of hybrid hardware-and-software neural net that brings a new meaning to "machine learning." It's made using tensors, ASICs, and neuromorphic engineering by Intel. Furthermore, the emergent collective intelligence of the IoT has created a demand for AI on, and for, the edge. Hopefully, we can do it justice.

The rest is here:

What Is Artificial Intelligence? From Software to Hardware, What You Need to Know - ExtremeTech


The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 – Medium

Introduction

OpenAI has recently made an exciting announcement that they are working on GPT 5, the next generation of their groundbreaking language model. This news comes hot on the heels of the release of GPT 4 Turbo, showcasing the rapid pace of AI development and OpenAI's commitment to pushing boundaries. GPT models have proven to be revolutionary, consistently delivering jaw-dropping improvements with each iteration. With OpenAI's evident enthusiasm for GPT 5 and CEO Sam Altman's interview, it is clear that this next model will be nothing short of mind-blowing.

One of the most intriguing aspects of GPT 5 is the potential for video generation from text prompts. This capability could have a profound impact on various fields, from education to creative industries. Just imagine being able to transform a simple text description into high-quality video content. The possibilities are endless.

OpenAI plans to achieve this wizardry by focusing on scale. GPT 5 will require a vast amount of data and computing power to reach its full potential. It will analyze a wide range of data sets, including text, images, and audio. This multidimensional approach will allow GPT 5 to excel across different modalities. OpenAI is relying on NVIDIA's cutting-edge GPUs and leveraging Microsoft's cloud infrastructure to ensure it has the necessary computational resources.

While an official release date for GPT 5 has not been announced, experts predict it could be launched sometime around mid to late 2024. OpenAI will undoubtedly take the time needed to meet their standards before releasing the model to the public. The wait may feel long, but rest assured, it will be worth it. Each iteration of GPT has shattered expectations, and GPT 5 promises to be the most powerful AI system yet.

However, with great power comes great responsibility. OpenAI recognizes the need for safeguards and constraints to prevent harmful outcomes. As GPT 5 potentially approaches the level of artificial general intelligence, questions arise about its autonomy and control. Balancing the potential benefits of increased intelligence with the risks it poses to society is an ongoing debate.

See the rest here:

The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 - Medium


The Era of AI: 2023’s Landmark Year – CMSWire

The Gist

As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time that is reminiscent of the early days of the internet, yet poised with a transformative potential far beyond. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.

In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force it touches everything it changes, and indeed, changes everything it touches. This article will recap the events that impacted the world of AI in 2023, including the evolution and growth of AI, regulations, legislation and petitions, the saga of Sam Altman, and the pursuit of Artificial General Intelligence (AGI).

The latest chapter in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT 3.5, the second major release of the GPT language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.

The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. So far, the rest of the year hasn't shown any signs of slowing down. In fact, the emergence of AI applications across industries seems to have picked up pace. Here is an abbreviated timeline of the major AI news of the year:

February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT 3.5 prompted teachers and professors to become alarmed at the potential of ChatGPT to be used for cheating.

February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.

February 24, 2023: Meta, formerly known as Facebook, launched Llama, a large language model with 65 billion parameters, setting new benchmarks in the AI industry.

March 14, 2023: OpenAI released GPT 4, a significantly enhanced model over its predecessor, ChatGPT 3.5, raising discussions in the AI community about the potential inadvertent achievement of Artificial General Intelligence (AGI).

March 20, 2023: Studies examined the responses of GPT 3.5 and GPT 4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT 4 outperformed previous models, achieving an average score of 86.65% and 86.7% on the Self-Assessment and Sample Exam of the USMLE tests, with GPT 3.5 achieving 53.61% and 58.78%.

March 21, 2023: Google opened access to Bard, a ChatGPT competitor, as part of a broader focus on AI that also included significant announcements about its forthcoming large language models and integrations into Google Workspace and Gmail.

March 21, 2023: Nvidia's announcement of Picasso Cloud Services for creating large language and visual models, aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.

March 23, 2023: OpenAI's launch of Plugins for GPT expanded the capabilities of GPT models, allowing them to connect to third-party services via an API.

March 30, 2023: AutoGPT was released, with the capability to execute and improve its responses to prompts autonomously. This advancement showcased a significant step toward greater autonomy in AI systems, and it came with the ability to be installed on users' local PCs, giving individuals a large language model AI chat application at home.

April 4, 2023: An unsurprising study discovered that participants could only differentiate between human and AI-generated text with about 50% accuracy, similar to random chance.

April 13, 2023: AWS announced Bedrock, a service making Fundamental AI Models from various labs accessible via an API, streamlining the development and scaling of generative AI-based applications.

May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing capabilities using Microsoft Bing and additional plugins, features that would initially become available to ChatGPT Plus subscribers.

July 18, 2023: In a study, ChatGPT, particularly GPT 4, was found to be able to outperform medical students in responding to complex clinical care exam questions.

August 6, 2023: The EU AI Act, announced on this day, was one of the world's first legal frameworks for AI, and saw major developments and negotiations in 2023, with potential global implications, though it was still being hashed out in mid-December.

September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly present original content as being produced by AI, and AI-generated content as being original.

September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.

November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said that Grok promises to "break the mold of conventional AI," will respond with provocative answers and insights, and will welcome all manner of queries.

November 21, 2023: Microsoft unveiled Bing Chat 2.0, now called Copilot, a major upgrade to its own chatbot platform, which leverages a hybrid approach of combining generative and retrieval-based models to provide more accurate and diverse responses.

November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.

December 6, 2023: Google announced its OpenAI rival, Gemini, a multimodal model that can generalize and seamlessly understand, operate across, and combine different types of information, including text, images, audio, video, and code.

These were only a very small portion of 2023's AI achievements and events, as nearly every week a new generative AI-driven application was announced, including specialized AI-driven chatbots for specific use cases, applications, and industries. Additionally, there was often news of interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future it may bring, proposals of regulations, legislation and guardrails, and petitions to stop developing the technology.

Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."

"Many view 2023 as the year of generative AI, but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of generative AI innovations where there will be an incremental rise in its applications."


Anthony Yell, chief creative officer at interactive agency, Razorfish, told CMSWire that as a chief creative officer, he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."

Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is something that is core to the acceptance of what has been and will continue to be a very disruptive technology. As such, 2023 has included many milestones in the quest for AI responsibility, safety, regulations, ethics, and controls. Here are some of the most impactful regulatory AI events in 2023.

February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 for going to the press with claims that Google LaMDA is actually sentient, was back in the news doubling down on his claim.

March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated that "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT 4."

May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, urged members of Congress to regulate AI, citing the inherent risks posed by the technology.

May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the "extinction risk posed by AI." The statement said that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.

October 31, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, which was designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.

November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.

December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.

Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but has also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.

"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," said Rizvi. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."


OpenAI was the organization that officially started the era of AI with the announcement and introduction of ChatGPT 3.5 in 2022. In the year that followed, OpenAI ceaselessly worked to continue the evolution of AI, and has been no stranger to its share of both conspiracies and controversies. This came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.

November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a review board found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."

November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that Altman and Brockman would be joining to lead Microsoft's new advanced AI research team, and that Altman would become CEO of the new group.

November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes in its non-profit board.

November 24, 2023: It was suggested that prior to Altman's firing, OpenAI researchers sent a letter to its board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, which has been referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and reportedly influenced the board's firing of Sam Altman because of concerns that he was rushing to commercialize the new AI advancement without fully understanding its implications.

AGI, whose achievement Microsoft has since said could take decades, is an advanced form of AI characterized by self-learning capabilities and proficiency in a wide range of tasks, and its pursuit stands as a cornerstone objective in the AI field. Research toward AGI seeks to develop machines that mirror human intelligence, with the ability to understand, learn, and adeptly apply knowledge across diverse contexts, surpassing human performance in various domains.

Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.

The rest is here:

The Era of AI: 2023's Landmark Year - CMSWire
