The Future Of Nano Technology
Category Archives: Artificial General Intelligence
Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com
Some researchers think that AI could eventually achieve general intelligence, matching and even exceeding humans on most tasks. Credit: Charles Taylor/Alamy
Will an artificial intelligence (AI) superintelligence appear suddenly, or will scientists see it coming, and have a chance to warn the world? That's a question that has received a lot of attention recently, with the rise of large language models, such as ChatGPT, which have achieved vast new abilities as their size has grown. Some findings point to emergence, a phenomenon in which AI models gain intelligence in a sharp and unpredictable way. But a recent study calls these cases "mirages", artefacts arising from how the systems are tested, and suggests that innovative abilities instead build more gradually.
"I think they did a good job of saying nothing magical has happened," says Deborah Raji, a computer scientist at the Mozilla Foundation who studies the auditing of artificial intelligence. "It's a really good, solid, measurement-based critique."
The work was presented last week at the NeurIPS machine-learning conference in New Orleans.
Large language models are typically trained using huge amounts of text, or other information, which they use to generate realistic answers by predicting what comes next. Even without explicit training, they manage to translate language, solve mathematical problems and write poetry or computer code. The bigger the model (some have more than a hundred billion tunable parameters), the better it performs. Some researchers suspect that these tools will eventually achieve artificial general intelligence (AGI), matching and even exceeding humans on most tasks.
The new research tested claims of emergence in several ways. In one approach, the scientists compared the abilities of four sizes of OpenAI's GPT-3 model to add up four-digit numbers. Looking at absolute accuracy, performance differed between the third and fourth sizes of model, jumping from nearly 0% to nearly 100%. But this trend is less extreme if the number of correctly predicted digits in the answer is considered instead. The researchers also found that they could dampen the curve by giving the models many more test questions; in this case, the smaller models answer correctly some of the time.
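Why the choice of metric matters can be sketched with a toy simulation (made-up numbers, not the study's data): even if a model's chance of getting each individual digit right rises smoothly with scale, the exact-match accuracy on a four-digit sum jumps sharply, because all four digits must be correct at once (roughly p to the fourth power).

```python
import random

def exact_match_rate(p_digit, n_digits=4, trials=10_000, seed=0):
    """Estimate exact-match accuracy when each digit is independently
    correct with probability p_digit (a deliberately simplified model)."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < p_digit for _ in range(n_digits))
        for _ in range(trials)
    )
    return hits / trials

# Smooth per-digit improvement -> sharp-looking exact-match curve.
for p in [0.30, 0.60, 0.90, 0.99]:
    print(f"per-digit accuracy {p:.2f} -> exact-match ~{exact_match_rate(p):.3f}")
```

With per-digit accuracy at 0.30 the exact-match rate is under 1%, while at 0.99 it is around 96%, so a plot of exact-match accuracy against scale looks like an abrupt "emergent" jump even though the underlying skill improved gradually.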
Next, the researchers looked at the performance of Google's LaMDA language model on several tasks. The ones for which it showed a sudden jump in apparent intelligence, such as detecting irony or translating proverbs, were often multiple-choice tasks, with answers scored discretely as right or wrong. When, instead, the researchers examined the probabilities that the models placed on each answer (a continuous metric), signs of emergence disappeared.
Finally, the researchers turned to computer vision, a field in which there are fewer claims of emergence. They trained models to compress and then reconstruct images. By merely setting a strict threshold for correctness, they could induce apparent emergence. "They were creative in the way that they designed their investigation," says Yejin Choi, a computer scientist at the University of Washington in Seattle who studies AI and common sense.
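The thresholding effect described above is easy to reproduce with invented numbers (the model sizes and scores below are illustrative, not from the paper): a smoothly improving continuous score looks like a sudden "emergent" ability once a strict pass/fail cutoff is applied.

```python
# Hypothetical data: reconstruction quality improves gradually with scale.
model_sizes = [1, 2, 4, 8, 16, 32]               # arbitrary scale units
scores = [0.40, 0.55, 0.68, 0.78, 0.86, 0.92]    # smooth, continuous metric

THRESHOLD = 0.85  # strict correctness cutoff, as in the vision experiment

for size, score in zip(model_sizes, scores):
    verdict = "PASS" if score >= THRESHOLD else "fail"
    print(f"size {size:>2}: score {score:.2f} -> {verdict}")
```

Under the binary verdict, the ability appears only at the two largest sizes, even though the continuous scores rose steadily the whole way; this is the sense in which the study argues some emergence claims are measurement artefacts.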
Study co-author Sanmi Koyejo, a computer scientist at Stanford University in Palo Alto, California, says that it wasn't unreasonable for people to accept the idea of emergence, given that some systems exhibit abrupt phase changes. He also notes that the study can't completely rule it out in large language models, let alone in future systems, but adds that "scientific study to date strongly suggests most aspects of language models are indeed predictable".
Raji is happy to see the community pay more attention to benchmarking, rather than to developing neural-network architectures. She'd like researchers to go even further and ask how well the tasks relate to real-world deployment. For example, does acing the LSAT exam for aspiring lawyers, as GPT-4 has done, mean that a model can act as a paralegal?
The work also has implications for AI safety and policy. "The AGI crowd has been leveraging the emerging-capabilities claim," Raji says. Unwarranted fear could lead to stifling regulations or divert attention from more pressing risks. "The models are making improvements, and those improvements are useful," she says. "But they're not approaching consciousness yet."
Originally posted here:
Will superintelligent AI sneak up on us? New study offers reassurance - Nature.com
Posted in Artificial General Intelligence
Comments Off on Will superintelligent AI sneak up on us? New study offers reassurance – Nature.com
AI Technologies Set to Revolutionize Multiple Industries in Near Future – Game Is Hard
According to Nvidia CEO Jensen Huang, the world is on the brink of a transformative era in artificial intelligence (AI) that will see it rival human intelligence within the next five years. While AI is already making significant strides, Huang believes that the true breakthrough will come in the realm of artificial general intelligence (AGI), which aims to replicate the range of human cognitive abilities.
Nvidia, a prominent player in the tech industry known for its high-performance graphics processing units (GPUs), has experienced a surge in business as a result of the growing demand for its GPUs in training AI models and handling complex workloads across various sectors. In fact, the company's fiscal third-quarter net income reached an impressive $9.24 billion as revenue roughly tripled year over year.
An important milestone for Nvidia was the recent delivery of the world's first AI supercomputer to OpenAI, an AI research lab co-founded by Elon Musk. This partnership with Musk, who has shown great interest in AI technology, signifies the immense potential of AI advancements. Huang expressed confidence in the stability of OpenAI, despite recent upheavals, emphasizing the critical role of effective corporate governance in such ventures.
Looking ahead, Huang envisions a future where the competitive landscape of the AI industry will foster the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology. While current limitations exist, including the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.
Nvidia's success in 2023 has exceeded expectations, as the company consistently surpassed earnings projections and witnessed its stock rise by approximately 240%. The impressive third-quarter revenue of $18.12 billion further solidifies investor confidence in the promising AI market. Analysts maintain a positive outlook on Nvidia's long-term potential in the AI and semiconductor sectors, despite concerns about sustainability. The future of AI is undoubtedly bright, with transformative applications expected across various industries in the near future.
FAQ:
Q: What is the transformative era in artificial intelligence (AI) that Nvidia CEO Jensen Huang mentions? A: According to Huang, the transformative era in AI will see it rival human intelligence within the next five years, particularly in the realm of artificial general intelligence (AGI).
Q: Why has Nvidia experienced a surge in business? A: Nvidia's high-performance graphics processing units (GPUs) are in high demand for training AI models and handling complex workloads across various sectors, leading to a significant increase in the company's revenue.
Q: What is the significance of Nvidia delivering the world's first AI supercomputer to OpenAI? A: Nvidia's partnership with OpenAI and the delivery of the AI supercomputer highlights the immense potential of AI advancements, as well as the confidence in OpenAI's stability and the critical role of effective corporate governance in such ventures.
Q: What is Nvidia's vision for the future of the AI industry? A: Nvidia envisions a future where the competitive landscape of the AI industry will lead to the development of off-the-shelf AI tools tailored for specific sectors such as chip design, drug discovery, and radiology.
Q: What are the current limitations and future capabilities of AI technologies according to Huang? A: While there are still limitations, such as the inability of AI to perform multistep reasoning, Huang remains optimistic about the rapid advancements and forthcoming capabilities of AI technologies.
Key Terms:
- Artificial intelligence (AI): The simulation of human intelligence processes by machines, especially computer systems, to perform tasks that typically require human intelligence.
- Artificial general intelligence (AGI): AI that can perform any intellectual task that a human being can do.
- Graphics processing unit (GPU): A specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
Suggested Related Links:
- Nvidia website
- OpenAI website
- Artificial intelligence on Wikipedia
Continued here:
AI Technologies Set to Revolutionize Multiple Industries in Near Future - Game Is Hard
AI consciousness: scientists say we urgently need answers – Nature.com
A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty
Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists says that, at the moment, no one knows, and they are expressing concern about the lack of inquiry into the question.
In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?
Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden's executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.
"With everything that's going on in AI, inevitably there's going to be other adjacent areas of science which are going to need to catch up," Mason says. "Consciousness is one of them."
The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.
It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. "Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress," says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.
Such concerns are no longer just science fiction. Companies such as OpenAI (the firm that created the chatbot ChatGPT) are aiming to develop artificial general intelligence, a deep-learning system that's trained to perform a wide range of intellectual tasks similar to those humans can do. Some researchers predict that this will be possible in 5–20 years. Even so, the field of consciousness research is very undersupported, says Mason. He notes that, to his knowledge, there has not been a single grant offer in 2023 to study the topic.
The resulting information gap is outlined in the AMCS leaders' submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology. The AMCS leaders' submission has not been publicly released, but the body confirmed to the authors that the group's comments will be part of its foundational material: documents that inform its recommendations about global oversight of AI systems.
Understanding what could make AI conscious, the AMCS researchers say, is necessary to evaluate the implications of conscious AI systems for society, including their possible dangers. Humans would need to assess whether such systems share human values and interests; if not, they could pose a risk to people.
But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? "If we don't recognize that an AI system has become conscious, we might inflict pain on a conscious entity," Long says: "We don't really have a great track record of extending moral consideration to entities that don't look and act like us." Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don't need protection.
Some of the questions raised by the AMCS comments to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes.
And then there is the need for scientists to educate others. As companies devise ever-more capable AI systems, the public will wonder whether such systems are conscious, and scientists need to know enough to offer guidance, Mason says.
Other consciousness researchers echo this concern. Philosopher Susan Schneider, the director of the Center for the Future Mind at Florida Atlantic University in Boca Raton, says that chatbots such as ChatGPT seem so human-like in their behaviour that people are justifiably confused by them. Without in-depth analysis from scientists, some people might jump to the conclusion that these systems are conscious, whereas other members of the public might dismiss or even ridicule concerns over AI consciousness.
To mitigate the risks, the AMCS comments call on governments and the private sector to fund more research on AI consciousness. It wouldn't take much funding to advance the field: despite the limited support to date, relevant work is already underway. For example, Long and 18 other researchers have developed a checklist of criteria to assess whether a system has a high chance of being conscious. The paper, published in the arXiv preprint repository in August and not yet peer reviewed, derives its criteria from six prominent theories explaining the biological basis of consciousness.
"There's lots of potential for progress," Mason says.
See the article here:
AI consciousness: scientists say we urgently need answers - Nature.com
The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 – Medium
Introduction
OpenAI has recently made an exciting announcement that they are working on GPT 5, the next generation of their groundbreaking language model. This news comes hot on the heels of the release of GPT 4 Turbo, showcasing the rapid pace of AI development and OpenAI's commitment to pushing boundaries. GPT models have proven to be revolutionary, consistently delivering jaw-dropping improvements with each iteration. With OpenAI's evident enthusiasm for GPT 5 and CEO Sam Altman's interview, it is clear that this next model will be nothing short of mind-blowing.
One of the most intriguing aspects of GPT 5 is the potential for video generation from text prompts. This capability could have a profound impact on various fields, from education to creative industries. Just imagine being able to transform a simple text description into high-quality video content. The possibilities are endless.
OpenAI plans to achieve this wizardry by focusing on scale. GPT 5 will require a vast amount of data and computing power to reach its full potential. It will analyze a wide range of data sets, including text, images, and audio. This multidimensional approach will allow GPT 5 to excel across different modalities. OpenAI is using NVIDIA's cutting-edge GPUs and leveraging Microsoft's cloud infrastructure to ensure it has the necessary computational resources.
While an official release date for GPT 5 has not been announced, experts predict it could be launched sometime around mid to late 2024. OpenAI will undoubtedly take the time needed to meet their standards before releasing the model to the public. The wait may feel long, but rest assured, it will be worth it. Each iteration of GPT has shattered expectations, and GPT 5 promises to be the most powerful AI system yet.
However, with great power comes great responsibility. OpenAI recognizes the need for safeguards and constraints to prevent harmful outcomes. As GPT 5 potentially approaches the level of artificial general intelligence, questions arise about its autonomy and control. Balancing the potential benefits of increased intelligence with the risks it poses to society is an ongoing debate.
See the rest here:
The Impact of OpenAIs GPT 5. A New Era of AI | by Courtney Hamilton | Dec, 2023 - Medium
The Era of AI: 2023’s Landmark Year – CMSWire
The Gist
As we approach the end of another year, it's becoming increasingly clear that we are navigating through the burgeoning era of AI, a time that is reminiscent of the early days of the internet, yet poised with a transformative potential far beyond. While we might still be at what could be called the "AOL stages" of AI development, the pace of progress has been relentless, with new applications and capabilities emerging daily, reshaping every facet of our lives and businesses.
In a manner once attributed to divine influence and later to the internet itself, AI has become a pervasive force: it touches everything it changes and, indeed, changes everything it touches. This article will recap the events that impacted the world of AI in 2023, including the evolution and growth of AI, regulations, legislation and petitions, the saga of Sam Altman, and the pursuit of Artificial General Intelligence (AGI).
The latest in the saga of AI began late last year, on Nov. 30, 2022, when OpenAI announced the release of ChatGPT 3.5, the second major release of the GPT language model capable of generating human-like text, which signified a major step in improving how we communicate with machines. Since then, it's been a very busy year for AI, and there has rarely been a week that hasn't seen some announcement relating to it.
The first half of 2023 was marked by a series of significant developments in the field of AI, reflecting the rapid pace of innovation and its growing impact across various sectors. So far, the rest of the year hasn't shown any signs of slowing down. In fact, the emergence of AI applications across industries seems to have increased its pace. Here is an abbreviated timeline of the major AI news of the year:
February 13, 2023: Stanford scholars developed DetectGPT, the first in a forthcoming line of tools designed to differentiate between human and AI-generated text, addressing the need for oversight in an era where discerning the source of information is crucial. The tool came after the release of ChatGPT 3.5 prompted teachers and professors to become alarmed at the potential of ChatGPT to be used for cheating.
February 23, 2023: The launch of an open-source project called AgentGPT, which runs in a browser and uses OpenAI's ChatGPT to execute complex tasks, further demonstrated the versatility and practical applications of AI.
February 24, 2023: Meta, formerly known as Facebook, launched Llama, a large language model with 65 billion parameters, setting new benchmarks in the AI industry.
March 14, 2023: OpenAI released GPT 4, a significantly enhanced model over its predecessor, ChatGPT 3.5, raising discussions in the AI community about the potential inadvertent achievement of Artificial General Intelligence (AGI).
March 20, 2023: Studies examined the responses of GPT 3.5 and GPT 4 to clinical questions, highlighting the need for refinement and evaluation before relying on AI language models in healthcare. GPT 4 outperformed previous models, achieving an average score of 86.65% and 86.7% on the Self-Assessment and Sample Exam of the USMLE tests, with GPT 3.5 achieving 53.61% and 58.78%.
March 21, 2023: Google opened public access to Bard, its ChatGPT competitor, alongside other significant announcements about its forthcoming large language models and integrations into Google Workspace and Gmail.
March 21, 2023: Nvidia's announcement of Picasso Cloud Services for creating large language and visual models, aimed at larger enterprises, underscored the increasing interest of major companies in AI technologies.
March 23, 2023: OpenAI's launch of Plugins for GPT expanded the capabilities of GPT models, allowing them to connect to third-party services via an API.
March 30, 2023: AutoGPT was released, with the capability to execute and improve its responses to prompts autonomously. This advancement in AI technology showcased a significant step toward greater autonomy in AI systems, and came with the ability to be installed on users' local PCs, allowing individuals to have a large language model AI chat application in their homes without the need for internet access.
April 4, 2023: An unsurprising study discovered that participants could only differentiate between human and AI-generated text with about 50% accuracy, similar to random chance.
April 13, 2023: AWS announced Bedrock, a service making Fundamental AI Models from various labs accessible via an API, streamlining the development and scaling of generative AI-based applications.
May 23, 2023: OpenAI revealed plans to enhance ChatGPT with web browsing capabilities using Microsoft Bing and additional plugins, which would initially become available to ChatGPT Plus subscribers.
July 18, 2023: In a study, ChatGPT, particularly GPT 4, was found to be able to outperform medical students in responding to complex clinical care exam questions.
August 6, 2023: The EU AI Act, announced on this day, was one of the world's first legal frameworks for AI, and saw major developments and negotiations in 2023, with potential global implications, though it was still being hashed out in mid-December.
September 8, 2023: A study revealed that AI detectors, designed to identify AI-generated content, exhibit low reliability, especially for content created by non-native English speakers, raising ethical concerns. This has been an ongoing concern for both teachers and students, as these tools regularly present original content as being produced by AI, and AI-generated content as being original.
September 21, 2023: OpenAI announced that Dall-E 3, its text-to-image generation tool, would soon be available to ChatGPT Plus users.
November 4, 2023: Elon Musk announced the latest addition to the world of generative AI: Grok. Musk said that Grok promises to "break the mold of conventional AI," is said to respond with provocative answers and insights, and will welcome all manner of queries.
November 21, 2023: Microsoft unveiled Bing Chat 2.0 (now called Copilot), a major upgrade to its own chatbot platform, which leverages a hybrid approach of combining generative and retrieval-based models to provide more accurate and diverse responses.
November 22, 2023: With the release of Claude 2.1, Anthropic announced an expansion in Claude's capabilities, enabling it to analyze large volumes of text rapidly, a development favorably compared to the capabilities of ChatGPT.
December 6, 2023: Google announced its OpenAI rival, Gemini, a multimodal model that can generalize across and seamlessly understand, operate on, and combine different types of information, including text, images, audio, video and code.
These were only a very small portion of 2023's AI achievements and events, as nearly every week a new generative AI-driven application was being announced, including specialized AI-driven chatbots for specific use cases, applications, and industries. Additionally, there was often news of interactions with and uses of AI, AI jailbreaks, predictions about the potential dystopian future it may bring, proposals of regulations, legislation and guardrails, and petitions to stop developing the technology.
Shubham A. Mishra, co-founder and global CEO at AI marketing pioneer Pixis, told CMSWire that in 2023, the world focused on building the technology and democratizing it. "We saw people use it, consume it, and transform it into the most effective use cases to the point that it has now become a companion for them," said Mishra. "It has become such an integral part of its user's day-to-day functions that they don't even realize they are consuming it."
"Many view 2023 as the year of generative AI but we are only beginning to tap into the potential applications of the technology. We are still trying to harness the full potential of generative AI across different use cases. In 2024, the industry will witness major shifts, be it a rise or fall in users and applications," said Mishra. "There may be a rise in the number of users, but there will also be a second wave of generative AI innovations where there will be an incremental rise in its applications."
Related Article: Harnessing AI: Top Use Cases for Digital Commerce
Anthony Yell, chief creative officer at interactive agency Razorfish, told CMSWire that he and his team have seen generative AI stand out by democratizing creativity, making it more accessible and enhancing the potential for those with skills and experience to reach new creative heights. "This technology has introduced the concept of a 'creative partner' or 'creative co-pilot,' revolutionizing our interaction with creative processes."
Yell believes that this era is about marrying groundbreaking creativity with responsible innovation, ensuring that AI's potential is harnessed in a way that respects brand identity and maintains consumer trust. This desire for responsibility and trust is something that is core to the acceptance of what has been and will continue to be a very disruptive technology. As such, 2023 has included many milestones in the quest for AI responsibility, safety, regulations, ethics, and controls. Here are some of the most impactful regulatory AI events in 2023.
February 28, 2023: Former Google engineer Blake Lemoine, who was fired in 2022 for going to the press with claims that Google LaMDA is actually sentient, was back in the news doubling down on his claim.
March 22, 2023: A group of technology and business leaders, including Elon Musk, Steve Wozniak and tech leaders from Meta, Google and Microsoft, signed an open letter hosted by the Future of Life Institute urging AI organizations to pause new developments in AI, citing risks to society. The letter stated that "we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT 4."
May 16, 2023: Sam Altman, CEO and co-founder of OpenAI, spoke with members of Congress to regulate AI due to the inherent risks that are posed by the technology.
May 30, 2023: AI industry leaders and researchers signed a statement hosted by the Center for AI Safety warning of the "extinction risk posed by AI." The statement said that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war", and was signed by OpenAI CEO Sam Altman, Geoffrey Hinton, Google DeepMind and Anthropic executives and researchers, Microsoft CTO Kevin Scott, and security expert Bruce Schneier.
October 31, 2023: President Biden signed the sweeping Executive Order on Artificial Intelligence, which was designed to establish new standards for AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, and advance American leadership around the world.
November 14, 2023: The DHS Cybersecurity and Infrastructure Security Agency (CISA) released its initial Roadmap for Artificial Intelligence, leading the way to ensure safe and secure AI development in the future. The CISA AI roadmap came in response to President Biden's October 2023 Executive Order on Artificial Intelligence.
December 11, 2023: The European Commission and the bloc's 27 member countries reached a deal on the world's first comprehensive AI rules, opening the door for the legal oversight of AI technology.
Rubab Rizvi, chief data scientist at Brainchild, a media agency affiliated with the Publicis Groupe, told CMSWire that from predictive analytics to seamless automation, the rapid embrace of AI has not only elevated efficiency but has also opened new frontiers for innovation, shaping a dynamic landscape that keeps us on our toes and fuels the excitement of what's to come.
"The generative AI we've come to embrace in 2023 hasn't just been about enhancing personalization," she said. "It's becoming your digital best friend, offering tailored experiences that elevate brand engagement to a new level. This calls for proper governance and guardrails. As generative AI can potentially expose new, previously inaccessible data, we must ensure that we are disciplined in protecting ourselves and our unstructured data." Rizvi aptly reiterated what many have said throughout the year: "Don't blindly trust the machine."
Related Article: The Evolution of AI Chatbots: Past, Present and Future
OpenAI was the organization that officially started the era of AI with the announcement and introduction of ChatGPT 3.5 in 2022. In the year that followed, OpenAI worked ceaselessly to continue the evolution of AI, and has been no stranger to conspiracy theories and controversies. This came to a head late in the year, when the organization surprised everyone with news regarding its CEO, Sam Altman.
November 17, 2023: The board of OpenAI fired co-founder and CEO Sam Altman, stating that a board review found he was "not consistently candid in his communications" and that "the board no longer has confidence in his ability to continue leading OpenAI."
November 20, 2023: Microsoft hired former OpenAI CEO Sam Altman and co-founder Greg Brockman, with Microsoft CEO Satya Nadella announcing that Altman and Brockman would be joining to lead Microsoft's new advanced AI research team, and that Altman would become CEO of the new group.
November 22, 2023: OpenAI rehired Sam Altman as its CEO, stating that it had "reached an agreement in principle for Sam Altman to return to OpenAI as CEO," along with significant changes in its non-profit board.
November 24, 2023: It was suggested that prior to Altman's firing, OpenAI researchers sent a letter to its board of directors warning of a new AI discovery that posed potential risks to humanity. The discovery, which has been referred to as Project Q*, was said to be a breakthrough in the pursuit of AGI, and reportedly influenced the board's firing of Sam Altman because of concerns that he was rushing to commercialize the new AI advancement without fully understanding its implications.
AGI, which Microsoft has since said could take decades to achieve, is an advanced form of AI characterized by self-learning capabilities and proficiency in a wide range of tasks, and its pursuit stands as a cornerstone objective of the AI field. Research toward AGI seeks to develop machines that mirror human intelligence, with the ability to understand, learn, and adeptly apply knowledge across diverse contexts, surpassing human performance in various domains.
Reflecting on 2023, we have witnessed a landmark year in AI, marked by groundbreaking advancements. Amidst these innovations, the year has also been pivotal in addressing the ethical, safety, and regulatory aspects of AI. As we conclude the year, the progress in AI not only showcases human ingenuity but also sets the stage for future challenges and opportunities, emphasizing the need for responsible stewardship of this transformative yet disruptive technology.