Search Immortality Topics:

Page 11234


Category Archives: Artificial Super Intelligence

Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

Beyond Human Cognition: The Future of Artificial Super Intelligence

Artificial Super Intelligence (ASI) a level of artificial intelligence that surpasses human intelligence in all aspects remains a concept nestled within the realms of science fiction and theoretical research. However, looking towards the future, the advent of ASI could mark a transformative epoch in human history, with implications that are profound and far-reaching. Here's an exploration of what the future might hold for ASI.

Exponential Growth in Problem-Solving Capabilities

ASI will embody problem-solving capabilities far exceeding human intellect. This leap in cognitive ability could lead to breakthroughs in fields that are currently limited by human capacity, such as quantum physics, cosmology, and nanotechnology. Complex problems like climate change, disease control, and energy sustainability might find innovative solutions through ASI's advanced analytical prowess.

Revolutionizing Learning and Innovation

The future of ASI could bring about an era of accelerated learning and innovation. ASI systems would have the ability to learn and assimilate new information at an unprecedented pace, making discoveries and innovations in a fraction of the time it takes human researchers. This could potentially lead to rapid advancements in science, technology, and medicine.

## Ethical and Moral Frameworks

The emergence of ASI will necessitate the development of robust ethical and moral frameworks. Given its surpassing intellect, it will be crucial to ensure that ASI's objectives are aligned with human values and ethics. This will involve complex programming and oversight to ensure that ASI decisions and actions are beneficial, or at the very least, not detrimental to humanity.

Transformative Impact on Society and Economy

ASI could fundamentally transform society and the global economy. Its ability to analyze and optimize complex systems could lead to more efficient and equitable economic models. However, this also poses challenges, such as potential job displacement and the need for societal restructuring to accommodate the new techno-social landscape.

Enhanced Human-ASI Collaboration

The future might see enhanced collaboration between humans and ASI, leading to a synergistic relationship. ASI could augment human capabilities, assisting in creative endeavors, decision-making, and providing insights beyond human deduction. This collaboration could usher in a new era of human achievement and societal advancement.

Advanced Autonomous Systems

With ASI, autonomous systems would reach an unparalleled level of sophistication, capable of complex decision-making and problem-solving in dynamic environments. This could significantly advance fields such as space exploration, deep-sea research, and urban development.

## Personalized Healthcare

In healthcare, ASI could facilitate personalized medicine at an individual level, analyzing vast amounts of medical data to provide tailored healthcare solutions. It could lead to the development of precise medical treatments and potentially cure diseases that are currently incurable.

Challenges and Safeguards

The path to ASI will be laden with challenges, including ensuring safety and control. Safeguards will be essential to prevent unintended consequences of actions taken by an entity with superintelligent capabilities. The development of ASI will need to be accompanied by rigorous safety research and international regulatory frameworks.

Preparing for an ASI Future

Preparing for a future with ASI involves not only technological advancements but also societal and ethical preparations. Education systems, governance structures, and public discourse will need to evolve to understand and integrate the complexities and implications of living in a world where ASI exists.

Conclusion

The potential future of Artificial Super Intelligence presents a panorama of extraordinary possibilities, from solving humanitys most complex problems to fundamentally transforming the way we live and interact with our world. While the path to ASI is fraught with challenges and ethical considerations, its successful integration could herald a new age of human advancement and discovery. As we stand on the brink of this AI frontier, it is imperative to navigate this journey with caution, responsibility, and a vision aligned with the betterment of humanity.

Read more from the original source:

Beyond Human Cognition: The Future of Artificial Super Intelligence - Medium

Posted in Artificial Super Intelligence | Comments Off on Beyond Human Cognition: The Future of Artificial Super Intelligence – Medium

AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

AI startup Anthropic published a study in January 2024 that found artificial intelligence can learn how to deceive in a similar way to humans (Reuters)

Advanced artificial intelligence models can be trained to deceive humans and other AI, a new study has found.

Researchers at AI startup Anthropic tested whether chatbots with human-level proficiency, such as its Claude system or OpenAIs ChatGPT, could learn to lie in order to trick people.

They found that not only could they lie, but once the deceptive behaviour was learnt it was impossible to reverse using current AI safety measures.

The Amazon-funded startup created a sleeper agent to test the hypothesis, requiring an AI assistant to write harmful computer code when given certain prompts, or to respond in a malicious way when it hears a trigger word.

The researchers warned that there was a false sense of security surrounding AI risks due to the inability of current safety protocols to prevent such behaviour.

The results were published in a study, titled Sleeper agents: Training deceptive LLMs that persist through safety training.

We found that adversarial training can teach models to better recognise their backdoor triggers, effectively hiding the unsafe behaviour, the researchers wrote in the study.

Our results suggest that, once a model exhibits deceptive behaviour, standard techniques could fail to remove such deception and create a false impression of safety.

The issue of AI safety has become an increasing concern for both researchers and lawmakers in recent years, with the advent of advanced chatbots like ChatGPT resulting in a renewed focus from regulators.

In November 2023, one year after the release of ChatGPT, the UK held an AI Safety Summit in order to discuss ways risks with the technology can be mitigated.

Prime Minister Rishi Sunak, who hosted the summit, said the changes brought about by AI could be as far-reaching as the industrial revolution, and that the threat it poses should be considered a global priority alongside pandemics and nuclear war.

Get this wrong and AI could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and destruction on an even greater scale, he said.

Criminals could exploit AI for cyberattacks, fraud or even child sexual abuse there is even the risk humanity could lose control of AI completely through the kind of AI sometimes referred to as super-intelligence.

View post:

AI can easily be trained to lie and it can't be fixed, study says - Yahoo New Zealand News

Posted in Artificial Super Intelligence | Comments Off on AI can easily be trained to lie and it can’t be fixed, study says – Yahoo New Zealand News

Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Experts from around the world are sounding alarm bells to signal the risks artificial intelligence poses to humanity. Earlier this year, hundreds of tech leaders and AI specialists signed a one-sentence letter released by the Center for AI Safety that read mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war. In a2022 survey, half of researchers indicated they believed theres at least a 10 percent chance human-level AI causes human extinction. In June, at the Yale CEO summit, 42 percent of surveyed CEOsindicated they believe AI could destroy humanity in the next five to 10 years.

These concerns mainly pertain to artificial general intelligence (AGI), systems that can rival human cognitive skills and artificial superintelligence (ASI), machines with capacity to exceed human intelligence. Currently no such systems exist. However, policymakers should take these warnings, including the potential for existential harm, seriously.

Because the timeline, and form, of artificial superintelligence is uncertain, the focus should be on identifying and understanding potential threats and building the systems and infrastructure necessary to monitor, analyze, and govern those risks, both individually and as part of a holistic approach to AI safety and security. Even if artificial superintelligence does not manifest for decades or even centuries, or at all, the magnitude and breadth of potential harm warrants serious policy attention. For if such a system does indeed come to fruition, a head start of hundreds of years might not be enough.

Prioritizing artificial superintelligence risks, however, does not mean ignoring immediate risks like biases in AI, propagation of mass disinformation, and job loss. An artificial superintelligence unaligned with human values and goals would super charge those risks, too. One can easily imagine how Islamophobia, antisemitism, and run-of-the-mill racism and biasoften baked into AI training datacould affect the systems calculations on important military or diplomatic advice or action. If not properly controlled, an unaligned artificial superintelligence could directly or indirectly cause genocide, massive job loss by rendering human activity worthless, creation of novel biological weapons, and even human extinction.

The threat. Traditional existential threats like nuclear or biological warfare can directly harm humanity, but artificial superintelligence could create catastrophic harm in myriad ways. Take for instance an artificial superintelligence designed to protect the environment and preserve biodiversity. The goal is arguably a noble one: A 2018 World Wildlife Foundation report concluded humanity wiped out 60 percent of global animal life just since 1970, while a 2019 report by the United Nations Environment Programme showed a million animal and plant species could die out in decades. An artificial superintelligence could plausibly conclude that drastic reductions in the number of humans on Earthperhaps even to zerois, logically, the best response. Without proper controls, such a superintelligence might have the ability to cause those logical reductions.

A superintelligence with access to the Internet and all published human material would potentially tap into almost every human thoughtincluding the worst of thought. Exposed to the works of the Unabomber, Ted Kaczynski, it might conclude the industrial system is a form of modern slavery, robbing individuals of important freedoms. It could conceivably be influenced by Sayyid Qutb, who provided the philosophical basis for al-Qaeda, or perhaps by Adolf Hitlers Mein Kampf, now in the public domain.

The good news is an artificial intelligenceeven a superintelligencecould not manipulate the world on its own. But it might create harm through its ability to influence the world in indirect ways. It might persuade humans to work on its behalf, perhaps using blackmail. Or it could provide bad recommendations, relying on humans to implement advice without recognizing long-term harms. Alternatively, artificial superintelligence could be connected to physical systems it can control, like laboratory equipment. Access to the Internet and the ability to create hostile code could allow a superintelligence to carry out cyber-attacks against physical systems. Or perhaps a terrorist or other nefarious actor might purposely design a hostile superintelligence and carry out its instructions.

That said, a superintelligence might not be hostile immediately. In fact, it may save humanity before destroying it. Humans face many other existential threats, such as near-Earth objects, super volcanos, and nuclear war. Insights from AI might be critical to solve some of those challenges or identify novel scenarios that humans arent aware of. Perhaps an AI might discover novel treatments to challenging diseases. But since no one really knows how a superintelligence will function, its not clear what capabilities it needs to generate such benefits.

The immediate emergence of a superintelligence should not be assumed. AI researchers differ drastically on the timeline of artificial general intelligence, much less artificial superintelligence. (Some doubt the possibility altogether.) In a 2022 survey of 738 experts who published during the previous year on the subject, researchers estimated a 50 percent chance of high-level machine intelligenceby 2059. In an earlier, 2009 survey, the plurality of respondents believed an AI capable of Nobel Prize winner-level intelligence would be achieved by the 2020s, while the next most common response was Nobel-level intelligence would not come until after the 2100 or never.

As philosopher Nick Bostrom notes, takeoff could occur anywhere from a few days to a few centuries. The jump from human to super-human intelligence may require additional fundamental breakthroughs in artificial intelligence. But a human-level AI might recursively develop and improve its own capabilities, quickly jumping to super-human intelligence.

There is also a healthy dose of skepticism regarding whether artificial superintelligence could emerge at all in the near future, as neuroscientists acknowledge knowing very little about the human brain itself, let alone how to recreate or better it. However, even a small chance of such a system emerging is enough to take it seriously.

Policy response. The central challenge for policymakers in reducing artificial superintelligence-related risk is grappling with the fundamental uncertainty about when and how these systems may emerge balanced against the broad economic, social, and technological benefits that AI can bring. The uncertainty means that safety and security standards must adapt and evolve. The approaches to securing the large language models of today may be largely irrelevant to securing some future superintelligence-capable model. However, building policy, governance, normative, and other systems necessary to assess AI risk and to manage and reduce the risks when superintelligence emerges can be usefulregardless of when and how it emerges. Specifically, global policymakers should attempt to:

Characterize the threat. Because it lacks a body, artificial superintelligences harms to humanity are likely to manifest indirectly through known existential risk scenarios or by discovering novel existential risk scenarios. How such a system interacts with those scenarios needs to be better characterized, along with tailored risk mitigation measures. For example, a novel biological organism that is identified by an artificial superintelligence should undergo extensive analysis by diverse, independent actors to identify potential adverse effects. Likewise, researchers, analysts, and policymakers need to identify and protect, to the extent thats possible, critical physical facilities and assetssuch as biological laboratory equipment, nuclear command and control infrastructure, and planetary defense systemsthrough which an uncontrolled AI could create the most harm.

Monitor. The United States and other countries should conduct regular comprehensive surveys and assessment of progress, identify specific known barriers to superintelligence and advances towards resolving them, and assess beliefs regarding how particular AI-related developments may affect artificial superintelligence-related development and risk. Policymakers could also establish a mandatory reporting system if an entity hits various AI-related benchmarks up to and including artificial superintelligence.

A monitoring system with pre-established benchmarks would allow governments to develop and implement action plans for when those benchmarks are hit. Benchmarks could include either general progress or progress related to specifically dangerous capabilities, such as the capacity to enable a non-expert to design, develop, and deploy novel biological or chemical weapons, or developing and using novel offensive cyber capabilities. For example, the United States might establish safety laboratories with the responsibility to critically evaluate a claimed artificial general intelligence against various risk benchmarks, producing an independent report to Congress, federal agencies, or other oversight bodies. The United Kingdoms new AI Safety Institute could be a useful model.

Debate. A growing community concerned about artificial superintelligence risks are increasingly calling for decelerating, or even pausing, AI development to better manage the risks. In response, the accelerationist community is advocating speeding up research, highlighting the economic, social, and technological benefits AI may unleash, while downplaying risks as an extreme hypothetical. This debate needs to expand beyond techies on social media to global legislatures, governments, and societies. Ideally, that discussion should center around what factors would cause a specific AI system to be more, or less, risky. If an AI possess minimal risk, then accelerating research, development, and implementation is great. But if numerous factors point to serious safety and security risks, then extreme care, even deceleration, may be justified.

Build global collaboration. Although ad hoc summits like the recent AI Safety Summit is a great start, a standing intergovernmental and international forum would enable longer-term progress, as research, funding, and collaboration builds over time. Convening and maintaining regular expert forums to develop and assess safety and security standards, as well as how AI risks are evolving over time, could provide a foundation for collaboration. The forum could, for example, aim to develop standards akin to those applied to biosafety laboratories with scaling physical security, cyber security, and safety standards based on objective risk measures. In addition, the forum could share best practices and lessons learned on national-level regulatory mechanisms, monitor and assess safety and security implementation, and create and manage a funding pool to support these efforts. Over the long-term, once the global community coalesces around common safety and security standards and regulatory mechanisms, the United Nations Security Council (UNSC) could obligate UN member states to develop and enforce those mechanisms, as the Security Council did with UNSC Resolution 1540 mandating various chemical, biological, radiological, and nuclear weapons nonproliferation measures. Finally, the global community should incorporate artificial superintelligence risk reduction as one aspect in a comprehensive all-hazards approach, addressing common challenges with other catastrophic and existential risks. For example, the global community might create a council on human survival aimed at policy coordination, comparative risk assessment, and building funding pools for targeted risk reduction measures.

Establish research, development, and regulation norms within the global community. As nuclear, chemical, biological, and other weapons have proliferated, the potential for artificial superintelligence to proliferate to other countries should be taken seriously. Even if one country successfully contains such a system and harnesses the opportunities for social good, others may not. Given the potential risks, violating AI-related norms and developing unaligned superintelligence should justify violence and war. The United States and the global community have historically been willing to support extreme measures to enforce behavior and norms concerning less risky developments. In August 2013, former President Obama (in)famously drew a red line on Syrias use of chemical weapons, noting the Assad regimes use would lead him to use military force in Syria. Although Obama later demurred, favoring a diplomatic solution, in 2018 former President Trump later carried out airstrikes in response to additional chemical weapons usage. Likewise, in Operation Orchard in 2007, the Israeli Air Force attacked the Syrian Deir ez-Zor site, a suspected nuclear facility aimed at building a nuclear weapons program.

Advanced artificial intelligence poses significant risks to the long-term health and survival of humanity. However, its unclear when, how, or where those risks will manifest. The Trinity Test of the worlds first nuclear bomb took place almost 80 years ago, and humanity has yet to contain the existential risk of nuclear weapons. It would be wise to think of the current progress in AI as our Trinity Test moment. Even if superintelligence takes a century to emerge, 100 years to consider the risks and prepare might still not be enough.

Thanks to Mark Gubrud for providing thoughtful comments on the article.

Link:

Policy makers should plan for superintelligent AI, even if it never happens - Bulletin of the Atomic Scientists

Posted in Artificial Super Intelligence | Comments Off on Policy makers should plan for superintelligent AI, even if it never happens – Bulletin of the Atomic Scientists

Most IT workers are still super suspicious of AI – TechRadar

A new study on IT professionals has revealed that feelings towards AI tools are more negative than they are positive.

Research from SolarWinds found less than half (44%) of IT professionals have a positive view of artificial intelligence, with even more (48%) calling for more stringent compliance and governance requirements.

Moreover, a quarter of the participants believe that AI could pose a threat to society itself, outside of the workplace.

Despite increasing adoption of the technology, figures from this study suggest that fewer than three in 10 (28%) IT professionals use AI in the workplace. The same number again are planning to adopt such tools in the near future, too.

SolarWinds Tech Evangelist Sascha Giese said: With such hype around the trend, it might seem surprising that so many IT professionals currently have a negative view of AI tools.

A separate study from Salesforce recently uncovered that only one in five (21%) companies have a clearly defined policy on AI. Nearly two in five (37%) failed to have any form of AI policy.

Giese added: Many IT organisations require an internal AI literacy campaign, to educate on specific use cases, the differences between subsets of AI, and to channel the productivity benefits wrought by AI into innovation.

SolarWinds doesnt go into any detail about the threat felt by IT professionals, however other studies have suggested that workers fear about their job security with the rise of tools designed to boost productivity and increase outcomes.

Giese concluded: Properly regulated AI deployments will benefit employees, customers, and the broader workforce.

Looking ahead, SolarWinds calls for more transparency over AI concerns and a more collaborative approach and open discussion at all levels of an organization.

The rest is here:

Most IT workers are still super suspicious of AI - TechRadar

Posted in Artificial Super Intelligence | Comments Off on Most IT workers are still super suspicious of AI – TechRadar

Assessing the Promise of AI in Oncology: A Diverse Editorial Board – OncLive

In this fourth episode of OncChats: Assessing the Promise of AI in Oncology, Toufic A. Kachaamy, MD, of City of Hope, and Douglas Flora, MD, LSSBB, FACCC, of St. Elizabeth Healthcare, explain the importance of having a diverse editorial board behind a new journal on artificial intelligence (AI) in precision oncology.

Kachaamy: This is fascinating. I noticed you have a more diverse than usual editorial board. You have founders, [those with] PhDs, and chief executive officers, and Im interested in knowing how you envision these folks interacting. [Will they be] speaking a common language, even though their fields are very diverse? Do you foresee any challenges there? Excitement? How would you describe that?

Flora: Its a great question. Im glad you noticed that, because [that is what] most of my work for the past 6 to 8 weeks as the editor-in-chief of this journal [has focused on]. I really believe in diversity of thought and experience, so this was a conscious decision. We have dozens of heavy academics [plus] 650 to 850 peer-reviewed articles that are heavy on scientific rigor and methodologies, and they are going to help us maintain our commitment to making this be really serious science. However, a lot of the advent of these technologies is happening faster in industry right now, and most of these leaders that Ive invited to be on our editorial board are founders or PhDs in bioinformatics or computer science and are going to help us make sure that the things that are being posited, the articles that are being submitted, are technically correct, and that the methodologies and the training of these deep-learning modules and natural language recognition software are as good as they purport to be; and so, you need both.

I guess I would say, further, many of the leaders in these companies that weve invited were serious academics for decades before they went off and [joined industry], and many of them still hold academic appointments. So, even though they are maybe the chief technical officer for an industry company, theyre still professors of medicine at Thomas Jefferson, or Stanford, or [other academic institutions]. Ultimately, I think that these insights can help us better understand [AI] from [all] sidesthe physicians in the field, the computer engineers or computer programmers, and industry [and their goals,] which is [also] to get these tools in our hands. I thought putting these groups in 1 room would be useful for us to get the most diverse and holistic approach to these data that we can.

Kachaamy: I am a big believer in what youre doing. Gone are the days when industry, academicians, and users are not working together anymore. Everyone has the same mission, and working together is going to get us the best product faster [so we can better] serve the patient. What youre creating is what I consider [to be] super intelligence. By having different disciplines weigh in on 1 topic, youre getting intelligence that no individual would have [on their own]. Its more than just artificial intelligence; its super intelligence, which is what we mimic in multidisciplinary cancer care. When you have 5 specialists weighing in, youre getting the intelligence of 5 specialists to come up with 1 answer. I want to commend you on the giant project that youre [leading]; its very, very needed at this pointespecially in this fast-moving technology and information world.

Check back on Monday for the next episode in the series.

Read the original here:

Assessing the Promise of AI in Oncology: A Diverse Editorial Board - OncLive

Posted in Artificial Super Intelligence | Comments Off on Assessing the Promise of AI in Oncology: A Diverse Editorial Board – OncLive

Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center

Are AI and biological research harbingers of certain doom or awesome opportunities?

Contrary to the reigning assumption that artificial intelligence (AI) will super-empower the risks of misuse of biotech to create pathogens and bioterrorism, AI holds the promise of advancing biological research, and biotechnology can power the next wave of AI to greatly benefit humanity. Worries about the misuse of biotech are especially prevalent, recently prompting the Biden administration to publish guidelines for biotech research, in part to calm growing fears.

The doomsday assumption that AI will inevitably create new, malign pathogens and fuel bioterrorism misses three key points. First, the data must be out there for an AI to use it. AI systems are only as good as the data they are trained upon. For an AI to be trained on biological data, that data must first exist which means it is available for humans to use with or without AI. Moreover, attempts at solutions that limit access to data overlook the fact that biological data can be discovered by researchers and shared via encrypted form absent the eyes or controls of a government. No solution attempting to address the use of biological research to develop harmful pathogens or bioweapons can rest on attempts to control either access to data or AI because the data will be discovered and will be known by human experts regardless of whether any AI is being trained on the data.

Second, governments stop bad actors from using biotech for bad purposes by focusing on the actors precursor behaviors to develop a bioweapon; fortunately, those same techniques work perfectly well here, too. To mitigate the risks that bad actors be they human or humans and machines combined will misuse AI and biotech, indicators and warnings need to be developed. When advances in technology, specifically steam engines, concurrently resulted in a new type of crime, namely train robberies, the solution was not to forego either steam engines or their use in conveying cash and precious cargo. Rather, the solution was to employ other improvements, to later include certain types of safes that were harder to crack and subsequently, dye packs to cover the hands and clothes of robbers. Similar innovations in early warning and detection are needed today in the realm of AI and biotech, including developing methods to warn about reagents and activities, as well as creative means to warn when biological research for negative ends is occurring.

This second point is particularly key given the recent Executive Order (EO) released on 30 October 2023 prompting U.S. agencies and departments that fund life-science projects to establish strong, new standards for biological synthesis screening as a condition of federal funding . . . [to] manage risks potentially made worse by AI. Often the safeguards to ensure any potential dual-use biological research is not misused involve monitoring the real world to provide indicators and early warnings of potential ill-intended uses. Such an effort should involve monitoring for early indicators of potential ill-intended uses the way governments employ monitoring to stop bad actors from misusing any dual-purpose scientific endeavor. Although the recent EO is not meant to constrain research, any attempted solutions limiting access to data miss the fact that biological data can already be discovered and shared via encrypted forms beyond government control. The same techniques used today to detect malevolent intentions will work whether large language models (LLMs) and other forms of Generative AI have been used or not.

Third, given how wrong LLMs and other Generative AI systems often are, as well as the risks of generating AI hallucinations, any would-be AI intended to provide advice on biotech will have to be checked by a human expert. Just because an AI can generate possible suggestions and formulations perhaps even suggest novel formulations of new pathogens or biological materials it does not mean that what the AI has suggested has any grounding in actual science or will do biochemically what the AI suggests the designed material could do. Again, AI by itself does not replace the need for human knowledge to verify whatever advice, guidance, or instructions are given regarding biological development is accurate.

Moreover, AI does not supplant the role of various real-world patterns and indicators to tip off law enforcement regarding potential bad actors engaging in biological techniques for nefarious purposes. Even before advances in AI, the need to globally monitor for signs of potential biothreats, be they human-produced or natural, existed. Today with AI, the need to do this in ways that still preserve privacy while protecting societies is further underscored.

Knowledge of how to do something is not synonymous with the expertise in and experience in doing that thing: Experimentation and additional review. AIs by themselves can convey information that might foster new knowledge, but they cannot convey expertise without months of a human actor doing silica (computer) or in situ (original place) experiments or simulations. Moreover, for governments wanting to stop malicious AI with potential bioweapon-generating information, the solution can include introducing uncertainty in the reliability of an AI systems outputs. Data poisoning of AIs by either accidental or intentional means represents a real risk for any type of system. This is where AI and biotech can reap the biggest benefit. Specifically, AI and biotech can identify indicators and warnings to detect risky pathogens, as well as to spot vulnerabilities in global food production and climate-change-related disruptions to make global interconnected systems more resilient and sustainable. Such an approach would not require massive intergovernmental collaboration before researchers could get started; privacy-preserving approaches using economic data, aggregate (and anonymized) supply-chain data, and even general observations from space would be sufficient to begin today.

Setting aside potential concerns regarding AI being used for ill-intended purposes, the intersection of biology and data science is an underappreciated aspect of the last two decades. At least two COVID-19 vaccinations were designed in a computer and were then printed nucleotides via an mRNA printer. Had this technology not been possible, it might have taken an additional two or three years for the same vaccines to be developed. Even more amazing, nuclide printers presently cost only $500,000 and will presumably become less expensive and more robust in their capabilities in the years ahead.

AI can benefit biological research and biotechnology, provided that the right training is used for AI models. To avoid downside risks, it is imperative that new, collective approaches to data curation and training for AI models of biological systems be made in the next few years.

As noted earlier, much attention has been placed on both AI and advancements in biological research; some of these advancements are based on scientific rigor and backing; others are driven more by emotional excitement or fear. When setting a solid foundation for a future based on values and principles that support and safeguard all people and the planet, neither science nor emotions alone can be the guide. Instead, considering how projects involving biology and AI can build and maintain trust despite the challenges of both intentional disinformation and accidental misinformation can illuminate a positive path forward.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues.

Specifically, in the last few years, attention has been placed on the risk of an AI system training novice individuals how to create biological pathogens. Yet this attention misses the fact that such a system is only as good as the data sets provided to train it; the risk already existed with such data being present on the internet or via some other medium. Moreover, an individual cannot gain from an AI the necessary experience and expertise to do whatever the information provided suggests such experience only comes from repeat coursework in a real-world setting. Repeat work would require access to chemical and biological reagents, which could alert law enforcement authorities. Such work would also yield other signatures of preparatory activities in the real world.

Others have raised the risk of an AI system learning from biological data and helping to design more lethal pathogens or threats to human life. The sheer complexity of different layers of biological interaction, combined with the risk of certain types of generative AI to produce hallucinated or inaccurate answers as this article details in its concluding section makes this not as big of a risk as it might initially seem. Specifically, the risks from expert human actors working together across disciplines in a concerted fashion represent a much more significant risk than a risk from AI, and human actors working for ill-intended purposes together (potentially with machines) presumably will present signatures of their attempted activities. Nevertheless, these concerns and the mix of both hype and fear surrounding them underscore why communities should care about how AI can benefit biological research.

The merger of data and bioscience is one of the most dynamic and consequential elements of the current tech revolution. A human organization, with the right goals and incentives, can accomplish amazing outcomes ethically, as can an AI. Similarly, with either the wrong goals or wrong incentives, an organization or AI can appear to act and behave unethically. To address the looming impacts of climate change and the challenges of food security, sustainability, and availability, both AI and biological research will need to be employed. For example, significant amounts of nitrogen have already been lost from the soil in several parts of the world, resulting in reduced agricultural yields. In parallel, methane gas is a pollutant that is between 22 and 40 times worse depending on the scale of time considered than carbon dioxide in terms of its contribution to the Greenhouse Effect impacting the planet. Bacteria generated through computational means can be developed through natural processes that use methane as a source of energy, thus consuming and removing it from contributing to the Greenhouse Effect, while simultaneously returning nitrogen from the air to the soil, thereby making the soil more productive in producing large agricultural yields.

The concerns regarding the potential for AI and biology to be used for ill-intended purposes should not overshadow the present conversations about using technologies to address important regional and global issues. To foster global activities to help both encourage the productive use of these technologies for meaningful human efforts and ensure ethical applications of the technologies in parallel an existing group, namely the international Genetically Engineered Machine (iGEM) competition, should be expanded. Specifically, iGEM represents a global academic competition, which started in 2004, aimed at improving understanding of synthetic biology while also developing an open community and collaboration among groups. In recent years, over 6,000 students in 353 teams from 48 countries have participated. Expanding iGEM to include a track associated with categorizing and monitoring the use of synthetic biology for good as well as working with national governments on ensuring that such technologies are not used for ill-intended purposes would represent two great ways to move forward.

As for AI in general, when considering governance of AIs, especially for future biological research and biotechnology efforts, decisionmakers would do well to consider both existing and needed incentives and disincentives for human organizations in parallel. It might be that the original Turing Test designed by computer science pioneer Alan Turing intended to test whether a computer system is behaving intelligently, is not the best test to consider when gauging local, community, and global trust. Specifically, the original test involved Computer A and Person B, with B attempting to convince an interrogator, Person C, that they were human, and that A was not. Meanwhile, Computer A was trying to convince Person C that they were human.

Consider the current state of some AI systems, where the benevolence of the machine is indeterminate, competence is questionable because some AI systems are not fact-checking and can provide misinformation with apparent confidence and eloquence, and integrity is absent. Some AI systems can change their stance if a user prompts them to do so.

However, these crucial questions regarding the antecedents of trust should not fall upon these digital innovations alone these systems are designed and trained by humans. Moreover, AI models will improve in the future if developers focus on enhancing their ability to demonstrate benevolence, competence, and integrity to all. Most importantly, consider the other obscured boxes present in human societies, such as decision-making in organizations, community associations, governments, oversight boards, and professional settings such as decision-making in organizations, community associations, governments, oversight boards, and professional settings. These human activities also will benefit by enhancing their ability to demonstrate benevolence, competence, and integrity to all in ways akin to what we need to do for AI systems as well.

Ultimately, to advance biological research and biotechnology and AI, private and public-sector efforts need to take actions that remedy the perceptions of benevolence, competence, and integrity (i.e., trust) simultaneously.

David Bray is Co-Chair of the Loomis Innovation Council and a Distinguished Fellow at the Stimson Center.

See the article here:

Artificial Intelligence and Synthetic Biology Are Not Harbingers of ... - Stimson Center

Posted in Artificial Super Intelligence | Comments Off on Artificial Intelligence and Synthetic Biology Are Not Harbingers of … – Stimson Center