

Category Archives: Artificial General Intelligence

How to win the artificial general intelligence race and not end … – The Strategist

In 2016, I witnessed DeepMind's artificial-intelligence model AlphaGo defeat Go champion Lee Sedol in Seoul. That event was a milestone, demonstrating that an AI model could beat one of the world's greatest Go players, a feat that was thought to be impossible. Not only was the model making clever strategic moves but, at times, those moves were beautiful in a very deep and humanlike way.

Other scientists and world leaders took note and, seven years later, the race to control AI and its governance is on. Over the past month, US President Joe Biden has issued an executive order on AI safety, the G7 announced the Hiroshima AI Process and 28 countries signed the Bletchley Declaration at the UK's AI Safety Summit. Even the Chinese Communist Party is seeking to carve out its own leadership role with the Global AI Governance Initiative.

These developments indicate that governments are starting to take the potential benefits and risks of AI equally seriously. But as the security implications of AI become clearer, it's vital that democracies outcompete authoritarian political systems to ensure future AI models reflect democratic values and are not concentrated in institutions beholden to the whims of dictators. At the same time, countries must proceed cautiously, with adequate guardrails, and shut down unsafe AI projects when necessary.

Whether AI models will outperform humans in the near future and pose existential risks is a contentious question. For some researchers who have studied these technologies for decades, the performance of AI models like AlphaGo and ChatGPT is evidence that the general foundations for human-level AI have been achieved and that an AI system that's more intelligent than humans across a range of tasks will likely be deployed within our lifetimes. Such systems are known as artificial general intelligence (AGI), artificial superintelligence or general AI.

For example, most AI models now use neural networks, an old machine-learning technique created in the 1940s that was inspired by the biological neural networks of animal brains. The abilities of modern neural networks like AlphaGo weren't fully appreciated until computer chips used mostly for gaming and video rendering, known as graphics processing units, became powerful enough in the 21st century to process the computations needed for specific human-level tasks.

The next step towards AGI was the arrival of large-language models, such as OpenAI's GPT-4, which are created using a version of neural networks known as transformers. OpenAI's previous version of its chatbot, GPT-3, surprised everyone in 2020 by generating text that was indistinguishable from that written by people and performing a range of language-based tasks with few or no examples. GPT-4, the latest model, has demonstrated human-level reasoning capabilities and outperformed human test-takers on the US bar exam, a notoriously difficult test for lawyers. Future iterations are expected to have the ability to understand, learn and apply knowledge at a level equal to, or beyond, humans across all useful tasks.

AGI would be the most disruptive technology humanity has created. An AI system that can automate human analytical thinking, creativity and communication at a large scale and generate insights, content and reports from huge datasets would bring about enormous social and economic change. It would be our generation's Oppenheimer moment, only with strategic impacts beyond just military and security applications. The first country to successfully deploy it would have significant advantages in every scientific and economic activity across almost all industries. For those reasons, long-term geopolitical competition between liberal democracies and authoritarian countries is fuelling an arms race to develop and control AGI.

At the core of this race is ideological competition, which pushes governments to support the development of AGI in their country first, since the technology will likely reflect the values of the inventor and set the standards for future applications. This raises important questions about what world views we want AGIs to express. Should an AGI value freedom of political expression above social stability? Or should it align itself with a rule-by-law or rule-of-law society? With our current methods, researchers don't even know if it's possible to predetermine those values in AGI systems before they're created.

It's promising that universities, corporations and civil research groups in democracies are leading the development of AGI so far. Companies like OpenAI, Anthropic and DeepMind are household names and have been working closely with the US government to consider a range of AI safety policies. But startups, large corporations and research teams developing AGI in China, under the authoritarian rule of the CCP, are quickly catching up and pose significant competition. China certainly has the talent, the resources and the intent but faces additional regulatory hurdles and a lack of high-quality, open-source Chinese-language datasets. In addition, large-language models threaten the CCP's monopoly on domestic information control by offering alternative worldviews to state propaganda.

Nonetheless, we shouldn't underestimate the capacity of Chinese entrepreneurs to innovate under difficult regulatory conditions. If a research team in China, subject to the CCP's National Intelligence Law, were to develop and tame AGI or near-AGI capabilities first, it would further entrench the party's power to repress its domestic population and ability to interfere with the sovereignty of other countries. China's state security system or the People's Liberation Army could deploy it to supercharge their cyberespionage operations or automate the discovery of zero-day vulnerabilities. The Chinese government could embed it as a superhuman adviser in its bureaucracies to make better operational, military, economic or foreign-policy decisions and propaganda. Chinese companies could sell their AGI services to foreign government departments and companies with back doors into their systems, or covertly suppress content and topics abroad at the direction of Chinese security services.

At the same time, an unfettered AGI arms race between democratic and authoritarian systems could exacerbate various existential risks, either by enabling future malign use by state and non-state actors or through poor alignment of the AI's own objectives. AGI could, for instance, lower the impediments for savvy malicious actors to develop bioweapons or supercharge disinformation and influence operations. An AGI could itself become destructive if it pursues poorly described goals or takes shortcuts such as deceiving humans to achieve goals more efficiently.

When Meta trained Cicero to play the board game Diplomacy honestly by generating only messages that reflected its intention in each interaction, analysts noted that it could still withhold information about its true intentions or not inform other players when its intentions changed. These are serious considerations with immediate risks and have led many AI experts and people who study existential risk to call for a pause on advanced AI research. But policymakers worldwide are unlikely to stop given the strong incentives to be a first mover.

This all may sound futuristic, but it's not as far away as you might think. In a 2022 survey, 352 AI experts put a 50% chance on human-level machine intelligence arriving within 37 years, that is, by 2059. The forecasting community on the crowd-sourced platform Metaculus, which has a robust track record of AI-related forecasts, is even more confident of the imminent development of AGI. The aggregation of more than 1,000 forecasters suggests 2032 as the likely year general AI systems will be devised, tested and publicly announced. But that's just the current estimate: experts and the amateurs on Metaculus alike have shortened their timelines each year as new AI breakthroughs are publicly announced.

That means democracies have a lead time of between 10 and 40 years to prepare for the development of AGI. The key challenge will be how to prevent AI existential risks while innovating faster than authoritarian political systems.

First, policymakers in democracies must attract global AI talent, including from China and Russia, to help align AGI models with democratic values. Talent is also needed within government policymaking departments and think tanks to assess AGI implications and build the bureaucratic capacity to rapidly adapt to future developments.

Second, governments should be proactively monitoring all AGI research and development activity and should pass legislation that allows regulators to shut down or pause exceptionally risky projects. We should remember that Beijing has more to worry about with regard to AI alignment because the CCP is too worried about its own political safety to relax its strict rules on AI development.

We therefore shouldn't see government involvement only in terms of its potential to slow us down. At a minimum, all countries, including the US and China, should be transparent about their AGI research and advances. That should include publicly disclosing their funding for AGI research and safety policies and identifying their leading AGI developers.

Third, liberal democracies must collectively maintain as large a lead as possible in AI development and further restrict access to high-end technology, intellectual property, strategic datasets and foreign investments in China's AI and national-security industries. Impeding the CCP's AI development in its military, security and intelligence industries is also morally justifiable in preventing human rights violations.

For example, Midu, an AI company based in Shanghai that supports China's propaganda and public-security work, recently announced the use of large-language models to automate reporting on public opinion analysis to support surveillance of online users. While China's access to advanced US technologies and investment has been restricted, other like-minded countries such as Australia should implement similar controls on outbound investment into China's AI and national-security industries.

Finally, governments should create incentives for the market to develop safe AGI and solve the alignment problem. Technical research on AI capabilities is outpacing technical research on AI alignment, and companies are failing to put their money where their mouth is. Governments should create prizes for research teams or individuals to solve difficult AI alignment problems. One potential model is the Clay Institute's Millennium Prize Problems, which offer awards for solutions to some of the world's most difficult mathematics problems.

Australia is an attractive destination for global talent and is already home to many AI safety researchers. The Australian government should capitalise on this advantage to become an international hub for AI safety and alignment research. The Department of Industry, Science and Resources should set up the world's first AGI prize fund with at least $100 million to be awarded to the first global research team to align AGI safely.

The National Artificial Intelligence Centre should oversee a board that manages this fund and works with the research community to create a list of conditions and review mechanisms for awarding the prize. With $100 million, the board could adopt an investment mandate similar to that of Australia's Future Fund, targeting an average annual return of at least the consumer price index plus 4-5% per annum over the long term. Instead of being reinvested into the fund, the 4-5% earned each year on top of CPI should be used for smaller awards recognising incremental achievements in AI research each year. These awards could also be used to fund AI PhD scholarships or attract AI postdocs to Australia. Other awards could be given to research, including research conducted outside Australia, at annual award ceremonies that, like the Nobel Prize, bring together global experts on AI to share knowledge and progress.
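To make the fund arithmetic concrete, here is a minimal sketch of the mechanics described above. The CPI figure and the exact real-return rate are illustrative assumptions (not figures from the article): a 2.5% CPI and a 4.5% real return, the midpoint of the CPI plus 4-5% mandate.

```python
# Illustrative sketch of the proposed AGI prize fund mechanics.
# Assumptions (not from the article): CPI of 2.5% and a real return of 4.5%.

FUND_SIZE = 100_000_000  # $100 million principal
CPI = 0.025              # assumed annual consumer price index
REAL_RETURN = 0.045      # assumed return earned on top of CPI

def annual_award_pool(fund_size: float, cpi: float, real_return: float) -> float:
    """Return the portion of one year's earnings available for awards.

    The CPI component is reinvested to preserve the fund's real value;
    only the return earned above CPI is paid out as prizes.
    """
    total_earnings = fund_size * (cpi + real_return)
    reinvested = fund_size * cpi
    return total_earnings - reinvested

if __name__ == "__main__":
    pool = annual_award_pool(FUND_SIZE, CPI, REAL_RETURN)
    print(f"Annual award pool: ${pool:,.0f}")  # ~$4,500,000 under these assumptions
```

Under these assumed rates, the fund would throw off roughly $4.5 million a year for incremental awards, scholarships and postdoctoral funding while keeping the principal intact in real terms.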

A $100 million fund may seem like a lot for AI research but, as a comparison, Microsoft is rumoured to have invested US$10 billion in OpenAI this year alone. And $100 million pales in comparison to the contribution safely aligned AGI would make to the national economy.

The stakes are high for getting AGI right. If properly aligned and developed, it could bring an epoch of unimaginable human prosperity and enlightenment. But AGI projects pursued recklessly could pose real risks of creating dangerous superhuman AI systems or bringing about global catastrophes. Democracies must not cede leadership of AGI development to authoritarian systems, but nor should they rush to secure a Pyrrhic victory by going ahead with models that fail to embed respect for human rights, liberal values and basic safety.

This tricky balance between innovation and safety is the reason policymakers, intelligence agencies, industry, civil society and researchers must work together to shape the future of AGIs and cooperate with the global community to navigate an uncertain period of elevated human-extinction risks.


Artificial intelligence: the world is waking up to the risks – InCyber

All these documents refer to the risks linked to Artificial General Intelligence (AGI), which is level 2 of AI. Today's artificial intelligence, including generative AI systems like ChatGPT, falls within Artificial Narrow Intelligence (ANI), which is level 1. This artificial intelligence can do a single activity as well as a human, perhaps even better.

AGI and its level 3 successor, Artificial Super Intelligence (ASI), are AIs that can accomplish all informational activities to a quality level that equals or exceeds what humans can produce. Currently, the expert consensus is that AGI could arrive between 2030 and 2040. Tomorrow, basically.

These documents point to major risks for humanity, but are they right to warn us of these dangers? The answer is clearly yes. I urge you to read all five documents, but if you were to read just one, it would be the one by this group of 30 experts.

This excerpt gives the general tone of the document: "AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity." It coolly suggests the extinction of mankind! The three ensuing documents mostly resemble each other. They are very general declarations of intent, full of goodwill but with little real impact.

They were published by the United Nations, the G7 as well as the Bletchley Summit, an international meeting organized by the United Kingdom that was held on November 1 and 2, 2023.

No one will argue against the ideas expressed in the Bletchley Declaration, signed by 28 countries with widely divergent interests, including the United States, China, India, Israel, Saudi Arabia and the European Union: the recognition of the need to take account of human rights protection, transparency and explicability, fairness, accountability, regulation, security, appropriate human oversight, ethics, bias mitigation, privacy and data protection.

The fifth document is different: it is an executive order signed by Joe Biden on October 30, 2023. In 60 pages, the US president lists a hundred specific actions to be taken and, for each, the executive order names the public authorities in charge of carrying them out. Furthermore, the timetable is tight, with most of these actions being given between 45 and 365 days to be completed. It is far from a catalogue of good intentions: it demonstrates the United States' clear desire to do everything it can to maintain its global leadership in AI.

The European Commission has been working on AI since 2020. In June 2023, it published a document, EU Legislation in Progress, detailing work on a European Artificial Intelligence Act (AIA) to follow the Digital Services Act and the Digital Markets Act. The AIA must now be submitted to the Member States, which can make changes before its final approval. No one knows how long this could take.

To summarize, can we imagine what the future might hold for collaboration between humankind and AGI and ASI? If we are to believe Rich Sutton, professor at the University of Alberta in Canada and a recognized specialist in artificial intelligence, humanity must inevitably prepare to hand over the reins to AI, as this illustration from one of his recent lectures shows.

My recommendation: the challenges posed by the rapid arrival of AGIs and ASIs are among the questions that require quick reflection from directors of all organizations, public and private.

Furthermore, the best AI specialists are often asked: what is humanity's future in a world where AI performs better than humans? The common answer? "I don't know." But that is no reason not to think about it, all together, and very quickly.


How the AI Executive Order and OMB memo introduce … – Brookings Institution

President Biden recently signed the Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. With sections on privacy, content verification, and immigration of tech workers (to name just a few areas), the executive order is sweeping. Encouragingly, it introduces key guardrails for the use of AI and takes important steps to protect people's rights. It is also inherently limited: Unlike acts of Congress, executive actions cannot create new agencies or grant new regulatory powers over private companies. (They can also be undone by the next president.) The EO was followed two days later by a draft memorandum, now open for public comment, from the Office of Management and Budget (OMB) with additional guidance for the federal government to manage risks and mandate accountability while advancing innovation in AI. Taken together, these two government directives offer one of the most detailed pictures of how governments should establish rules and guidance around AI.

Notably, these actions towards accountability focus on current harms and not existential risk, and thus can serve as useful guides to policymakers focused on the everyday concerns of their constituents. Beyond executive action, with its inherent limits, the next step will be for other policymakers, from Congress to the states, to use these documents as a guide for future action in requiring accountability in the use of AI.

As we analyze the EO and the OMB memo alongside each other for accountability directions, here is what stands out:

Impact on government use of AI

The executive order (in Section 10.1(b)) gives explicit guidance to federal agencies for using AI in ways that protect safety and rights. The section outlines contents of the draft OMB memo released for public comment two days after the EO. In what may become a model for AI governance from localities, to states, to international governing agreements, the OMB memo, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, requires specific AI guardrails.

Critically, the memo includes definitions of safety- and rights-impacting AI, as well as lists of systems presumed to be safety- and rights-impacting. This approach builds on work done over the past decade to document the harms of algorithmic systems in mediating critical services and impacting people's vital opportunities. By taking this presumptive approach, rather than requiring agencies to start from scratch with risk assessments on every system, the OMB memo also reduces the administrative burden on agencies and allows decision-makers to move directly to instituting appropriate guardrails and accountability practices. Systems can also be added to or removed from the list based on a conducted risk assessment.

Once an AI system is identified as safety- or rights-impacting, the draft OMB memo specifies a minimum set of practices that must be in place before and during its use. As required by the executive order, these practices build on those identified in the Blueprint for an AI Bill of Rights. This detailed section of the memo leads off with impact assessments and lists three key areas that agencies must assess before a system is put into use: intended purpose and expected benefit; potential risks to a broad range of stakeholder groups; and quality and appropriateness of the data the AI model is built from. Should the assessing agency conclude that the system's benefits do not meaningfully outweigh the risks, agencies should not use the AI. The memo also directs agencies to assess, through this process, whether the AI system is fit for the task at hand; this is a critical effort to make sure AI actually works, when many times it has been shown not to, and to assess whether AI is the right solution to the given problem, countering the tendency to assume it is.
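As a rough illustration of that assessment logic, the sketch below encodes the three assessment areas and the "benefits must meaningfully outweigh risks" rule as a simple pre-deployment check. The field names, scoring scale and threshold are hypothetical simplifications for illustration, not the OMB memo's actual schema.

```python
# Hypothetical sketch of the pre-deployment check the draft memo describes.
# Field names, scoring scale and decision rule are illustrative, not the memo's schema.
from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    intended_purpose: str      # intended purpose and expected benefit (narrative)
    expected_benefit: float    # agency-assigned benefit score (hypothetical scale)
    assessed_risk: float       # risk to affected stakeholder groups (same scale)
    data_appropriate: bool     # is the underlying data of adequate quality and fit?
    fit_for_task: bool         # does the AI actually work for this problem?

def may_deploy(a: ImpactAssessment, margin: float = 1.0) -> bool:
    """Deploy only if benefits meaningfully outweigh risks and the system
    is built on appropriate data and is fit for the task at hand."""
    return (
        a.data_appropriate
        and a.fit_for_task
        and (a.expected_benefit - a.assessed_risk) > margin
    )

# Example: a hypothetical error-flagging tool that narrowly clears the bar.
assessment = ImpactAssessment(
    intended_purpose="Flag likely errors in benefits applications for human review",
    expected_benefit=4.0,
    assessed_risk=2.5,
    data_appropriate=True,
    fit_for_task=True,
)
print(may_deploy(assessment))  # True under these illustrative numbers
```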

The OMB memo goes on to require a range of accountability processes, including human fallback, the mitigation of new or emerging risks to rights and safety, ongoing assessment throughout a system's lifecycle, assessment for bias, and consultation and feedback from affected groups. Taken together, if carried through to the final version of the memo, these requirements create a remarkable step forward in establishing an accountability ecosystem: not one point of intervention, but many methodologies and practices that, working together over time and at multiple stages in an AI lifecycle, could represent meaningful controls.

Importantly, the OMB memo requires agencies to stop using an AI system if these practices are not in place. The minimum practices additionally include instructions to reconsider use of a system if concerning outcomes, such as discrimination, are found through testing.

Public accountability will be challenging, given the breadth and complexity of these practices. One key accountability mechanism used will be annual reporting, as part of an expanded AI use case inventory. However, the details of what will be reported were not included as part of the memorandum and will be determined later by OMB. Journalists and researchers have identified problems with the previous practices of the AI use case inventory, including both that agencies left known AI uses off their inventory and that the reporting requirements were minimal and did not include testing and bias assessment results. Looking forward, effectiveness of the AI use case inventory as an accountability mechanism will depend on whether existing loopholes and under-reporting concerns are addressed through the OMB process to come. It's also important to consider that the effectiveness of transparency reporting on AI systems as an accountability mechanism has also been more broadly challenged.

Throughout the guidance, OMB refers to requirements for government use of AI. This phrase, importantly, covers both AI that is developed and then used by the federal government, and AI that is procured by the government. By using the power of the government's purse, the guidance also has the potential to influence the private sector. OMB also commits to developing further guidance for AI contracts that aligns with what it has laid out so far in this draft memo. That current guidance is rigorous; if those same provisions are successfully required for government purchasing of AI, it will significantly shape how government AI vendors build and test their products.

Impact on the private sector

The president only has so many levers to pull through an executive order to regulate private industry. Because the EO cannot make new laws, it relies on existing agency and presidential authorities (and the development of procurement rules described above) to influence how private companies are developing and deploying AI systems. Within that scope, the regulatory impact of the EO on the private sector could still be far-reaching.

The EO directs agencies with enforcement powers to deepen their understanding of their capacities in the context of AI, to coordinate, and to develop guidance and potentially additional regulations to protect civil rights and civil liberties in the broader marketplace, as well as to protect consumers from fraud, discrimination, and other risks, including risks to financial stability, and specifically to protect privacy. Sections 7 through 9 address various aspects of this, starting by directing the attorney general to assemble the heads of federal civil rights offices, including those of enforcement agencies, to determine how to apply and potentially expand the reach of civil rights law across the government to address existing harms.

Additionally, the President calls on Congress to pass federal data privacy protections, and then through the EO's Section 9 directs agencies to do what they can to protect people's data privacy without Congressional action. The section opener calls out not only AI's facilitation of the collection or use of information about individuals, but also specifically the making of inferences about individuals. This could open up a broader approach to assessing privacy violations, along the lines of networked privacy and associated harms, which considers not only individual personal identifiable information but the inferences that can be drawn by looking at connected data about an individual, or relationships between individuals.

The EO directs agencies to revisit the guidelines for privacy impact assessments in the context of AI, as well as to assess and potentially issue guidelines on the use of privacy-enhancing technologies (PETs), such as differential privacy. Though brief, the EO's privacy section pushes to expand the understanding of data privacy and the remedies that might be taken to address novel and emerging harms. As those ideas move through government, they will inevitably inform potential data protection and privacy laws at the federal and (more likely) state level that will govern private industry.

It's not surprising that generative AI was given prominent treatment in the executive order: systems like ChatGPT that can generate text in response to prompts, and other systems that can generate images, video, or audio, have catapulted concerns about AI into the public consciousness. Concerns have ranged from the technology's potential to replace skilled writers, to its reinforcement of degrading stereotypes, to the overblown notion that it will end humanity as we know it. Yet these systems are largely created by the private sector, and without new legislation the White House has limited levers to require these companies to act responsibly. There is an unfolding, live debate about whether to treat generative AI systems differently than other AI systems. The EO's authors chose to differentiate generative AI in Section 4 and have drawn criticism for that decision; a better approach may have been the one taken in the OMB memo, where the same protections are required for generative AI as for other AI and the focus is on the potential harms of the system.

To govern generative AI systems, the executive order invokes the Defense Production Act. Introduced during the Korean War and also used for production of masks and ventilators during the COVID pandemic, the Defense Production Act gives the president the authority to expedite and expand industrial production in order to promote national defense. The executive order (in Section 4.2(i)) uses it to require private companies to preemptively test their models for specific safety concerns; it also specifies red-teaming as the testing methodology. Red-teaming is a practice of having a team external to the development of a system (but potentially still within the company) stress-test the system for specific concerns. The executive order requires that companies perform red-teaming in line with guidance from NIST that will be developed per Section 4.1(ii). Companies must report the resulting documentation of safety testing practices and results to the federal government.

This AI accountability model (preemptive testing according to specific standards and associated reporting requirements) is potentially useful. Unfortunately, the specifics in this case leave much to be desired. First, given the use of the Defense Production Act, the testing and reporting the EO requires are limited to concerns relating to national defense and the protection of critical infrastructure, including cybersecurity and bioweapons. Yet as public debate has shown, concerns about generative AI go well beyond these limited settings. Second, the specific definitions used in the executive order to determine which systems must adhere to these standards appear to have been copied wholesale from a policy document put forth by OpenAI and other authors. Its thresholds for model size have little substantive justification; this means that future technological developments may render them under-inclusive or otherwise ineffective in targeting the systems with the most potential for harm. Finally, the executive order positions AI red-teaming as the singular AI accountability mechanism to be used for generative AI, when AI red-teaming works best in combination with other accountability mechanisms. By contrast, the OMB guidance for AI use by the federal government, which will also be required for generative AI, requires multiple accountability mechanisms including algorithmic impact assessments and public consultation. The full landscape of AI accountability mechanisms should be applied to generative AI by private companies as well.

Consistent with the EO's broad approach, the order addresses AI's worker impacts in multiple ways. First, while research suggests a more complicated picture on technological automation and work, the EO sets out to support workers during an AI transition. To that end, the EO directs the chairman of the president's Council of Economic Advisers to prepare and submit a report to the president on the labor-market effects of AI. Section 6(a)(ii) mandates that the secretary of labor submit to the president a report analyzing how federal agencies may support workers displaced by the adoption of AI and other technological advancements.

Alongside the focus on AI displacement, the EO recognizes that automated decision systems are already in use in the workplace and directs attention to their ongoing impacts on job quality, worker power, and worker health and safety. The most encompassing directive lies in Section 6(b), which directs the secretary of labor, working with other agencies and outside entities, including labor unions and workers, to develop principles and best practices to mitigate harms to employees' well-being. The best practices must cover labor standards and job quality, and the EO further encourages federal agencies to adopt the guidelines in their internal programs.

Section 7.3 of the EO directs the labor department to publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems. Given the overwhelming evidence that algorithmic systems replicate and reinforce human biases, the broad language of "other technology-based hiring systems" is a major opportunity for the DOL to model standards of nondiscriminatory hiring.

While the EO's worker protections are only guidance and best practices, the OMB memo directly mandates protocols to support workers and their rights when agencies use AI. The memo applies the minimum risk management practices where AI is used to determine the terms and conditions of employment. This broad definition positions the federal government, as the nation's largest employer, to influence the use of AI systems within the workplace. The memo also requires that human remedies are in place in some cases, a requirement that may add jobs, adding complexity to concerns about the labor-market effects of AI. Further, the OMB memo's requirement that federal agencies consult and incorporate feedback from affected groups positions workers and unions to influence the deployment of AI technology, which aligns with calls from civil society and academia to ensure that the people most likely to be affected by a technology have influence over that system's design and deployment.

How will this all get done?

The narrative that the federal government is not knowledgeable about AI systems should be laid to rest by these recent documents. There was clearly a lot of thought put into the design and implementation of a national AI governance model. That said, it's also clear that many more people representing the right mix of expertise will be needed quickly to implement this ambitious plan on the tight timeline laid out in the order, and on the implicit deadline marked by the end of the Biden administration's first term. Given that the EO and the OMB memo collectively run to well over 100 pages of actions that the federal government should take to address AI, the question looms: who will do all this work?

A major new role addressed in both the EO and the OMB memo is that of the Chief AI Officer (CAIO), which every agency head is required to designate within 60 days of the EO's enactment. The CAIO's responsibilities are laid out in the OMB memo and fall into three categories: coordinating agency use of AI, promoting AI innovation, and managing risks from AI use. The way the CAIO role is understood and filled will be critical to what comes next; if agencies interpret the role as solely or primarily a technical one, rather than one focused on the societal opportunities and risks of public-interest uses of AI, they may pursue very different implementation priorities than those articulated by the EO. CAIOs are also responsible for agency-level AI strategies, which are due within one year of the EO's launch. The strategies seem likely to call for increased headcount and new expertise in government.

The EO has anticipated the need both for bringing new talent into the government and for building the skills and capacities of civil servants on AI matters. The federal government has long been criticized for its slow, difficult hiring processes, making it tremendously challenging for an administration to pivot attention to an emerging issue. This administration has tried to preempt this criticism through the announcement of an AI talent surge specified in Section 10.2 of the EO. That section gives OSTP and OMB just 45 days to figure out how to get the needed people into government, including through the establishment of a cross-agency AI and Technology Talent Task Force. The federal government has already started some of that recruitment push with the launch of a new AI jobs website.

What is potentially most challenging in recruiting AI talent is identifying the actual skills, capacities, and expertise needed to implement the EO's many angles. While there is a need, of course, for technological talent, much of what the EO calls for, particularly in the area of protecting rights and ensuring safety, requires interdisciplinary expertise. What the EO requires is the creation of new knowledge about how to govern, and indeed about what the role of government is, in an increasingly data-centric and AI-mediated environment. These are questions for teams with a sociotechnical lens, requiring expertise in a range of disciplines, including legal scholarship, the social and behavioral sciences, computer and data science, and often, specific field knowledge: health and human services, the criminal legal system, financial markets and consumer financial protection, and so on. Such skills will especially be key for the second pillar of the administration's talent surge: the growth in regulatory and enforcement capacity needed to keep watch over the powerful AI companies. It's also critical to ensure that these teams are built with attention to equity at the center. Given the broad empirical base that demonstrates the disproportionate harms of AI systems to historically marginalized groups, and the President's declared commitment to advancing racial equity across the federal government, equity in both hiring and as a focus of implementation must be a top priority of all aspects of EO implementation.

As broad as the EO is, there are critical areas of concern that have either been pushed off to later consideration or avoided. For instance, the EO includes a national security carveout, with direction to develop separate guidance in 270 days to address the governance of AI used as a component of a national security system or for military and intelligence purposes; many applications of AI could potentially fall within those criteria. The EO also doesn't take the opportunity to ban specific practices shown to be harmful or ineffective; an example where it could have taken further action is in banning the use of affective computing in law enforcement. The EO addresses the potential for AI to be valuable in climate science and the mitigation of climate change; however, it does nothing about AI's own environmental impact, missing an opportunity to force reporting on energy and water usage by companies creating some of the biggest AI systems. Lastly, the EO sets guidelines for the use of AI by federal agencies and contractors but does not attach any requirements or guidance for recipients of federal grants, such as cities and states.

Finally, the EO addresses research at a number of points throughout the document and references research on a range of topics and through many vehicles, including a National Science Foundation (NSF) Regional Innovation Engine and four NSF AI Research Institutes, to join the 25 already established. Yet the EO doesn't include major new commitments to research funding. A more robust approach to addressing AI research and education in the EO could have been a statement that reframed the national AI research and development field as sociotechnical, rather than purely technical, proactively focused on interdisciplinary approaches that center societal impacts of AI alongside technological advancement. Such a statement would have aligned meaningfully with Vice President Kamala Harris's November 1, 2023 speech at the UK AI Safety Summit, in which she argued for a future where AI is used to advance the public interest.

If the administration is indeed committed to seeing AI in the public interest, as Vice President Harris indicated, its new EO and OMB guidance are the clearest indication of how it intends to meet that ambition: mandating hard accountability to protect rights, regulating private industry, and moving iteratively, so that governance efforts advance alongside the field of sociotechnical research. But the executive branch can only do so much. Ultimately, the EO can be read, among other ways, as a roadmap for Congress to legislate. Additionally, cities, states, and other countries should understand these new documents as direction-setting and could choose to rapidly align their policies with these documents to create more comprehensive rights and safety protections.


Europe’s weaknesses, opportunities facing the AI revolution – EURACTIV

From the regulatory approach currently under discussion to the geopolitical risks of AI, Europe's challenges vis-à-vis Artificial Intelligence are many. The think tank network PromethEUs presented its paper on AI on Tuesday (14 November), focusing on the EU's AI Act, generative AI, and AI and businesses.

The network includes four Southern European think tanks: the Institute for Competitiveness from Italy, the Elcano Royal Institute from Spain, the Foundation for Economic and Industrial Research from Greece, and the Institute of Public Policy from Portugal.

For the presentation of its latest study, experts and stakeholders gathered in Brussels to discuss the possible road ahead for Europe's future competitiveness in this field.

The EU's AI Act is a flagship legislative proposal and the world's first attempt to regulate Artificial Intelligence through a risk-based approach.

"The definition of AI, as strange as it may sound, is still under discussion in the trilogue," said Steffen Hoernig, professor at Nova School of Business and Economics, adding that it is important to be able to decide which types of systems fall under the AI Act.

Euractiv understands that EU policymakers have been waiting for the Organisation for Economic Co-operation and Development (OECD) to update its definition of AI.

Hoernig said that discussions on the file are ongoing, such as under which risk category biometric AI belongs, or the establishment of an AI Board or an AI Office. National positions differ, especially on the latter, Hoernig noted.

He said a big issue is the question of foundation models and general-purpose AI, pointing out that ChatGPT was introduced after the proposal was drafted, so it is not covered in the text.

Last Friday, Euractiv reported that France and Germany, under pressure from their leading AI startups, were pushing against obligations for foundation models, leading to strong political frictions with MEPs, who want to regulate these models.

Hoernig believes that national interests in some countries are taking priority over the interests of the EU when it comes to the regulation, and that the question of how hyperscale AI systems should be defined remains open.

Stefano da Empoli, president of the Institute for Competitiveness, argued that, while generative AI systems like the chatbot ChatGPT may be the most visible to users, the term also refers to other tools.

The study focuses on Italy, Spain, Greece, and Portugal, which are at the bottom of the ranking in terms of using generative AI compared to Nordic EU countries. More than a third of the generative AI startups in Europe are located in the UK.

At the same time, da Empoli emphasised that investments in this disruptive technology have been put slightly on the sidelines because they are more in the hands of the member states.

Raquel Jorge, a policy analyst at the Elcano Royal Institute, explained that in terms of security, "what we have identified is that generative AI will present security risks, but we are not quite sure that it will create new threats," adding that instead, it looks like it will amplify existing threats.

When it comes down to the dual-use applications of generative AI, there is some doubt about the military usage, she said.

Jorge also noted that while it may seem that NATO keeps away from the EU's reality, in July, NATO's Data and Artificial Intelligence Review Board hosted a private event related to generative AI.

Aggelos Tsakanikas, an associate professor at the National Technical University of Athens, said they aimed to measure the impact of AI on businesses for entrepreneurship and assess the policies implemented in the four countries of the PromethEUs network.

The research showed, for example, that there is a shortage of specialists in Spain, while in Greece, there are startup activities related to AI.

Tsakanikas agreed with Hoernig that the definition of AI is still being settled but added that it is also a question of how businesses use it.

"We need to have a very strict definition of what exactly we are measuring when we are trying to see the diffusion of AI in the business sector," he said.

A SWOT (strengths, weaknesses, opportunities, and threats) analysis has been conducted for the paper, discussing all the major issues related to AI, such as non-qualified workers, political resistance, and economic costs, Tsakanikas explained.

[Edited by Luca Bertuzzi/Zoran Radosavljevic]


Understanding Artificial Intelligence: Definition, Applications, and … – Medium

Artificial Intelligence (AI) epitomizes computer systems' capabilities to perform intricate tasks that traditionally demanded human intellect, such as problem-solving, decision-making, and reasoning. Today, the term AI encompasses a broad spectrum of technologies powering various services and products that significantly influence our daily lives, from recommendation apps for TV shows to real-time customer support via chatbots. Yet the question persists: do these technologies genuinely embody the envisioned concept of artificial intelligence? If not, why is the term ubiquitously applied? This article delves into the essence of artificial intelligence, its functionalities, and its diversified types, along with a glance at its potential perils and rewards, elucidating pathways for furthering knowledge through flexible educational courses.

Artificial Intelligence Defined

AI encapsulates the theory and evolution of computer systems adept at performing tasks historically reliant on human intelligence, including speech recognition, decision-making, and pattern identification. This all-encompassing term spans various technologies like machine learning, deep learning, and natural language processing (NLP). However, a debate lingers on whether current technologies categorically constitute true artificial intelligence or merely denote highly sophisticated machine learning, perceived as an initial stride towards achieving general artificial intelligence (GAI).

Present AI Landscape

While philosophical disparities persist regarding the existence of truly intelligent machines, contemporary use of the term AI mostly refers to machine learning-fueled technologies such as ChatGPT or computer vision, enabling machines to accomplish erstwhile human-exclusive tasks like content generation, autonomous driving, or data analysis.

Illustrative AI Applications

Though humanoid AI entities akin to characters in science fiction remain elusive, encounters with machine learning-powered services or devices are commonplace. These range from systems making music suggestions, optimizing travel routes, translating languages (e.g., Google Translate), and providing personalized content recommendations (e.g., Netflix), to self-driving capabilities in vehicles like Tesla's cars.

AI in Diverse Industries

AI pervades multiple sectors, revolutionizing operations by automating tasks devoid of human intervention. Examples include fraud detection in finance, leveraging AI's data analysis prowess, and healthcare's deployment of AI-driven robotics to facilitate surgeries near sensitive organs, curbing risks like blood loss or infections.

Unveiling Artificial General Intelligence (AGI)

AGI embodies the theoretical realm where computer systems attain or surpass human intelligence. Recognizing true AGI's advent remains a point of contention, with the Turing Test proposed by Alan Turing in 1950 often cited as a benchmark for machine intelligence. Despite claims of early AGI forms, skepticism lingers among researchers regarding the achievement of AGI.

The 4 AI Paradigms

In a bid to comprehend intelligence and consciousness in AI, scholars delineate four AI types:

AI's Prospects and Perils

AI's transformative potential in various domains comes with an array of benefits and concerns. While promising greater accuracy, cost efficiencies, personalized services, and enhanced decision-making, AI also raises alarms about job displacement, biases in training data, cybersecurity threats, opaque decision-making processes, and the potential for misinformation and regulatory breaches.

In Conclusion

AI's multifaceted impacts demand a balanced perspective. Its capabilities and implications underscore the importance of responsible implementation. Understanding AI's nuances is crucial, for wielding such power entails commensurate responsibility.


What OpenAI’s latest batch of chips says about the future of AI – Quartz

OpenAI has received a coveted order of H100 chips and is expecting more soon, CEO Sam Altman said in a Nov. 13 interview with the Financial Times, adding that next year looks already like it's going to be better in regards to securing more chips.

One could say that the level of attention on AI chatbots like OpenAI's ChatGPT and Google's Bard this year matches the amount of focus on Nvidia's $40,000 H100 chips. OpenAI, like many other AI companies, uses Nvidia's latest model of chips to train its models.

OpenAI's procurement of more chips signals that more sophisticated AI models, which go beyond powering the current version of chatbots, will be ready in the near future.

Generative AI systems are trained on vast amounts of data to generate complex responses to questions, and that requires a lot of computing power. Enter Nvidia's H100 chips, which are tailored for generative AI and run much faster than previous chip models. "The more powerful the chips, the faster you can process queries," Willy Shih, a professor at Harvard Business School, previously told Quartz.

In the background, startups, chip rivals like AMD, and Big Tech companies like Google and Amazon have been working on building more efficient chips tailored to AI applications to meet the demand, but none so far have been able to outperform Nvidia.

Such intense demand for a specific chip from one company has created somewhat of a buying frenzy for Nvidia, and it's not just tech companies racing to snap up these hot chips; governments and venture capital firms are chomping at the bit too. But if OpenAI was able to obtain its order, perhaps that tide is finally turning, and the flow of chips to AI businesses is improving.

And while Nvidia reigns, just last week, Prateek Kathpal, the CEO of SymphonyAI Industrial, which is building AI chatbots for internal use within manufacturers, told Quartz that, although its AI applications run on Nvidia's chips, the company has also been in discussion with AMD and Arm for their technology.

OpenAI's growing chip inventory means a couple of things.

The H100 chips will help power the company's next AI model, GPT-5, which Altman said is currently in the works. The new model will require more data to train on, which will come from both publicly available information and proprietary intel from companies, he told the Financial Times. GPT-5 will likely be more sophisticated than its predecessors, although it's not clear what it will do that GPT-4 can't, he added.

Altman did not disclose a timeline for the release of GPT-5. But the quick succession of releases, with GPT-4 coming just eight months ago, following the release of its predecessor GPT-3 in 2020, highlights a rapid development cycle.

The procurement of more chips also suggests that the company is getting closer to creating artificial general intelligence, or AGI, for short, which is an AI system that can essentially accomplish any task that human beings can do.
