

Category Archives: Ai

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill – Ars Technica

Google

Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot; there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."

Google

Google's second plan is "Gemini Enterprise," which doesn't come with any usage limits, but it's also only available through a "contact us" link and not a normal checkout procedure. Enterprise is $30 per user per month, and it "includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes."

More here:

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill - Ars Technica


AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- a measurement of how many benchmarks it can outperform -- of 87% compared with the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.
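Google hasn't published the details of that architecture, but the description resembles a mixture-of-experts design, in which a lightweight router picks a few specialist sub-networks to handle each query instead of running the whole model. Here's a rough, purely illustrative sketch of that routing idea in Python; the sizes, weights and layer names are made up for illustration and are not Gemini's actual implementation:

```python
# Illustrative mixture-of-experts routing, NOT Gemini's actual architecture.
# A router scores each "expert" sub-network for an input, and only the
# top-k experts run, so most of the model stays idle for any one query.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS, HIDDEN, TOP_K = 8, 16, 2
router_weights = rng.normal(size=(HIDDEN, NUM_EXPERTS))          # learned in a real model
expert_weights = rng.normal(size=(NUM_EXPERTS, HIDDEN, HIDDEN))  # one matrix per expert

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route input x (shape: HIDDEN,) to its top-k experts and mix their outputs."""
    scores = x @ router_weights                    # affinity of x for each expert
    top = np.argsort(scores)[-TOP_K:]              # indices of the k best experts
    gate = np.exp(scores[top]) / np.exp(scores[top]).sum()  # softmax over chosen experts
    # Only the selected experts do any work; the rest are skipped entirely.
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gate, top))

print(moe_layer(rng.normal(size=HIDDEN)).shape)    # (16,)
```

The appeal of this kind of design is efficiency: the model can grow very large overall while only a small slice of it is computed for any given question.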

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of woolly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- say, someone calls your grandfather pretending to be you and asks for money -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password," a safe word or phrase shared within our family or personal network that we can ask for to make sure we're talking to the person we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors.

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but hallucinations can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
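If you want to see what that dial actually does under the hood, here is a minimal sketch of temperature-scaled sampling over next-token probabilities. The toy vocabulary, scores and library choice are mine for illustration, not taken from any particular chatbot:

```python
# Minimal illustration of sampling temperature: logits are divided by the
# temperature before the softmax, so low temperatures sharpen the distribution
# (predictable output) and high temperatures flatten it (more varied output).
import numpy as np

def sample_token(logits, temperature, rng):
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())      # subtract the max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

vocab = ["porridge", "soup", "stew", "gruel"]  # toy vocabulary
logits = [2.0, 1.0, 0.5, 0.1]                  # made-up model scores
rng = np.random.default_rng(42)

for t in (0.2, 1.0, 2.0):
    picks = [vocab[sample_token(logits, t, rng)] for _ in range(10)]
    print(f"temperature={t}: {picks}")
# At 0.2 nearly every pick is "porridge"; at 2.0 the choices spread out.
```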

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

Read the original post:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET


Can AI help us forecast extreme weather? – Vox.com

We've learned how to predict weather over the past century by understanding the science that governs Earth's atmosphere and harnessing enough computing power to generate global forecasts. But in just the past three years, AI models from companies like Google, Huawei, and Nvidia that use historical weather data have been releasing forecasts rivaling those created through traditional forecasting methods.

This video explains the promise and challenges of these new models built on artificial intelligence rather than numerical forecasting, particularly the ability to foresee extreme weather.


You can find this video and all of Vox's videos on YouTube.

This video is sponsored by Microsoft Copilot for Microsoft 365. Microsoft has no editorial influence on our videos, but their support makes videos like these possible.


Read more:

Can AI help us forecast extreme weather? - Vox.com


Scale AI to set the Pentagon’s path for testing and evaluating large language models – DefenseScoop

The Pentagon's Chief Digital and Artificial Intelligence Office (CDAO) tapped Scale AI to produce a trustworthy means for testing and evaluating large language models that can support and potentially disrupt military planning and decision-making.

According to a statement the San Francisco-based company shared exclusively with DefenseScoop, the outcomes of this new one-year contract will supply the CDAO with a framework to deploy AI safely by measuring model performance, offering real-time feedback for warfighters, and creating specialized public sector evaluation sets to test AI models for military support applications, such as organizing the findings from after action reports.

Large language models and the overarching field of generative AI include emerging technologies that can generate (convincing but not always accurate) text, software code, images and other media, based on prompts from humans.

This rapidly evolving realm holds a lot of promise for the Department of Defense, but also poses unknown and serious potential challenges. Last year, Pentagon leadership launched Task Force Lima within the CDAO's Algorithmic Warfare Directorate to accelerate its components' grasp, assessment and deployment of generative artificial intelligence.

The department has long leaned on test-and-evaluation (T&E) processes to assess and ensure its systems, platforms and technologies perform in a safe and reliable manner before they are fully fielded. But AI safety standards and policies have not yet been universally set, and the complexities and uncertainties associated with large language models make T&E even more complicated when it comes to generative AI.

Broadly, T&E enables experts to determine the baseline performance of a specific model.

For instance, to test and evaluate a computer vision algorithm that differentiates between images of dogs and cats and things that are not dogs or cats, an official might first train it with millions of different pictures of those types of animals, as well as objects that aren't dogs or cats. In doing so, the expert will also hold back a diverse subset of data that can then be presented to the algorithm down the line.

They can then assess that evaluation dataset against the test set, or ground truth, and ultimately determine failure rates: the cases where the model is unable to determine whether something is or is not one of the categories it's trying to identify.
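As a purely hypothetical illustration of that step, the sketch below scores a stand-in dog/cat classifier against a small held-back test set and reports a failure rate per label. The classify function and the data are placeholders for illustration, not anything from the CDAO program:

```python
# Hypothetical holdout evaluation: compare a model's predictions on a
# held-back test set against the ground-truth labels and report where it fails.
from collections import Counter

def classify(image_id: str) -> str:
    """Placeholder for a trained dog/cat classifier."""
    return "dog" if image_id.startswith("d") else "other"

# Held-out examples the model never saw during training: (image_id, true label)
holdout = [("d001", "dog"), ("c002", "cat"), ("x003", "other"), ("c004", "cat")]

errors = Counter()
totals = Counter()
for image_id, truth in holdout:
    totals[truth] += 1
    if classify(image_id) != truth:
        errors[truth] += 1

for label in totals:
    rate = errors[label] / totals[label]
    print(f"{label}: failure rate {rate:.0%} ({errors[label]}/{totals[label]})")
```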

Experts at Scale AI will adopt a similar approach for T&E with large language models, but because they are generative in nature and the English language can be hard to evaluate, there isn't that same level of ground truth for these complex systems. For example, if prompted to supply five different responses, an LLM might be generally factually accurate in all five, yet contrasting sentence structures could change the meanings of each output.

So part of the company's effort to develop the framework, methods and technology the CDAO can use to test and evaluate large language models will involve creating holdout datasets, in which DOD insiders write prompt-response pairs and adjudicate them through layers of review, ensuring that each response is as good as would be expected from a human in the military.
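DefenseScoop's report doesn't spell out how those adjudicated pairs get scored, but a bare-bones evaluation harness along those lines might look like the following, with a crude word-overlap check standing in for the layered human review described above. All function names and data here are illustrative assumptions, not Scale AI's actual framework:

```python
# Hedged sketch (not Scale AI's actual framework): score a model's answers
# against a holdout set of vetted prompt-response pairs.
def model_answer(prompt: str) -> str:
    """Placeholder for the large language model under evaluation."""
    return "Summarize the key findings and recommended follow-up actions."

def judged_acceptable(candidate: str, reference: str) -> bool:
    """Stand-in for layered human review; here, crude word-overlap scoring."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / max(len(ref), 1) >= 0.5

# Holdout pairs written and adjudicated by subject-matter experts (illustrative).
holdout = [
    ("What should an after action report capture?",
     "Summarize the key findings and recommended follow-up actions."),
]

passed = sum(judged_acceptable(model_answer(p), ref) for p, ref in holdout)
print(f"pass rate: {passed}/{len(holdout)}")
```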

The entire process will be iterative in nature.

Once datasets that are germane to the DOD for world knowledge, truthfulness, and other topics are made and refined, the experts can then evaluate existing large language models against them.

Eventually, as they build up these holdout datasets, experts will be able to run evaluations and establish model cards: short documents that detail the contexts in which various machine learning models are best used and the information needed to measure their performance.
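Model cards are short, structured documents. A minimal example of the kinds of fields one might record is below; the schema and the numbers are illustrative assumptions, not the CDAO's actual format:

```python
# Illustrative model card as a plain dictionary; real model cards are richer,
# and the fields and figures below are assumptions, not CDAO's actual schema.
model_card = {
    "model_name": "example-llm-v1",            # hypothetical model
    "intended_use": "Summarizing after action reports",
    "out_of_scope": ["Targeting decisions", "Legal advice"],
    "evaluation": {
        "world_knowledge_pass_rate": 0.91,     # made-up numbers
        "truthfulness_pass_rate": 0.87,
    },
    "known_failure_modes": ["Hallucinated citations", "Outdated facts"],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```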

Officials plan to automate this development as much as possible, so that as new models come in, there can be some baseline understanding of how they will perform, where they will perform best, and where they will probably start to fail.

Further along in the process, the ultimate intent is for models to essentially send signals to the CDAO officials engaging with them if they start to waver from the domains they have been tested against.

"This work will enable the DOD to mature its T&E policies to address generative AI by measuring and assessing quantitative data via benchmarking and assessing qualitative feedback from users. The evaluation metrics will help identify generative AI models that are ready to support military applications with accurate and relevant results using DoD terminology and knowledge bases. The rigorous T&E process aims to enhance the robustness and resilience of AI systems in classified environments, enabling the adoption of LLM technology in secure environments," Scale AI's statement reads.

Beyond the CDAO, the company has also partnered with Meta, Microsoft, the U.S. Army, the Defense Innovation Unit, OpenAI, General Motors, Toyota Research Institute, Nvidia, and others.

"Testing and evaluating generative AI will help the DoD understand the strengths and limitations of the technology, so it can be deployed responsibly. Scale is honored to partner with the DoD on this framework," Alexandr Wang, Scale AI's founder and CEO, said in the statement.

Continue reading here:

Scale AI to set the Pentagon's path for testing and evaluating large language models - DefenseScoop


What is AI governance? – Cointelegraph

The landscape and importance of AI governance

AI governance encompasses the rules, principles and standards that ensure AI technologies are developed and used responsibly.

AI governance is a comprehensive term encompassing the definition, principles, guidelines and policies designed to steer the ethical creation and utilization of artificial intelligence (AI) technologies. This governance framework is crucial for addressing a wide array of concerns and challenges associated with AI, such as ethical decision-making, data privacy, bias in algorithms, and the broader impact of AI on society.

The concept of AI governance extends beyond mere technical aspects to include legal, social and ethical dimensions. It serves as a foundational structure for organizations and governments to ensure that AI systems are developed and deployed in beneficial ways that do not cause unintentional harm.

In essence, AI governance forms the backbone of responsible AI development and usage, providing a set of standards and norms that guide various stakeholders, including AI developers, policymakers and end-users. By establishing clear guidelines and ethical principles, AI governance aims to harmonize the rapid advancements in AI technology with the societal and ethical values of human communities.

AI governance adapts to organizational needs without fixed levels, employing frameworks like NIST and OECD for guidance.

AI governance doesn't follow universally standardized levels, as seen in fields like cybersecurity. Instead, it utilizes structured approaches and frameworks from various entities, allowing organizations to tailor these to their specific requirements.

Frameworks such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the Organization for Economic Co-operation and Development (OECD) principles on artificial intelligence, and the European Commission's Ethics Guidelines for Trustworthy AI are among the most utilized. They cover many topics, including transparency, accountability, fairness, privacy, security and safety, providing a solid foundation for governance practices.

The extent of governance adoption varies with the organization's size, the complexity of the AI systems it employs, and the regulatory landscape it operates within. Three main approaches to AI governance are:

The most basic form relies on an organizations core values and principles, with some informal processes in place, such as ethical review boards, but lacking a formal governance structure.

A more structured approach than informal governance involves creating specific policies and procedures in response to particular challenges. However, it may not be comprehensive or systematic.

The most comprehensive approach entails the development of an extensive AI governance framework that reflects the organization's values, aligns with legal requirements and includes detailed risk assessment and ethical oversight processes.

Illustrating AI governance through diverse examples like GDPR, the OECD AI principles and corporate ethics boards showcases the multifaceted approach to responsible AI use.

AI governance manifests through various policies, frameworks and practices aimed at the ethical deployment of AI technologies by organizations and governments. These instances highlight the application of AI governance across different scenarios:

The General Data Protection Regulation (GDPR) is a pivotal example of AI governance in safeguarding personal data and privacy. Although the GDPR isn't solely AI-focused, its regulations significantly impact AI applications, particularly those processing personal data within the European Union, emphasizing the need for transparency and data protection.

The OECD AI principles, endorsed by over 40 countries, underscore the commitment to trustworthy AI. These principles advocate for AI systems to be transparent, fair and accountable, guiding international efforts toward responsible AI development and usage.

Corporate AI Ethics Boards represent an organizational approach to AI governance. Numerous corporations have instituted ethics boards to supervise AI projects, ensuring they conform to ethical norms and societal expectations. For instance, IBM's AI Ethics Council reviews AI offerings to ensure they comply with the company's AI ethics, involving a diverse team from various disciplines to provide comprehensive oversight.

Stakeholder engagement is essential for developing inclusive, effective AI governance frameworks that reflect a broad spectrum of perspectives.

A wide range of stakeholders, including governmental entities, international organizations, business associations and civil society organizations, share responsibility for AI governance. Because different areas and nations have different legal, cultural and political contexts, their oversight structures can also differ significantly.

The complexity of AI governance requires active participation from all sectors of society, including government, industry, academia and civil society. Engaging a diverse range of stakeholders ensures that multiple perspectives are considered when developing AI governance frameworks, leading to more robust and inclusive policies.

This engagement also fosters a sense of shared responsibility for the ethical development and use of AI technologies. By involving stakeholders in the governance process, policymakers can leverage a wide range of expertise and insights, ensuring that AI governance frameworks are well-informed, adaptable and capable of addressing the multifaceted challenges and opportunities presented by AI.

For instance, the exponential growth of data collection and processing raises significant privacy concerns, necessitating stringent governance frameworks to protect individuals' personal information. This involves compliance with global data protection regulations like GDPR and active participation by stakeholders in implementing advanced data security technologies to prevent unauthorized access and data breaches.

The future of AI governance will be shaped by advancements in technology, evolving societal values and the need for international collaboration.

As AI technologies evolve, so will the frameworks governing them. The future of AI governance is likely to see a greater emphasis on sustainable and human-centered AI practices.

Sustainable AI focuses on developing environmentally friendly and economically viable technologies over the long term. Human-centered AI prioritizes systems that enhance human capabilities and well-being, ensuring that AI serves as a tool for augmenting human potential rather than replacing it.

Moreover, the global nature of AI technologies necessitates international collaboration in AI governance. This involves harmonizing regulatory frameworks across borders, fostering global standards for AI ethics, and ensuring that AI technologies can be safely deployed across different cultural and regulatory environments. Global cooperation is key to addressing challenges, such as cross-border data flow and ensuring that AI benefits are shared equitably worldwide.

Read more here:

What is AI governance? - Cointelegraph


HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE – Congressman Ted Lieu

WASHINGTON -- Speaker Mike Johnson and Democratic Leader Hakeem Jeffries announced the establishment of a bipartisan Task Force on Artificial Intelligence (AI) to explore how Congress can ensure America continues to lead the world in AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.

Speaker Johnson and Leader Jeffries have each appointed twelve members to the Task Force who represent key committees of jurisdiction; the Task Force will be jointly led by Chair Jay Obernolte (CA-23) and Co-Chair Ted Lieu (CA-36). The Task Force will seek to produce a comprehensive report that will include guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction.

"Because advancements in artificial intelligence have the potential to rapidly transform our economy and our society, it is important for Congress to work in a bipartisan manner to understand and plan for both the promises and the complexities of this transformative technology," said Speaker Mike Johnson. "I am happy to announce with Leader Jeffries this new Bipartisan Task Force on Artificial Intelligence to ensure America continues leading in this strategic arena.

Led by Rep. Jay Obernolte (R-Ca.) and Rep. Ted Lieu (D-Ca.), the task force will bring together a bipartisan group of Members who have AI expertise and represent the relevant committees of jurisdiction. As we look to the future, Congress must continue to encourage innovation and maintain our country's competitive edge, protect our national security, and carefully consider what guardrails may be needed to ensure the development of safe and trustworthy technology."

"Congress has a responsibility to facilitate the promising breakthroughs that artificial intelligence can bring to fruition and ensure that everyday Americans benefit from these advancements in an equitable manner," said Democratic Leader Hakeem Jeffries. "That is why I am pleased to join Speaker Johnson in announcing the new Bipartisan Task Force on Artificial Intelligence, led by Rep. Ted Lieu and Rep. Jay Obernolte.

The rise of artificial intelligence also presents a unique set of challenges and certain guardrails must be put in place to protect the American people. Congress needs to work in a bipartisan way to ensure that America continues to lead in this emerging space, while also preventing bad actors from exploiting this evolving technology. The Members appointed to this Task Force bring a wide range of experience and expertise across the committees of jurisdiction and I look forward to working with them to tackle these issues in a bipartisan way.

"It is an honor to be entrusted by Speaker Johnson to serve as Chairman of the House Task Force on Artificial Intelligence," said Chair Jay Obernolte (CA-23). "As new innovations in AI continue to emerge, Congress and our partners in federal government must keep up. House Republicans and Democrats will work together to create a comprehensive report detailing the regulatory standards and congressional actions needed to both protect consumers and foster continued investment and innovation in AI.

The United States has led the world in the development of advanced AI, and we must work to ensure that AI realizes its tremendous potential to improve the lives of people across our country. I look forward to working with Co-Chair Ted Lieu and the rest of the Task Force on this critical bipartisan effort.

"Thank you to Leader Jeffries and Speaker Johnson for establishing this bipartisan House Task Force on Artificial Intelligence. AI has the capability of changing our lives as we know it. The question is how to ensure AI benefits society instead of harming us. As a recovering Computer Science major, I know this will not be an easy or quick or one-time task, but I believe Congress has an essential role to play in the future of AI. I have been heartened to see so many Members of Congress of all political persuasions agree," said Co-Chair Ted Lieu (CA-36).

I am honored to join Congressman Jay Obernolte in leading this Task Force on AI, and honored to work with the bipartisan Members on the Task Force. I look forward to engaging with Members of both the Democratic Caucus and Republican Conference, as well as the Senate, to find meaningful, bipartisan solutions with regards to AI.

Membership

Rep. Ted Lieu (CA-36), Co-Chair
Rep. Anna Eshoo (CA-16)
Rep. Yvette Clarke (NY-09)
Rep. Bill Foster (IL-11)
Rep. Suzanne Bonamici (OR-01)
Rep. Ami Bera (CA-06)
Rep. Don Beyer (VA-08)
Rep. Alexandria Ocasio-Cortez (NY-14)
Rep. Haley Stevens (MI-11)
Rep. Sara Jacobs (CA-51)
Rep. Valerie Foushee (NC-04)
Rep. Brittany Pettersen (CO-07)

Rep. Jay Obernolte (CA-23), Chair
Rep. Darrell Issa (CA-48)
Rep. French Hill (AR-02)
Rep. Michael Cloud (TX-27)
Rep. Neal Dunn (FL-02)
Rep. Ben Cline (VA-06)
Rep. Kat Cammack (FL-03)
Rep. Scott Franklin (FL-18)
Rep. Michelle Steel (CA-45)
Rep. Eric Burlison (MO-07)
Rep. Laurel Lee (FL-15)
Rep. Rich McCormick (GA-06)

###

Go here to see the original:

HOUSE LAUNCHES BIPARTISAN TASK FORCE ON ARTIFICIAL INTELLIGENCE - Congressman Ted Lieu
