


Category Archives: Ai

Intel Launches World’s First Systems Foundry Designed for the AI Era – Investor Relations :: Intel Corporation (INTC)


Intel announces expanded process roadmap, customers and ecosystem partners to deliver on ambition to be the No. 2 foundry by 2030.

Company hosts Intel Foundry event featuring U.S. Commerce Secretary Gina Raimondo, Arm CEO Rene Haas, OpenAI CEO Sam Altman and others.

NEWS HIGHLIGHTS

SAN JOSE, Calif.--(BUSINESS WIRE)-- Intel Corp. (INTC) today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners including Synopsys, Cadence, Siemens and Ansys, who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20240221189319/en/

Announced at Intel Foundry Direct Connect, Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions and new Intel Foundry Advanced System Assembly and Test capabilities. Intel also affirmed that its ambitious five-nodes-in-four-years process roadmap remains on track and will deliver the industry's first backside power solution. (Credit: Intel Corporation)

The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

More: Intel Foundry Direct Connect (Press Kit)

"AI is profoundly transforming the world and how we think about technology and the silicon that powers it," said Intel CEO Pat Gelsinger. "This is creating an unprecedented opportunity for the world's most innovative chip designers and for Intel Foundry, the world's first systems foundry for the AI era. Together, we can create new markets and revolutionize how the world uses technology to improve people's lives."

Process Roadmap Expands Beyond 5N4Y

Intel's extended process technology roadmap adds Intel 14A to the company's leading-edge node plan, in addition to several specialized node evolutions. Intel also affirmed that its ambitious five-nodes-in-four-years (5N4Y) process roadmap remains on track and will deliver the industry's first backside power solution. Company leaders expect Intel will regain process leadership with Intel 18A in 2025.

The new roadmap includes evolutions for Intel 3, Intel 18A and Intel 14A process technologies. It includes Intel 3-T, which is optimized with through-silicon vias for 3D advanced packaging designs and will soon reach manufacturing readiness. Also highlighted are mature process nodes, including new 12 nanometer nodes expected through the joint development with UMC announced last month. These evolutions are designed to enable customers to develop and deliver products tailored to their specific needs. Intel Foundry plans a new node every two years and node evolutions along the way, giving customers a path to continuously evolve their offerings on Intel's leading process technology.

Intel also announced the addition of Intel Foundry FCBGA 2D+ to its comprehensive suite of ASAT offerings, which already include FCBGA 2D, EMIB, Foveros and Foveros Direct.

Microsoft Design on Intel 18A Headlines Customer Momentum

Customers are supporting Intel's long-term systems foundry approach. During Pat Gelsinger's keynote, Microsoft Chairman and CEO Satya Nadella stated that Microsoft has chosen a chip design it plans to produce on the Intel 18A process.

"We are in the midst of a very exciting platform shift that will fundamentally transform productivity for every individual, organization and the entire industry," Nadella said. "To achieve this vision, we need a reliable supply of the most advanced, high-performance and high-quality semiconductors. That's why we are so excited to work with Intel Foundry, and why we have chosen a chip design that we plan to produce on the Intel 18A process."

Intel Foundry has design wins across foundry process generations, including Intel 18A, Intel 16 and Intel 3, along with significant customer volume on Intel Foundry ASAT capabilities, including advanced packaging.

In total, across wafer and advanced packaging, Intel Foundry's expected lifetime deal value is greater than $15 billion.

IP and EDA Vendors Declare Readiness for Intel Process and Packaging Designs

Intellectual property and electronic design automation (EDA) partners Synopsys, Cadence, Siemens, Ansys, Lorentz and Keysight disclosed tool qualification and IP readiness to enable foundry customers to accelerate advanced chip designs on Intel 18A, which offers the foundry industry's first backside power solution. These companies also affirmed EDA and IP enablement across Intel node families.

At the same time, several vendors announced plans to collaborate on assembly technology and design flows for Intel's embedded multi-die interconnect bridge (EMIB) 2.5D packaging technology. These EDA solutions will ensure faster development and delivery of advanced packaging solutions for foundry customers.

Intel also unveiled an "Emerging Business Initiative" that showcases a collaboration with Arm to provide cutting-edge foundry services for Arm-based system-on-chips (SoCs). This initiative presents an important opportunity for Arm and Intel to support startups in developing Arm-based technology and offering essential IP, manufacturing support and financial assistance to foster innovation and growth.

Systems Approach Differentiates Intel Foundry in the AI Era

Intel's systems foundry approach offers full-stack optimization from the factory network to software. Intel and its ecosystem empower customers to innovate across the entire system through continuous technology improvements, reference designs and new standards.

Stuart Pann, senior vice president of Intel Foundry at Intel, said: "We are offering a world-class foundry, delivered from a resilient, more sustainable and secure source of supply, and complemented by unparalleled systems-of-chips capabilities. Bringing these strengths together gives customers everything they need to engineer and deliver solutions for the most demanding applications."

Global, Resilient, More Sustainable and Trusted Systems Foundry

Resilient supply chains must also be increasingly sustainable, and today Intel shared its goal of becoming the industry's most sustainable foundry. In 2023, preliminary estimates show that Intel used 99% renewable electricity in its factories worldwide. Today, the company redoubled its commitment to achieving 100% renewable electricity worldwide, net-positive water and zero waste to landfills by 2030. Intel also reinforced its commitment to net-zero Scope 1 and Scope 2 GHG emissions by 2040 and net-zero upstream Scope 3 emissions by 2050.

Forward-Looking Statements

This release contains forward-looking statements, including with respect to Intel's:

Such statements involve many risks and uncertainties that could cause our actual results to differ materially from those expressed or implied, including those associated with:

All information in this press release reflects Intel management's views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240221189319/en/

John Hipsher 1-669-223-2416 john.hipsher@intel.com

Robin Holt 1-503-616-1532 robin.holt@intel.com

Source: Intel Corp.

Released Feb 21, 2024 11:30 AM EST

More here:

Intel Launches World's First Systems Foundry Designed for the AI Era - Investor Relations :: Intel Corporation (INTC)


Generative AI’s environmental costs are soaring and mostly secret – Nature.com

Last month, OpenAI chief executive Sam Altman finally admitted what researchers have been saying for years: that the artificial intelligence (AI) industry is heading for an energy crisis. It's an unusual admission. At the World Economic Forum's annual meeting in Davos, Switzerland, Altman warned that the next wave of generative AI systems will consume vastly more power than expected, and that energy systems will struggle to cope. "There's no way to get there without a breakthrough," he said.

I'm glad he said it. I've seen consistent downplaying and denial about the AI industry's environmental costs since I started publishing about them in 2018. Altman's admission has got researchers, regulators and industry titans talking about the environmental impact of generative AI.

So what energy breakthrough is Altman banking on? Not the design and deployment of more sustainable AI systems but nuclear fusion. He has skin in that game, too: in 2021, Altman started investing in fusion company Helion Energy in Everett, Washington.


Most experts agree that nuclear fusion won't contribute significantly to the crucial goal of decarbonizing by mid-century to combat the climate crisis. Helion's most optimistic estimate is that by 2029 it will produce enough energy to power 40,000 average US households; one assessment suggests that ChatGPT, the chatbot created by OpenAI in San Francisco, California, is already consuming the energy of 33,000 homes. It's estimated that a search driven by generative AI uses four to five times the energy of a conventional web search. Within years, large AI systems are likely to need as much energy as entire nations.

And it's not just energy. Generative AI systems need enormous amounts of fresh water to cool their processors and generate electricity. In West Des Moines, Iowa, a giant data-centre cluster serves OpenAI's most advanced model, GPT-4. A lawsuit by local residents revealed that in July 2022, the month before OpenAI finished training the model, the cluster used about 6% of the district's water. As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use: increases of 20% and 34%, respectively, in one year, according to the companies' environmental reports. One preprint [1] suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. In another [2], Facebook AI researchers called the environmental effects of the industry's pursuit of scale the "elephant in the room."

Rather than pipe-dream technologies, we need pragmatic actions to limit AI's ecological impacts now.

There's no reason this can't be done. The industry could prioritize using less energy, build more efficient models and rethink how it designs and uses data centres. As the BigScience project in France demonstrated with its BLOOM model [3], it is possible to build a model of a similar size to OpenAI's GPT-3 with a much lower carbon footprint. But that's not what's happening in the industry at large.

It remains very hard to get accurate and complete data on environmental impacts. The full planetary costs of generative AI are closely guarded corporate secrets. Figures rely on lab-based studies by researchers such as Emma Strubell [4] and Sasha Luccioni [3]; limited company reports; and data released by local governments. At present, there's little incentive for companies to change.


But at last, legislators are taking notice. On 1 February, US Democrats led by Senator Ed Markey of Massachusetts introduced the Artificial Intelligence Environmental Impacts Act of 2024. The bill directs the National Institute of Standards and Technology to collaborate with academia, industry and civil society to establish standards for assessing AI's environmental impact, and to create a voluntary reporting framework for AI developers and operators. Whether the legislation will pass remains uncertain.

Voluntary measures rarely produce a lasting culture of accountability and consistent adoption, because they rely on goodwill. Given the urgency, more needs to be done.

Truly addressing the environmental impacts of AI requires a multifaceted approach involving the AI industry, researchers and legislators. In industry, sustainable practices should be imperative, and should include measuring and publicly reporting energy and water use; prioritizing the development of energy-efficient hardware, algorithms and data centres; and using only renewable energy. Regular environmental audits by independent bodies would support transparency and adherence to standards.

Researchers could optimize neural network architectures for sustainability and collaborate with social and environmental scientists to guide technical designs towards greater ecological sustainability.

Finally, legislators should offer both carrots and sticks. At the outset, they could set benchmarks for energy and water use, incentivize the adoption of renewable energy and mandate comprehensive environmental reporting and impact assessments. The Artificial Intelligence Environmental Impacts Act is a start, but much more will be needed, and the clock is ticking.

K.C. is employed by both USC Annenberg and Microsoft Research, which makes generative AI systems.

See the original post here:

Generative AI's environmental costs are soaring and mostly secret - Nature.com


Energy companies tap AI to detect defects in an aging grid – E&E News by POLITICO

A helicopter loaded with cameras and sensors sweeps over a utility's high-voltage transmission line in the southeastern United States.

High-resolution cameras record images of cables, connections and towers. Artificial intelligence tools search for cracks and flaws that could be overlooked by the naked eye: the worn-out component that could spark the next wildfire.

"We have trained a lot of AI models to recognize defects," said Marion Baroux, a Germany-based business developer for Siemens Energy, which built the helicopter scanning and analysis technology.

Drones have been inspecting power lines for a decade. Today, the rapid advancement of AI and machine-learning technology has opened the door to faster detection of potential failures in aging power lines, guiding transmission owners on how to upgrade the grid to meet clean energy and extreme weather challenges.

Automating inspections is a first step in a still uncharted future for AI adoption in the electric power sector, echoing the high-stakes international debate over the risks and potential of AI technology.

President Joe Biden's executive order on AI last October emphasized caution. Safety requires "robust, reliable, repeatable, and standardized evaluations of AI systems," the order said, as well as "policies, institutions, and as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use."

There is also a case for accelerating AIs adoption, according to Department of Energy experts speaking at a recent conference.

Balancing supply and demand on the grid is becoming more complex as renewable generation replaces fossil power plants.

"AI has the potential to help us operate the grid with much higher percentages of renewables," said Andrew Bochman, senior grid strategist at the Idaho National Laboratory.

But first, AI must earn the confidence of engineers who are responsible for ensuring utilities face as few risks as possible.

"Obviously, there are a lot of technical concerns about how these systems work and what we can trust them to do," said Christopher Lamb, a senior cybersecurity researcher at Sandia National Laboratories in New Mexico.

"There are definitely risks associated with AI," said Colin Ponce, a computational mathematician at Lawrence Livermore National Laboratory in California. "A lot of utilities have a certain amount of hesitation about it because they don't really understand what it will do."

The need for transmission owners and operators to find and prevent breaks in aging power line components was driven home tragically in California's fatal Camp Fire in 2018.

A 99-year-old metal hook supporting a high-voltage cable on a Pacific Gas & Electric power line wore through, allowing the line to hit the tower and causing a short circuit whose sparks ignited the fire. The fire claimed 85 lives.

Baroux said Siemens Energy's system may or may not have prevented the Camp Fire. But the purpose is to find the transmission line components, like the failed PG&E hook, that are most in need of replacement.

Another California catastrophe demonstrates a case for that capability.

On July 13, 2021, a California grid troubleman driving through California's rugged, remote Sierra Nevada region spotted a 65-foot-tall Douglas fir that had fallen onto a PG&E power line. According to his court testimony, there was nothing he could do to prevent the spread of what would be called the Dixie Fire, which burned for three months, consuming nearly 1 million acres.

Faced with the threat of more impacts between dead or dying trees and its lines, PG&E has received state regulators' permission to bury 1,230 miles of its power lines at a cost of roughly $3 million per mile.

The flying inspections produce thousands of gigabytes of data per mile, which would overwhelm human investigators. "We will run AI models on the data, then the customer-operators will review these results to look for the most urgent actions to take. The human remains the decision-maker, always," she said. "But this saves them time."

Siemens Energy declined to discuss the system's price tag and would not identify the utility in the Southeast using it. The service is in use at E.ON Group energy operations in Germany, at French grid operator RTE and at TenneT, which runs the Netherlands' network, a Siemens Energy spokesperson said.

In addition to the helicopter's camera array, its instrument pod also carries sensors that detect wasteful or damaging electrical current leaks in lines. Lidar distance-measuring laser scanners are also aboard to create 3D views of towers and nearby vegetation, alerting operators to potential threats from tree impacts with lines.

The possibility of applying AI and other advanced computing solutions to grid operations is the goal of another DOE project called HIPPO, for high-performance power grid optimization. HIPPO's lead partners are the Midcontinent Independent System Operator (MISO); DOE's Pacific Northwest National Laboratory; General Electric; and Gurobi Optimization, a Beaverton, Oregon, technology firm.

HIPPO has designed high-speed computing algorithms employing machine learning tools to improve the speed and accuracy of power plant scheduling decisions by MISO, the grid operator in 15 central U.S. states and Canada's Manitoba province.

Every day, MISO operators must make decisions about which electricity generating resources will run each hour of the following day, based on the generators' competing power prices and transmission costs. The growth of wind and solar power, microgrids, and customers' rooftop solar power and electric vehicle charging are making decisions harder, as forecasting weather impacts on the grid is also more challenging.

HIPPO's heavier computing power and complex calculations produce answers 35 times faster than current systems, allowing greener and more sustainable grid operations, MISO reported last year.

"One of the advantages of HIPPO is its flexibility," said Feng Pan, PNNL research scientist and the project's principal investigator. In addition to scheduling generation and confirming grid stability, HIPPO will enable operators to run what-if scenarios involving battery storage and customer-based resources, he said in an email.

HIPPO is easing its way into the MISO operation. The project, launched with a 2015 grant from DOE's Advanced Research Projects Agency-Energy, is not yet scheduled for full deployment. It will assist operators, not take over, Pan said.

For AI systems to solve problems, they will need trusted data about grid operations, said Lamb, the senior researcher at Sandia.

"Are there biases that could get cooked into algorithms that could create serious risks to operation reliability, and if so, what might they be?" Lamb asked.

Data issues aren't waiting for AI. Even without the complications AI may bring, operators of the principal Texas grid were dangerously in the dark during Winter Storm Uri in 2021.

"If an adversary can insert data into your [computer] training pipeline, there are ways they can poison your data set and cause a variety of problems," Lawrence Livermore's Ponce said, adding that designing defenses against rogue data threats is a major priority.

Ponce and Lamb came down on AI's side at the conference.

"There is a bunch of hype around AI that is really undeserved," Lamb said. "Operators understand their businesses. They are going to be making responsible decisions, and frankly I trust them to do so."

Grid operators should be able to maximize benefits and minimize risks provided they invest wisely in safety technology, he said. "It doesn't mean the risks will be zero."

"If we get too scared of AI and completely put the brakes on, I fear that will hinder our ability to respond to real threats and significant risk we already have evidence for, like climate change," Ponce said.

"There's a lot of doom and a lot of gloom about the application of AI," Lamb said. "Don't be scared."

Read the original post:

Energy companies tap AI to detect defects in an aging grid - E&E News by POLITICO


Tor Books Criticized for Use of AI-Generated Art in ‘Gothikana’ Cover Design – Publishers Weekly

A number of readers are calling out Tor Books over the cover art of Gothikana by RuNyx, published by Tor's romance imprint Bramble on January 23, which incorporates AI-generated assets in its design.

On February 9, BookTok influencer @emmaskies identified two Adobe Stock images that had been used for the book's cover, both of which include the phrase "Generative AI" in their titles and are flagged on the Adobe Stock website as "generated with AI."

"We cannot allow AI-generated anything to infiltrate creative spaces because they are not just going to stop at covers," says @emmaskies in the video. She goes on to suggest that the use of such images is a slippery slope, imagining a publishing industry in the near future in which AI-generated images supplant cover artists, AI language models replace editorial staff, and AI models make acquisition judgements.

The video has since garnered more than 64,000 views. Her initial analysis of the cover, in which she alleged but had not yet confirmed the use of AI-generated images, received more than 300,000 views and 35,000 likes.

This is not the first time that Tor has attracted criticism online for using AI-generated assets in book cover designs. When Tor unveiled the cover of Christopher Paolini's sci-fi thriller Fractal Noise in November 2022, the publisher was quickly met with criticism over the use of an AI-generated asset, which had been posted to Shutterstock and created with Midjourney. The book was subsequently review-bombed on Goodreads.

"During the process of creating this cover, we licensed an image from a reputable stock house. We were not aware that the image may have been created by AI," Tor Books said in a statement posted to X on December 15. "Our in-house designer used the licensed image to create the cover, which was presented to Christopher for approval." Tor decided to move ahead with the cover "due to production constraints."

In response to the statement, Eisner Award-winning illustrator Trung Le Nguyen commented, "I might not be able to judge a book by its cover, but I sure as hell will judge its publisher."

Tor is not the only publisher to catch heat for using AI-generated art on book covers. Last spring, the Verge reported on the controversy over the U.K. paperback edition of Sarah J. Maas's House of Earth and Blood, published by Bloomsbury, which credited Adobe Stock for the illustration of a wolf on the book's cover; the illustration had been marked as AI-generated on Adobe's website. Bloomsbury later claimed that its in-house design team was "unaware" that the licensed image had been created by AI.

Gothikana was originally self-published by author RuNyx in June 2021, and was reissued by Bramble in a hardcover edition featuring sprayed edges, a foil case stamp, and detailed endpapers. Bramble did not respond to PW's request for comment by press time.

See the original post here:

Tor Books Criticized for Use of AI-Generated Art in 'Gothikana' Cover Design - Publishers Weekly


Google launches Gemini Business AI, adds $20 to the $6 Workspace bill – Ars Technica


Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing information, and you can see the Workspace pricing page is new, with every plan offering a "Gemini add-on." Google's old AI-for-Business plan, "Duet AI for Google Workspace," is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the "Starter" package, and the AI "Add-on," as Google is calling it, is an extra $20 monthly cost per user (all of these prices require an annual commitment). That is a massive price increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you "Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides." It also includes the "1.0 Ultra" model for the Gemini chatbot; there's a full feature list here. This $20 plan is subject to a usage limit for Gemini AI features of "1,000 times per month."
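To put the per-seat math in one place, here is a minimal sketch in Python using only the prices reported above; the 10-person team size is an invented example:

```python
# Per-seat monthly prices reported in the article (annual commitment required).
WORKSPACE_STARTER = 6  # USD per user per month
GEMINI_ADD_ON = 20     # USD per user per month

def annual_cost(users: int, with_gemini: bool = True) -> int:
    """Yearly cost for a team on Starter, with or without the Gemini add-on."""
    per_seat = WORKSPACE_STARTER + (GEMINI_ADD_ON if with_gemini else 0)
    return per_seat * 12 * users

# A hypothetical 10-person team: $3,120/year with the add-on vs. $720 without.
print(annual_cost(10))                     # 3120
print(annual_cost(10, with_gemini=False))  # 720
```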


Google's second plan is "Gemini Enterprise," which doesn't come with any usage limits, but it's also only available through a "contact us" link and not a normal checkout procedure. Enterprise is $30 per user per month, and it "includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes."

More here:

Google launches Gemini Business AI, adds $20 to the $6 Workspace bill - Ars Technica


AI and You: OpenAI’s Sora Previews Text-to-Video Future, First Ivy League AI Degree – CNET

AI developments are happening pretty fast. If you don't stop and look around once in a while, you could miss them.

Fortunately, I'm looking around for you and what I saw this week is that competition between OpenAI, maker of ChatGPT and Dall-E, and Google continues to heat up in a way that's worth paying attention to.

A week after updating its Bard chatbot and changing the name to Gemini, Google's DeepMind AI subsidiary previewed the next version of its generative AI chatbot. DeepMind told CNET's Lisa Lacy that Gemini 1.5 will be rolled out "slowly" to regular people who sign up for a wait list and will be available now only to developers and enterprise customers.

Gemini 1.5 Pro, Lacy reports, is "as capable as" the Gemini 1.0 Ultra model, which Google announced on Feb. 8. The 1.5 Pro model has a win rate -- the share of benchmarks on which it outperforms another model -- of 87% against the 1.0 Pro and 55% against the 1.0 Ultra. So the 1.5 Pro is essentially an upgraded version of the best available model now.
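To make the win-rate metric concrete, here is a minimal sketch in Python; the benchmark scores are invented for illustration and are not Gemini's actual results:

```python
def win_rate(scores_a: list[float], scores_b: list[float]) -> float:
    """Fraction of benchmarks on which model A outscores model B."""
    assert len(scores_a) == len(scores_b), "need one score per benchmark per model"
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    return wins / len(scores_a)

# Invented scores on four benchmarks: A beats B on 3 of 4, a 75% win rate.
model_a = [0.91, 0.68, 0.84, 0.77]
model_b = [0.88, 0.71, 0.80, 0.70]
print(win_rate(model_a, model_b))  # 0.75
```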

Gemini 1.5 Pro can ingest video, images, audio and text to answer questions, added Lacy. Oriol Vinyals, vice president of research at Google DeepMind and co-lead of Gemini, described 1.5 as a "research release" and said the model is "very efficient" thanks to a unique architecture that can answer questions by zeroing in on expert sources in that particular subject rather than seeking the answer from all possible sources.

Meanwhile, OpenAI announced a new text-to-video model called Sora that's capturing a lot of attention because of the photorealistic videos it's able to generate. Sora can "create videos of up to 60 seconds featuring highly detailed scenes, complex camera motion, and multiple characters with vibrant emotions." Following up on a promise it made, along with Google and Meta last week, to watermark AI-generated images and video, OpenAI says it's also creating tools to detect videos created with Sora so they can be identified as being AI generated.

Google and Meta have also announced their own gen AI text-to-video creators.

Sora, which means "sky" in Japanese, is also being called experimental, with OpenAI limiting access for now to so-called "red teamers," security experts and researchers who will assess the tool's potential harms or risks. That follows through on promises made as part of President Joe Biden's AI executive order last year, asking developers to submit the results of safety checks on their gen AI chatbots before releasing them publicly. OpenAI said it's also looking to get feedback on Sora from some visual artists, designers and filmmakers.

How do the photorealistic videos look? Pretty realistic. I agree with The New York Times, which described the short demo videos -- "of wooly mammoths trotting through a snowy meadow, a monster gazing at a melting candle and a Tokyo street scene seemingly shot by a camera swooping across the city" -- as "eye popping."

MIT Technology Review, which also got a preview of Sora, said the "tech has pushed the envelope of what's possible with text-to-video generation." Meanwhile, The Washington Post noted Sora could exacerbate an already growing problem with video deepfakes, which have been used to "deceive voters" and scam consumers.

One X commentator summarized it this way: "Oh boy here we go what is real anymore." And OpenAI CEO Sam Altman called the news about its video generation model a "remarkable moment."

You can see the four examples of what Sora can produce on OpenAI's intro site, which notes that the tool is "able to generate complex scenes with multiple characters, specific types of motion, and accurate details of the subject and background. The model understands not only what the user has asked for in the prompt, but also how those things exist in the physical world. The model has a deep understanding of language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions."

But Sora has its weaknesses, which is why OpenAI hasn't yet said whether it will actually be incorporated into its chatbots. Sora "may struggle with accurately simulating the physics of a complex scene and may not understand specific instances of cause and effect. For example, a person might take a bite out of a cookie, but afterward, the cookie may not have a bite mark. The model may also confuse spatial details of a prompt, for example, mixing up left and right."

All of this is to remind us that tech is a tool -- and that it's up to us humans to decide how, when, where and why to use that technology. In case you didn't see it, the trailer for the new Minions movie (Despicable Me 4: Minion Intelligence) makes this point cleverly, with its sendup of gen AI deepfakes and Jon Hamm's voiceover of how "artificial intelligence is changing how we see the world, transforming the way we do business."

"With artificial intelligence," Hamm adds over the minions' laughter, "the future is in good hands."

Here are the other doings in AI worth your attention.

Twenty tech companies, including Adobe, Amazon, Anthropic, ElevenLabs, Google, IBM, Meta, Microsoft, OpenAI, Snap, TikTok and X, agreed at a security conference in Munich that they will voluntarily adopt "reasonable precautions" to guard against AI tools being used to mislead or deceive voters ahead of elections.

"The intentional and undisclosed generation and distribution of Deceptive AI Election content can deceive the public in ways that jeopardize the integrity of electoral processes," the text of the accord says, according to NPR. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."

But the agreement is "largely symbolic," the Associated Press reported, noting that "reasonable precautions" is a little vague.

"The companies aren't committing to ban or remove deepfakes," the AP said. "Instead, the accord outlines methods they will use to try to detect and label deceptive AI content when it is created or distributed on their platforms. It notes the companies will share best practices with each other and provide 'swift and proportionate responses' when that content starts to spread."

AI has already been used to try to trick voters in the US and abroad. Days before the New Hampshire presidential primary, fraudsters sent an AI robocall that mimicked President Biden's voice, asking voters not to vote in the primary. That prompted the Federal Communications Commission this month to make AI-generated robocalls illegal. The AP said that "Just days before Slovakia's elections in November, AI-generated audio recordings impersonated a candidate discussing plans to raise beer prices and rig the election. Fact-checkers scrambled to identify them as false as they spread across social media."

"Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own," Nick Clegg, president of global affairs for Meta, told the Associated Press in an interview before the summit.

Over 4 billion people are set to vote in key elections this year in more than 40 countries, including the US, The Hill reported.

If you're concerned about how deepfakes may be used to scam you or your family members -- someone calls your grandfather and asks them for money by pretending to be you -- Bloomberg reporter Rachel Metz has a good idea. She suggests it may be time for us all to create a "family password" or safe word or phrase to share with our family or personal network that we can ask for to make sure we're talking to who we think we're talking to.

"Extortion has never been easier," Metz reports. "The kind of fakery that used to take time, money and technical know-how can now be accomplished quickly and cheaply by nearly anyone."

That's where family passwords come in, since they're "simple and free," Metz said. "Pick a word that you and your family (or another trusted group) can easily remember. Then, if one of those people reaches out in a way that seems a bit odd -- say, they're suddenly asking you to deliver 5,000 gold bars to a P.O. Box in Alaska -- first ask them what the password is."

How do you pick a good password? She offers a few suggestions, including using a word you don't say frequently and that's not likely to come up in casual conversations. Also, "avoid making the password the name of a pet, as those are easily guessable."

Hiring experts have told me it's going to take years to build an AI-educated workforce, considering that gen AI tools like ChatGPT weren't released until late 2022. So it makes sense that learning platforms like Coursera, Udemy, Udacity, Khan Academy and many universities are offering online courses and certificates to upskill today's workers. Now the University of Pennsylvania's School of Engineering and Applied Science said it's the first Ivy League school to offer an undergraduate major in AI.

"The rapid rise of generative AI is transforming virtually every aspect of life: health, energy, transportation, robotics, computer vision, commerce, learning and even national security," Penn said in a Feb. 13 press release. "This produces an urgent need for innovative, leading-edge AI engineers who understand the principles of AI and how to apply them in a responsible and ethical way."

The bachelor of science in AI offers coursework in machine learning, computing algorithms, data analytics and advanced robotics and will have students address questions about "how to align AI with our social values and how to build trustworthy AI systems," Penn professor Zachary Ives said.

"We are training students for jobs that don't yet exist in fields that may be completely new or revolutionized by the time they graduate," added Robert Ghrist, associate dean of undergraduate education in Penn Engineering.

FYI, the cost of an undergraduate education at Penn, which typically spans four years, is over $88,000 per year (including housing and food).

For those not heading to college or who haven't signed up for any of those online AI certificates, their AI upskilling may come courtesy of their current employer. The Boston Consulting Group, for its Feb. 9 report, What GenAI's Top Performers Do Differently, surveyed over 150 senior executives across 10 sectors. Generally:

Bottom line: companies are starting to look at existing job descriptions and career trajectories, and the gaps they're seeing in the workforce when they consider how gen AI will affect their businesses. They've also started offering gen AI training programs. But these efforts don't lessen the need for today's workers to get up to speed on gen AI and how it may change the way they work -- and the work they do.

In related news, software maker SAP looked at Google search data to see which states in the US were most interested in "AI jobs and AI business adoption."

Unsurprisingly, California ranked first in searches for "open AI jobs" and "machine learning jobs." Washington state came in second place, Vermont in third, Massachusetts in fourth and Maryland in fifth.

California, "home to Silicon Valley and renowned as a global tech hub, shows a significant interest in AI and related fields, with 6.3% of California's businesses saying that they currently utilize AI technologies to produce goods and services and a further 8.4% planning to implement AI in the next six months, a figure that is 85% higher than the national average," the study found.

Virginia, New York, Delaware, Colorado and New Jersey, in that order, rounded out the top 10.

Over the past few months, I've highlighted terms you should know if you want to be knowledgeable about what's happening as it relates to gen AI. So I'm going to take a step back this week and provide this vocabulary review for you, with a link to the source of the definition.

It's worth a few minutes of your time to know these seven terms.

Anthropomorphism: The tendency for people to attribute humanlike qualities or characteristics to an AI chatbot. For example, you may assume it's kind or cruel based on its answers, even though it isn't capable of having emotions, or you may believe the AI is sentient because it's very good at mimicking human language.

Artificial general intelligence (AGI): A description of programs that are as capable as -- or even more capable than -- a human. While full general intelligence is still off in the future, models are growing in sophistication. Some have demonstrated skills across multiple domains ranging from chemistry to psychology, with task performance paralleling human benchmarks.

Generative artificial intelligence (gen AI): Technology that creates content -- including text, images, video and computer code -- by identifying patterns in large quantities of training data and then creating original material that has similar characteristics.

Hallucination: Hallucinations are unexpected and incorrect responses from AI programs that can arise for reasons that aren't yet fully known. A language model might suddenly bring up fruit salad recipes when you were asking about planting fruit trees. It might also make up scholarly citations, lie about data you ask it to analyze or make up facts about events that aren't in its training data. It's not fully understood why this happens, but hallucinations can arise from sparse data, information gaps and misclassification.

Large language model (LLM): A type of AI model that can generate human-like text and is trained on a broad dataset.

Prompt engineering: This is the act of giving AI an instruction so it has the context it needs to achieve your goal. Prompt engineering is best associated with OpenAI's ChatGPT, describing the tasks users feed into the algorithm. (e.g. "Give me five popular baby names.")
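As an illustration of what feeding a prompt into a model looks like in practice, here is a minimal sketch using the OpenAI Python client (v1+); the model name and system message are assumptions for the example, not details from the article:

```python
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# in the environment. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        # The system message supplies context; the user message is the
        # engineered prompt -- here, the article's own example task.
        {"role": "system", "content": "Answer briefly, as a plain list."},
        {"role": "user", "content": "Give me five popular baby names."},
    ],
)
print(response.choices[0].message.content)
```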

Temperature: In simple terms, model temperature is a parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, giving you a diverse mix of words. On the other hand, a lower temperature makes the model play it safe, sticking to more focused and predictable responses.

Model temperature has a big impact on the quality of the text generated in a bunch of [natural language processing] tasks, like text generation, summarization and translation.

The tricky part is finding the perfect model temperature for a specific task. It's kind of like Goldilocks trying to find the perfect bowl of porridge -- not too hot, not too cold, but just right. The optimal temperature depends on things like how complex the task is and how much creativity you're looking for in the output.
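To show how temperature reshapes a model's next-token distribution, here is a minimal sketch of temperature-scaled softmax sampling; the logits are invented for illustration:

```python
import math

def token_probabilities(logits: list[float], temperature: float) -> list[float]:
    """Temperature-scaled softmax: higher T flattens the distribution
    (riskier, more diverse); lower T sharpens it around the top token."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # invented scores for three candidate tokens
for t in (0.5, 1.0, 2.0):
    print(t, [round(p, 2) for p in token_probabilities(logits, t)])
# At T=0.5 the top token dominates (~0.86); at T=2.0 the probabilities
# flatten toward (~0.50, 0.30, 0.19) -- the "more risks" end of the dial.
```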

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.

Read the original post:

AI and You: OpenAI's Sora Previews Text-to-Video Future, First Ivy League AI Degree - CNET
