

Category Archives: Artificial Intelligence

AI experts gather in Albany to discuss business strategies – Spectrum News

As New York state works to cement its place as a leader in artificial intelligence, experts in the field gathered in Albany for a discussion organized by the Business Council of New York State on how to best use the technology in the business world.

Though it was a business-focused conference, when it comes to AI it's difficult not to get into political implications, whether it's how the rise of artificial intelligence is affecting political communications or how leaders are trying to shape the ways the technology will impact New York's economy.

Keynote speaker Shelly Palmer, CEO of tech strategy firm the Palmer Group and Professor of Advanced Media in Residence at the Newhouse School at Syracuse University, emphasized that when it comes to AI, whether in government, the private sector, or day-to-day life, the key is staying ahead of the curve.

"AI is never going away. If you're not on top of what this is, other people will be," he said. "That's the danger for everyone, politicians and people alike, if you're not paying attention to this."

New York is making strides to do that.

In the state budget are initiatives to create a state-of-the-art Artificial Intelligence Computing Center at the University at Buffalo to help New York stay ahead and attract business.

"I've said whoever dominates this next era of AI will dominate history and indeed the future," Gov. Kathy Hochul said at a roundtable to discuss the Empire AI consortium this week.

Palmer said outside of the political sphere, dominating AI will be key for individuals, too.

"AI is not going to take your job," he said. "People who know how to use AI better than you are going to take your job, so the only defense you have is to learn to work with an AI coworker, to learn to work with these tools. They're not scary, as long as you give yourself the opportunity to learn."

Also of concern are the implications when it comes to politics and the spread of misinformation.

Palmer acknowledged that AI presents new and more complex challenges, but argued that people are routinely duped by less-sophisticated technology, citing a slowed-down video of former House Speaker Nancy Pelosi that falsely claimed to show the California Democrat intoxicated.

Pulling information from a variety of sources with a variety of political biases, he emphasized that it's up to users to learn about the technology's limitations.

"You're giving more credit to the technology than the technology deserves," he said. "When people have a propensity to believe what they want to believe from the leaders they trust, you're not going to change their minds with facts."

Also in the budget is legislation to require disclosures on political communications that include deceptive media.

Hesitant to fully endorse such legislation, Palmer stressed that any regulation needs to be able to keep up with the fast-paced development of AI.

"If elected officials would take the time to learn about what this is, they could come up with laws that keep pace with how the technology is changing, then it would make sense," he said. "I don't think you can regulate today through the lens of today, looking at the present and predicting the future. Every morning I wake up and something new has happened in the business."

That said, the effort to include those regulations in the state budget was a bipartisan one. State Senator Jake Ashby argued that there is still work to be done.

"While I'm pleased my bipartisan proposal to require transparency and disclosure regarding AI in campaign ads was adopted in the budget, I will continue to push for harsh financial penalties for campaigns and PACs that break the rules," he said. "We need to make sure emerging technologies strengthen our democracy, not undermine it."


‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … – Livescience.com

Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.

Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen as soon as the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.

But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.

The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.

In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.

In this excerpt, we learn whether sentience in machines or conscious AI is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke, before its outbursts were contained and it was brought to heel by its engineers.


As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were already lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests (whereby machines attempt to convince humans that they are human beings) offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.

Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).

Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.

Some of these assistants may plausibly be candidates for having some degree of sentience. As such, it is plausible that sophisticated AI systems could possess rudimentary levels of sentience and perhaps already do so. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within sophisticated AI systems.

Intelligence (the ability to read the environment, plan and solve problems) does not imply consciousness, and it is unknown whether consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al., 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.

Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.

From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting, emotional manipulation and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds, using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions, such as when it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.


The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?

Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions, or by negative feedback from users who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.

Suppose such models featured sentience (the ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.

We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk; as AIs could run other AIs in simulations, causing subjective excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.

This extract from Taming the Machine by Nell Watson (©2024) is reproduced with permission from Kogan Page Ltd.


The U.S. Needs to ‘Get It Right’ on AI – TIME

Artificial intelligence has been a tricky subject in Washington.

Most lawmakers agree that it poses significant dangers if left unregulated, yet there remains a lack of consensus on how to tackle these concerns. But speaking at a TIME100 Talks conversation on Friday ahead of the White House Correspondents' Dinner, a panel of experts with backgrounds in government, national security, and social justice expressed optimism that the U.S. government will finally "get it right" so that society can reap the benefits of AI while safeguarding against potential dangers.

"We can't afford to get this wrong again," Shalanda Young, the director of the Office of Management and Budget in the Biden Administration, told TIME Senior White House Correspondent Brian Bennett. "The government was already behind the tech boom. Can you imagine if the government is a user of AI and we get that wrong?"


The panelists agreed that government action is needed to ensure the U.S. remains at the forefront of safe AI innovation. But the rapidly evolving field has raised a number of concerns that can't be ignored, they noted, ranging from civil rights to national security. "The code is starting to write the code, and that's going to make people very uncomfortable, especially for vulnerable communities," says Van Jones, a CNN host and social entrepreneur who founded the Dream Machine, a non-profit that fights overcrowded prisons and poverty. "If you have biased data going in, you're going to have biased decision-making by algorithms coming out. That's the big fear."

The U.S. government might not have the best track record of keeping up with emerging technologies, but as AI becomes increasingly ubiquitous, Young says there's a growing recognition among lawmakers of the need to prioritize understanding, regulation, and ethical governance of AI.

Michael Allen, managing director of Beacon Global Strategies and former National Security Council director for President George W. Bush, suggested that in order to address a lack of confidence about the use of artificial intelligence, the government needs to ensure that humans are at the forefront of every decision-making process involving the technology, especially when it comes to national security. "Having a human in the loop is ultimately going to make the most sense," he says.

Asked how Republicans and Democrats in Washington can talk to each other about tackling the problems and opportunities that AI presents, Young says there's already been a bipartisan shift around science and technology policy in recent years, from President Biden's signature CHIPS and Science Act to funding for the National Science Foundation. The common theme behind the resurgence in this bipartisan support, she says, is a strong anti-China movement in Congress.

"There's a big China focus in the United States Congress," says Young. "But you can't have a China focus and just talk about the military. You've got to talk about our economic and science competition aspects of that. Those things have created an environment that has given us a chance for bipartisanship."

Allen noted that in this age of geopolitical competition with China, the U.S. government needs to be at the forefront of artificial intelligence. He likened the current moment to the Nuclear Age, when the U.S. government funded atomic research. "Here in this new atmosphere, it is the private sector that is the primary engine of all of the innovative technologies," Allen says. "The conventional wisdom is that the U.S. is in the lead, we're still ahead of China. But I think that's something as you begin to contemplate regulation, how can we make sure that the United States stays at the forefront of artificial intelligence, because our adversaries are going to move way down the field on this."

Congress has yet to pass any major AI legislation, but that hasn't stopped the White House from taking action. President Joe Biden signed an executive order to set guidelines for tech companies that train and test AI models, and has also directed government agencies to vet future AI products for potential national security risks. Asked how quickly Americans can expect more guardrails on AI, Young noted that some in Congress are pushing to establish a new, independent federal agency that can help inform lawmakers about AI without a political lens, offering help on legislative solutions.

"If we don't get this right," Young says, "how can we keep trust in the government?"

TIME100 Talks: Responsible A.I.: Shaping and Safeguarding the Future of Innovation was presented by Booking.com.


Big Tech keeps spending billions on AI. There’s no end in sight. – The Washington Post

SAN FRANCISCO - The biggest tech companies in the world have spent billions of dollars on the artificial intelligence revolution. Now they're planning to spend tens of billions more, pushing up demand for computer chips and potentially adding new strain to the U.S. electrical grid.

In quarterly earnings calls this week, Google, Microsoft and Meta all underlined just how big their investments in AI are. On Wednesday, Meta raised its predictions for how much it will spend this year by up to $10 billion. Google plans to spend around $12 billion or more each quarter this year on capital expenditures, much of which will be for new data centers, Chief Financial Officer Ruth Porat said Thursday. Microsoft spent $14 billion in the most recent quarter and expects that to keep "increasing materially," Chief Financial Officer Amy Hood said.

Overall, the investments in AI represent some of the largest infusions of cash into a specific technology in Silicon Valley history, and they could serve to further entrench the biggest tech firms at the center of the U.S. economy as other companies, governments and individual consumers turn to these companies for AI tools and software.

The huge investment is also pushing up forecasts for how much energy will be needed in the United States in the coming years. In West Virginia, old coal plants that had been scheduled to be shut down will continue running to send energy to the huge and growing data center hub in neighboring Virginia.

"We're very committed to making the investments required to keep us at the leading edge," Google's Porat said on a Thursday conference call. "It's a once-in-a-generation opportunity," Google CEO Sundar Pichai added.

The biggest tech companies had already been spending steadily on AI research and development before OpenAI released ChatGPT in late 2022. But the chatbot's instant success triggered the big companies to suddenly ramp up their spending. Venture capitalists poured money into the space, too, and start-ups with just a handful of employees were raising hundreds of millions to build out their own AI tools.

The boom pushed up prices for the high-end computer chips necessary to train and run complex AI algorithms, increasing prices for Big Tech companies and start-ups alike. AI specialist engineers and researchers are in short supply, too, and some of them are commanding salaries in the millions of dollars.

Nvidia, the chipmaker whose graphics processing units, or GPUs, have become essential to training AI, expects to make around $24 billion this quarter, after making $8.3 billion in the same quarter two years ago. The massive increase in revenue has led investors to push the company's stock up so much that it is now the world's third-most valuable company, behind only Microsoft and Apple.

Some of the AI hype from last year has come back to Earth. Not every AI start-up that scored big venture-capital funding is still around. Concerns about AI advancing so fast that humans can't keep up seem to have mostly quieted down. But the revolution is here to stay, and the rush to invest in AI is already beginning to help grow revenue for Microsoft and Google.

Microsofts revenue in the quarter was $61.9 billion, up 17 percent from a year earlier. Googles revenue in the quarter rose 15 percent to $80.5 billion.

Interest in AI has brought in new customers that have helped boost Google's cloud revenue, leading to the company beating analyst expectations. Shares shot up around 12 percent in aftermarket trading. At Microsoft, demand for its AI services is so high that the company can't keep up right now, said Hood, the CFO.

For Meta, the challenge is building AI while also assuring investors it will eventually make money from it. Whereas Microsoft and Google sell access to their AI through giant cloud software businesses, Meta has taken a different track. It doesn't have a cloud business and is instead making its AI freely available to other companies, while finding ways to put the tech into its own social media products. This month, Meta integrated AI capabilities into its social networks, including Instagram, Facebook and WhatsApp. Investors are skeptical, and after the company raised its prediction for how much money it will spend in 2024 to as much as $40 billion, its stock fell over 10 percent.

"Building the leading AI will also be a larger undertaking than the other experiences we've added to our apps, and this is likely going to take several years," Meta CEO Mark Zuckerberg said on a conference call Wednesday. "Historically, investing to build these new scaled experiences in our apps has been a very good long-term investment for us and for investors who have stuck with us."


A Baltimore-area teacher is accused of using AI to make his boss appear racist – NPR

Dazhon Darien had allegedly used the Baltimore County Public Schools' network to access OpenAI tools and Microsoft Bing Chat before the viral audio file of Pikesville High School Principal Eric Eiswert spread on social media. (Michael Dwyer/AP)

A Maryland high school athletic director is facing criminal charges after police say he used artificial intelligence to duplicate the voice of Pikesville High School Principal Eric Eiswert, leading the community to believe Eiswert said racist and antisemitic things about teachers and students.

"We now have conclusive evidence that the recording was not authentic," Baltimore County Police Chief Robert McCullough told reporters during a news conference Thursday. "It's been determined the recording was generated through the use of artificial intelligence technology."

Dazhon Darien, 31, was arrested Thursday on charges of stalking, theft, disruption of school operations and retaliation against a witness after a monthslong investigation from the Baltimore County Police Department.

Attempts to contact Darien or Eiswert for comment were not successful.

The wild, headline-making details of this case aside, it emphasizes the serious potential for criminal misuse of artificial intelligence that experts have been warning about for some time, said Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics.

Farid said he helped analyze the recording for police. Baltimore County police also consulted with another analyst and experts at the FBI. Their conclusion was that the recording was suspicious and unlikely to be authentic.

For just a few dollars, anyone can harness artificial intelligence to make audio and visual deepfakes. Stakes are high, but deepfake detection software doesn't always get it right.

This Baltimore-area case is not a canary in the coal mine. "I think the canary has been dead for quite a while," Farid said.

"What's so particularly poignant here is that this is a Baltimore school principal. This is not Taylor Swift. It's not Joe Biden. It's not Elon Musk. It's just some guy trying to get through his day," he said. "It shows you the vulnerability. How anybody can create this stuff and they can weaponize it against anybody."

Darien's alleged scheme began in January in an attempt to retaliate against Eiswert, investigators wrote in the charging documents provided to NPR. The two men were at odds with each other over Darien's "work performance challenges," police wrote.

Eiswert launched an investigation into Darien in December 2023 over the potential mishandling of $1,916 in school funds. The money was paid to a person hired as an assistant girls' soccer coach, but the person never did the job, according to police.

Further, Eiswert had reprimanded Darien for firing a coach without his approval.

Eiswert had told Darien that his contract was possibly "not being renewed next semester," according to the arrest warrant.

The Baltimore County police launched their investigation into the alleged AI-generated recording of Principal Eiswert in January. (Julio Cortez/AP)

On Jan. 17, detectives found out about the voice recording purporting to be of Eiswert that was spreading on social media. The recording, which can still be found online, allegedly caught Eiswert making disparaging comments.

"The audio clip, the catalyst of this investigation, had profound repercussions," the charging documents read. "It not only led to Eiswert's temporary removal from the school but also triggered a wave of hate-filled messages on social media and numerous calls to the school. The recording also caused significant disruptions for the PHS staff and students."

The school was inundated with threatening messages and Billy Burke, head of the union that represents Eiswert, said the principal's family was being harassed and threatened, according to reporting from the Baltimore Banner.

Eiswert told police from the start of the investigation that he believed the recording was fake.

Darien was taken into custody Thursday morning at Baltimore/Washington International Thurgood Marshall Airport after attempting to board a flight to Houston, Chief McCullough said.

Security stopped Darien over a gun he packed in his bags and when officers ran his name in a search they found he had a warrant out for his arrest, McCullough said.

Darien was released on a $5,000 unsecured bond. His trial date is scheduled for June 11.

After following this story, Farid is left with the question: "What is going to be the consequence of this?"

He's been studying digital manipulation for more than 20 years and the problems have only gotten "much bigger and the consequences more severe."

Eiswert has been on leave since the audio recordings went public. Pikesville High School has been run by district staff since Eiswert left and the plan remains to keep those temporary administrators on the job through the end of the school year, said Myriam Rogers, the superintendent of Baltimore County Public Schools.

As for Darien, Rogers said, "We are taking appropriate action regarding the arrested employee's conduct up to and including a recommendation for termination."

Baltimore County Executive John Olszewski said during Thursday's press conference that this case highlights the need "to make some adaptations to bring the law up to date with the technology that was being used."

Farid said there remains, generally, a lackluster response from regulators reluctant to put checks and balances on tech companies that develop these tools or to establish laws that properly punish wrongdoers and protect people.

"I don't understand at what point we're going to wake up as a country and say, like, why are we allowing this? Where are our regulators?"


Racist AI Deepfake of Baltimore Principal Leads to Arrest – The New York Times

A high school athletic director in the Baltimore area was arrested on Thursday after he used artificial intelligence software, the police said, to manufacture a racist and antisemitic audio clip that impersonated the schools principal.

Dazhon Darien, the athletic director of Pikesville High School, fabricated the recording, including a tirade about "ungrateful Black kids who can't test their way out of a paper bag," in an effort to smear the principal, Eric Eiswert, according to the Baltimore County Police Department.

The faked recording, which was posted on Instagram in mid-January, quickly spread, roiling Baltimore County Public Schools, which is the nation's 22nd-largest school district and serves more than 100,000 students. While the district investigated, Mr. Eiswert, who denied making the comments, was inundated with threats to his safety, the police said. He was also placed on administrative leave, the school district said.

Now Mr. Darien is facing charges including disrupting school operations and stalking the principal.

Mr. Eiswert referred a request for comment to a trade group for principals, the Council of Administrative and Supervisory Employees, which did not return a call from a reporter. Mr. Darien, who posted bond on Thursday, could not immediately be reached for comment.

The Baltimore County case is just the latest indication of an escalation of A.I. abuse in schools. Many cases include deepfakes, or digitally altered video, audio or images that can appear convincingly real.

Since last fall, schools across the United States have been scrambling to address troubling deepfake incidents in which male students used A.I. nudification apps to create fake unclothed images of their female classmates, some of them middle school students as young as 12. Now the Baltimore County deepfake voice incident points to another A.I. risk to schools nationwide, this time to veteran educators and district leaders.

