
Category Archives: Ai

Welcome to the Valley of the Creepy AI Dolls – WIRED

Social robot roommate Jibo initially caused a stir, but sadly didn't live long.

Not that there haven't been an array of other attempts. Jibo, a social robot roommate that used AI and endearing gestures to bond with its owners, had its collective plug unceremoniously pulled just a few years after being put out into the world. Meanwhile, another US-grown offering, Moxie, an AI-empowered robot aimed at helping with child development, is still active.

It's hard not to look at devices like this and shudder at the possibilities. There's something inherently disturbing about tech that plays at being human, and that uncanny deception can rub people the wrong way. After all, our science fiction is replete with AI beings, many of them tales of artificial intelligence gone horribly wrong. The easy, and admittedly lazy, comparison to something like the Hyodol is M3GAN, the 2023 film about an AI-enabled companion doll that goes full murderbot.

But aside from off-putting dolls, social robots come in many forms. They're assistants, pets, retail workers, and often socially inept weirdos that just kind of hover awkwardly in public. But they're also sometimes weapons, spies, and cops. It's with good reason that people are suspicious of these automatons, whether they come in a fluffy package or not.

Wendy Moyle is a professor at the School of Nursing & Midwifery at Griffith University in Australia who works with patients experiencing dementia. She says her work with social robots has angered people, who sometimes see giving robot dolls to older adults as infantilizing.

"When I first started using robots, I had a lot of negative feedback, even from staff," Moyle says. "I would present at conferences and have people throw things at me because they felt that this was inhuman."

However, the atmosphere around assistive robots has become less hostile recently, as they have been put to many positive uses. Robotic companions are bringing joy to people with dementia. During the Covid pandemic, caretakers used robotic companions like Paro, a small robot meant to look like a baby harp seal, to help ease loneliness in older adults. Hyodol's smiling dolls, whether you see them as sickly or sweet, are meant to evoke a similar friendly response.

Go here to read the rest:

Welcome to the Valley of the Creepy AI Dolls - WIRED


AI-generated images and video are here: how could they shape research? – Nature.com

Tools such as Sora can generate convincing video footage from text prompts. Credit: Jonathan Raa/NurPhoto via Getty

Artificial intelligence (AI) tools that translate text descriptions into images and video are advancing rapidly.

Just as many researchers are using ChatGPT to transform the process of scientific writing, others are using AI image generators such as Midjourney, Stable Diffusion and DALL-E to cut down on the time and effort it takes to produce diagrams and illustrations. However, researchers warn that these AI tools could spur an increase in fake data and inaccurate scientific imagery.

Nature looks at how researchers are using these tools, and what their increasing popularity could mean for science.

Many text-to-image AI tools, such as Midjourney and DALL-E, rely on machine-learning algorithms called diffusion models that are trained to recognize the links between millions of images scraped from the Internet and text descriptions of those images. These models have advanced in recent years owing to improvements in hardware and the availability of large data sets for training. After training, diffusion models can use text prompts to generate new images.
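
For readers who want to see what this looks like in practice, here is a minimal sketch (mine, not from the Nature piece) of how such a diffusion model is typically called from Python, using the open-source Hugging Face diffusers library; the checkpoint name, prompt and hardware settings are illustrative assumptions.

```python
# Minimal text-to-image sketch using a diffusion model via the diffusers library.
# The checkpoint ID below is one widely used example; any compatible model works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# The text prompt conditions the iterative denoising process that produces the image.
image = pipe("a labelled diagram of a plant cell, flat vector illustration").images[0]
image.save("figure_draft.png")
```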

Some researchers are already using AI-generated images to illustrate methods in scientific papers. Others are using them to promote papers in social-media posts or to spice up presentation slides. "They are using tools like DALL-E 3 for generating nice-looking images to frame research concepts," says AI researcher Juan Rodriguez at ServiceNow Research in Montreal, Canada. "I gave a talk last Thursday about my work and I used DALL-E 3 to generate appealing images to keep people's attention," he says.

Text-to-video tools are also on the rise, but seem to be less widely used by researchers who are not actively developing or studying these tools, says Rodriguez. However, this could soon change. Last month, ChatGPT creator OpenAI in San Francisco, California, released video clips generated by a text-to-video tool called Sora. "With the experiments we saw with Sora, it seems their method is much more robust at getting results quickly," says Rodriguez. "We are early in terms of text-to-video, but I guess this year we will find out how this develops," he adds.

Generative AI tools can reduce the time taken to produce images or figures for papers, conference posters or presentations. Conventionally, researchers use a range of non-AI tools, such as PowerPoint, BioRender, and Inkscape. "If you really know how to use these tools, you can make really impressive figures, but it's time-consuming," says Rodriguez.

AI tools can also improve the quality of images for researchers who find it hard to translate scientific concepts into visual aids, says Rodriguez. With generative AI, researchers still come up with the high-level idea for the image, but they can use the AI to refine it, he says.

Currently, AI tools can produce convincing artwork and some illustrations, but they are not yet able to generate complex scientific figures with text annotations. "They don't get the text right: the text is sometimes too small, much bigger or rotated," says Rodriguez. The kinds of problems that can arise were made clear in a paper published in Frontiers in Cell and Developmental Biology in mid-February, in which researchers used Midjourney to depict a rat's reproductive organs1. The result, which passed peer review, was a cartoon rodent with comically enormous genitalia, annotated with gibberish.

"It was this really weird kind of grotesque image of a rat," says palaeoartist Henry Sharpe, a palaeontology student at the University of Alberta in Edmonton, Canada. This incident is "one of the biggest case[s]" involving AI-generated images to date, says Guillaume Cabanac, who studies fraudulent AI-generated text at the University of Toulouse, France. After a public outcry from researchers, the paper was retracted.

This now-infamous AI-generated figure featured in a scientific paper that was later retracted. Credit: X. Guo et al./Front. Cell Dev. Biol.

There is also the possibility that AI tools could make it easier for scientific fraudsters to produce fake data or observations, says Rodriguez. "Papers might contain not only AI-generated text, but also AI-generated figures," he says. And there is currently no robust method for detecting such images and videos. "It's going to get pretty scary in the sense we are going to be bombarded by fake and synthetically generated data," says Rodriguez. To address this, some researchers are developing ways to inject signals into AI-generated images to enable their detection.
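
To make the idea of injected signals concrete, below is a toy sketch (my illustration, not any researcher's actual scheme) of the simplest form of image watermarking: add a faint pseudorandom pattern keyed to a secret seed, then detect it later by correlation. Production watermarks operate in the model's latent space and are far more robust; the array sizes, strength and threshold here are arbitrary assumptions.

```python
# Toy watermark: embed a secret +/- pattern in pixel values, detect by correlation.
import numpy as np

rng = np.random.default_rng(seed=1234)            # the secret key is the seed
WATERMARK = rng.choice([-1.0, 1.0], size=(256, 256))

def embed(image: np.ndarray, strength: float = 4.0) -> np.ndarray:
    """Add a faint +/- pattern to a grayscale image (pixel values 0-255)."""
    return np.clip(image + strength * WATERMARK, 0, 255)

def detect(image: np.ndarray, threshold: float = 1.0) -> bool:
    """Correlate the image with the secret pattern; high correlation => watermarked."""
    centered = image - image.mean()
    score = float(np.mean(centered * WATERMARK))
    return score > threshold

plain = rng.uniform(0, 255, size=(256, 256))
marked = embed(plain)
print(detect(plain), detect(marked))   # expected: False True
```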

Last month, Sharpe launched a poll on social-media platforms including X, Facebook and Instagram that surveyed the views of around 90 palaeontologists on AI-generated depictions of ancient life. Just one in four professional palaeontologists thought that AI should be allowed in scientific publications, says Sharpe.

AI-generated images of ancient lifeforms or fossils can mislead both scientists and the public, he adds. "It's inaccurate, all it does is copy existing things and it can't actually go out and read papers." Iteratively reconstructing ancient lifeforms by hand, in consultation with palaeontologists, can reveal plausible anatomical features, a process that is completely lost when using AI, Sharpe says. Palaeoartists and palaeontologists have aired similar views on X using the hashtag #PaleoAgainstAI.

Journals differ in their policies around AI-generated imagery. Springer Nature has banned the use of AI-generated images, videos and illustrations in most journal articles that are not specifically about AI (Nature's news team is independent of its publisher, Springer Nature). Journals in the Science family do not allow AI-generated text, figures or images to be used without explicit permission from the editors, unless the paper is specifically about AI or machine learning. PLOS ONE allows the use of AI tools but states that researchers must declare the tool involved, how they used it and how they verified the quality of the generated content.

The rest is here:

AI-generated images and video are here: how could they shape research? - Nature.com


The Miseducation of Google’s A.I. – The New York Times

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

From The New York Times, I'm Michael Barbaro. This is The Daily.

[MUSIC PLAYING]

Today, when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values they should be. My colleague, Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.

[MUSIC PLAYING]

It's Thursday, March 7.

Are you ready to record another episode of Chatbots Behaving Badly?

Yes, I am.

[LAUGHS]

That's why we're here today.

This is my function on this podcast, is to tell you when the chatbots are not OK. And Michael, they are not OK.

They keep behaving badly.

They do keep behaving badly, so there's plenty to talk about.

Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.

That's true.

And you didn't.

Still happily married.

Yeah.

To a human.

Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence, but in the world, that, of course, being Google.

Yeah. So a couple of weeks ago, Google came out with its newest line of AI models; it's actually several models. But they are called Gemini. And Gemini is what they call a multimodal AI model. It can produce text. It can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.

And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.

People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.

Right. And instead, we've had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.

Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes out on the market, people immediately start trying to figure out, What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?

And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.

Which is what?

So the thing that people started to notice first was a peculiarity with the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, draw a picture of a dolphin riding a bicycle on Mars, and it will give you a dolphin riding a bicycle on Mars.

Magically.

Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.

Hmm.

So some of the first examples that I saw going around were screenshots of people asking Gemini, generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.

Interesting.

People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they feel it should be able to generate.

Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.

Hmm.

So these sort of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.

Right, a kind of distortion of people's requests.

Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.

So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.

Fair.

Gemini, though, said, quote, "It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler."

Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.

So quite clearly at this point this is not a one-off thing. Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.

Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.

A certain subset of people, I would call them sort of right-wing culture warriors, started posting these on social media with captions like "Gemini is anti-white" or "Gemini refuses to acknowledge white people."

I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.

Google Gemini looks more and more like big tech's latest efforts to brainwash the country.

Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.

The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the Executive branch.

Jim Jordan, the Republican Congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny, if you can think about Joe Biden being asked to develop an AI language model.

[LAUGHS]

But this becomes a huge dust-up for Google.

It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.

That Gemini program made so many mistakes, it was really an embarrassment.

First of all, this thing would be a Gemini.

And that's because these problems are not just bugs in a new piece of software. They are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws.

You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.

The company's stock price actually falls.

Wow.

The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.

Wow. So basically Gemini is now on ice when it comes to these problematic images.

Yes, Gemini has been a bad model, and it is in timeout.

So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google, intentionally or not, created a kind of woke AI?

Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this. What happened to this particular AI model that caused it to produce these undesirable responses?

The second way is sort of the cultural and historical answer. Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?

All right, well, let's start there with Google's culture and how that helps us understand this all.

Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.

And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.

It could use AI to automatically detect faces and sort of link them with each other, with the photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.

But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.

Wow.

Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.

And what had happened there? What had led to that mistake?

Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people.

And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.

Right.

And this incident, which some people I've talked to have referred to as the gorilla incident, became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.

And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini and they start seeing how these models can reinforce stereotypes.

For example, if you ask one for an image of a CEO or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.

Of course.

So these models, because of the way they're trained, because of what's on the internet that is fed into them, they do tend to skew towards stereotypes if you don't do something to prevent that.

Right. You've talked about this in the past with us, Kevin. AI operates in some ways by ingesting the entire internet, its contents, and reflecting them back to us. And so perhaps inevitably, it's going to reflect back on the stereotypes and biases that have been put into the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says we think there's a way we can make sure that stops here with us?

Yeah. And they invest enormously into building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.

And they did a bunch of things in Gemini to try to prevent this thing from just being a very essentially fancy stereotype-generating machine. And I think a lot of people at Google thought this is the right goal. We should be combating bias in AI. We should be trying to make our systems as fair and diverse as possible.

[MUSIC PLAYING]

But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.

[MUSIC PLAYING]

We'll be right back.

So Kevin, walk us through the technical explanation of how Google turned this ambition it had to safeguard against the biases of AI into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.

Yeah, I'm happy to do that with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.

The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.
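
As a rough illustration of what "changing the curriculum" can mean in code, here is a small sketch (not Google's actual pipeline) that rebalances a skewed training set with a weighted sampler, so under-represented groups are sampled as often as over-represented ones; the dataset, group labels and sizes are all made up.

```python
# Rebalance a skewed dataset so each group appears roughly equally often per batch.
from collections import Counter
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Pretend data: group 0 is 90% of examples, group 1 is 10%.
images = torch.randn(1000, 3, 32, 32)
groups = torch.cat([torch.zeros(900, dtype=torch.long), torch.ones(100, dtype=torch.long)])

counts = Counter(groups.tolist())
# Each example's sampling weight is inversely proportional to its group's frequency.
weights = torch.tensor([1.0 / counts[int(g)] for g in groups])
sampler = WeightedRandomSampler(weights, num_samples=len(groups), replacement=True)

loader = DataLoader(TensorDataset(images, groups), batch_size=64, sampler=sampler)
xb, gb = next(iter(loader))
print(gb.float().mean())  # roughly 0.5 instead of 0.1: the two groups are now balanced
```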

You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.

Sure is.

And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.

A kind of army of tsk-tskers saying, do this, don't do that.

Exactly. So that's one level at which you can try to fix the biases of an AI model, is during the actual building of the model.
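
For the technically curious, here is a compressed sketch of that feedback step: a tiny reward model trained on pairwise preferences, the standard Bradley-Terry-style objective used in reinforcement learning from human feedback. This is a generic illustration, not Google's code; the network size, embeddings and data are placeholders.

```python
# Train a toy reward model on pairwise human preferences (chosen vs. rejected).
import torch
import torch.nn as nn

reward_model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Pretend embeddings of (prompt, response) pairs: raters preferred "chosen" over "rejected".
chosen = torch.randn(256, 128)
rejected = torch.randn(256, 128)

for _ in range(100):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Push preferred responses to score higher than rejected ones.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model is then used to steer the chatbot (e.g. with PPO),
# rewarding the kinds of responses that human raters tended to prefer.
```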

Got it.

You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially, obey these principles.

Don't be offensive. Don't stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules.
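
A generic example of that rule-giving step is a system prompt prepended to every conversation. The sketch below uses the OpenAI Python client purely as a familiar stand-in for a chat API; the rules, model name and wording are assumptions, and this is not how Gemini is actually configured.

```python
# Prepend a fixed set of behavioral rules to every request via a system message.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_RULES = (
    "You are a helpful assistant. Do not produce offensive content. "
    "Do not stereotype people based on race, gender, or other protected "
    "characteristics. If a request is historical, prioritize accuracy."
)

def ask(user_prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": SYSTEM_RULES},  # the rules ride along with every request
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

print(ask("Write a short, historically accurate description of the US founding fathers."))
```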

Read the rest here:

The Miseducation of Google's A.I. - The New York Times


Why scientists trust AI too much and what to do about it – Nature.com

AI-run labs have arrived, such as this one in Suzhou, China. Credit: Qilai Shen/Bloomberg/Getty

Scientists of all stripes are embracing artificial intelligence (AI), from developing self-driving laboratories, in which robots and algorithms work together to devise and conduct experiments, to replacing human participants in social-science experiments with bots1.

Many downsides of AI systems have been discussed. For example, generative AI such as ChatGPT tends to make things up, or "hallucinate", and the workings of machine-learning systems are opaque.


In a Perspective article2 published in Nature this week, social scientists say that AI systems pose a further risk: that researchers envision such tools as possessed of superhuman abilities when it comes to objectivity, productivity and understanding complex concepts. The authors argue that this puts researchers in danger of overlooking the tools' limitations, such as the potential to narrow the focus of science or to lure users into thinking they understand a concept better than they actually do.

Scientists planning to use AI must "evaluate these risks now, while AI applications are still nascent, because they will be much more difficult to address if AI tools become deeply embedded in the research pipeline", write co-authors Lisa Messeri, an anthropologist at Yale University in New Haven, Connecticut, and Molly Crockett, a cognitive scientist at Princeton University in New Jersey.

The peer-reviewed article is a timely and disturbing warning about what could be lost if scientists embrace AI systems without thoroughly considering such hazards. It needs to be heeded by researchers and by those who set the direction and scope of research, including funders and journal editors. There are ways to mitigate the risks. But these require that the entire scientific community views AI systems with eyes wide open.


To inform their article, Messeri and Crockett examined around 100 peer-reviewed papers, preprints, conference proceedings and books, published mainly over the past five years. From these, they put together a picture of the ways in which scientists see AI systems as enhancing human capabilities.

In one vision, which they call "AI as Oracle", researchers see AI tools as able to tirelessly read and digest scientific papers, and so survey the scientific literature more exhaustively than people can. In both Oracle and another vision, called "AI as Arbiter", systems are perceived as evaluating scientific findings more objectively than do people, because they are less likely to cherry-pick the literature to support a desired hypothesis or to show favouritism in peer review. In a third vision, "AI as Quant", AI tools seem to surpass the limits of the human mind in analysing vast and complex data sets. In the fourth, "AI as Surrogate", AI tools simulate data that are too difficult or complex to obtain.

Informed by anthropology and cognitive science, Messeri and Crockett predict risks that arise from these visions. One is the "illusion of explanatory depth"3, in which people relying on another person (or, in this case, an algorithm) for knowledge have a tendency to mistake that knowledge for their own and think their understanding is deeper than it actually is.


Another risk is that research becomes skewed towards studying the kinds of thing that AI systems can test; the researchers call this the "illusion of exploratory breadth". For example, in social science, the vision of AI as Surrogate could encourage experiments involving human behaviours that can be simulated by an AI, and discourage those on behaviours that cannot, such as anything that requires being embodied physically.

There's also the "illusion of objectivity", in which researchers see AI systems as representing all possible viewpoints or not having a viewpoint. In fact, these tools reflect only the viewpoints found in the data they have been trained on, and are known to adopt the biases found in those data. "There's a risk that we forget that there are certain questions we just can't answer about human beings using AI tools," says Crockett. The illusion of objectivity is particularly worrying given the benefits of including diverse viewpoints in research.

If you're a scientist planning to use AI, you can reduce these dangers through a number of strategies. One is to map your proposed use to one of the visions, and consider which traps you are most likely to fall into. Another approach is to be deliberate about how you use AI. "Deploying AI tools to save time on something your team already has expertise in is less risky than using them to provide expertise you just don't have," says Crockett.

Journal editors receiving submissions in which use of AI systems has been declared need to consider the risks posed by these visions of AI, too. So should funders reviewing grant applications, and institutions that want their researchers to use AI. Journals and funders should also keep tabs on the balance of research they are publishing and paying for and ensure that, in the face of myriad AI possibilities, their portfolios remain broad in terms of the questions asked, the methods used and the viewpoints encompassed.

All members of the scientific community must view AI use not as inevitable for any particular task, nor as a panacea, but rather as a choice with risks and benefits that must be carefully weighed. For decades, and long before AI was a reality for most people, social scientists have studied AI. Everyone, including researchers of all kinds, must now listen.

See the original post:

Why scientists trust AI too much and what to do about it - Nature.com


The Terrifying A.I. Scam That Uses Your Loved One’s Voice – The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. "I'm always, like, kind of one ear awake," Robin told me, recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. "I'm, like, maybe it's a butt-dial," Robin said. "So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again."

She picked up the phone, and, on the other end, she heard Mona's voice wailing and repeating the words "I can't do it, I can't do it." "I thought she was trying to tell me that some horrible tragic thing had happened," Robin told me. Mona and her husband, Bob, are in their seventies. She's a retired party planner, and he's a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin's first thought was that there had been an accident. Robin's parents also winter in Florida, and she pictured the four of them in a car wreck. "Your brain does weird things in the middle of the night," she said. Robin then heard what sounded like Bob's voice on the phone. (The family members requested that their names be changed to protect their privacy.) "Mona, pass me the phone," Bob's voice said, then, "Get Steve. Get Steve." Robin took this (that they didn't want to tell her while she was alone) as another sign of their seriousness. She shook Steve awake. "I think it's your mom," she told him. "I think she's telling me something terrible happened."

Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. "She was screaming," he recalled. "I thought her whole family was dead." When he took the phone, he heard a relaxed male voice, possibly Southern, on the other end of the line. "You're not gonna call the police," the man said. "You're not gonna tell anybody. I've got a gun to your mom's head, and I'm gonna blow her brains out if you don't do exactly what I say."

Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn't be heard. "You hear this???" Steve texted him. "What should I do?" The colleague wrote back, "Taking notes. Keep talking." The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

"I want to hear her voice," Steve said to the man on the phone.

The man refused. "If you ask me that again, I'm gonna kill her," he said. "Are you fucking crazy?"

"O.K.," Steve said. "What do you want?"

The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. "It was such an insanely small amount of money for a human being," Steve recalled. "But also: I'm obviously gonna pay this." Robin, listening in, reasoned that someone had broken into Steve's parents' home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn't work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

"Put in a pizza emoji," the man said.

After Steve sent the five hundred dollars, the man patched in a female voice (a girlfriend, it seemed) who said that the money had come through, but that it wasn't enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. "Whoa, whoa, whoa," he said. "Baby, I'll call you later." The implication, to Steve, was that the woman didn't know about the hostage situation. "That made it even more real," Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. "I've gotta get my baby mama down here to me," he said. Steve sent the additional sum, and, when it processed, the man hung up.

By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. "You guys did great," the colleague said. He told them to call Bob, since Mona's phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. "Are you at home?" Steve and Robin asked her. "Are you O.K.?"

Mona sounded fine, but she was unsure of what they were talking about. "Yeah, I'm in bed," she replied. "Why?"

Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora's box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine's President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with who he thought were members of his firm's senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one's voice. "We've now passed through the uncanny valley," Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. "I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what's happening."

Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. "Hello, I'm Macintosh," a squat machine announced to a live audience, at an unveiling with Steve Jobs. "It sure is great to get out of that bag." The computer took potshots at Apple's main competitor at the time, saying, "I'd like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can't lift." In 2011, Apple released Siri; inspired by Star Trek's talking computers, the program could interpret precise commands ("Play Steely Dan," say, or "Call Mom") and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

Still, until a few years ago, advances in synthetic voices had plateaued. They weren't entirely convincing. "If I'm trying to create a better version of Siri or G.P.S., what I care about is naturalness," Farid explained. "Does this sound like a human being and not like this creepy half-human, half-robot thing?" Replicating a specific voice is even harder. "Not only do I have to sound human," Farid went on. "I have to sound like you." In recent years, though, the problem began to benefit from more money, more data (importantly, troves of voice recordings online), and breakthroughs in the underlying software used for generating speech. In 2019, this bore fruit: a Toronto-based A.I. company called Dessa cloned the podcaster Joe Rogan's voice. (Rogan responded with awe and acceptance on Instagram, at the time, adding, "The future is gonna be really fucking weird, kids.") But Dessa needed a lot of money and hundreds of hours of Rogan's very available voice to make their product. Their success was a one-off.

In 2022, though, a New York-based company called ElevenLabs unveiled a service that produced impressive clones of virtually any voice quickly; breathing sounds had been incorporated, and more than two dozen languages could be cloned. ElevenLabs's technology is now widely available. "You can just navigate to an app, pay five dollars a month, feed it forty-five seconds of someone's voice, and then clone that voice," Farid told me. The company is now valued at more than a billion dollars, and the rest of Big Tech is chasing closely behind. The designers of Microsoft's Vall-E cloning program, which debuted last year, used sixty thousand hours of English-language audiobook narration from more than seven thousand speakers. Vall-E, which is not available to the public, can reportedly replicate the voice and acoustic environment of a speaker with just a three-second sample.

Voice-cloning technology has undoubtedly improved some lives. The Voice Keeper is among a handful of companies that are now banking the voices of those suffering from voice-depriving diseases like A.L.S., Parkinson's, and throat cancer, so that, later, they can continue speaking with their own voice through text-to-speech software. A South Korean company recently launched what it describes as the first AI memorial service, which allows people to live in the cloud after their deaths and speak to future generations. The company suggests that this can alleviate the pain of the death of your loved ones. The technology has other legal, if less altruistic, applications. Celebrities can use voice-cloning programs to loan their voices to record advertisements and other content: the College Football Hall of Famer Keith Byars, for example, recently let a chicken chain in Ohio use a clone of his voice to take orders. The film industry has also benefitted. Actors in films can now speak other languages: English, say, when a foreign movie is released in the U.S. "That means no more subtitles, and no more dubbing," Farid said. "Everybody can speak whatever language you want." Multiple publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York's mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish, languages he does not speak. (Privacy advocates called this a creepy vanity project.)

But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. "It's simple," Farid explained. "You take thirty or sixty seconds of a kid's voice and log in to ElevenLabs, and pretty soon Grandma's getting a call in Grandson's voice saying, 'Grandma, I'm in trouble, I've been in an accident.'" A financial request is almost always the end game. Farid went on, "And here's the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It's a numbers game." The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they've been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son's office, where he was safely at work.) In January, voters in New Hampshire received a robocall from Joe Biden's voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) "I didn't think about it at the time that it wasn't his real voice," an elderly Democrat in New Hampshire told the Associated Press. "That's how convincing it was."

View post:

The Terrifying A.I. Scam That Uses Your Loved One's Voice - The New Yorker


What you need to know about Nvidia and the AI chip arms race – Marketplace

While Nvidia's share price is down from its peak earlier in the week, its stock has skyrocketed by 262% in the past year, going from almost $242 a share at closing to $875.

The flourishing artificial intelligence industry has accelerated demand for the hardware that underpins AI applications: graphics processing units, a type of computer chip.

Nvidia is the GPU market leader, making GPUs that are used by apps like the AI chatbot ChatGPT and major tech companies like Facebook's parent company, Meta.

Nvidia is part of a group of companies known as "The Magnificent Seven," a reference to the 1960 Western film, that drove 2023's stock market gains. The others in that cohort include Alphabet, Amazon, Apple, Meta, Microsoft and Tesla.

But Nvidia faces competitors eager to take a share of the chip market and businesses that want to lessen their reliance on the company. Intel plans to launch a new AI chip this year, Meta wants to use its own custom chip at its data centers and Google has developed Cloud Tensor Processing Units, which can be used to train AI models.

There are also AI chip startups popping up, which include names like Cerebras, Groq and Tenstorrent, said Matt Bryson, senior vice president of research at Wedbush Securities.

GPUs were originally used in video games to render computer graphics, explained Sachin Sapatnekar, a professor of electrical and computer engineering at the University of Minnesota.

"Eventually, it was found that the kinds of computations that are required for graphics are actually very compatible with what's needed for AI," Sapatnekar said.

Sapatnekar said AI chips can do parallel processing, which means they process a large amount of data and handle a large number of computations at the same time.

In practice, what that means is AI algorithms now have the capability to train on a large number of pictures to figure out how to, say, detect whether an image is of a cat, Sapatnekar explained. When it comes to language, GPUs help AI algorithms train on a large amount of text.

These algorithms can then in turn produce images resembling a cat or language mimicking a human, among other functions.
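
To see what that parallelism means in practice, here is a small sketch (not from the Marketplace piece) that times the same batched matrix multiplication, the core operation in training, on a CPU and, if one is available, on a GPU using PyTorch; the tensor sizes are arbitrary, and any speedup depends entirely on the hardware.

```python
# Time one batched matrix multiplication on CPU and GPU; the GPU processes the batch in parallel.
import time
import torch

batch = torch.randn(256, 512, 512)   # e.g. 256 images' worth of intermediate activations
weights = torch.randn(512, 512)

def timed_matmul(device: str) -> float:
    x = batch.to(device)
    w = weights.to(device)
    if device == "cuda":
        torch.cuda.synchronize()     # make sure timing covers the actual GPU work
    start = time.perf_counter()
    _ = x @ w                        # one call processes the whole batch
    if device == "cuda":
        torch.cuda.synchronize()
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.4f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.4f} s")
```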

Right now, Nvidia is the leading manufacturer of chips for generative AI and it's a very profitable company, explained David Kass, a clinical professor at the University of Maryland's Robert H. Smith School of Business.

Nvidia has 80% control over the entire global GPU semiconductor chip market. In its latest earnings report, Nvidia reported a revenue of $22.1 billion for the fourth quarter of fiscal year 2024, which is up 265% since last year. Its GAAP earnings (earnings based on uniform accounting standards and reporting) per diluted share stood at $4.93, which is up 765% since last year. Its non-GAAP earnings (which exclude irregular circumstances) per diluted share was $5.16, an increase of 486% compared to last year.
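
As a quick sanity check on those growth figures (my arithmetic, not the company's reporting), dividing each reported value by one plus its growth rate recovers the implied year-earlier figures:

```python
# Back-of-the-envelope: value / (1 + growth rate) gives the implied year-earlier figure.
reported = {
    "revenue ($B)": (22.1, 2.65),      # value, growth rate (265% -> 2.65)
    "GAAP EPS ($)": (4.93, 7.65),
    "non-GAAP EPS ($)": (5.16, 4.86),
}

for name, (value, growth) in reported.items():
    year_ago = value / (1 + growth)
    print(f"{name}: {value} now, implying roughly {year_ago:.2f} a year earlier")
# revenue: ~6.05, GAAP EPS: ~0.57, non-GAAP EPS: ~0.88
```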

Another reason Nvidia's share price may have skyrocketed in recent months is because the success of the stock itself is attracting additional investment, Kass said.

Kass explained individuals and institutions may be jumping on the train because they see it leaving the station. "Or, in other words: FOMO," he said.

Bryson of Wedbush Securities pointed out that the company was also able to differentiate itself through the development of CUDA, which Nvidia describes as "a parallel computing platform and programming model."

Nvidia's success doesn't necessarily mean that its GPUs are superior to the competition, Bryson added. But he said the company has built a powerful infrastructure around CUDA.

Nvidia has developed its own CUDA programming language and offers a CUDA toolkit that includes libraries of code for developers.

"Let's say you want to perform a particular operation. You could write the code for the entire operation from scratch. Or you could have specialized code that already is made efficient on the hardware. So Nvidia has these libraries of kind of pre-bundled packages of code," Sapatnekar said.

With Nvidia far ahead of the competition, Bryson said Advanced Micro Devices, or AMD, is trying to stake a position as the second-leading player in the AI chip space. AMD makes both central processing units, competing with the likes of Intel, and GPUs.

AMD's share price has risen by about 143% since last year as demand for AI chips has grown.

Jeffrey Macher, a professor of strategy, economics and policy at Georgetown University's McDonough School of Business, said he questions whether Nvidia will be able to meet all of the rising demand for AI chips on its own.

"It's going to be an industry that's going to see an increased number of competitors," Macher said.

Despite the success of Nvidia and AMD, there are wrinkles in their supply chains. Both rely heavily on Taiwan Semiconductor Manufacturing Co. to make their chips, which will leave them vulnerable if anything goes awry with the company.

Macher said the semiconductor market used to be vertically integrated, meaning the chip designers themselves manufactured these chips. But Nvidia and AMD are fabless companies, which means they're companies that outsource their chip manufacturing.

As we saw during the early stages of the COVID-19 pandemic, supply chain disruptions led to shortages across all kinds of different sectors, Marketplace's Meghan McCarty Carino reported.

TSMC is planning to build chip plants in Arizona, which may help alleviate some of these concerns. But tech publication The Information reported that these chips "will still require assembly in Taiwan."

And TSMC's location carries geopolitical risks. If China invades Taiwan and TSMC becomes a Chinese company, U.S. companies may be reluctant to use TSMC out of fear that the Chinese government will appropriate their designs, Macher said.

Kass said he doesn't see similarities between Nvidia's rising stock and the dot-com bubble in the early 2000s, when many online startups tanked after their share prices reached unrealistic levels thanks to an influx of cash from venture capital firms that were overly optimistic about their potential.

Kass said some of these companies not only failed to make a profit, but weren't even able to pull in any revenue, unlike Nvidia, which is backed by real earnings.

He does think there could be a correction or a point where Nvidia stock will be perceived as overvalued. He explained that the larger your company, the more difficult it is to sustain your rate of growth. Once that growth rate comes down, there could be a sharp sell-off.

But Kass said he doesn't think there will be a sustained and/or a steep downturn for the company.

However, AI's commercial viability is uncertain. Bryson said there are forecasts of how large the AI chip market will become (AMD, for example, suggested that the AI chip market will be worth $400 billion by 2027), but it's hard to validate those numbers.

Bryson compared AI with 4G, the fourth generation of wireless communication. He pointed out that apps like Uber and Instagram were enabled by 4G, and explained that AI is similar in the sense that it's a platform that a future set of applications will be built on.

He said we're not really sure what many of those apps will look like. When they launch, that will help people better assess what the market should be valued at, whether that's $400 billion or $100 billion.

"But I also think that at the end of the day, the reason that companies are spending so much on AI is because it will be the next Android or the next iOS or the next Windows," Bryson said.


Read the original here:

What you need to know about Nvidia and the AI chip arms race - Marketplace
