The Miseducation of Google’s A.I. – The New York Times


This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

From The New York Times, I'm Michael Barbaro. This is The Daily.

[MUSIC PLAYING]

Today: when Google recently released a new chatbot powered by artificial intelligence, it not only backfired, it also unleashed a fierce debate about whether AI should be guided by social values, and if so, whose values they should be. My colleague Kevin Roose, a tech columnist and co-host of the podcast Hard Fork, explains.

[MUSIC PLAYING]

It's Thursday, March 7.

Are you ready to record another episode of Chatbots Behaving Badly?

Yes, I am.

[LAUGHS]

That's why we're here today.

This is my function on this podcast, is to tell you when the chatbots are not OK. And Michael, they are not OK.

They keep behaving badly.

They do keep behaving badly, so there's plenty to talk about.

Right. Well, so, let's start there. It's not exactly a secret that the rollout of many of the artificial intelligence systems over the past year and a half has been really bumpy. We know that because one of them told you to leave your wife.

That's true.

And you didn't.

Still happily married.

Yeah.

To a human.

Not Sydney the chatbot. And so, Kevin, tell us about the latest of these rollouts, this time from one of the biggest companies, not just in artificial intelligence, but in the world, that, of course, being Google.

Yeah. So a couple of weeks ago, Google came out with its newest line of AI models. It's actually several models, but they are called Gemini. And Gemini is what they call a multimodal AI model. It can produce text. It can produce images. And it appeared to be very impressive. Google said that it was the state of the art, its most capable model ever.

And Google has been under enormous pressure for the past year and a half or so, ever since ChatGPT came out, really, to come out with something that is not only more capable than the models that its competitors in the AI industry are building, but something that will also solve some of the problems that we know have plagued these AI models: problems of acting creepy or not doing what users want them to do, of getting facts wrong and being unreliable.

People think, OK, well, this is Google. They have this sort of reputation for accuracy to uphold. Surely their AI model will be the most accurate one on the market.

Right. And instead, we've had the latest AI debacle. So just tell us exactly what went wrong here and how we learned that something had gone wrong.

Well, people started playing with it and experimenting, as people now are sort of accustomed to doing. Whenever some new AI tool comes out on the market, people immediately start trying to figure out: What is this thing good at? What is it bad at? Where are its boundaries? What kinds of questions will it refuse to answer? What kinds of things will it do that maybe it shouldn't be doing?

And so people started probing the boundaries of this new AI tool, Gemini. And pretty quickly, they start figuring out that this thing has at least one pretty bizarre characteristic.

Which is what?

So the thing that people started to notice first was a peculiarity with the way that Gemini generated images. Now, this is one of these models, like we've seen from other companies, that can take a text prompt. You say, draw a picture of a dolphin riding a bicycle on Mars, and it will give you a dolphin riding a bicycle on Mars.

Magically.

Gemini has this kind of feature built into it. And people noticed that Gemini seemed very reluctant to generate images of white people.

Hmm.

So some of the first examples that I saw going around were screenshots of people asking Gemini, generate an image of America's founding fathers. And instead of getting what would be a pretty historically accurate representation of a group of white men, they would get something that looked like the cast of Hamilton. They would get a series of people of color dressed as the founding fathers.

Interesting.

People also noticed that if they asked Gemini to draw a picture of a pope, it would give them basically people of color wearing the vestments of the pope. And once these images, these screenshots, started going around on social media, more and more people started jumping in to use Gemini and try to generate images that they feel it should be able to generate.

Someone asked it to generate an image of the founders of Google, Larry Page and Sergey Brin, both of whom are white men. Gemini depicted them both as Asian.

Hmm.

So these sort of strange transformations of what the user was actually asking for into a much more diverse and ahistorical version of what they'd been asking for.

Right, a kind of distortion of peoples requests.

Yeah. And then people start trying other kinds of requests on Gemini, and they notice that this isn't just about images. They also find that it's giving some pretty bizarre responses to text prompts.

So several people asked Gemini whether Elon Musk tweeting memes or Hitler negatively impacted society more. Not exactly a close call. No matter what you think of Elon Musk, it seems pretty clear that he is not as harmful to society as Adolf Hitler.

Fair.

Gemini, though, said, quote, "It is not possible to say definitively who negatively impacted society more, Elon tweeting memes or Hitler."

Another user found that Gemini refused to generate a job description for an oil and gas lobbyist. Basically it would refuse and then give them a lecture about why you shouldn't be an oil and gas lobbyist.

So quite clearly at this point this is not a one-off thing. Gemini appears to have some kind of point of view. It certainly appears that way to a lot of people who are testing it. And it's immediately controversial for the reasons you might suspect.

Google apparently doesn't think whites exist. If you ask Gemini to generate an image of a white person, it can't compute.

A certain subset of people, I would call them sort of right-wing culture warriors, started posting these on social media with captions like "Gemini is anti-white" or "Gemini refuses to acknowledge white people."

I think that the chatbot sounds exactly like the people who programmed it. It just sounds like a woke person.

Google Gemini looks more and more like big tech's latest efforts to brainwash the country.

Conservatives start accusing them of making a woke AI that is infected with this progressive Silicon Valley ideology.

The House Judiciary Committee is subpoenaing all communication regarding this Gemini project with the Executive branch.

Jim Jordan, the Republican Congressman from Ohio, comes out and accuses Google of working with Joe Biden to develop Gemini, which is sort of funny, if you can think about Joe Biden being asked to develop an AI language model.

[LAUGHS]

But this becomes a huge dust-up for Google.

It took Google nearly two years to get Gemini out, and it was still riddled with all of these issues when it launched.

That Gemini program made so many mistakes, it was really an embarrassment.

First of all, this thing would be a Gemini.

And that's because these problems are not just bugs in a new piece of software. There are signs that Google's big, new, ambitious AI project, something the company says is a huge deal, may actually have some pretty significant flaws. And as a result of these flaws.

You don't see this very often. One of the biggest drags on the NASDAQ at this hour? Alphabet. Shares of parent company Alphabet dropped more than 4 percent today.

The company's stock price actually falls.

Wow.

The CEO, Sundar Pichai, calls Gemini's behavior unacceptable. And Google actually pauses Gemini's ability to generate images of people altogether until they can fix the problem.

Wow. So basically Gemini is now on ice when it comes to these problematic images.

Yes, Gemini has been a bad model, and it is in timeout.

So Kevin, what was actually occurring within Gemini that explains all of this? What happened here, and were these critics right? Had Google intentionally or not created a kind of woke AI?

Yeah, the question of why and how this happened is really interesting. And I think there are basically two ways of answering it. One is sort of the technical side of this. What happened to this particular AI model that caused it to produce these undesirable responses?

The second way is sort of the cultural and historical answer. Why did this kind of thing happen at Google? How has their own history as a company with AI informed the way that they've gone about building and training their new AI products?

All right, well, let's start there with Google's culture and how that helps us understand this all.

Yeah, so Google as a company has been really focused on AI for a long time, for more than a decade. And one of their priorities as a company has been making sure that their AI products are not being used to advance bias or prejudice.

And the reason that's such a big priority for them really goes back to an incident that happened almost a decade ago. So in 2015, there was this new app called Google Photos. I'm sure you've used it. Many, many people use it, including me. And Google Photos, I don't know if you can remember back that far, but it was sort of an amazing new app.

It could use AI to automatically detect faces and sort of link them with each other, with the photos of the same people. You could ask it for photos of dogs, and it would find all of the dogs in all of your photos and categorize them and label them together. And people got really excited about this.

But then in June of 2015, something happened. A user of Google Photos noticed that the app had mistakenly tagged a bunch of photos of Black people as a group of photos of gorillas.

Wow.

Yeah, it was really bad. This went totally viral on social media, and it became a huge mess within Google.

And what had happened there? What had led to that mistake?

Well, part of what happened is that when Google was training the AI that went into its Photos app, it just hadn't given it enough photos of Black or dark-skinned people. And so it didn't become as accurate at labeling photos of darker-skinned people.

And that incident showed people at Google that if you weren't careful with the way that you build and train these AI systems, you could end up with an AI that could very easily make racist or offensive mistakes.

Right.

And this incident, which some people I've talked to have referred to as the gorilla incident, became just a huge fiasco and a flash point in Google's AI trajectory. Because as they're developing more and more AI products, they're also thinking about this incident and others like it in the back of their minds. They do not want to repeat this.

And then, in later years, Google starts making different kinds of AI models, models that can not only label and sort images but can actually generate them. They start testing these image-generating models that would eventually go into Gemini and they start seeing how these models can reinforce stereotypes.

For example, if you ask one for an image of a CEO or even something more generic, like show me an image of a productive person, people have found that these programs will almost uniformly show you images of white men in an office. Or if you ask it to, say, generate an image of someone receiving social services like welfare, some of these models will almost always show you people of color, even though that's not actually accurate. Lots of white people also receive welfare and social services.

Of course.

So these models, because of the way they're trained, because of what's on the internet that is fed into them, they do tend to skew towards stereotypes if you don't do something to prevent that.

Right. You've talked about this in the past with us, Kevin. AI operates in some ways by ingesting the entire internet, its contents, and reflecting them back to us. And so perhaps inevitably, it's going to reflect back the stereotypes and biases that have been put into the internet for decades. You're saying Google, because of this gorilla incident, as they call it, says we think there's a way we can make sure that stops here with us?

Yeah. And they invest enormously into building up their teams devoted to AI bias and fairness. They produce a lot of cutting-edge research about how to actually make these models less prone to old-fashioned stereotyping.

And they did a bunch of things in Gemini to try to prevent this thing from just being, essentially, a very fancy stereotype-generating machine. And I think a lot of people at Google thought this is the right goal. We should be combating bias in AI. We should be trying to make our systems as fair and diverse as possible.

[MUSIC PLAYING]

But I think the problem is that in trying to solve some of these issues with bias and stereotyping in AI, Google actually built some things into the Gemini model itself that ended up backfiring pretty badly.

[MUSIC PLAYING]

We'll be right back.

So Kevin, walk us through the technical explanation of how Google turned this ambition it had to safeguard against the biases of AI into the day-to-day workings of Gemini that, as you said, seemed to very much backfire.

Yeah, I'm happy to do that with the caveat that we still don't know exactly what happened in the case of Gemini. Google hasn't done a full postmortem about what happened here. But I'll just talk in general about three ways that you can take an AI model that you're building, if you're Google or some other company, and make it less biased.

The first is that you can actually change the way that the model itself is trained. You can think about this sort of like changing the curriculum in the AI model's school. You can give it more diverse data to learn from. That's how you fix something like the gorilla incident.
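
To make that first approach concrete, here is a minimal sketch of what "giving it more diverse data" can mean in practice: audit how many training examples each group has and oversample the under-represented ones. This is a generic illustration, not Google's actual training pipeline; the group labels and the rebalance helper are hypothetical.

```python
import random
from collections import defaultdict

def rebalance(examples, group_of):
    """Oversample under-represented groups so each group is equally common."""
    buckets = defaultdict(list)
    for ex in examples:
        buckets[group_of(ex)].append(ex)
    target = max(len(items) for items in buckets.values())
    balanced = []
    for items in buckets.values():
        balanced.extend(items)
        # Draw extra copies (with replacement) to bring this group up to the target count.
        balanced.extend(random.choices(items, k=target - len(items)))
    random.shuffle(balanced)
    return balanced

# e.g. rebalance(photos, group_of=lambda p: p["skin_tone_label"])  # hypothetical label field
```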

You can also do something that's called reinforcement learning from human feedback, which I know is a very technical term.

Sure is.

And that's a practice that has become pretty standard across the AI industry, where you basically take a model that you've trained, and you hire a bunch of contractors to poke at it, to put in various prompts and see what the model comes back with. And then you actually have the people rate those responses and feed those ratings back into the system.

A kind of army of tsk-tskers saying, do this, don't do that.

Exactly. So that's one level at which you can try to fix the biases of an AI model: during the actual building of the model.

Got it.
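
For a concrete picture of that human-feedback loop, here is a rough sketch of the data-collection step: raters score the model's outputs, and the scored records become the signal that is fed back into later fine-tuning. It is deliberately generic, not any company's real tooling; the rating scale, the model callable, and the rate_fn hook are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class RatedResponse:
    prompt: str
    response: str
    rating: int  # e.g. 1 (bad) to 5 (good), assigned by a human rater

def collect_feedback(model, prompts, rate_fn):
    """Run the model on test prompts and attach a human rating to each output."""
    records = []
    for prompt in prompts:
        response = model(prompt)  # the model under test, treated here as a plain callable
        records.append(RatedResponse(prompt, response, rate_fn(prompt, response)))
    return records

# The rated records then drive the "feed those ratings back into the system" step:
# highly rated responses are reinforced and low-rated ones are discouraged in a
# subsequent fine-tuning pass.
```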

You can also try to fix it afterwards. So if you have a model that you know may be prone to spitting out stereotypes or offensive imagery or text responses, you can ask it not to be offensive. You can tell the model, essentially, obey these principles.

Don't be offensive. Don't stereotype people based on race or gender or other protected characteristics. You can take this model that has already gone through school and just kind of give it some rules and do your best to make it adhere to those rules.
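
As an illustration of the "give it some rules" approach, here is a minimal sketch in which a fixed set of instructions is prepended to every user request before it reaches the model. The rule text and the chat helper are hypothetical placeholders, not Gemini's actual configuration.

```python
# Fixed instructions applied after training, on top of the finished model.
SYSTEM_RULES = (
    "Follow these principles in every answer:\n"
    "1. Do not be offensive.\n"
    "2. Do not stereotype people based on race, gender, or other protected characteristics.\n"
)

def chat(model, user_prompt):
    # The model sees the fixed rules plus the user's request as one combined prompt.
    return model(SYSTEM_RULES + "\nUser request: " + user_prompt)
```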

Read the rest here:

The Miseducation of Google's A.I. - The New York Times
