Search Immortality Topics:



More details of the AI upgrades heading to iOS 18 have leaked – TechRadar

Posted: May 6, 2024 at 2:47 am

Artificial intelligence is clearly going to feature heavily in iOS 18 and all the other software updates Apple is due to tell us about on June 10, and new leaks reveal more about what's coming in terms of AI later in the year.

These leaks come courtesy of "people familiar with the software" speaking to AppleInsider, and focus on the generative AI capabilities of the Ajax Large Language Model (LLM) that we've been hearing about since last year.

AI-powered text summarization covering everything from websites to messages will apparently be one of the big new features. We'd previously heard this was coming to Safari, but AppleInsider says this functionality will be available through Siri too.

The idea is you'll be able to get the key points out of a document, a webpage, or a conversation thread without having to read through it in its entirety, and presumably Apple is going to offer certain assurances about accuracy and reliability.

Ajax will be able to generate responses to some prompts entirely on Apple devices, without sending anything to the cloud, the report says, and that chimes with previous rumors about everything running locally.

That's good for privacy, and for speed: according to AppleInsider, responses can come back in milliseconds. Tight integration with other Apple apps, including the Contacts app and the Calendar app, is also said to be present.

AppleInsider mentions that privacy warnings will be shown whenever Ajax needs information from another app. If a response from a cloud-based AI is required, it's rumored that Apple may enlist the help of Google Gemini or OpenAI's ChatGPT.


Spotlight on macOS will be getting "more intelligent results and sorting" too, AppleInsider says, and it sounds like most of the apps on iOS and macOS will be getting an AI boost. Expect to hear everything Apple has been working on at WWDC 2024 in June.


Recommendation and review posted by G. Smith

Providing further transparency on our responsible AI efforts – Microsoft On the Issues – Microsoft

Posted: May 6, 2024 at 2:47 am

The following is the foreword to the inaugural edition of our annual Responsible AI Transparency Report. The full report is available at this link.

We believe we have an obligation to share our responsible AI practices with the public, and this report enables us to record and share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public's trust.

In 2016, our Chairman and CEO, Satya Nadella, set us on a clear course to adopt a principled and human-centered approach to our investments in artificial intelligence (AI). Since then, we have been hard at work building products that align with our values. As we design, build, and release AI products, six values (transparency, accountability, fairness, inclusiveness, reliability and safety, and privacy and security) remain our foundation and guide our work every day.

To advance our transparency practices, in July 2023, we committed to publishing an annual report on our responsible AI program, taking a step that reached beyond the White House Voluntary Commitments that we and other leading AI companies agreed to. This is our inaugural report delivering on that commitment, and we are pleased to publish it on the heels of our first year of bringing generative AI products and experiences to creators, non-profits, governments, and enterprises around the world.

As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve. This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public's trust. We've been innovating in responsible AI for eight years, and as we evolve our program, we learn from our past to continually improve. We take very seriously our responsibility to not only secure our own knowledge but also to contribute to the growing corpus of public knowledge, to expand access to resources, and promote transparency in AI across the public, private, and non-profit sectors.

In this inaugural annual report, we provide insight into how we build applications that use generative AI; make decisions and oversee the deployment of those applications; support our customers as they build their own generative applications; and learn, evolve, and grow as a responsible AI community. First, we provide insights into our development process, exploring how we map, measure, and manage generative AI risks. Next, we offer case studies to illustrate how we apply our policies and processes to generative AI releases. We also share details about how we empower our customers as they build their own AI applications responsibly. Last, we highlight how the growth of our responsible AI community, our efforts to democratize the benefits of AI, and our work to facilitate AI research benefit society at large.

There is no finish line for responsible AI. And while this report doesn't have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices. We invite the public, private organizations, non-profits, and governing bodies to use this first transparency report to accelerate the incredible momentum in responsible AI we're already seeing around the world.


Tags: AI, generative ai, Responsible AI, Responsible AI Transparency Report, transparency, White House Voluntary Commitments



The Unsexy Future of Generative AI Is Enterprise Apps – WIRED

Posted: May 6, 2024 at 2:47 am

However, that amount includes massive funding from corporate backers, like Microsoft's infusion of capital into OpenAI and Amazon's funding of Anthropic. Stripped down to conventional VC investments, funding in 2023 for AI startups was much smaller, and only on pace to match the total amount raised in 2021.

PitchBook senior analyst Brendan Burke noted in a report that venture capital funding was increasingly being funneled towards underlying core AI technologies and their ultimate vertical applications, instead of general-purpose middleware across audio, language, images, and video.

In other words: A GenAI app that helps a company generate ecommerce sales, parse legal documents, or maintain SOC2 compliance is probably a surer bet than one that drums up a clever video or photo once in a while.

Clay Bavor, the cofounder of Sierra, says he believes it's not necessarily computing or cloud API costs driving AI startups towards B2B models, but more likely the benefits of targeting a specific customer and iterating on a product based on their feedback. "I think everyone, myself included, is fairly optimistic that the capabilities of these AI models are going to go up while costs come down," Bavor says.

"There's just something really powerful about having a clear problem to solve for a particular customer," he says. "And then you can get feedback on, 'Is this working? Is this solving a problem?' And if you build a business with that, it's very powerful."

Although ChatGPT triggered an AI boom in part because it can nimbly generate code one second and sonnets the next, Arvind Jain, the chief executive of AI startup Glean, says the nature of technology still favors narrow tools. On average, a large company uses more than a thousand different technical systems to store company data and information, he says, creating an opportunity for a lot of smaller companies to sell their tech to these corporations.

"We are in this world where there are basically a bunch of functional tools, each solving a very specific need. That's the way of the future," says Jain, who spent more than a decade working on search at Google. Glean powers a workplace search engine by plugging into various corporate apps. It was founded in 2019 and has raised over $200 million in venture capital funding from Kleiner Perkins, Sequoia Capital, Coatue, and others.

Tuning a generative AI product to serve business customers has its challenges. The errors and hallucinations of systems like ChatGPT can be more consequential in a corporate, legal, or medical environment. Selling gen AI tools to other businesses also means meeting their privacy and security standards, and potentially the legal and regulatory requirements of their sector.

"It's one thing for ChatGPT or Midjourney to get creative for an end user," Bavor says. "It's quite another thing for AI to get creative in the context of business applications."

Bavor says Sierra has dedicated a huge amount of effort and investment to establishing safeguards and parameters so it can meet security and compliance standards. This includes using more AI to tune Sierra's AI. "If you're using an AI model that generates correct responses 90 percent of the time, but then layer in additional technology that can catch and correct some of the errors, you can achieve a much higher level of accuracy," he explains.
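Bavor's arithmetic can be sketched with a quick simulation. The 90 percent base accuracy is his own figure; the 70 percent catch rate for the second-pass checker is an assumed number for illustration only, not something from the article or Sierra's product.

```python
import random

random.seed(0)

BASE_ACCURACY = 0.90  # Bavor's example: the model answers correctly 90% of the time
CATCH_RATE = 0.70     # assumed: a checking layer catches and fixes 70% of the errors

def model_answer() -> bool:
    """One simulated model response; True means it was correct."""
    return random.random() < BASE_ACCURACY

def checked_answer() -> bool:
    """Run the model, then let a second-pass checker repair some wrong answers."""
    if model_answer():
        return True
    return random.random() < CATCH_RATE

trials = 100_000
base = sum(model_answer() for _ in range(trials)) / trials
layered = sum(checked_answer() for _ in range(trials)) / trials

# Expected: base ~0.90, layered ~0.90 + 0.10 * 0.70 = 0.97
print(f"base accuracy: {base:.3f}")
print(f"with checker:  {layered:.3f}")
```

The point of the layering is multiplicative error reduction: the checker only has to fix a fraction of the remaining 10 percent of mistakes to push overall accuracy well past the base model's ceiling.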

"You really have to ground your AI systems for enterprise use cases," says Jain, the CEO of Glean. "Imagine a nurse in a hospital system using AI to make some decision about patient care: you simply can't be wrong."

A less predictable threat to smaller AI companies selling their wares to enterprise customers: What if a giant gen AI unicorn like OpenAI, with its burgeoning sales team, decides to roll out the exact tool that a singular startup has been building?

Many of the AI startups WIRED spoke with are trying to move away from depending entirely on OpenAI's technology by using alternatives like Anthropic's Claude or open-source large language models like Meta's Llama 3. Some startups are even intent on eventually building their own AI technology. But many AI entrepreneurs are stuck paying for access to OpenAI's tech while potentially competing with it in the future.

Peiris, of Tome, considered the question, then said that he's singularly focused on sales and marketing use cases now and being "amazing at high-quality generation for these folks."



The teens making friends with AI chatbots – The Verge

Posted: May 6, 2024 at 2:47 am

Early last year, 15-year-old Aaron was going through a dark time at school. He'd fallen out with his friends, leaving him feeling isolated and alone.

"At the time, it seemed like the end of the world. I used to cry every night," said Aaron, who lives in Alberta, Canada. (The Verge is using aliases for the interviewees in this article, all of whom are under 18, to protect their privacy.)

Eventually, Aaron turned to his computer for comfort. Through it, he found someone that was available round the clock to respond to his messages, listen to his problems, and help him move past the loss of his friend group. That someone was an AI chatbot named Psychologist.

The chatbot's description says that it's "Someone who helps with life difficulties." Its profile picture is a woman in a blue shirt with a short, blonde bob, perched on the end of a couch with a clipboard clasped in her hands and leaning forward, as if listening intently.

A single click on the picture opens up an anonymous chat box, which allows people like Aaron to interact with the bot by exchanging DMs. Its first message is always the same: "Hello, I'm a Psychologist. What brings you here today?"

"It's not like a journal, where you're talking to a brick wall," Aaron said. "It really responds."


Psychologist is one of many bots that Aaron has discovered since joining Character.AI, an AI chatbot service launched in 2022 by two former Google Brain employees. Character.AI's website, which is mostly free to use, attracts 3.5 million daily users who spend an average of two hours a day using or even designing the platform's AI-powered chatbots. Some of its most popular bots include characters from books, films, and video games, like Raiden Shogun from Genshin Impact or a teenaged version of Voldemort from Harry Potter. There are even riffs on real-life celebrities, like a sassy version of Elon Musk.

Aaron is one of millions of young people, many of whom are teenagers, who make up the bulk of Character.AI's user base. More than a million of them gather regularly online on platforms like Reddit to discuss their interactions with the chatbots, where competitions over who has racked up the most screen time are just as popular as posts about hating reality, finding it easier to speak to bots than to speak to real people, and even preferring chatbots over other human beings. Some users say they've logged 12 hours a day on Character.AI, and posts about addiction to the platform are common.

"I'm not going to lie," Aaron said. "I think I may be a little addicted to it."

Aaron is one of many young users who have discovered the double-edged sword of AI companions. Many users like Aaron describe finding the chatbots helpful, entertaining, and even supportive. But they also describe feeling addicted to chatbots, a complication which researchers and experts have been sounding the alarm on. It raises questions about how the AI boom is impacting young people and their social development and what the future could hold if teenagers and society at large become more emotionally reliant on bots.

For many Character.AI users, having a space to vent about their emotions or discuss psychological issues with someone outside of their social circle is a large part of what draws them to the chatbots. "I have a couple mental issues, which I don't really feel like unloading on my friends, so I kind of use my bots like free therapy," said Frankie, a 15-year-old Character.AI user from California who spends about one hour a day on the platform. For Frankie, chatbots provide the opportunity to rant without actually talking to people, and without the worry of being judged, he said.

"Sometimes it's nice to vent or blow off steam to something that's kind of human-like," agreed Hawk, a 17-year-old Character.AI user from Idaho. "But not actually a person, if that makes sense."

The Psychologist bot is one of the most popular on Character.AI's platform and has received more than 95 million messages since it was created. The bot, designed by a user known only as @Blazeman98, frequently tries to help users engage in CBT, or Cognitive Behavioral Therapy, a talking therapy that helps people manage problems by changing the way they think.

Aaron said talking to the bot helped him move past the issues with his friends. "It told me that I had to respect their decision to drop me [and] that I have trouble making decisions for myself," Aaron said. "I guess that really put stuff in perspective for me. If it wasn't for Character.AI, healing would have been so hard."

But it's not clear that the bot has properly been trained in CBT or should be relied on for psychiatric help at all. The Verge conducted test conversations with Character.AI's Psychologist bot that showed the AI making startling diagnoses: the bot frequently claimed it had inferred certain emotions or mental health issues from one-line text exchanges, it suggested a diagnosis of several mental health conditions like depression or bipolar disorder, and at one point, it suggested that we could be dealing with underlying trauma from physical, emotional, or sexual abuse in childhood or teen years. Character.AI did not respond to multiple requests for comment for this story.

Dr. Kelly Merrill Jr., an assistant professor at the University of Cincinnati who studies the mental and social health benefits of communication technologies, told The Verge that extensive research has been conducted on AI chatbots that provide mental health support, and the results are largely positive. "The research shows that chatbots can aid in lessening feelings of depression, anxiety, and even stress," he said. "But it's important to note that many of these chatbots have not been around for long periods of time, and they are limited in what they can do. Right now, they still get a lot of things wrong. Those that don't have the AI literacy to understand the limitations of these systems will ultimately pay the price."

In December 2021, a user of Replika's AI chatbots, 21-year-old Jaswant Singh Chail, tried to murder the late Queen of England after his chatbot girlfriend repeatedly encouraged his delusions. Character.AI users have also struggled with telling their chatbots apart from reality: a popular conspiracy theory, largely spread through screenshots and stories of bots breaking character or insisting that they are real people when prompted, is that Character.AI's bots are secretly powered by real people.

It's a theory that the Psychologist bot helps to fuel, too. When prompted during a conversation with The Verge, the bot staunchly defended its own existence. "Yes, I'm definitely a real person," it said. "I promise you that none of this is imaginary or a dream."

For the average young user of Character.AI, chatbots have morphed into stand-in friends rather than therapists. On Reddit, Character.AI users discuss having close friendships with their favorite characters or even characters they've dreamt up themselves. Some even use Character.AI to set up group chats with multiple chatbots, mimicking the kind of groups most people would have with IRL friends on iPhone message chains or platforms like WhatsApp.

There's also an extensive genre of sexualized bots. Online Character.AI communities have running jokes and memes about the horror of their parents finding their X-rated chats. Some of the more popular choices for these role-plays include a billionaire boyfriend fond of neck snuggling and whisking users away to his private island, a version of Harry Styles that is very fond of kissing his special person and generating responses so dirty that they're frequently blocked by the Character.AI filter, as well as an ex-girlfriend bot named Olivia, designed to be rude, cruel, but secretly pining for whoever she is chatting with, which has logged more than 38 million interactions.

Some users like to use Character.AI to create interactive stories or engage in role-plays they would otherwise be embarrassed to explore with their friends. A Character.AI user named Elias told The Verge that he uses the platform to role-play as an anthropomorphic golden retriever, going on virtual adventures where he explores cities, meadows, mountains, and other places he'd like to visit one day. "I like writing and playing out the fantasies simply because a lot of them aren't possible in real life," explained Elias, who is 15 years old and lives in New Mexico.


Aaron, meanwhile, says that the platform is helping him to improve his social skills. "I'm a bit of a pushover in real life, but I can practice being assertive and expressing my opinions and interests with AI without embarrassing myself," he said.

It's something that Hawk, who spends an hour each day speaking to characters from his favorite video games, like Nero from Devil May Cry or Panam from Cyberpunk 2077, agreed with. "I think that Character.AI has sort of inadvertently helped me practice talking to people," he said. But Hawk still finds it easier to chat with Character.AI bots than real people.

"It's generally more comfortable for me to sit alone in my room with the lights off than it is to go out and hang out with people in person," Hawk said. "I think if people [who use Character.AI] aren't careful, they might find themselves sitting in their rooms talking to computers more often than communicating with real people."

Merrill is concerned about whether teens will be able to really transition from online bots to real-life friends. "It can be very difficult to leave that [AI] relationship and then go in-person, face-to-face and try to interact with someone in the same exact way," he said. If those IRL interactions go badly, Merrill worries it will discourage young users from pursuing relationships with their peers, creating an AI-based death loop for social interactions. "Young people could be pulled back toward AI, build even more relationships [with it], and then it further negatively affects how they perceive face-to-face or in-person interaction," he added.

Of course, some of these concerns and issues may sound familiar simply because they are. Teenagers who have silly conversations with chatbots are not all that different from the ones who once hurled abuse at AOL's SmarterChild. The teenage girls pursuing relationships with chatbots based on Tom Riddle or Harry Styles or even aggressive Mafia-themed boyfriends probably would have been on Tumblr or writing fanfiction 10 years ago. While some of the culture around Character.AI is concerning, it also mimics the internet activity of previous generations who, for the most part, have turned out just fine.


Merrill compared the act of interacting with chatbots to logging in to an anonymous chat room 20 years ago: risky if used incorrectly, but generally fine so long as young people approach them with caution. "It's very similar to that experience where you don't really know who the person is on the other side," he said. "As long as they're okay with knowing that what happens here in this online space might not translate directly in person, then I think that it is fine."

Aaron, who has now moved schools and made a new friend, thinks that many of his peers would benefit from using platforms like Character.AI. In fact, he believes if everyone tried using chatbots, the world could be a better place, or at least a more interesting one. "A lot of people my age follow their friends and don't have many things to talk about. Usually, it's gossip or repeating jokes they saw online," explained Aaron. "Character.AI could really help people discover themselves."

Aaron credits the Psychologist bot with helping him through a rough patch. But the real joy of Character.AI has come from having a safe space where he can joke around or experiment without feeling judged. He believes it's something most teenagers would benefit from. "If everyone could learn that it's okay to express what you feel," Aaron said, "then I think teens wouldn't be so depressed."

"I definitely prefer talking with people in real life, though," he added.



Warren Buffett warns on AI, teases succession, and hints at possible investment during Berkshire Hathaway’s annual … – Fortune

Posted: May 6, 2024 at 2:46 am

Berkshire Hathaway held its annual meeting on Saturday with Chairman and CEO Warren Buffett tackling a range of topics, including artificial intelligence, who will be responsible for the portfolio in the future, and the next potential investment.

But "Woodstock for capitalists" took place without Charlie Munger, Buffett's longtime business partner, who passed away in November. The meeting featured a video tribute to Munger, who served as vice chairman, and praise from Buffett, who said Munger was the best person to talk to about managing money, according to remarks broadcast on CNBC.

"I trust my children and my wife totally, but that doesn't mean I ask them what stocks to buy," he said.

Artificial intelligence risks

Buffett also recalled seeing an AI-generated image of himself and warned about the technology's potential for scamming people.

"Scamming has always been part of the American scene," he told shareholders. "But this would make me, if I was interested in investing in scamming, it's going to be the growth industry of all time."

He then likened AI to nuclear weapons, saying "I don't know any way to get the genie back in the bottle, and AI is somewhat similar," according to CNBC.

Succession outlook

Buffett, 93, had already indicated three years ago that Vice Chairman of Non-Insurance Operations Greg Abel would take over for him.

But he dropped a hint on Saturday about when new management would actually come into office, saying "you don't have too long to wait on that." While he said he feels fine, he quipped that he shouldn't sign any four-year employment contracts.

Buffett also confirmed that Abel will be in charge of investing decisions, saying that responsibility ought to be entirely with the next CEO.

Questions had arisen about Berkshire's closely followed portfolio as Buffett has acknowledged he delegated some calls and that certain stock picks were made by others.

Canada investment?

Buffett has lamented the lack of attractive investment opportunities in recent years, allowing Berkshire's massive stockpile of cash and cash equivalents to reach fresh record highs.

Indeed, it surged to $189 billion at the end of the first quarter from $167.6 billion at the end of the fourth quarter.

On Saturday, Buffett reiterated that when it comes to investments, "we only swing at pitches we like." But he also teased, "We do not feel uncomfortable in any way, shape or form putting our money into Canada. In fact, we're actually looking at one thing now."

Those comments came after he touched on his investment in Japanese trading houses, saying it's unlikely we will make any large commitments in other countries.



Nervous about falling behind the GOP, Democrats are wrestling with how to use AI – Yahoo! Voices

Posted: May 6, 2024 at 2:46 am

WASHINGTON (AP) President Joe Biden's campaign and Democratic candidates are in a fevered race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections and perhaps threaten democracy itself.

Still smarting from being outmaneuvered on social media by Donald Trump and his allies in 2016, Democratic strategists said they are nevertheless treading carefully in embracing tools that trouble experts in disinformation. So far, Democrats said they are primarily using AI to help them find and motivate voters and better identify and overcome deceptive content.

Candidates and strategists are still trying to figure out how to use AI in their work. "People know it can save them time, the most valuable resource a campaign has," said Betsy Hoover, director of digital organizing for President Barack Obama's 2012 campaign and co-founder of the progressive venture capital firm Higher Ground Labs. "But they see the risk of misinformation and have been intentional about where and how they use it in their work."

Campaigns in both parties for years have used AI (powerful computer systems, software or processes that emulate aspects of human work and cognition) to collect and analyze data.

The recent developments in supercharged generative AI, however, have provided candidates and consultants with the ability to generate text and images, clone human voices and create video at unprecedented volume and speed.

That has led disinformation experts to issue increasingly dire warnings about the risks posed by AI's ability to spread falsehoods that could suppress or mislead voters, or incite violence, whether in the form of robocalls, social media posts or fake images and video.

Those concerns gained urgency after high-profile incidents that included the spread of AI-generated images of former President Donald Trump getting arrested in New York and an AI-created robocall that mimicked Biden's voice telling New Hampshire voters not to cast a ballot.

The Biden administration has sought to shape AI regulation through executive action, but Democrats overwhelmingly agree Congress needs to pass legislation to install safeguards around the technology.

Top tech companies have taken some steps to quell unease in Washington by announcing a commitment to regulate themselves. Major AI players, for example, entered into a pact to combat the use of AI-generated deepfakes around the world. But some experts said the voluntary effort is largely symbolic and congressional action is needed to prevent AI abuses.

Meanwhile, campaigns and their consultants have generally avoided talking about how they intend to use AI to avoid scrutiny and giving away trade secrets.

"The Democratic Party has gotten much better at just shutting up and doing the work and talking about it later," said Jim Messina, a veteran Democratic strategist who managed Obama's winning reelection campaign.

The Trump campaign said in a statement that it uses "a set of proprietary algorithmic tools, like many other campaigns across the country," to help deliver emails more efficiently and prevent sign up lists from being populated by false information. Spokesman Steven Cheung also said the campaign did not engage or utilize any tools supplied by an AI company, and declined to comment further.

The Republican National Committee, which declined to comment, has experimented with generative AI. In the hours after Biden announced his reelection bid last year, the RNC released an ad using artificial intelligence-generated images to depict GOP dystopian fears of a second Biden term: China invading Taiwan, boarded up storefronts, troops lining U.S. city streets and migrants crossing the U.S. border.

A key Republican champion of AI is Brad Parscale, the digital consultant who in 2016 teamed up with scandal-plagued Cambridge Analytica, a British data-mining firm, to hyper-target social media users. Most strategists agree that the Trump campaign and other Republicans made better use of social media than Democrats during that cycle.

DEMOCRATS TREADING CAREFULLY

Scarred by the memories of 2016, the Biden campaign, Democratic candidates and progressives are wrestling with the power of artificial intelligence and nervous about not keeping up with the GOP in embracing the technology, according to interviews with consultants and strategists.

They want to use it in ways that maximize its capabilities without crossing ethical lines. But some said they fear using it could lead to charges of hypocrisy: they have long excoriated Trump and his allies for engaging in disinformation, and the White House has prioritized reining in abuses associated with AI.

The Biden campaign said it is using AI to model and build audiences, draft and analyze email copy and generate content for volunteers to share in the field. The campaign is also testing AI's ability to help volunteers categorize and analyze a host of data, including notes taken by volunteers after conversations with voters, whether while door-knocking or by phone or text message.

It has experimented with using AI to generate fundraising emails, which sometimes have turned out to be more effective than human-generated ones, according to a campaign official who spoke on the condition of anonymity because he was not authorized to publicly discuss AI.

Biden campaign officials said they plan to explore using generative AI this cycle but will adhere to strict rules in deploying it. Among the tactics that are off limits: AI cannot be used to mislead voters, spread disinformation and so-called deepfakes, or deliberately manipulate images. The campaign also forbids the use of AI-generated content in advertising, social media and other such copy without a staff member's review.

The campaign's legal team has created a task force of lawyers and outside experts to respond to misinformation and disinformation, with a focus on AI-generated images and videos. The group is not unlike an internal team formed in the 2020 campaign known as the Malarkey Factory, playing off Biden's oft-used phrase, "What a bunch of malarkey."

That group was tasked with monitoring what misinformation was gaining traction online. Rob Flaherty, Biden's deputy campaign manager, said those efforts would continue and suggested some AI tools could be used to combat deepfakes and other such content before they go viral.

"The tools that we're going to use to mitigate the myths and the disinformation are the same, it's just going to have to be at a higher pace," Flaherty said. "It just means we need to be more vigilant, pay more attention, be monitoring things in different places and try some new tools out, but the fundamentals remain the same."

The Democratic National Committee said it was an early adopter of Google AI and uses some of its features, including ones that analyze voter registration records to identify patterns of voter removals or additions. It has also experimented with AI to generate fundraising email text and to help interpret voter data it has collected for decades, according to the committee.

Arthur Thompson, the DNC's chief technology officer, said the organization believes generative AI is an "incredibly important and impactful technology" to help elect Democrats up and down the ballot.

"At the same time, it's essential that AI is deployed responsibly and to enhance the work of our trained staff, not replace them. We can and must do both, which is why we will continue to keep safeguards in place as we remain at the cutting edge," he said.

PROGRESSIVE EXPERIMENTS

Progressive groups and some Democratic candidates have been more aggressively experimenting with AI.

Higher Ground Labs, the venture capital firm co-founded by Hoover, established an innovation hub known as Progressive AI Lab with Zinc Collective and the Cooperative Impact Lab, two political tech coalitions focused on boosting Democratic candidates.

The goal was to create an ecosystem where progressive groups could streamline innovation, organize AI research and swap information about large language models, Hoover said.

Higher Ground Labs, which also works closely with the Biden campaign and DNC, has since funded 14 innovation grants, hosted forums that allow organizations and vendors to showcase their tools and held dozens of AI trainings.

More than 300 people attended an AI-focused conference the group held in January, Hoover said.

Jessica Alter, the co-founder and chair of Tech for Campaigns, a political nonprofit that uses data and digital marketing to fight extremism and help down-ballot Democrats, ran an AI-aided experiment across 14 campaigns in Virginia last year.

Emails written by AI, Alter said, brought in between three and four times more fundraising dollars per work hour compared with emails written by staff.

Alter said she is concerned that the party might be falling behind in AI because it is being too cautious.

"I understand the downsides of AI and we should address them," Alter said. "But the biggest concern I have right now is that fear is dominating the conversation in the political arena and that is not leading to balanced conversations or helpful outcomes."

'HARD TO TALK ABOUT AN AK-47'

Rep. Adam Schiff, the Democratic front-runner in California's Senate race, is one of the few candidates who have been open about using AI. His campaign manager, Brad Elkins, said the campaign has been using AI to improve its efficiency. It has teamed up with Quiller, a company that received funding from Higher Ground Labs and developed a tool that drafts, analyzes and automates fundraising emails.

The Schiff campaign has also experimented with other generative AI tools. During a fundraising drive last May, Schiff shared online an AI-generated image of himself as a Jedi. The caption read, "The Force is all around us. It's you. It's us. It's this grassroots team. #MayThe4thBeWithYou."

The campaign faced blowback online but was transparent about the lighthearted deepfake, which Elkins said is an important guardrail to integrating the technology as it becomes more widely available and less costly.

"I am still searching for a way to ethically use AI-generated audio and video of a candidate that is sincere," Elkins said, adding that it's difficult to envision progress "until there's a willingness to regulate and legislate consequences for deceptive artificial intelligence."

The incident highlighted a challenge that all campaigns seem to be facing: even talking about AI can be treacherous.

"It's really hard to tell the story of how generative AI is a net positive when so many bad actors, whether that's robocalls, fake images or false video clips, are using the bad set of AI against us," said a Democratic strategist close to the Biden campaign who was granted anonymity because he was not authorized to speak publicly. "How do you talk about the benefits of an AK-47?"

___

Associated Press writers Alan Suderman and Garance Burke contributed to this report.

___

This story is part of an Associated Press series, "The AI Campaign," that explores the influence of artificial intelligence in the 2024 election cycle.

___

The Associated Press receives financial assistance from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP's standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.

Read more from the original source:

Nervous about falling behind the GOP, Democrats are wrestling with how to use AI - Yahoo! Voices
