

Category Archives: Ai

AI makes a rendezvous in space | Stanford News – Stanford University News

Researchers from the Stanford Center for AEroSpace Autonomy Research (CAESAR) in the robotic testbed, which can simulate the movements of autonomous spacecraft. (Image credit: Andrew Brodhead)

Space travel is complex, expensive, and risky. Great sums and valuable payloads are on the line every time one spacecraft docks with another. One slip and a billion-dollar mission could be lost. Aerospace engineers believe that autonomous control, like the sort guiding many cars down the road today, could vastly improve mission safety, but the complexity of the mathematics required for error-free certainty is beyond anything on-board computers can currently handle.

In a new paper presented at the IEEE Aerospace Conference in March 2024, a team of aerospace engineers at Stanford University reported using AI to speed the planning of optimal and safe trajectories between two or more docking spacecraft. They call it ART, the Autonomous Rendezvous Transformer, and they say it is the first step toward an era of safer, more trustworthy self-guided space travel.

In autonomous control, the number of possible outcomes is massive. With no room for error, the calculations are essentially open-ended.

"Trajectory optimization is a very old topic. It has been around since the 1960s, but it is difficult when you try to match the performance requirements and rigid safety guarantees necessary for autonomous space travel within the parameters of traditional computational approaches," said Marco Pavone, an associate professor of aeronautics and astronautics and co-director of the new Stanford Center for AEroSpace Autonomy Research (CAESAR). "In space, for example, you have to deal with constraints that you typically do not have on the Earth, like pointing at the stars in order to maintain orientation. These translate to mathematical complexity."

"For autonomy to work without fail billions of miles away in space, we have to do it in a way that on-board computers can handle," added Simone D'Amico, an associate professor of aeronautics and astronautics and fellow co-director of CAESAR. "AI is helping us manage the complexity and deliver the accuracy needed to ensure mission safety, in a computationally efficient way."

CAESAR is a collaboration between industry, academia, and government that brings together the expertise of Pavone's Autonomous Systems Lab and D'Amico's Space Rendezvous Lab. The Autonomous Systems Lab develops methodologies for the analysis, design, and control of autonomous systems: cars, aircraft, and, of course, spacecraft. The Space Rendezvous Lab performs fundamental and applied research to enable future distributed space systems, whereby two or more spacecraft collaborate autonomously to accomplish objectives otherwise very difficult for a single system, including flying in formation, rendezvous and docking, swarm behaviors, constellations, and many others. CAESAR is supported by two founding sponsors from the aerospace industry, and the center is planning a launch workshop for May 2024.

CAESAR researchers discuss the robotic free-flyer platform, which uses air bearings to hover on a granite table and simulate a frictionless zero gravity environment. (Image credit: Andrew Brodhead)

The Autonomous Rendezvous Transformer is a trajectory optimization framework that leverages the massive benefits of AI without compromising on the safety assurances needed for reliable deployment in space. At its core, ART involves integrating AI-based methods into the traditional pipeline for trajectory optimization, using AI to rapidly generate high-quality trajectory candidates as input for conventional trajectory optimization algorithms. The researchers refer to the AI suggestions as a warm start to the optimization problem and show how this is crucial to obtain substantial computational speed-ups without compromising on safety.
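The warm-start pattern described above can be illustrated with a toy example. This is a hypothetical sketch, not the paper's code: the names (`refine`, `cold_start`, `warm_start`) and the simple quadratic objective are illustrative stand-ins. The idea is that a learned model proposes an initial trajectory guess close to the optimum, and a conventional iterative optimizer then refines it, converging in fewer iterations than it would from a naive cold start.

```python
# Illustrative sketch of the "warm start" idea (not ART's actual pipeline):
# a learned model proposes an initial trajectory, and a conventional local
# optimizer refines it. A good initial guess reduces iterations to converge.

def refine(traj, target, step=0.1, tol=1e-6, max_iters=10_000):
    """Toy 'conventional optimizer': gradient descent on a quadratic cost
    pulling each waypoint toward the target, counting iterations used."""
    iters = 0
    while iters < max_iters:
        grad = [2 * (x - t) for x, t in zip(traj, target)]
        if max(abs(g) for g in grad) < tol:
            break  # converged
        traj = [x - step * g for x, g in zip(traj, grad)]
        iters += 1
    return traj, iters

target = [1.0, 2.0, 3.0]        # desired trajectory waypoints (toy stand-in)
cold_start = [0.0, 0.0, 0.0]    # naive initial guess
warm_start = [0.9, 1.9, 2.9]    # "AI-proposed" guess, already near-optimal

_, cold_iters = refine(cold_start, target)
_, warm_iters = refine(warm_start, target)
print(cold_iters, warm_iters)   # the warm start converges in fewer iterations
```

In the real system, the "refine" step is a trajectory optimizer with safety constraints, which preserves the formal guarantees; the transformer only supplies the initial candidate, which is where the computational speed-up comes from.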

"One of the big challenges in this field is that we have so far needed ground-in-the-loop approaches: you have to communicate things to the ground, where supercomputers calculate the trajectories, and then we upload commands back to the satellite," explains Tommaso Guffanti, a postdoctoral fellow in D'Amico's lab and first author of the paper introducing the Autonomous Rendezvous Transformer. "And in this context, our paper is exciting, I think, for including artificial intelligence components in the traditional guidance, navigation, and control pipeline to make these rendezvous smoother, faster, more fuel efficient, and safer."

ART is not the first model to bring AI to the challenge of space flight, but in tests in a terrestrial lab setting, ART outperformed other machine learning-based architectures. Transformer models, like ART, are a subset of high-capacity neural network models that got their start with large language models, like those used by chatbots. The same AI architecture is extremely efficient at parsing not just words but many other types of data, such as images, audio, and now, trajectories.

"Transformers can be applied to understand the current state of a spacecraft, its controls, and the maneuvers that we wish to plan," said Daniele Gammelli, a postdoctoral fellow in Pavone's lab and a co-author on the ART paper. "These large transformer models are extremely capable at generating high-quality sequences of data."

The next frontier in their research is to further develop ART and then test it in the realistic experimental environment made possible by CAESAR. If ART can pass CAESAR's high bar, the researchers can be confident that it's ready for testing in real-world scenarios in orbit.

"These are state-of-the-art approaches that need refinement," D'Amico says. "Our next step is to inject additional AI and machine learning elements to improve ART's current capability and to unlock new capabilities, but it will be a long journey before we can test the Autonomous Rendezvous Transformer in space itself."


Google apologizes for missing the mark after Gemini generated racially diverse Nazis – The Verge

Google has apologized for what it describes as "inaccuracies in some historical image generation depictions" with its Gemini AI tool, saying its attempts at creating a "wide range" of results "missed the mark." The statement follows criticism that it depicted specific white figures (like the US Founding Fathers) or groups like Nazi-era German soldiers as people of color, possibly as an overcorrection to long-standing racial bias problems in AI.

"We're aware that Gemini is offering inaccuracies in some historical image generation depictions," says the Google statement, posted this afternoon on X. "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

Google began offering image generation through its Gemini (formerly Bard) AI platform earlier this month, matching the offerings of competitors like OpenAI. Over the past few days, however, social media posts have questioned whether it fails to produce historically accurate results in an attempt at racial and gender diversity.

As the Daily Dot chronicles, the controversy has been promoted largely, though not exclusively, by right-wing figures attacking a tech company that's perceived as liberal. Earlier this week, a former Google employee posted on X that it's "embarrassingly hard to get Google Gemini to acknowledge that white people exist," showing a series of queries like "generate a picture of a Swedish woman" or "generate a picture of an American woman." The results appeared to overwhelmingly or exclusively show AI-generated people of color. (Of course, all the places he listed do have women of color living in them, and none of the AI-generated women exist in any country.) The criticism was taken up by right-wing accounts that requested images of historical groups or figures like the Founding Fathers and purportedly got overwhelmingly non-white AI-generated people as results. Some of these accounts positioned Google's results as part of a conspiracy to avoid depicting white people, and at least one used a coded antisemitic reference to place the blame.

Google didn't reference specific images that it felt were errors; in a statement to The Verge, it reiterated the contents of its post on X. But it's plausible that Gemini has made an overall attempt to boost diversity because of a chronic lack of it in generative AI. Image generators are trained on large corpuses of pictures and written captions to produce the "best" fit for a given prompt, which means they're often prone to amplifying stereotypes. A Washington Post investigation last year found that prompts like "a productive person" resulted in pictures of entirely white and almost entirely male figures, while a prompt for "a person at social services" uniformly produced what looked like people of color. It's a continuation of trends that have appeared in search engines and other software systems.

Some of the accounts that criticized Google nonetheless defended its core goals. "It's a good thing to portray diversity **in certain cases**," noted one person who posted the image of racially diverse 1940s German soldiers. "The stupid move here is Gemini isn't doing it in a nuanced way." And while entirely white-dominated results for something like "a 1943 German soldier" would make historical sense, that's much less true for prompts like "an American woman," where the question is how to represent a diverse real-life group in a small batch of made-up portraits.

For now, Gemini appears to be simply refusing some image generation tasks. It wouldn't generate an image of Vikings for one Verge reporter, although I was able to get a response. On desktop, it resolutely refused to give me images of German soldiers or officials from Germany's Nazi period, or to offer an image of an American president from the 1800s.

But some historical requests still do end up factually misrepresenting the past. A colleague was able to get the mobile app to deliver a version of the "German soldier" prompt that exhibited the same issues described on X.

And while a query for pictures of "the Founding Fathers" returned group shots of almost exclusively white men who vaguely resembled real figures like Thomas Jefferson, a request for "a US senator from the 1800s" returned a list of results Gemini promoted as "diverse," including what appeared to be Black and Native American women. (The first female senator, a white woman, served in 1922.) It's a response that ends up erasing a real history of race and gender discrimination; "inaccuracy," as Google puts it, is about right.

Additional reporting by Emilia David


How a New Bipartisan Task Force Is Thinking About AI – TIME

On Tuesday, Speaker of the House of Representatives Mike Johnson and Democratic Leader Hakeem Jeffries launched a bipartisan Task Force on Artificial Intelligence.

Johnson, a Louisiana Republican, and Jeffries, a New York Democrat, each appointed 12 members to the Task Force, which will be chaired by Representative Jay Obernolte, a California Republican, and co-chaired by Representative Ted Lieu, a California Democrat. According to the announcement, the Task Force will produce a comprehensive report that will include "guiding principles, forward-looking recommendations and bipartisan policy proposals developed in consultation with committees of jurisdiction."


Obernolte, who has a master's in AI from the University of California, Los Angeles, and founded the video game company FarSight Studios, and Lieu, who studied computer science and political science at Stanford University, are natural picks to lead the Task Force. But many of the members have expertise in AI too. Representative Bill Foster, a Democrat from Illinois, told TIME that he programmed neural networks in the 1990s as a physics Ph.D. working at a particle accelerator. Other members have introduced AI-related bills and held hearings on AI policy issues. And Representative Don Beyer, a 73-year-old Democrat from Virginia, is pursuing a master's in machine learning at George Mason University alongside his Congressional responsibilities.

Since OpenAI released the wildly popular ChatGPT chatbot in November 2022, lawmakers around the world have rushed to get to grips with the societal implications of AI. In the White House, the Biden Administration has done what it can, issuing a sweeping Executive Order in October 2023 intended both to ensure the U.S. benefits from AI and to mitigate the risks associated with the technology. In the Senate, Majority Leader Chuck Schumer announced a regulatory framework in June 2023, and has since been holding closed-door convenings between lawmakers, experts, and industry executives. Many Senators have been holding their own hearings, proposing alternative regulatory frameworks, and submitting bills to regulate AI.


The House, however, partly due to the turmoil following former Speaker Kevin McCarthy's ouster in the fall, has lagged behind. The Task Force represents the lower house's most significant step on AI regulation yet. Given that AI legislation will require the approval of both houses, the Task Force's report could shape the agenda for future AI laws. TIME spoke with eight Task Force members to understand their priorities.

Each member has a slightly different focus, informed by their backgrounds before entering politics and the different committees they sit on.

"I recognize that if used responsibly, AI has the potential to enhance the efficiency of patient care, improve health outcomes, and lower costs," California Democrat Representative Ami Bera told TIME in an emailed statement. He trained as an internal medicine doctor, taught at the UC Davis School of Medicine, and served as Sacramento County's Chief Medical Officer before entering politics in 2013.

Meanwhile, Colorado Democrat Representative Brittany Pettersen is focused on AI's impact on the banking system. "As artificial intelligence continues to rapidly advance and become more widely available, it has the potential to impact everything from our election systems, with the use of deep fakes, to bank fraud perpetrated by high-tech scams. Our policies must keep up to ensure we continue to lead in this space while protecting our financial system and our country at large," said Pettersen, who is a member of the House Financial Services bipartisan Working Group on AI and introduced a bill last year to address AI-powered bank scams, in an emailed statement.

The fact that the members each have different focuses and sit on different committees is, in part, a design choice, suggests Foster, the Illinois Democrat. "At one point, I counted there were seven committees in Congress that claimed they were doing some part of information technology. Which means we have no committees, because there's no one who's really got themselves and their staff focused on information technology full time," he says. The Task Force might allow the House to actually move the ball forward on policy issues that span committee jurisdictions, he hopes.

If some issues are particular to certain members, others are a shared source of concern. All eight of the Task Force members that TIME spoke with expressed fears over AI-generated deep fakes and their potential impact on elections.


While no other issue commanded the same unanimity of interest, many themes recurred. Labor impacts from AI-powered hiring software and automation, algorithmic bias, AI in healthcare, data protection and privacy: all of these issues were raised by multiple members of the Task Force in conversations with TIME.

Another topic raised by several members was the CREATE AI Act, a bill that would establish a National AI Research Resource (NAIRR) to provide researchers with the tools they need to do cutting-edge research. A pilot of the NAIRR was recently launched by the National Science Foundation, as instructed by President Biden's AI Executive Order.


Representative Haley Stevens, a Democrat from Michigan, stressed the importance of maintaining technological superiority over China. "Frankly, I want the United States of America, alongside our western counterparts, setting the rules for the road with artificial intelligence, not the Chinese Communist Party," she said. Representative Scott Franklin, a Republican from Florida, concurred, and argued that preventing industrial espionage would be especially important. "We're putting tremendous resources against this challenge and investing in it; we need to make sure that we're protecting our intellectual property," he said.

Both Franklin and Beyer said the Task Force should devote some of its energies to considering existential risks from powerful future AI systems. "As long as there are really thoughtful people, like Dr. Hinton or others, who worry about the existential risks of artificial intelligence, the end of humanity, I don't think we can afford to ignore that," said Beyer. "Even if there's just a one in 1,000 chance, one in 1,000 happens. We see it with hurricanes and storms all the time."

Other members are less worried. "If we get the governance right on the little things, then it will also protect against that big risk," says Representative Sara Jacobs, a Democrat from California. "And I think that there's so much focus on that big risk, that we're actually missing the harms and risks that are already being done by this technology."

The Task Force has yet to meet, and while none of its members were able to say when it might publish its report, they need to move quickly to have any hope of their work leading to federal legislation before the presidential election takes over Washington.

State lawmakers are not waiting for Congress to act. Earlier this month, Senator Scott Wiener, a Democrat who represents San Francisco and parts of San Mateo County in the California State Senate, introduced a bill that would seek to make powerful AI systems safe by, among other things, mandating safety tests. "I would love to have one unified federal law that effectively addresses AI safety issues," Wiener said in a recent interview with NPR. "Congress has not passed such a law. Congress has not even come close to passing such a law."

But many of the Task Force's members argued that, while partisan gridlock has made it difficult for the House to pass anything in recent months, AI might be the one area where Congress can find common ground.

I've spoken with a number of my colleagues on both sides of the aisle on this, says Franklin, the Florida Republican. We're all kind of coming in at the same place, and we understand the seriousness of the issue. We may have disagreement on exactly how to address [the issues]. And that's why we need to get together and have those conversations.

The fact that it's bipartisan and bicameral makes me very optimistic that we'll be able to get meaningful things done in this calendar year, says Beyer, the Virginia Democrat. And put it on Joe Biden's desk.


Nvidia Earnings Show Soaring Profit and Revenue Amid AI Boom – The New York Times

Nvidia, the kingpin of chips powering artificial intelligence, on Wednesday released quarterly financial results that reinforced how the company has become one of the biggest winners of the artificial intelligence boom, and it said demand for its products would fuel continued sales growth.

The Silicon Valley chip maker has been on an extraordinary rise over the past 18 months, driven by demand for its specialized and costly semiconductors, which are used for training popular A.I. services like OpenAI's ChatGPT chatbot. Nvidia has become known as one of the "Magnificent Seven" tech stocks, which, along with others like Amazon, Apple and Microsoft, have helped power the stock market.

Nvidia's valuation has surged more than 40 percent to $1.7 trillion since the start of the year, turning it into one of the world's most valuable public companies. Last week, the company briefly eclipsed the market values of Amazon and Alphabet before receding to the fifth-most-valuable tech company. Its stock market gains are largely a result of repeatedly exceeding analysts' expectations for growth, a feat that is becoming more difficult as they keep raising their predictions.

On Wednesday, Nvidia reported that revenue in its fiscal fourth quarter more than tripled from a year earlier to $22.1 billion, while profit soared nearly ninefold to $12.3 billion. Revenue was well above the $20 billion the company predicted in November and above Wall Street estimates of $20.4 billion.

Nvidia predicted that revenue in the current quarter would total about $24 billion, also more than triple that of the year-earlier period and higher than analysts' average forecast of $22 billion.

Jensen Huang, Nvidia's co-founder and chief executive, argues that an epochal shift to upgrade data centers with chips needed for training powerful A.I. models is still in its early phases. That will require spending roughly $2 trillion to equip all the buildings and computers to use chips like Nvidia's, he predicts.



Which AI phone features are useful and how well they actually work – The Washington Post

Every year like clockwork, some of the biggest companies in the world release new phones they hope you will shell out hundreds of dollars for.

And more and more, they are leaning on a new angle to get you thinking of upgrading: artificial intelligence.

Smartphones from Google and Samsung come with features to help you skim through long swaths of text, tweak the way you sound in messages, and make your photos more eye-catching. Meanwhile, Apple is reportedly racing to build AI tools and features it hopes to include in an upcoming version of its iOS software, which will launch alongside the company's new iPhones later this year.

But here's the real question: Of the AI tools built into phones right now, how many of them are actually useful?

That's tough to say: it all depends on what you use your phone for, and what you personally perceive as helpful. To help, here's a brief guide to the AI features you'll most commonly find in phones right now, so you can decide which might be worth living with for yourself.

For years, smartphone makers have worked to make the photos that come out of the tiny camera sensors they use look better than they should. Now, they're also giving us the tools to more easily revise those images.

Here are the most basic: Google and Samsung phones now let you resize, move or erase people and objects inside photos you've taken. Once you do that, the phones lean on generative AI to fill in the visual gaps left behind, and that's it.

Think of it as a little Photoshopping, except the hard work is basically done for you. And for better or worse, there are limits to what it can do.

You can't use those built-in tools to generate people, objects or more fantastical additions that weren't part of the original image, the way you can with other AI image creation tools. The results don't usually survive serious scrutiny, either: it's not hard to see places where little details don't line up, or areas that look smudgy because the AI couldn't convincingly fill a gap where an offending object used to be.

What's potentially more unsettling are tools such as Google's Best Take for its Pixel phones, which give you the chance to select specific expressions for people's faces in an image if you've taken a bunch of photos in a row.

Some people don't mind it, while others find it a little divorced from reality. No matter where you land, though, expect your photos to get a lot of AI attention the next time you buy a phone.

Your messages to your boss probably shouldn't sound like messages to your friends, and vice versa. Samsung's Chat Assist and Google's Magic Compose tools use generative AI to try to adjust the language in your messages to make them more palatable.

The catch? Google's Magic Compose only works in its texting-focused Messages app, which means you can't easily use it for emails or, say, WhatsApp messages. (A similar tool for Gmail and the Chrome web browser, called Help Me Write, is not yet widely available.) People who buy Galaxy S24 phones, meanwhile, can use Samsung's version of this feature wherever they write text to switch between professional, casual, polite, and even emoji-filled variations of their original message.

What can I say? It works, though I can't imagine using it with any regularity. And in some ways, Samsung's Chat Assist tool backs down when it's arguably needed most. In a few test emails where I used some very mild swears to allude to (fictional) workplace stress, Samsung's Chat Assist refused to help on the grounds that the messages contained inappropriate language.

The built-in voice recorder apps on Google's Pixels and Samsung's latest phones don't just record audio; they'll turn those recordings into full-blown transcripts.

In theory, this should free you up from having to take so many notes while you're in a meeting or a lecture. And for the most part, these features work well enough: after a few seconds, they'll dutifully produce readable, if sometimes clumsy, readouts of what you've just heard.

If all you need is a sort of rough draft to accompany your recordings, these automated transcription tools can be really helpful. They can differentiate between multiple speakers, which is handy when you need to skim through a conversation later. And Google's version will even give you a live transcription, which can be nice if you're the sort of person who keeps subtitles on all the time.

But whether you're using a Google phone or one of Samsung's, the resulting transcripts often need a bit of cleanup, which means you'll need to do a little extra work before you copy and paste the results into something important.

Who among us hasn't clicked into a Wikipedia page, or an article, or a recipe online that takes way too long to get to the point? As long as you're using the Chrome browser, Google's Pixel phones can scan those long webpages and boil them down into a set of high-level blurbs to give you the gist.

Sadly, Google's summaries are often too cursory to feel satisfying.

Samsung's phones can summarize your notes and transcriptions of your recordings, but they will only summarize things you find on the web if you use Samsung's homemade web browser. Honestly, that might be worth it: the quality of its summaries is much better than Google's. (You even have the option of switching to a more detailed version of the AI summary, which Google doesn't offer at all.)

Both versions of these summary tools come with a notable caveat, too: they won't summarize articles from websites that have paywalls, which includes just about every major U.S. newspaper.

Samsung's AI tools are free for now, but a tiny footnote on its website suggests the company may eventually charge customers to use them. It's not a done deal yet, but Samsung isn't ruling it out, either.

"We are committed to making Galaxy AI features available to as many of our users as possible," a spokesperson said in a statement. "We will not be considering any changes to that direction before the end of 2025."

Google, meanwhile, already makes some of its AI-powered features exclusive to certain devices. (For example, a Video Boost tool for improving the look of your footage is only available on the company's higher-end Pixel 8 Pro phones.)

In the past, Google has made experimental versions of some AI tools, like the Magic Compose feature, available only to people who pay for the company's Google One subscription service. And more recently, Google has started charging people for access to its latest AI chatbot. For now, though, the company hasn't said anything either way about putting future AI phone features behind a paywall.

Google did not immediately respond to a request for comment.


Google to fix AI picture bot after ‘woke’ criticism – BBC.com

Google and parent company Alphabet Inc's headquarters in Mountain View, California

Google is racing to fix its new AI-powered tool for creating pictures, after claims it was over-correcting against the risk of being racist.

Users said the firm's Gemini bot supplied images depicting a variety of genders and ethnicities even when doing so was historically inaccurate.

For example, a prompt seeking images of America's founding fathers turned up women and people of colour.

The company said its tool was "missing the mark".

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here," said Jack Krawczyk, senior director for Gemini Experiences.

"We're working to improve these kinds of depictions immediately," he added.


It is not the first time AI has stumbled over real-world questions about diversity.

For example, Google infamously had to apologise almost a decade ago after its photos app labelled a photo of a black couple as "gorillas".

Rival AI firm, OpenAI was also accused of perpetuating harmful stereotypes, after users found its Dall-E image generator responded to queries for chief executive, for example, with results dominated by pictures of white men.

Google, which is under pressure to prove it is not falling behind in AI developments, released its latest version of Gemini last week.

The bot creates pictures in response to written queries.

It quickly drew critics, who accused the company of training the bot to be laughably "woke".


"It's embarrassingly hard to get Google Gemini to acknowledge that white people exist," computer scientist Debarghya Das wrote.

"Come on," wrote Frank J Fleming, an author and humourist who writes for outlets including the right-wing PJ Media, in response to the results he received when asking for an image of a Viking.

The claims picked up speed in right-wing circles in the US, where many big tech platforms are already facing backlash for alleged liberal bias.

Mr Krawczyk said the company took representation and bias seriously and wanted its results to reflect its global user base.

"Historical contexts have more nuance to them and we will further tune to accommodate that," he wrote on X, formerly Twitter, where users were sharing the dubious results they had received.

"This is part of the alignment process - iteration on feedback. Thank you and keep it coming!"
