


Category Archives: Artificial Intelligence

Big brother: Democrat-led NTSB pushes for artificial intelligence … – Must Read Alaska

A National Transportation Safety Board investigation into a 2022 crash in North Las Vegas, Nevada, that resulted in nine fatalities has given the board an excuse to recommend a new requirement for intelligent speed assistance technology in all new cars.

The board issued the recommendations earlier this month at a public meeting after determining the crash was caused by high speed, drug-impaired driving, and Nevada's failure to deter one driver's speeding recidivism due to systemic deficiencies, despite numerous speeding citations.

Intelligent speed assistance technology, or ISA, compares a car's GPS location against a database of posted speed limits, along with its onboard cameras, to either issue a warning to drivers or to throttle back speed.

Passive ISA systems warn a driver when the vehicle exceeds the speed limit through visual, sound, or haptic alerts, and the driver is responsible for slowing the car.

Active systems include mechanisms that make it more difficult, but not impossible, to increase the speed of a vehicle above the posted speed limit and those that electronically limit the speed of the vehicle to fully prevent drivers from exceeding the speed limit.
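The passive/active distinction described above amounts to a simple decision rule. Here is a minimal sketch in Python; the function and parameter names are invented for illustration and do not come from any real ISA product:

```python
def isa_response(current_speed_mph, posted_limit_mph, mode="passive"):
    """Illustrative ISA decision logic: compare vehicle speed (from GPS)
    against the posted limit (from a map database) and pick a response."""
    if current_speed_mph <= posted_limit_mph:
        return "no action"
    if mode == "passive":
        # Passive ISA: alert the driver; slowing down remains their job.
        return "warn driver (visual/sound/haptic alert)"
    # Active ISA: resist or electronically cap further acceleration.
    return "limit throttle to posted speed"
```

A passive system would return only the warning branch; an active system would also actuate the throttle limit.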

"This crash is the latest in a long line of tragedies we've investigated where speeding and impairment led to catastrophe, but it doesn't have to be this way," said NTSB Chair Jennifer Homendy. "We know the key to saving lives is redundancy, which can protect all of us from human error that occurs on our roads. What we lack is the collective will to act on NTSB safety recommendations."

Homendy is a Democrat who served more than 14 years as Democratic staff director for the Subcommittee on Railroads, Pipelines, and Hazardous Materials of the Committee on Transportation and Infrastructure in the U.S. House of Representatives. She worked for the International Brotherhood of Teamsters, AFL-CIO, and American Iron and Steel Institute.

Eliminating speeding through the use of federally mandated speed limiters built into cars is a priority for the NTSB, which seeks to "develop performance standards for advanced speed-limiting technology, such as variable speed limiters and intelligent speed adaptation devices, for heavy vehicles, including trucks, buses, and motorcoaches," and then to "require that all newly manufactured heavy vehicles be equipped with such devices."

In 2021, speeding-related crashes resulted in 12,330 fatalities, about one-third of all traffic fatalities in the United States, the NTSB said.

However, according to the latest figures from AAA, 245 million drivers made a total of 229 billion driving trips, spent 91 billion hours driving, and drove 2.92 trillion miles in 2021. Dividing 91 billion hours of driving by 12,330 deaths works out to about one fatality for every 7.4 million hours of driving.
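The per-hour figure can be checked directly from the AAA and NTSB numbers the article cites:

```python
# Back-of-the-envelope check using the figures quoted above.
hours_driven = 91e9        # total hours Americans drove in 2021 (AAA)
speeding_deaths = 12_330   # speeding-related fatalities in 2021 (NTSB)

hours_per_fatality = hours_driven / speeding_deaths
print(round(hours_per_fatality / 1e6, 1))  # 7.4 (million hours per fatality)
```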

According to the National Safety Council, the rate of fatal car accidents has greatly improved over the decades. This is due, in part, to the numerous safety features now required in cars, such as seatbelts, air bags, and backup cameras, which were mandated in 2018.

The rest is here:

Big brother: Democrat-led NTSB pushes for artificial intelligence ... - Must Read Alaska

Posted in Artificial Intelligence | Comments Off on Big brother: Democrat-led NTSB pushes for artificial intelligence … – Must Read Alaska

Artificial Intelligence May Predict Ovarian Cancer Therapy Outcomes – Curetoday.com

An artificial intelligence model showed promise in predicting ovarian cancer outcomes.

An artificial intelligence model called IRON (Integrated Radiogenomics for Ovarian Neoadjuvant therapy) shows an 80% accuracy rate in the prediction of therapy outcomes for patients with ovarian cancer. The artificial intelligence model focuses specifically on the volumetric reduction of tumor lesions within this patient population, according to a press release.

The tool surpassed the efficiency of present clinical methods. IRON's approach draws on a patient's liquid biopsy (cancer-specific information that can be observed via blood tests), overall health characteristics such as age, health status and tumor markers, and disease images captured during CT scans.

A recent research study, featured in Nature Communications, focused on 134 patients who had been diagnosed with high-grade ovarian cancer. Dr. Evis Sala, chair of Diagnostic Imaging and Radiotherapy at the Faculty of Medicine and Surgery of the Catholic University and director of the Advanced Radiology Center at the Policlinico Universitario A. Gemelli IRCCS, led the study; the AI model was created by Professor Sala's team at the University of Cambridge.

"We compiled two independent datasets with a total of 134 patients (92 cases in the first dataset, 42 in the second independent test set)," Sala and Dr. Mireia Crispin Ortuzar from Cambridge said in the press release.

Existing methods of predicting therapy response in high-grade ovarian carcinoma achieve only about 50% accuracy. With the few significant biomarkers available for this type of cancer, the IRON model was able to predict chemotherapy responders more accurately.

Notably, this is not the first study showing that artificial intelligence has the potential to predict cancer outcomes. Earlier this year, research found that artificial intelligence may help determine which patients with prostate cancer were more likely to benefit from hormone therapy plus radiation. Additionally, another group of researchers showed that artificial intelligence may play a role in diagnosing sarcopenia in patients with head and neck cancer.

Within the study, demographic information, treatment details, blood biomarkers (CA-125, which could indicate the growth of ovarian cancer) and circulating tumor DNA (ctDNA, bits of cancer DNA that can be observed in a blood test) were collected from patients. CT scans were also gathered, and characteristics of the tumor were extracted from them.
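The multimodal setup the study describes, with clinical data, blood biomarkers and CT-derived tumor features feeding a single predictor, can be sketched as a feature-fusion step. The field names and values below are illustrative only and are not taken from the IRON paper:

```python
# Hypothetical sketch of multimodal feature fusion: clinical data, blood
# biomarkers (CA-125, ctDNA) and CT-derived tumor features are combined
# into one vector that a response classifier could consume.
def build_feature_vector(patient):
    return [
        patient["age"],           # clinical/demographic feature
        patient["ca125"],         # blood tumor marker
        patient["ctdna_frac"],    # circulating tumor DNA fraction
        patient["tumor_volume"],  # from CT segmentation
    ]

example = {"age": 61, "ca125": 350.0, "ctdna_frac": 0.04, "tumor_volume": 120.5}
print(build_feature_vector(example))  # [61, 350.0, 0.04, 120.5]
```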

Initial disease spread was investigated in the omental and pelvic/ovarian regions. Omental deposits responded better to neoadjuvant (presurgical) therapy than pelvic disease did. Before and after therapy, tumor mutations and CA-125 correlated with overall disease burden, according to the press release.

Analysis of CT scans showed that six patient subgroups, classified by biological and clinical characteristics, were indicators of therapy response. The model's efficiency was confirmed on the independent patient sample in the study.

"From a clinical perspective, the proposed framework addresses the unmet need to early identify patients unlikely to respond to neoadjuvant therapy and may be directed to immediate surgical intervention," Sala emphasized. "The tool could be applied to stratify the risk of each individual patient in future clinical research...

For more news on cancer updates, research and education, don't forget to subscribe to CURE's newsletters here.

Link:

Artificial Intelligence May Predict Ovarian Cancer Therapy Outcomes - Curetoday.com

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence May Predict Ovarian Cancer Therapy Outcomes – Curetoday.com

Artificial Intelligence Is Here to Stay, so We Should Think more about … – GW Today

On Friday morning, George Washington University Provost Christopher Alan Bracey disseminated a document on the use of generative artificial intelligence to guide faculty members on how they might (or might not) allow the use of AI by their students. At the same moment, a daylong symposium titled I Am Not a Robot: The Entangled Futures of AI and the Humanities kicked off with remarks by its principal organizer, Katrin Schultheiss, associate professor of history in the Columbian College of Arts and Sciences.

In late 2022, said Schultheiss, the launch of ChatGPT presented educators with a significant moment of technological change.

"Here was a tool, available, at least temporarily, for free," Schultheiss said, "that would answer almost any question in grammatically correct, informative, plausible-sounding paragraphs of text."

In response, people expressed the fear that jobs would be eliminated, the ability to write would atrophy and misinformation would flourish, with some invoking dystopias where humans became so dependent on machines that they could no longer think or do anything for themselves.

But that wasn't even the worst of the fears expressed. "At the very far end," Schultheiss said, "they conjured up a future when AI-equipped robots would break free of their human trainers and take over the world."

On the other hand, she noted, proponents of the new technology argued that ChatGPT will lead to more creative teaching and increase productivity.

"The pace at which new AI tools are being developed is astonishing," Schultheiss said. "It's nearly impossible to keep up with the new capabilities and the new concerns that they raise."

For that reason, she added, some observers (including members of Congress) are advocating for a slowdown or even a pause in the deployment of these tools until various ethical and regulatory issues can be addressed.

With this in mind, she said, a group of GW faculty from various humanities departments saw a need to expand the discourse beyond the discussion of new tools and applications, beyond questions of regulation and potential abuses of AI, adding that the symposium is one of the fruits of those discussions.

"Maybe we should spend some more time thinking about exactly what we are doing as we stride forward boldly into the AI-infused future," Schultheiss said.

Four panel discussions followed, the first one featuring philosophers. Tadeusz Zawidzki, associate professor and chair of philosophy, located ChatGPT in the larger philosophical tradition, beginning with the Turing test.

That test was proposed by English scientist Alan Turing, who asked: Could a normal human subject tell the difference between another human and a computer by reading the text of their conversation? If not, Turing said, that machine counts as intelligent.

Some philosophers, such as John Searle, objected, saying a digitally simulated mind does not really think or understand. But Zawidzki said ChatGPT passes the test.

"There's no doubt in my mind that ChatGPT passes the Turing test," he said. "So, by Turing's criteria, it is a mind." But it is not like a human mind, which can interact with the world around it in ways currently unavailable to ChatGPT.

Marianna B. Ganapini, assistant professor at Union College and a visiting scholar at the Center for Bioethics at New York University, began by asking if we can learn from ChatGPT and if we can trust it.

"As a spoiler alert," Ganapini said, "I'm going to answer no to the second question (it's the easy question) and maybe to the first."

Ganapini said the question of whether ChatGPT can be trusted is unfair, in a sense, because no one trusts people to know absolutely everything.

A panel on the moral status of AI featured Robert M. Geraci, professor of religious studies at Manhattan College, and Eyal Aviv, assistant professor of religion at GW.

In thinking about the future of AI and of humanity, Geraci said, we must evaluate whether the new technology has been brought into alignment with human values and the degree to which it reflects our biases.

"A fair number of scholars and advocates fear that our progress in value alignment is too slow," Geraci said. "They worry that we will build powerful machines that lack our values and are a danger to humanity as a result. I worry that in fact our value alignment is near perfect."

Unfortunately, he said, our daily values are not in fact aligned with our aspirations for a better world. One way to counteract this is through storytelling, he added, creating models for reflection on ourselves and the future.

A story told by the late Stephen Hawking set the stage for remarks by Aviv, an expert on Buddhism, who recalled an interview with Hawking from Last Week Tonight with John Oliver posted to YouTube in 2014.

"There's a story that scientists built an intelligent computer," Hawking said. "The first question they asked it was, 'Is there a God?' The computer replied, 'There is now,' and a bolt of lightning struck the plug so it couldn't be turned off."

Aviv presented the equally grim vision of Jaron Lanier, considered by many to be the father of virtual reality, who said the danger isn't that AI will destroy us, but that it will drive us insane.

"For most of us," Aviv said, "it's pretty clear that AI will produce unforeseen consequences."

One of the most important concepts in Buddhist ethics, Aviv said, is ahimsa, or doing no harm. From its inception, he added, AI has been funded primarily by the military, placing it on complex moral terrain from the start.

Many experts call for regulation to keep AI safer, Aviv said, but will we heed such calls? He pointed to signs posted in casinos that urge guests to play responsibly. But such venues are designed precisely to keep guests from doing so.

The third panel featured Neda Atanasoski of the University of Maryland, College Park, and Despina Kakoudaki of American University.

Atanasoski spoke about basic technologies found in the home, assisting us with cleaning, shopping, eldercare and childcare. Such technologies become creepy, she said, when they reduce users to data points and invade their privacy.

Tech companies have increasingly begun to market privacy as a commodity that can be bought, she said.


Pop culture has had an impact on how we understand new technology, Kakoudaki said, noting that very young children can draw a robot, typically in an anthropomorphic form.

After suggesting the historical roots of the idea of the mechanical body, in the creation of Pandora and, later, Frankenstein, for example, Kakoudaki showed how such narratives reverse the elements of natural birth, with mechanical beings born as adults and undergoing a trajectory from death to birth.

The fourth panel, delving further into the history of AI and meditating on its future, featured Jamie Cohen-Cole, associate professor of American Studies, and Ryan Watkins, professor and director of the Educational Technology Leadership Program in the Graduate School of Education and Human Development.

Will we come to rely on statements from ChatGPT? Maybe, Cohen-Cole said, though he noted that human biases will likely continue to be built into the technology.

Watkins said he thinks we will learn to live with the duality presented by AI, enjoying its convenience while remaining aware of its fundamental untrustworthiness. It is difficult for most people to adjust in real time to rapid technological change, he said, encouraging listeners to play with the technology and see how they might use it, adding that he has used it to help one of his children do biology homework. Chatbot technology is being integrated into MS Word, email platforms and smartphones, to name a few places the average person will soon encounter it.

"How do you ban it if it's everywhere?" he asked.

The symposium, part of the CCAS Engaged Liberal Arts Series, was sponsored by the CCAS Departments of American Studies, English, History, Philosophy, Religion and Department of Romance, German and Slavic Languages and Literatures. Each session concluded with questions for panelists from the audience. The sessions were moderated, respectively, by Eric Saidel, from the philosophy department; Irene Oh, from the religion department; Alexa Alice Joubin, from the English Department; and Eric Arnesen, from the history department.

Follow this link:

Artificial Intelligence Is Here to Stay, so We Should Think more about ... - GW Today

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence Is Here to Stay, so We Should Think more about … – GW Today

Artificial intelligence helps people be productive in these ways – CBS News

Artificial intelligence is becoming increasingly common in the workplace, but it's also starting to assist with tasks at home.

Insider tech reporter Lakshmi Varanasi told CBS News she uses OpenAI's GPT-4 technology to help her plan and prep meals, while parents are using it to generate bedtime stories to read to their children. Really committed parents can even use it to create their own books with corresponding images, also using AI tools, like the image generator DALL-E.

In this way, AI can be tremendously helpful in sating kids' appetites for constant entertainment, Varanasi added.


Something to beware of when reading AI-generated text to children: AI tools like ChatGPT are known to occasionally make errors or inappropriate statements. "There needs to be fact-checking involved whenever you use an AI tool," Varanasi said.

To be sure, AI doesn't have the same level of judgment and insight that humans do, and may not be able to respond helpfully to personal questions.

"It's really good for the broad strokes of navigating life," Varanasi said.

Other useful applications of sophisticated AI include asking it for help generating emails, or general inspiration for creating any type of content. Travel company Expedia is even betting that it will be helpful for people planning trips.

Computer programmers and coders have found AI useful as well. One worker used GPT-4 as a coding assistant while building a video game.

"He'd type in the command he wanted, and the tool gave him code," Varanasi said. When a digital spaceship that was part of the game wouldn't move, AI stepped in and "helped get it moving."

A coder might have ordinarily spent hours on trial and error, but AI sped up the process.


More:

Artificial intelligence helps people be productive in these ways - CBS News

Posted in Artificial Intelligence | Comments Off on Artificial intelligence helps people be productive in these ways – CBS News

HDR uses artificial intelligence tools to help design a vital health … – Building Design + Construction

Paul Howard Harrison has had a longstanding fascination with machine learning and performance optimization. Over the past five years, artificial intelligence (AI) has been augmenting some of the design work done by HDR, where Harrison is a computational design lead. He also lectures on AI and machine learning at the University of Toronto in Canada, where he earned his Master of Architecture.

Harrison's interest in computational research and data-driven design contributed to the development of an 8,500-sf healthcare clinic and courtyard in Baruipur, West Bengal, India. This was Harrison's first project with Design 4 Others (D4O), a philanthropic initiative that operates out of HDR's architecture practice, through which architects volunteer their services to make positive impacts on underserved communities.

India has fewer than one doctor per 1,000 people. (By comparison, the ratio in the U.S. is more than 2.5 per 1,000.) The client for the Baruipur clinic is iKure, a technology and social enterprise that delivers healthcare through a hub-and-spoke model, where clinics (hubs) extend their reach to where patients live through local healthcare workers (the spokes), who are trained to monitor, track, and collect data from patients. The hubs and spokes are connected by a proprietary platform called the Wireless Health Incident Monitoring System. According to iKure's website, there are 20 hubs of varying sizes serving nine million people in 10 Indian states. iKure's goal is to eventually operate 125 hubs and expand its concept to 10 Asian and African countries.

D4O and iKure became aware of each other in 2019 through Construction for Change, a Seattle-based nonprofit construction management firm. Prior to the Baruipur hub project, D4O and Construction for Change had worked on more than a dozen projects together, starting with a healthcare clinic in northwest Uganda, according to the August 11, 2020, episode of HDR's podcast Speaking of Design.

Harrison, whom BD+C interviewed with Megan Gallagher, a health planner at HDR and a D4O volunteer on the Baruipur project, acknowledges that all design outputs come with inherent biases. But by training AI on smaller models, the datasets and biases can be controlled, he posits.

Initially, HDR found AI useful for design optimization; more recently, the firm has been using AI for early-stage ideation. Harrison points specifically to the design for a hospital in Kingston, Ontario, where HDR used AI as an ideation tool. "AI is better at coming up with what I like than I am," he laughs.

The firm has also used AI as a means of engagement to get different client constituencies on the same page about a project's mission.

During the interview, Harrison several times referred to DALL-E, an OpenAI system used to create realistic images. DALL-E favors a diffusion model, an approach that produces generative outputs similar to the data on which the AI has been trained.

Where most project designs start with a facility's programming, the iKure clinic was different in that it needed to support the hub-and-spoke delivery method. The client also wanted a design that could add a second floor, as needed.

To help design the iKure hub, Harrison wrote a machine-learning program that focused on the building's gross floor area, the amount of shade the building would provide (as some patients need relief after traveling long distances to receive care), and the size of the building's modules. (Gallagher notes that each room is 125 sf.)

By optimizing for shade, the algorithm consistently came up with a courtyard design. The end result looked similar to a courtyard house in Kolkata, observes Gallagher. The computer program also came up with the best positioning for circulation aisles within a building that would not be air conditioned.
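The optimization described above, scoring candidate footprints on shade and keeping the best, can be caricatured in a few lines. The candidate layouts and shade fractions below are invented placeholders, not project data or HDR's actual algorithm:

```python
# Toy sketch of shade-driven layout selection: each candidate footprint is
# assigned a (made-up) fraction of outdoor area it shades during the day,
# and the optimizer simply keeps the highest-scoring layout.
candidates = {
    "linear bar": 0.20,  # little self-shading
    "L-shape":    0.35,  # one shaded corner
    "courtyard":  0.55,  # inner court shaded for much of the day
}

best_layout = max(candidates, key=candidates.get)
print(best_layout)  # courtyard
```

With shade as the objective, a courtyard wins in this sketch for the same reason the article reports: an enclosed court maximizes self-shaded outdoor area.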

Treatment rooms were moved to the back of the building, which has four strategically located shading areas. Air is circulated up and out of the building through chimneys whose design takes its cue from local brick kilns.

The last piece of the hub's design will be its screening for security and ventilation. Harrison says that HDR has been training AI on a dataset of different screen designs that could be made from brick. (This area of India is known for its brickmaking, he explains.)

Gallagher says she's curious to see how AI will progress as a design tool. Harrison concedes that while AI is quicker for ideation, it will take some time to perfect the tool for larger projects.

As for the iKure hub, Harrison observed in HDR's 2020 podcast that "you don't need to have a high-architecture project to have a high-tech approach."

When it is completed, the Baruipur clinic will offer eye and dental care, X-rays, maternal and pediatric care, and telemedicine. The hub will serve about a half-dozen spokes as well as multiple villages that include remote islands in the Sundarbans Delta, where diagnostics will be accessible through portable handheld devices, says Jason-Emery Gron, Vice President and Design Director for HDR's Kingston office.

Gron says that HDR focuses on projects that are most likely to have a significant impact on their communities, and have the best chance of getting built. And D4O has been in discussions with iKure about helping with its expansion plans.

But he's also realistic about the unpredictability of project delays in underdeveloped markets. The iKure hub was scheduled for completion in 2021, but might not be ready until 2024. Gron explains that construction has taken longer than anticipated for several reasons: the client wanted D4O to review land options before settling on the original site, the pandemic affected labor and materials availability, and monsoon seasons ran longer than expected.

View post:

HDR uses artificial intelligence tools to help design a vital health ... - Building Design + Construction

Posted in Artificial Intelligence | Comments Off on HDR uses artificial intelligence tools to help design a vital health … – Building Design + Construction

For the First-Time Ever, Miller Lite Teaches Artificial Intelligence What Beer Tastes Like – Food Industry Executive

Miller Lite kicks off new global campaign by showing Sophia the robot the feeling behind real-life beer moments

CHICAGO, April 19, 2023 - Artificial intelligence has had a busy year answering our questions, generating headshots, and even making aging actors look younger. But despite all of these advances in technology, there's one thing AI still can't do: enjoy the great taste of beer. That's all about to change thanks to Miller Lite, seriously. For the first time ever, the brand is teaching AI the taste, feeling and human emotion behind enjoying a beer, starting with Sophia, an advanced humanoid robot from Hanson Robotics.

Miller Lite and AI, really? Yes, and for good reason. Miller Lite is all about great beer taste and celebrating Miller Time, so in its new global campaign, "Tastes Like Miller Time," the brand is demonstrating that the taste of beer is so much more than what we literally taste. And Miller Lite is making sure everyone, including AI, knows what the experience of cracking open a great beer like Miller Lite truly feels like.

"The taste of beer is so much more than barley, malt, and hops; it's the real moments at neighborhood bars, tailgates and backyards spent over a Miller Lite," says Sofia Colucci, Chief Marketing Officer at Molson Coors Beverage Company (not to be confused with Sophia the robot). "Our new campaign pays tribute to those unforgettable experiences that just taste better with a Miller Lite in hand. We're bringing this notion to life in fresh and unexpected ways, from our new TV spots to even teaching AI what beer actually tastes like."

Miller Lite worked with Hanson Robotics to analyze social media and identify humanity's most cherished beer-drinking moments, translating them to something Sophia could finally experience. Watch here to learn more: https://youtu.be/5OkB6s9hsPc.

When Miller Lite approached us about teaching Sophia what beer tasted like, we were intrigued because it was something AI has never experienced before, says Kath Yeung, Chief Operations Coordinator of Hanson Robotics.

"Our teams scrolled social media and assessed our findings to gather the feelings and emotions humans get when tasting beer and translated that data into something Sophia could experience for the first time," says CEO David Hanson, PhD. "We were excited to see Sophia was making new friends, learning and analyzing the human experience."

So, what did Sophia think of her first beer? To see her reaction and have the chance to ask Sophia questions in real time, tune in to the Miller Lite Instagram Live on Friday, April 21, at 5 p.m. CDT.

To further AI's education on the true joy and experience of beer, Miller Lite is asking everyone to share the moments that taste better with beer. Follow @MillerLite on Instagram, share a photo of your "Tastes Like Miller Time" moment in an Instagram story or post, and tag @MillerLite. Then use the hashtags #BeerforAI and #Sweepstakes for a chance to win free beer.* These moments will be added to the data set so Miller Lite can continue to teach AI what the human experience of beer is.

The new "Tastes Like Miller Time" campaign will appear across all touchpoints in the United States, Canada, and Latin America. It includes retail, out of home, advertising, social media, partnerships, localization, and brand-new video spots, which you can view here.

Miller Lite's new campaign aims to fuel continued growth and a positive trajectory for the brand. Year to date in the U.S., Miller Lite is growing its dollar share of total beer, according to April 2023 Circana multi-source and convenience data.

See the rest here:

For the First-Time Ever, Miller Lite Teaches Artificial Intelligence What Beer Tastes Like - Food Industry Executive

Posted in Artificial Intelligence | Comments Off on For the First-Time Ever, Miller Lite Teaches Artificial Intelligence What Beer Tastes Like – Food Industry Executive