Category Archives: Ai

AI is here and everywhere: 3 AI researchers look to the challenges ahead in 2024 – The Conversation Indonesia

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that's already galloping along.

We've assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.

Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don't think we should be revamping education to put AI at the center of everything, but if students don't learn about how AI works, they won't understand its limitations, and therefore how it is useful and appropriate to use and how it's not. This isn't just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are "often sufficient to dazzle even the most experienced observer," but that once their inner workings are "explained in language sufficiently plain to induce understanding, its magic crumbles away." The challenge with generative artificial intelligence is that, in contrast to ELIZA's very basic pattern matching and substitution methodology, it is much more difficult to find language sufficiently plain to make the AI magic crumble away.
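To see just how basic ELIZA's mechanism was, here is a minimal, hypothetical Python sketch of pattern-matching-and-substitution chat in the ELIZA style; the rules and responses are invented for illustration and are not Weizenbaum's original script.

```python
import re

# Illustrative ELIZA-style rules: a regex pattern paired with a response
# template that reuses the user's own words. Real ELIZA used a richer
# script, but the core mechanism was this simple.
RULES = [
    (re.compile(r"i am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r".*\b(mother|father)\b.*", re.IGNORECASE),
     "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's response, echoing captured text."""
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about AI hype"))
# -> Why do you say you are worried about AI hype?
```

Once the trick is laid out this plainly, the magic does crumble away; Fiesler's point is that no equally short sketch exists for a transformer-based chatbot.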

I think it's possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, "In from three to eight years we will have a machine with the general intelligence of an average human being." With the singularity, the moment artificial intelligence matches and begins to exceed human intelligence, not quite here yet, it's safe to say that Minsky was off by at least a factor of 10. It's perilous to make predictions about AI.

Still, making predictions for a year out doesn't seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky's prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles' heel of deep learning, what might be called generalized hard reasoning: things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire, comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn't have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI, like Elon Musk and Sam Altman, can't seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They're like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for in 2024, though it seems slow in coming, is stronger AI regulation, at national and international levels.

Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open-source LLMs could usher in a world of autonomous AI agents, a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there's a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but on the contexts the algorithms operate in: people, processes and society.

UBS boosts AI revenue forecast by 40%, calls industry the ‘tech theme of the decade’ – CNBC

UBS is getting more bullish on the outlook for artificial intelligence and bracing for another prosperous year for the "tech theme of the decade." The firm lifted its revenue forecast for AI by 40%: it now expects AI revenues to grow 15-fold, from $28 billion in 2022 to $420 billion by 2027, as companies invest in infrastructure for models and applications. By comparison, it took the smart devices industry more than 10 years to grow its revenues 15-fold.

"As a result, we believe AI will remain the key theme driving global tech stocks again in 2024 and the rest of the decade," wrote UBS analyst Sundeep Gantori, adding that AI growth could result in consolidation that favors the giants getting bigger and "industry leaders with deep pockets and first-mover advantages."

AI was the trade to invest behind in 2023, boosting chipmaker Nvidia nearly 240% as Wall Street bet on its graphics processing units powering large language models. Other semiconductor stocks, including Advanced Micro Devices, Broadcom and Marvell Technology, also rallied on the theme, with the VanEck Semiconductor ETF (SMH) notching its second-best year on record with a 72% gain.

The world may only be in the early innings of a yearslong AI wave, but UBS views semiconductors and software as the best areas to position in 2024. Both industries should post double-digit profit growth and operating margins exceeding 30%, above the 22% average for the global IT industry. "Semiconductors, while cyclical, are well positioned to benefit from solid near-term demand for AI infrastructure," the firm said. "Meanwhile, software, with broadening AI demand trends from applications and models, is a defensive play, thanks to its strong recurring revenue base."

Within the semiconductor industry, UBS favors logic, memory, capital equipment and foundry names, while companies exposed to office productivity, cloud and models appear best situated in software. CNBC's Michael Bloom contributed reporting.
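As a quick sanity check on those figures, a 15x increase over the five years from 2022 to 2027 implies a compound annual growth rate of roughly 72%. A short sketch of the arithmetic (the dollar figures come from the article; the CAGR formula is standard):

```python
# Implied compound annual growth rate (CAGR) behind UBS's forecast:
# AI revenues of $28B in 2022 growing to $420B by 2027 (5 years).
start, end, years = 28e9, 420e9, 5

multiple = end / start                  # 15.0x, matching the article
cagr = multiple ** (1 / years) - 1      # (end/start)^(1/n) - 1

print(f"Growth multiple: {multiple:.0f}x")
print(f"Implied CAGR: {cagr:.1%}")      # ~71.9% per year
```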

Opinion | A.I. Use by Law Enforcement Must Be Strictly Regulated – The New York Times

One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter: the federal Office of Management and Budget. The office, which oversees the execution of the president's policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office's work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people's rights.

As scholars of algorithmic tools, policing and constitutional law, we have witnessed the predictable and preventable harms from law enforcement's use of emerging technologies. These include false arrests and police seizures, including a family held at gunpoint, after people were wrongly accused of crimes because of the irresponsible use of A.I.-driven technologies, including facial recognition and automated license plate readers.

Consider the cases of Porcha Woodruff, Michael Oliver and Robert Julian-Borchak Williams. All were arrested between 2019 and 2023 after they were misidentified by facial recognition technology. These arrests had indelible consequences: Ms. Woodruff was eight months pregnant when she was falsely accused of carjacking and robbery; Mr. Williams was arrested in front of his wife and two young daughters as he pulled into his driveway from work. Mr. Oliver lost his job as a result.

All are Black. This should not be a surprise. A 2018 study co-written by one of us (Dr. Buolamwini) found that three commercial facial-analysis programs from major technology companies showed both skin-type and gender biases. The darker the skin, the more often the errors arose. Questions of fairness and bias persist about the use of these sorts of technologies.

Errors happen because law enforcement deploys emerging technologies without transparency or community agreement that they should be used at all, with little or no consideration of the consequences, insufficient training and inadequate guardrails. Often the data sets that drive the technologies are infected with errors and racial bias. Typically, the officers or agencies face no consequences for false arrests, increasing the likelihood they will continue.

The Office of Management and Budget guidance, which is now being finalized after a period of public comment, would apply to law enforcement technologies such as facial recognition, license-plate readers, predictive policing tools, gunshot detection, social media monitoring and more. It sets out criteria for A.I. technologies that, without safeguards, could put people's safety or well-being at risk or violate their rights. If these proposed minimum practices are not met, technologies that fall short would be prohibited after next Aug. 1.

Here are highlights of the proposal: Agencies must be transparent and provide a public inventory of cases in which A.I. was used. The cost and benefit of these technologies must be assessed, a consideration that has been altogether absent. Even if the technology provides real benefits, the risks to individuals, especially in marginalized communities, must be identified and reduced. If the risks are too high, the technology may not be used. The impact of A.I.-driven technologies must be tested in the real world and be continually monitored. Agencies would have to solicit public comment before using the technologies, including from the affected communities.

The proposed requirements are serious ones. They should have been in place before law enforcement began using these emerging technologies. Given the rapid adoption of these tools, without evidence of equity or efficacy and with insufficient attention to preventing mistakes, we fully anticipate some A.I. technologies will not meet the proposed standards and their use will be banned for noncompliance.

The overall thrust of the federal A.I. initiative is to push for rapid use of untested technologies by law enforcement, an approach that too often fails and causes harm. For that reason, the Office of Management and Budget must play a serious oversight role.

Far and away, the most worrisome elements in the proposal are provisions that create the opportunity for loopholes. For example, the chief A.I. officer of each federal agency could waive proposed protections with nothing more than a justification sent to the Office of Management and Budget. Worse yet, the justification need only claim "an unacceptable impediment to critical agency operations," the sort of claim law enforcement regularly makes to avoid regulation.

This waiver provision has the potential to wipe away all that the proposal promises. No waiver should be permitted without clear proof that it is essential, proof that in our experience law enforcement typically cannot muster. No one person should have the power to issue such a waiver. There must be careful review to ensure that waivers are legitimate. Unless the recommendations are enforced strictly, we will see more surveillance, more people forced into unjustified encounters with law enforcement, and more harm to communities of color. Technologies that are clearly shown to be discriminatory should not be used.

There is also a vague exception for "national security," a phrase frequently used to excuse policing from legal protections for civil rights and against discrimination. "National security" requires a sharper definition to prevent the exemption from being invoked without valid cause or oversight.

Finally, nothing in this proposal applies beyond federal government agencies. The F.B.I., the Transportation Security Administration and other federal agencies are aggressively embracing facial recognition and other biometric technologies that can recognize individuals by their unique physical characteristics. But so are state and local agencies, which do not fall under these guidelines. The federal government regularly offers federal funding as a carrot to win compliance from state and local agencies with federal rules. It should do the same here.

We hope the Office of Management and Budget will set a higher standard at the federal level for law enforcement's use of emerging technologies, a standard that state and local governments should also follow. It would be a shame to make the progress envisioned in this proposal and have it undermined by backdoor exceptions.

Joy Buolamwini is the founder of the Algorithmic Justice League, which seeks to raise awareness about the potential harms of artificial intelligence, and the author of "Unmasking AI: My Mission to Protect What Is Human in a World of Machines." Barry Friedman is a professor at New York University's School of Law and the faculty director of its Policing Project. He is the author of "Unwarranted: Policing Without Permission."

Intel Hires HPE’s Justin Hotard To Lead Data Center And AI Group – CRN

By becoming the leader of Intel's Data Center and AI Group, former Hewlett Packard Enterprise executive Justin Hotard will take over a business that is fighting competition on multiple fronts, including against AMD in the x86 server CPU market and Nvidia in the AI computing space.

Intel said it has hired Hewlett Packard Enterprise rising star Justin Hotard to lead the company's prized Data Center and AI Group.

The Santa Clara, Calif.-based chipmaker said Wednesday that Hotard will become executive vice president and general manager of the business, effective Feb. 1. He will succeed Sandra Rivera, who moved to lead Intel's Programmable Solutions Group as a new stand-alone business under the company's ownership on Monday.

Hotard was most recently executive vice president and general manager of high-performance computing, AI and labs at HPE, where he was responsible for delivering AI capabilities to customers addressing some of the world's most complex problems through data-intensive workloads, according to the semiconductor giant.

By becoming the leader of Intel's Data Center and AI Group, Hotard will take over a business that is fighting competition on multiple fronts: against AMD in the x86 server CPU market; against Nvidia, AMD and smaller firms in the AI computing space; and against the rise of Arm-based server chips from Ampere Computing, Amazon Web Services and Microsoft Azure.

Just last month, the Data Center and AI Group marked the launch of its fifth-generation Xeon processors, which the company said deliver AI acceleration in every core on top of outperforming AMD's latest EPYC chips around the clock. And the business is also fighting to win market share from Nvidia in the AI computing market with not just its Xeon CPUs but also its Gaudi accelerator chips and a differentiated software strategy.

The semiconductor giant is making these moves as part of Intel CEO Pat Gelsinger's grand comeback plan, which seeks to put the company ahead of Asian contract chip manufacturers TSMC and Samsung in advanced chip-making capabilities by 2025 to unlock new momentum.

"Justin is a proven leader with a customer-first mindset and has an impressive track record in driving growth and innovation in the data center and AI," Gelsinger said in a statement.

"Justin is committed to our vision to create world-changing technologies and passionate about the critical role Intel will play in empowering our customers for decades to come," he added.

At Morgan State, seeking AI that is both smart and fair – Baltimore Sun

Your application for college or a mortgage loan. Whether you're correctly diagnosed in the doctor's office, make it onto the short list for a job interview or get a shot at parole.

That bias can enter into these often life-altering decisions is nothing new. But today, with artificial intelligence assisting everyone from college admission directors to parole boards, a group of researchers at Morgan State University says the potential for racial, gender and other discrimination is amplified by magnitudes.

"You automate the bias, you multiply and expand the bias," said Gabriella Waters, a director at a Morgan State center seeking to prevent just that. "If you're doing something wrong, it's going to do it in a big way."

Waters directs research and operations for the Baltimore university's Center for Equitable Artificial Intelligence and Machine Learning Systems, or CEAMLS for short. Pronounced "seamless," it indeed brings together specialists from across disciplines, ranging from engineering to philosophy, with the goal of harnessing the power of artificial intelligence while ensuring it doesn't introduce or spread bias.

AI is a catchall phrase for systems that can process large amounts of data quickly and, mimicking human cognitive functions such as detecting patterns, predict outcomes and recommend decisions.

But therein lie both its benefits and pitfalls: as data points are introduced, so, too, can bias enter in. Facial recognition systems were found more likely to misidentify Black and Asian people, for example, and Amazon dumped a recruiting program that favored male over female applicants.

Bias also cropped up in an algorithm used to assess the relative sickness of patients, and thus the level of treatment they should receive, because it was based on the amount of previous spending on health care, meaning Black people, who are more likely to have lower incomes and less access to care to begin with, were erroneously scored as healthier than they actually were.
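To make that failure mode concrete, here is a small hypothetical sketch of how scoring "sickness" by a spending proxy misranks equally sick patients; the numbers are invented for illustration and are not from the study the article references.

```python
# Hypothetical illustration of proxy bias: two patients who are equally
# sick, but one has had less access to care and so has spent less.
# A model that scores "sickness" by past spending ranks them differently.
patients = [
    {"name": "A", "true_illness": 0.8, "past_spending": 50_000},  # good access
    {"name": "B", "true_illness": 0.8, "past_spending": 20_000},  # poor access
]

def risk_score_by_spending(patient: dict) -> float:
    # Proxy model: treats normalized spending as a 0-1 "sickness" score.
    return patient["past_spending"] / 50_000

for p in patients:
    print(f'{p["name"]}: true illness {p["true_illness"]}, '
          f'proxy score {risk_score_by_spending(p):.1f}')
# A: true illness 0.8, proxy score 1.0
# B: true illness 0.8, proxy score 0.4  <- equally sick, deprioritized
```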

Don't blame the machines, though. They can only do what they do with what they're given.

"It's human beings that are the source of the data sets being correlated," Waters said. "Not all of this is intentional. It's just human nature."

"Data can obscure the actual truths," she said. You might find that ice cream sales are high in areas where a lot of shark attacks occur, Waters said, but that, of course, doesn't mean one causes the other.
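The shark-attack example is the classic confounder story: a third variable, hot weather, drives both quantities. A minimal sketch with invented data shows how a strong correlation appears with no causation anywhere:

```python
import random

random.seed(0)

# Invented data: hot weather independently drives both ice cream sales
# and beach crowds (hence shark encounters). Neither causes the other.
temps = [random.uniform(15, 35) for _ in range(200)]
ice_cream_sales = [3.0 * t + random.gauss(0, 5) for t in temps]
shark_attacks = [0.1 * t + random.gauss(0, 0.5) for t in temps]

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"corr(ice cream, sharks) = {pearson(ice_cream_sales, shark_attacks):.2f}")
# Strongly positive -- yet banning ice cream would not deter a single shark.
```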

The center at Morgan was created in July 2022 to find ways to address problems that already underlie existing AI systems, and create new technologies that avoid introducing bias.

As a historically Black university that has been boosting its research capacity in recent years, Morgan State is poised to put its own stamp on the AI field, said Kofi Nyarko, who is the CEAMLS director and a professor of electrical and computer engineering.

"Morgan has a unique position here," Nyarko said. "Yes, we have the experts in machine learning that we can pull from the sciences."

"But also we have a mandate. We have a mission that seeks to not only advance the science, but make sure that we advance our community such that they are involved in that process and that advancement."

Morgan State's AI research has been fueled by an influx of public and private funding, by its calculations nearly $18.5 million over the past three years. Many of the grants come from federal agencies, including the Office of Naval Research, which gave the university $9 million, the National Science Foundation and the National Institutes of Health.

Throughout the state, efforts are underway to catch up with the burgeoning field of AI, tapping into its potential while working to guard against any unintended consequences.

The General Assembly and Democratic Gov. Wes Moore's administration have both been delving into AI, seeking to understand how it can be used to improve state government services and ensure that its applications meet values such as equity, security and privacy.

That was part of the agenda of a Nov. 29 meeting of the General Assembly's Joint Committee on Cybersecurity, Information Technology, and Biotechnology, where some of Moore's newly appointed technology officials briefed state senators and delegates on the use of the rapidly advancing technology in state government.

"It's all moving very fast," said Nishant Shah, who in August was named Moore's senior advisor for responsible AI. "We don't know what we don't know."

Shah said he'll be working to develop a set of AI principles and values that will serve as a North Star for procuring AI systems and monitoring them for any possible harm. State tech staff are also doing an inventory of AI already in use, very little, according to a survey that drew limited response this summer, and hoping to increase the knowledge and skills of personnel across the government, he said.

At Morgan, Nyarko said he is heartened by the amount of attention in the state and also federally on getting AI right. The White House, for example, issued an executive order in October on the safe and responsible use of the technology.

"There is a lot of momentum now, which is fantastic," Nyarko said. "Are we there yet? No. Just as the technology evolves, the approach will have to evolve with it, but I think the conversations are happening, which is great."

Nyarko, who leads Morgan's Data Engineering and Predictive Analytics (DEPA) Research Lab, is working on ways to monitor the performance of cloud-based systems and whether they alter depending on variables such as a person's race or ethnicity. He's also working on how to objectively measure the very nebulous concept of fairness: could there be a consensus within the industry, for example, on benchmarks that everyone would use to test their systems' performance?

"Think about going to the grocery store and picking up a package with a nutrition label on it," Nyarko said. "It's really clear when you pick it up you know what you're getting."

"What would that look like for the AI model? Pick up a product and flip it over, so to speak, metaphorically see what its strengths are, what its weaknesses are, in what areas, what groups are impacted one way or the other."
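One common benchmark of the kind such a "nutrition label" might report is demographic parity: comparing a model's favorable-outcome rate across groups. Here is a minimal, hypothetical sketch; the data and the 0.1 flag threshold are invented for illustration, not CEAMLS's actual methodology.

```python
# Minimal fairness audit: demographic parity difference.
# prediction 1 = favorable outcome (e.g., approved); data is invented.
records = [
    {"group": "X", "prediction": 1}, {"group": "X", "prediction": 1},
    {"group": "X", "prediction": 0}, {"group": "X", "prediction": 1},
    {"group": "Y", "prediction": 0}, {"group": "Y", "prediction": 1},
    {"group": "Y", "prediction": 0}, {"group": "Y", "prediction": 0},
]

def positive_rate(records: list, group: str) -> float:
    """Share of records in `group` that received the favorable outcome."""
    preds = [r["prediction"] for r in records if r["group"] == group]
    return sum(preds) / len(preds)

gap = abs(positive_rate(records, "X") - positive_rate(records, "Y"))
print(f"Demographic parity difference: {gap:.2f}")  # 0.50 here
if gap > 0.1:  # illustrative threshold a shared "label" might standardize
    print("Flag: favorable outcomes are unevenly distributed across groups.")
```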

The center's staff and students, ranging from undergrads to post-docs, are working on multiple projects: A child's toy car is parked in one room, awaiting further work to make it self-driving. There are autonomous wheelchairs, being tested at Baltimore/Washington International Thurgood Marshall Airport, where hopefully one day they can be ordered like an Uber.

Waters, who directs the Cognitive and Neurodiversity AI Lab at Morgan, is working on applications to help in diagnosing autism and assist those with autism in developing skills. With much autism research based on a small pool, usually boys and particularly white boys, she is working on using AI to observe and track children of other racial and ethnic groups in their family settings, seeking to tease out cultural differences that may mask symptoms of autism.

She is also working on using augmented reality glasses and AI to develop individualized programs for those with autism. The glasses would put an overlay on the real environment, prompting and rewarding the wearer to be more vocal, for example, or using a cartoon character to point to a location they should go to, such as a bathroom.

While the center works on projects that could find their way onto the marketplace, it maintains its focus on providing, as its mission statement puts it, "thought leadership in the application of fair and unbiased technology."

One only has to look at previous technologies that took unexpected turns from their original intent, said J. Phillip Honenberger, who joined the center from Morgan's philosophy and religious studies department. He specializes in the intersection of philosophy and science, and sees the center's work as an opportunity to get ahead of whatever unforeseen implications AI may have for our lives.

"Any socially disruptive technology almost never gets sufficient deliberation and reflection," Honenberger said. "They hit the market and start to affect people's lives before people really have a chance to think about what's happening."

Look at the way social media affected the political space, Honenberger said. No one thought, he said, "We're going to build this thing to connect people with their friends and family, and it's going to change the outcome of elections, it's going to lead to polarization and disinformation and all the other negative effects."

Technology tends to have "a reflection and deliberation deficit," Honenberger said.

But, he said, that doesn't mean innovation should be stifled because it might lead to unintended consequences.

"The solution is to build ethical capacity, build reflective and deliberative capacity," he said, "and that's what we're in the business of doing."

AI predictions for the new year – POLITICO

Mass General Brigham physicians are envisioning the future of AI in medicine.

Will 2024 be the year that artificial intelligence transforms medicine?

Leaders at one of America's top hospital systems, Mass General Brigham in Boston, might not go that far, but they have high hopes.

Their new year's predictions span departments and specialties, some patient-facing and others for the back office.

Here's what they foresee:

Neurosurgery could see advancements in AI and machine learning, according to Dr. Omar Arnaout, a neurosurgeon at Brigham and Women's Hospital. The tech could better tailor treatment plans to patients, more accurately predict outcomes and add new precision to surgeries.

Radiology's continued integration of AI could revolutionize the accuracy of diagnostics and treatments, said Dr. Manisha Bahl, a physician investigator in Mass General's radiology department. And she sees liquid biopsies taking on more of a role as AI makes it easier to detect biomarkers.

Patient chatbots will likely become more popular, according to Dr. Marc Succi, executive director of Mass General MESH Incubator, a center at the health system that, with Harvard Medical School, looks to create new approaches to health care. That could make triaging more efficient.

Smarter robots could even come to patient care because of AI, according to Randy Trumbower, director of the INSPIRE Lab, affiliated with Mass General Brigham. He and his team are studying semi-autonomous robots that use AI to better care for people with severe spinal cord injuries.

And AI tools themselves could see innovations that make them more appealing for medical use, Dr. Danielle Bitterman, an assistant professor at BWH and a faculty member on the Artificial Intelligence in Medicine program at Mass General Brigham, said. Breakthroughs could make AI systems more efficient and better at quickly incorporating current clinical information for the best patient care across specialties.


Germans are taking more mental health days off work, Die Welt reports. The number hit a record in 2022 and doubled over the previous decade.

Adopting new technology is as much a cultural issue as a technical one, the AMA says.

Health care providers can devise new ways to care for patients with digital tools, but the people building the tech and running hospitals need to be thoughtful about implementation.

All sides of the business must work together to ensure the success and safety of the new tech, including AI-driven tools, according to guidance from the American Medical Association.

Many hurdles standing in the way of digital health models aren't technical but cultural and operational, the doctors' group says.

To advance patient care and leverage technology along the way, the AMA says health care executives should:

Prepare to share more data. With regulators moving to safeguard the exchange of patient data, organizations can prepare to follow the rules even before a partnership forms.

Find common goals early. Once partnerships form, clarifying the purpose, value and concerns early on can improve prospects for successful implementation.

Make sure clinicians are in the loop. Builders of new data systems should keep the needs of doctors and nurses in mind to ensure the updates aid in patient care and dont get in the way.

Keep patients in mind. Patients who can access and use their health data are more engaged in their care.

Preschool children infected with schistosomiasis, the second-most widespread tropical disease after malaria, could finally have a treatment.

In mid-December, Europe's drug regulator, the European Medicines Agency, endorsed Merck Europe's Arpraziquantel, the first drug formulated specifically to treat small children who get the disease, caused by a parasitic worm that can remain in the body for many years and cause organ damage.

Some 50 million children, ages 3 months to 6 years and mostly in Africa, could benefit.

The European Medicines Agency's positive scientific opinion will streamline the drug's endorsement by the World Health Organization, which makes it easier for countries where the disease is endemic to register the new formulation for children.

Why it matters: Also known as bilharzia, schistosomiasis affects at least 250 million people living in places without access to clean, safe drinking water and sanitation. It's long been neglected by drugmakers.

The disease disables more than it kills, according to the WHO. In children, schistosomiasis can cause anemia, stunted growth and learning disabilities.

The effects are usually reversible through treatment with praziquantel, a drug developed in the 1970s, which Merck donates through WHO to 45 countries in sub-Saharan Africa.

The company provides up to 250 million tablets of praziquantel a year to treat school-aged children in the region, Johannes Waltz, head of Merck's Schistosomiasis Elimination Program, told POLITICO. "Our focus in the treatment is on school-aged children because the effect is the worst and there's hope that there's long-term effect if you treat regularly," he said.

The new formulation will make it easier to treat smaller children. They now receive part of a crushed praziquantel tablet, depending on how much they weigh.

Arpraziquantel is water-soluble. The taste is tolerable for kids, and it withstands hot environments, the European Medicines Agency said.
