

Category Archives: Machine Learning

Biologists train AI to generate medicines and vaccines – UW Medicine Newsroom

Scientists have developed artificial intelligence software that can create proteins that may be useful as vaccines, cancer treatments, or even tools for pulling carbon pollution out of the air.

This research, reported today in the journal Science, was led by the University of Washington School of Medicine and Harvard University. The article is titled "Scaffolding protein functional sites using deep learning."

"The proteins we find in nature are amazing molecules, but designed proteins can do so much more," said senior author David Baker, an HHMI Investigator and professor of biochemistry at UW Medicine. "In this work, we show that machine learning can be used to design proteins with a wide variety of functions."

For decades, scientists have used computers to try to engineer proteins. Some proteins, such as antibodies and synthetic binding proteins, have been adapted into medicines to combat COVID-19. Others, such as enzymes, aid in industrial manufacturing. But a single protein molecule often contains thousands of bonded atoms; even with specialized scientific software, they are difficult to study and engineer.

Inspired by how machine learning algorithms can generate stories or even images from prompts, the team set out to build similar software for designing new proteins. "The idea is the same: neural networks can be trained to see patterns in data. Once trained, you can give it a prompt and see if it can generate an elegant solution. Often the results are compelling, or even beautiful," said lead author Joseph Watson, a postdoctoral scholar at UW Medicine.

The team trained multiple neural networks using information from the Protein Data Bank, which is a public repository of hundreds of thousands of protein structures from across all kingdoms of life. The neural networks that resulted have surprised even the scientists who created them.

The team developed two approaches for designing proteins with new functions. The first, dubbed "hallucination," is akin to DALL-E or other generative A.I. tools that produce new output based on simple prompts. The second, dubbed "inpainting," is analogous to the autocomplete feature found in modern search bars and email clients.

"Most people can come up with new images of cats or write a paragraph from a prompt if asked, but with protein design, the human brain cannot do what computers now can," said lead author Jue Wang, a postdoctoral scholar at UW Medicine. "Humans just cannot imagine what the solution might look like, but we have set up machines that do."

To explain how the neural networks "hallucinate" a new protein, the team compares the process to writing a book: "You start with a random assortment of words, total gibberish. Then you impose a requirement: such as that in the opening paragraph, it needs to be a dark and stormy night. Then the computer will change the words one at a time and ask itself, 'Does this make my story make more sense?' If it does, it keeps the changes until a complete story is written," explains Wang.

Both books and proteins can be understood as long sequences of letters. In the case of proteins, each letter corresponds to a chemical building block called an amino acid. Beginning with a random chain of amino acids, the software mutates the sequence over and over until a final sequence that encodes the desired function is generated. These final amino acid sequences encode proteins that can then be manufactured and studied in the laboratory.
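The mutate-and-score loop described above can be sketched in a few lines. Note that the scoring function below is a toy stand-in for illustration only; the actual pipeline asks a trained structure-prediction network how well a candidate sequence encodes the desired function, which is far beyond a one-line objective:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acid letters

def hallucinate(score, length=30, steps=4000, seed=0):
    """Start from random 'gibberish' and keep single-letter mutations
    only when they do not lower the score."""
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best = score(seq)
    for _ in range(steps):
        i = rng.randrange(length)
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)
        new = score(seq)
        if new >= best:
            best = new        # keep the mutation
        else:
            seq[i] = old      # revert it
    return "".join(seq), best

# Toy objective: reward small residues (alanine/glycine). A real run
# would instead score foldability with a deep neural network.
designed, fitness = hallucinate(lambda s: sum(c in "AG" for c in s))
print(designed, fitness)
```

Even with this trivial objective, the loop illustrates the key property the article describes: starting from a random chain and applying only accepted point mutations, the sequence converges toward one that satisfies the imposed requirement.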

The team also showed that neural networks can fill in missing pieces of a protein structure in only a few seconds. Such software could aid in the development of new medicines.

"With autocomplete, or protein inpainting, we start with the key features we want to see in a new protein, then let the software come up with the rest. Those features can be known binding motifs or even enzyme active sites," explains Watson.
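The difference from hallucination can be made concrete with a toy sketch (again with an illustrative stand-in scorer, not the real neural network): positions marked `_` are free to change, while the known motif residues stay fixed. The four-letter "HDEL" motif below is a made-up placeholder, not a motif from the study:

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def inpaint(template, score, steps=3000, seed=1):
    """Complete the '_' blanks of a template by trial mutation,
    never touching the fixed motif residues."""
    rng = random.Random(seed)
    free = [i for i, c in enumerate(template) if c == "_"]
    seq = [rng.choice(AMINO_ACIDS) if c == "_" else c for c in template]
    best = score(seq)
    for _ in range(steps):
        i = rng.choice(free)          # only mutate the blanks
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)
        new = score(seq)
        if new >= best:
            best = new
        else:
            seq[i] = old
    return "".join(seq)

# Hypothetical motif "HDEL" appears twice; the toy scorer just
# favors leucine everywhere it is allowed to choose.
filled = inpaint("____HDEL____HDEL____", lambda s: sum(c == "L" for c in s))
print(filled)
```

The frozen positions are exactly the "key features" Watson describes; the optimizer is only free to invent the scaffold around them.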

Laboratory testing revealed that many proteins generated through hallucination and inpainting functioned as intended. This included novel proteins that can bind metals as well as those that bind the anti-cancer receptor PD-1.

The new neural networks can generate several different kinds of proteins in as little as one second. Some include potential vaccines for the deadly respiratory syncytial virus, or RSV.

All vaccines work by presenting a piece of a pathogen to the immune system. Scientists often know which piece would work best, but creating a vaccine that achieves a desired molecular shape can be challenging. Using the new neural networks, the team prompted a computer to create new proteins that included the necessary pathogen fragment as part of their final structure. The software was free to create any supporting structures around the key fragment, yielding several potential vaccines with diverse molecular shapes.

When tested in the lab, the team found that known antibodies against RSV stuck to three of their hallucinated proteins. This confirms that the new proteins adopted their intended shapes and suggests they may be viable vaccine candidates that could prompt the body to generate its own highly specific antibodies. Additional testing, including in animals, is still needed.

"I started working on the vaccine stuff just as a way to test our new methods, but in the middle of working on the project, my two-year-old son got infected by RSV and spent an evening in the ER to have his lungs cleared. It made me realize that even the test problems we were working on were actually quite meaningful," said Wang.

"These are very powerful new approaches, but there is still much room for improvement," said Baker, who was a recipient of the 2021 Breakthrough Prize in Life Sciences. "Designing high-activity enzymes, for example, is still very challenging. But every month our methods just keep getting better! Deep learning transformed protein structure prediction in the past two years; we are now in the midst of a similar transformation of protein design."

This project was led by Jue Wang, Doug Tischer, and Joseph L. Watson, who are postdoctoral scholars at UW Medicine, as well as Sidney Lisanza and David Juergens, who are graduate students at UW Medicine. Senior authors include Sergey Ovchinnikov, a John Harvard Distinguished Science Fellow at Harvard University, and David Baker, professor of biochemistry at UW Medicine.

Compute resources for this work were donated by Microsoft and Amazon Web Services.

Funding was provided by the Audacious Project at the Institute for Protein Design; Microsoft; Eric and Wendy Schmidt by recommendation of the Schmidt Futures; the DARPA Synergistic Discovery and Design project (HR001117S0003 contract FA8750-17-C-0219); the DARPA Harnessing Enzymatic Activity for Lifesaving Remedies project (HR001120S0052 contract HR0011-21-2-0012); the Washington Research Foundation; the Open Philanthropy Project Improving Protein Design Fund; Amgen; the Human Frontier Science Program Cross Disciplinary Fellowship (LT000395/2020-C) and EMBO Non-Stipendiary Fellowship (ALTF 1047-2019); the EMBO Fellowship (ALTF 191-2021); the European Molecular Biology Organization (ALTF 139-2018); the la Caixa Foundation; the National Institute of Allergy and Infectious Diseases (HHSN272201700059C); the National Institutes of Health (DP5OD026389); the National Science Foundation (MCB 2032259); the Howard Hughes Medical Institute; the National Institute on Aging (5U19AG065156); the National Cancer Institute (R01CA240339); the Swiss National Science Foundation; the Swiss National Center of Competence for Molecular Systems Engineering; the Swiss National Center of Competence in Chemical Biology; and the European Research Council (716058).

Written by Ian Haydon, UW Medicine Institute for Protein Design

Read more:
Biologists train AI to generate medicines and vaccines - UW Medicine Newsroom

Posted in Machine Learning | Comments Off on Biologists train AI to generate medicines and vaccines – UW Medicine Newsroom

Google Is Selling Advanced AI to Israel, Documents Reveal – The Intercept

Training materials reviewed by The Intercept confirm that Google is offering advanced artificial intelligence and machine-learning capabilities to the Israeli government through its controversial Project Nimbus contract. The Israeli Finance Ministry announced the contract in April 2021 for a $1.2 billion cloud computing system jointly built by Google and Amazon. The project is intended to provide "the government, the defense establishment and others with an all-encompassing cloud solution," the ministry said in its announcement.

Google engineers have spent the time since worrying whether their efforts would inadvertently bolster the ongoing Israeli military occupation of Palestine. In 2021, both Human Rights Watch and Amnesty International formally accused Israel of committing crimes against humanity by maintaining an apartheid system against Palestinians. While the Israeli military and security services already rely on a sophisticated system of computerized surveillance, the sophistication of Google's data analysis offerings could worsen the increasingly data-driven military occupation.

According to a trove of training documents and videos obtained by The Intercept through a publicly accessible educational portal intended for Nimbus users, Google is providing the Israeli government with the full suite of machine-learning and AI tools available through Google Cloud Platform. While they provide no specifics as to how Nimbus will be used, the documents indicate that the new cloud would give Israel capabilities for facial detection, automated image categorization, object tracking, and even sentiment analysis that claims to assess the emotional content of pictures, speech, and writing. The Nimbus materials referenced agency-specific trainings available to government personnel through the online learning service Coursera, citing the Ministry of Defense as an example.

A slide presented to Nimbus users illustrating Google image recognition technology.

Credit: Google

"The former head of Security for Google Enterprise, who now heads Oracle's Israel branch, has publicly argued that one of the goals of Nimbus is preventing the German government from requesting data relating to the Israel Defence Forces for the International Criminal Court," said Jack Poulson, who resigned in protest from his job as a research scientist at Google in 2018, in a message. "Given Human Rights Watch's conclusion that the Israeli government is committing crimes against humanity of apartheid and persecution against Palestinians, it is critical that Google and Amazon's AI surveillance support to the IDF be documented to the fullest."

Though some of the documents bear a hybridized symbol of the Google logo and Israeli flag, for the most part they are not unique to Nimbus. Rather, the documents appear to be standard educational materials distributed to Google Cloud customers and presented in prior training contexts elsewhere.

Google did not respond to a request for comment.

The documents obtained by The Intercept detail for the first time the Google Cloud features provided through the Nimbus contract. With virtually nothing publicly disclosed about Nimbus beyond its existence, the system's specific functionality had remained a mystery even to most of those working at the company that built it. In 2020, citing the same AI tools, U.S. Customs and Border Protection tapped Google Cloud to process imagery from its network of border surveillance towers.

Many of the capabilities outlined in the documents obtained by The Intercept could easily augment Israel's ability to surveil people and process vast stores of data, already prominent features of the Israeli occupation.

"Data collection over the entire Palestinian population was and is an integral part of the occupation," Ori Givati of Breaking the Silence, an anti-occupation advocacy group of Israeli military veterans, told The Intercept in an email. "Generally, the different technological developments we are seeing in the Occupied Territories all direct to one central element, which is more control."

The Israeli security state has for decades benefited from the country's thriving research and development sector, and its interest in using AI to police and control Palestinians isn't hypothetical. In 2021, the Washington Post reported on the existence of Blue Wolf, a secret military program aimed at monitoring Palestinians through a network of facial recognition-enabled smartphones and cameras.

"Living under a surveillance state for years taught us that all the collected information in the Israeli/Palestinian context could be securitized and militarized," said Mona Shtaya, a Palestinian digital rights advocate at 7amleh-The Arab Center for Social Media Advancement, in a message. "Image recognition, facial recognition, emotional analysis, among other things, will increase the power of the surveillance state to violate Palestinian right to privacy and to serve their main goal, which is to create the panopticon feeling among Palestinians that we are being watched all the time, which would make the Palestinian population control easier."

The educational materials obtained by The Intercept show that Google briefed the Israeli government on using what's known as sentiment detection, an increasingly controversial and discredited form of machine learning. Google claims that its systems can discern inner feelings from one's face and statements, a technique commonly rejected as invasive and pseudoscientific, regarded as being little better than phrenology. In June, Microsoft announced that it would no longer offer emotion-detection features through its Azure cloud computing platform, a technology suite comparable to what Google provides with Nimbus, citing the lack of scientific basis.

Google does not appear to share Microsoft's concerns. One Nimbus presentation touted the "Faces, facial landmarks, emotions" detection capabilities of Google's Cloud Vision API, an image analysis toolset. The presentation then offered a demonstration using the enormous grinning face sculpture at the entrance of Sydney's Luna Park. An included screenshot of the feature ostensibly in action indicates that the massive smiling grin is "very unlikely" to exhibit any of the example emotions. And Google was only able to assess that the famous amusement park is an amusement park with 64 percent certainty, while it guessed that the landmark was a "place of worship" or "Hindu Temple" with 83 percent and 74 percent confidence, respectively.

A slide presented to Nimbus users illustrating Google AIs ability to detect image traits.

Credit: Google

"Vision API is a primary concern to me because it's so useful for surveillance," said one worker, who explained that the image analysis would be a natural fit for military and security applications. "Object recognition is useful for targeting, it's useful for data analysis and data labeling. An AI can comb through collected surveillance feeds in a way a human cannot to find specific people and to identify people, with some error, who look like someone. That's why these systems are really dangerous."

A slide presented to Nimbus users outlining various AI features through the companys Cloud Vision API.

Credit: Google

Training an effective model from scratch is often resource intensive, both financially and computationally. This is not so much of a problem for a world-spanning company like Google, with an unfathomable volume of both money and computing hardware at the ready. Part of Google's appeal to customers is the option of using a pre-trained model, essentially getting this prediction-making education out of the way and letting customers access a well-trained program that's benefited from the company's limitless resources.

Custom models generated through AutoML, one presentation noted, can be "downloaded for offline edge use," unplugged from the cloud and deployed in the field.

That Nimbus lets Google clients use advanced data analysis and prediction in places and ways that Google has no visibility into creates a risk of abuse, according to Liz O'Sullivan, CEO of the AI auditing startup Parity and a member of the U.S. National Artificial Intelligence Advisory Committee. "Countries can absolutely use AutoML to deploy shoddy surveillance systems that only seem like they work," O'Sullivan said in a message. "On edge, it's even worse: think bodycams, traffic cameras, even a handheld device like a phone can become a surveillance machine, and Google may not even know it's happening."

In one Nimbus webinar reviewed by The Intercept, the potential use and misuse of AutoML was exemplified in a Q&A session following a presentation. An unnamed member of the audience asked the Google Cloud engineers present on the call if it would be possible to process data through Nimbus in order to determine if someone is lying.

"I'm a bit scared to answer that question," said the engineer conducting the seminar, in an apparent joke. "In principle: Yes. I will expand on it, but the short answer is yes." Another Google representative then jumped in: "It is possible, assuming that you have the right data, to use the Google infrastructure to train a model to identify how likely it is that a certain person is lying, given the sound of their own voice." Noting that such a capability would take a tremendous amount of data for the model, the second presenter added that one of the advantages of Nimbus is the ability to tap into Google's vast computing power to train such a model.

A broad body of research, however, has shown that the very notion of a lie detector, whether the simple polygraph or AI-based analysis of vocal changes or facial cues, is junk science. While Google's reps appeared confident that the company could make such a thing possible through sheer computing power, experts in the field say that any attempts to use computers to assess things as profound and intangible as truth and emotion are faulty to the point of danger.

One Google worker who reviewed the documents said they were concerned that the company would even hint at such a scientifically dubious technique. "The answer should have been no, because that does not exist," the worker said. "It seems like it was meant to promote Google technology as powerful, and it's ultimately really irresponsible to say that when it's not possible."

Andrew McStay, a professor of digital media at Bangor University in Wales and head of the Emotional AI Lab, told The Intercept that the lie detector Q&A exchange was disturbing, as is Google's willingness to pitch pseudoscientific AI tools to a national government. "It is [a] wildly divergent field, so any technology built on this is going to automate unreliability," he said. "Again, those subjected to them will suffer, but I'd be very skeptical for the citizens it is meant to protect that these systems can do what is claimed."

According to some critics, whether these tools work might be of secondary importance to a company like Google that is eager to tap the ever-lucrative flow of military contract money. Governmental customers too may be willing to suspend disbelief when it comes to promises of vast new techno-powers. "It's extremely telling that in the webinar PDF that they constantly referred to this as 'magical AI goodness,'" said Jathan Sadowski, a scholar of automation technologies and research fellow at Monash University, in an interview with The Intercept. "It shows that they're bullshitting."

Google CEO Sundar Pichai speaks at the Google I/O conference in Mountain View, Calif. Google pledges that it will not use artificial intelligence in applications related to weapons or surveillance, part of a new set of principles designed to govern how it uses AI. Those principles, released by Pichai, commit Google to building AI applications that are socially beneficial, that avoid creating or reinforcing bias and that are accountable to people.

Photo: Jeff Chiu/AP

Israel, though, has set up its relationship with Google to shield it from both the company's principles and any outside scrutiny. Perhaps fearing the fate of the Pentagon's Project Maven, a Google AI contract felled by intense employee protests, the data centers that power Nimbus will reside on Israeli territory, subject to Israeli law and insulated from political pressures. Last year, the Times of Israel reported that Google would be contractually barred from shutting down Nimbus services or denying access to a particular government office, even in response to boycott campaigns.

Google employees interviewed by The Intercept lamented that the company's AI principles are at best a superficial gesture. "I don't believe it's hugely meaningful," one employee told The Intercept, explaining that the company has interpreted its AI charter so narrowly that it doesn't apply to companies or governments that buy Google Cloud services. Asked how the AI principles are compatible with the company's Pentagon work, a Google spokesperson told Defense One, "It means that our technology can be used fairly broadly by the military."

Moreover, this employee added that Google lacks both the ability to tell if its principles are being violated and any means of thwarting violations. "Once Google offers these services, we have no technical capacity to monitor what our customers are doing with these services," the employee said. "They could be doing anything." Another Google worker told The Intercept, "At a time when already vulnerable populations are facing unprecedented and escalating levels of repression, Google is backsliding on its commitments to protect people from this kind of misuse of our technology. I am truly afraid for the future of Google and the world."

Ariel Koren, a Google employee who claimed earlier this year that she faced retaliation for raising concerns about Nimbus, said the company's internal silence on the program continues. "I am deeply concerned that Google has not provided us with any details at all about the scope of the Project Nimbus contract, let alone assuage my concerns of how Google can provide technology to the Israeli government and military (both committing grave human rights abuses against Palestinians daily) while upholding the ethical commitments the company has made to its employees and the public," she told The Intercept in an email. "I joined Google to promote technology that brings communities together and improves people's lives, not service a government accused of the crime of apartheid by the world's two leading human rights organizations."

Sprawling tech companies have published ethical AI charters to rebut critics who say that their increasingly powerful products are sold unchecked and unsupervised. The same critics often counter that the documents are a form of "ethics washing": essentially toothless self-regulatory pledges that provide only the appearance of scruples, pointing to examples like the provisions in Israel's contract with Google that prevent the company from shutting down its products. "The way that Israel is locking in their service providers through this tender and this contract," said Sadowski, the Monash University scholar, "I do feel like that is a real innovation in technology procurement."

To Sadowski, it matters little whether Google believes what it peddles about AI or any other technology. What the company is selling, ultimately, isn't just software, but power. And whether it's Israel and the U.S. today or another government tomorrow, Sadowski says that some technologies amplify the exercise of power to such an extent that even their use by a country with a spotless human rights record would provide little reassurance. "Give them these technologies, and see if they don't get tempted to use them in really evil and awful ways," he said. "These are not technologies that are just neutral intelligence systems, these are technologies that are ultimately about surveillance, analysis, and control."

Read more here:
Google Is Selling Advanced AI to Israel, Documents Reveal - The Intercept

Posted in Machine Learning | Comments Off on Google Is Selling Advanced AI to Israel, Documents Reveal – The Intercept

Artificial Intelligence Computing Software Market Analysis Report 2022: Complete Information of the AI-related Processors Specifications and…

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence Computing Software: Market Analysis" report has been added to ResearchAndMarkets.com's offering.

The market is predicted to grow from $6.9B in 2021 to $37.6B in 2026 and may become a new sector of the economy.

This research contains complete information on the specifications and capabilities of the AI-related processors produced by the key market players and start-ups.

This comprehensive analysis can aid you in your technology acquisitions or investment decisions related to the fast-growing AI processors market.

After the main breakthrough at the turn of the century, AI started to incorporate more and more artificial neural networks, connected in an ever-growing number of layers, in what is now known as Deep Learning (DL). These networks can compete with and outperform classical ML techniques like clustering, but are more flexible and can work with much more complex datasets, including images and audio.

As machine learning entered exponential growth, it expanded into areas usually dominated by high-performance computing, such as protein folding and many-particle interactions. At the same time, our lives have become increasingly dependent on its availability and reliability. This poses a number of new technical challenges, but at the same time opens a road to novel solutions and technologies, much as space exploration or fundamental physics does.

Moreover, the commercial success of AI-enabled systems (autopilots, image processing, speech recognition and translation, to name just a few) ensures that no shortage of funds could hinder this growth. It has clearly become a new industry, if not a sector of the economy, one that is gaining importance with every passing year.

Like any industry, it depends on several factors to prosper. Rising consumer demand has led major forecasters to a consensus on rapid growth of the sector, around 40% yearly in the near future, so a shortage of funds is not an issue. Instead, we must concentrate on other requirements for the efficient functioning of the industry.
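The ~40% consensus figure is consistent with the report's own headline numbers, $6.9B in 2021 growing to $37.6B in 2026: the implied compound annual growth rate over those five years works out to roughly 40%:

```python
# Implied compound annual growth rate (CAGR) from the quoted figures.
start, end, years = 6.9, 37.6, 5       # market size in $B, 2021 and 2026
cagr = (end / start) ** (1 / years) - 1
print(f"implied growth: {cagr:.1%} per year")
```

This is a simple back-of-the-envelope check, not a figure taken from the report itself.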

The three main components are the availability of processing tools, the abundance of raw materials, and the workforce. Raw materials in this case are represented by big data, and there is often more of it than our current systems can make sense of. The workforce also seems to grow sufficiently fast, as ML cements its place in the university curriculum. So the processing tools, as well as the available energy to run them, are the clear bottlenecks in this exponential growth.

The end of Moore's law, due to quantum tunnelling and related effects that become increasingly important with the reduction in transistor size, sets clear bounds on where we can go. To ensure long-term investments in the industry, a clear strategy must be developed to offset what will happen in 10 years.

Key Highlights

Key Topics Covered:

1. Deep learning challenges

1.1 Architectural limitations

1.2 Brief introduction to deep learning

1.3 Cutting corners

1.4 Processing tools

2. Market analysis

2.1 Market overview

2.2 CPU

2.3 Edge and Mobile

2.4 GPU

2.5 FPGA

2.6 ASIC

2.6.1 Tech giants

2.6.2 Startups

2.7 Neuromorphic processors

2.8 Photonic computing

3. Glossary

4. Infographics

For more information about this report visit https://www.researchandmarkets.com/r/5wsx87

Originally posted here:
Artificial Intelligence Computing Software Market Analysis Report 2022: Complete Information of the AI-related Processors Specifications and...

Posted in Machine Learning | Comments Off on Artificial Intelligence Computing Software Market Analysis Report 2022: Complete Information of the AI-related Processors Specifications and…

How we learned to break down barriers to machine learning – Ars Technica

Dr. Sephus discusses breaking down barriers to machine learning at Ars Frontiers 2022.

Welcome to the week after Ars Frontiers! This article is the first in a short series of pieces that will recap each of the day's talks for the benefit of those who weren't able to travel to DC for our first conference. We'll be running one of these every few days for the next couple of weeks, and each one will include an embedded video of the talk (along with a transcript).

For today's recap, we're going over our talk with Amazon Web Services tech evangelist Dr. Nashlie Sephus. Our discussion was titled "Breaking Barriers to Machine Learning."

Dr. Sephus came to AWS via a roundabout path, growing up in Mississippi before eventually joining a tech startup called Partpic. Partpic was an artificial intelligence and machine-learning (AI/ML) company with a neat premise: Users could take photographs of tooling and parts, and the Partpic app would algorithmically analyze the pictures, identify the part, and provide information on what the part was and where to buy more of it. Partpic was acquired by Amazon in 2016, and Dr. Sephus took her machine-learning skills to AWS.

When asked, she identified access as the biggest barrier to the greater use of AI/ML; in a lot of ways, it's another wrinkle in the old problem of the digital divide. A core component of being able to utilize most common AI/ML tools is having reliable and fast Internet access, and drawing on experience from her background, Dr. Sephus pointed out that a lack of access to technology in primary schools in poorer areas of the country sets kids on a path away from being able to use the kinds of tools we're talking about.

Furthermore, lack of early access leads to resistance to technology later in life. "You're talking about a concept that a lot of people think is pretty intimidating," she explained. "A lot of people are scared. They feel threatened by the technology."

One way of tackling the divide here, in addition to simply increasing access, is changing the way that technologists communicate about complex topics like AI/ML to regular folks. "I understand that, as technologists, a lot of times we just like to build cool stuff, right?" Dr. Sephus said. "We're not thinking about the longer-term impact, but that's why it's so important to have that diversity of thought at the table and those different perspectives."

Dr. Sephus said that AWS has been hiring sociologists and psychologists to join its tech teams to figure out ways to tackle the digital divide by meeting people where they are rather than forcing them to come to the technology.

Simply reframing complex AI/ML topics in terms of everyday actions can remove barriers. Dr. Sephus explained that one way of doing this is to point out that almost everyone has a cell phone, and when you're talking to your phone or using facial recognition to unlock it, or when you're getting recommendations for a movie or for the next song to listen to, these things are all examples of interacting with machine learning. Not everyone groks that, especially technological laypersons, and showing people that these things are driven by AI/ML can be revelatory.

"Meeting them where they are, showing them how these technologies affect them in their everyday lives, and having programming out there in a way that's very approachable, I think that's something we should focus on," she said.

Continued here:
How we learned to break down barriers to machine learning - Ars Technica

Posted in Machine Learning | Comments Off on How we learned to break down barriers to machine learning – Ars Technica

Keeping water on the radar: Machine learning to aid in essential water cycle measurement – CU Boulder Today

Department of Computer Science assistant professor Chris Heckman and CIRES research hydrologist Toby Minear have been awarded a Grand Challenge Research & Innovation Seed Grant to create an instrument that could revolutionize our understanding of the amount of water in our rivers, lakes, wetlands and coastal areas by greatly increasing the places where we measure it.

The new low-cost instrument would use radar and machine learning to quickly and safely measure water levels in a variety of scenarios.

This work could prove vital as the USDA recently proclaimed the entire state of Colorado to be a "primary natural disaster area" due to an ongoing drought that has made the American West potentially the driest it has been in over a millennium. Other climate records across the globe also continue to be broken, year after year. Our understanding of the changing water cycle has never been more essential at a local, national and global level.

A fundamental part of developing this understanding is knowing how the surface height of bodies of water changes. Currently, measuring changing water surface levels involves high-cost sensors that are easily damaged by floods, difficult to install and time-consuming to maintain.

"One of the big issues is that we have limited locations where we take measurements of surface water heights," Minear said.

Heckman and Minear are aiming to change this by building a low-cost instrument that doesn't need to be in a body of water to read its average water surface level. It can instead be placed several meters away, safely elevated above flood levels.

The instrument, roughly the size of two credit cards stacked on one another, relies on high-frequency radio waves, often referred to as "millimeter wave," which have only become commercially accessible in the last decade.

Through radar, these short waves can be used to measure the distance between the sensor and the surface of a body of water with great specificity. As the water's surface level increases or decreases over time, the distance between the sensor and the water's surface level changes.
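The arithmetic behind this kind of ranging is straightforward for a frequency-modulated continuous-wave (FMCW) radar, the scheme most commercial millimeter-wave sensor chips use: the echo's beat frequency is proportional to the round-trip delay, and hence to the distance. The sketch below is illustrative only; the bandwidth and chirp-duration values are assumptions typical of 77 GHz parts, not specifications of the CU Boulder instrument.

```python
# Illustrative FMCW ranging math (assumed parameters, not the actual device):
# a chirp sweeps bandwidth B over duration T, so a target at range R produces
# a beat frequency f_b = (B/T) * (2R/c).
C = 3.0e8   # speed of light, m/s
B = 4.0e9   # chirp bandwidth, Hz (typical for 77 GHz mmWave radar chips)
T = 40e-6   # chirp duration, s

def beat_from_range(r_m: float) -> float:
    """Beat frequency (Hz) produced by a reflecting surface r_m meters away."""
    slope = B / T                  # chirp slope, Hz/s
    return slope * (2.0 * r_m / C)

def range_from_beat(f_beat_hz: float) -> float:
    """Distance (m) to the surface, recovered from the measured beat frequency."""
    round_trip = f_beat_hz / (B / T)   # round-trip delay, s
    return C * round_trip / 2.0

# A sensor mounted 2.5 m above the water would see roughly a 1.67 MHz beat;
# as the water level rises, the distance shrinks and the beat frequency drops.
fb = beat_from_range(2.5)
print(round(fb / 1e6, 2), "MHz")         # → 1.67 MHz
print(round(range_from_beat(fb), 3), "m")  # → 2.5 m
```

With a 4 GHz sweep, the theoretical range resolution is c/(2B), about 3.75 cm, which is why wide-bandwidth millimeter-wave chirps suit close-range water-level sensing.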

The instrument's small form-factor and potential off-the-shelf usability separate it from previous efforts to identify water through radar.

It also streamlines data transmitted over often limited and expensive cellular and satellite networks, lowering the cost.

In addition, the instrument will use machine learning to determine whether a change in measurements could be a temporary outlier, like a bird swimming by, and whether or not a surface is liquid water.

Machine learning is a form of data analysis that seeks to identify patterns from data to make decisions with little human intervention.
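As a toy stand-in for the outlier screening described above, a reading can be flagged when it sits far from a robust summary of recent measurements. A real deployment would use a trained model; the median-absolute-deviation rule and the 3.5 threshold below are illustrative assumptions, not the project's actual method.

```python
# Simplified outlier screen for water-level readings (illustrative only):
# flag a reading as a likely temporary outlier, e.g. a bird swimming under
# the sensor, when it deviates from the median of recent readings by more
# than k median absolute deviations (MAD).
from statistics import median

def is_outlier(reading: float, recent: list[float], k: float = 3.5) -> bool:
    med = median(recent)
    mad = median(abs(x - med) for x in recent) or 1e-9  # guard zero spread
    return abs(reading - med) / mad > k

recent = [1.20, 1.21, 1.19, 1.22, 1.20]   # recent water levels, meters
print(is_outlier(1.60, recent))  # → True  (sudden jump: transient reflector)
print(is_outlier(1.23, recent))  # → False (ordinary fluctuation)
```

Flagged readings would be excluded from the averaged surface level rather than transmitted, which also serves the goal of minimizing traffic over limited cellular and satellite links.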

While traditionally radar has been used to detect solid objects, liquids require different considerations to avoid being misidentified. Heckman believes that traditional ways of processing radar may not be enough to measure liquid surfaces at such close proximity.

"We're considering moving further up the radar processing chain and reconsidering how some of these algorithms have been developed in light of new techniques in this kind of signal processing," Heckman said.

In addition to possible fundamental shifts in radar processing, the project could empower communities of citizen scientists, according to Minear.

"Right now, many of the systems that we use need an expert installer. Our idea is to internalize some of those expert decisions, which takes out a lot of the cost and makes this instrument more friendly to a citizen science approach," he said.

By lowering the barrier of entry to water surface level measurement through low-cost devices with smaller data requirements, the researchers broaden opportunities for communities, even in areas with limited cellular networks, to measure their own water sources.

The team is also committing to open-source principles to ensure that anyone can use and build on the technology, allowing for new innovations to happen more quickly and democratically.

Minear, who is a Science Team and Cal/Val Team member for the upcoming NASA Surface Water and Ocean Topography (SWOT) Mission, also hopes that the new instrument could help check the accuracy of water surface level measurements made by satellites.

These sensors could also give local, regional and national communities more insight into their water usage and supply over time and could be used to help make evidence-informed policy decisions about water rights and usage.

"I'm very excited about the opportunities that are presented by getting data in places that we don't currently get it. I anticipate that this could give us better insight into what is happening with our water sources, even in our backyard," said Heckman.

More here:
Keeping water on the radar: Machine learning to aid in essential water cycle measurement - CU Boulder Today

Posted in Machine Learning | Comments Off on Keeping water on the radar: Machine learning to aid in essential water cycle measurement – CU Boulder Today

The role of AI and machine learning in revolutionizing clinical research – MedCity News

Advanced technologies such as artificial intelligence (AI), machine learning (ML), and natural language processing (NLP) have become a cornerstone of successful modern clinical trials, integrated into many of the technologies enabling the transformation of clinical development.

The health and life sciences industry's dramatic leap into the digital age in recent years has been a game-changer, with innovations and scientific breakthroughs that are improving patient outcomes and population health. Consequently, embracing digital transformation is no longer an option but an industry standard. Let's explore what that truly means for clinical development.

An accelerated path to better results

Over the years, technology has equipped clinical leaders to successfully reduce costs while accelerating stages of research and development. These technologies have aided in the structuring of complex data environments, a need created by the exponential growth in data sources containing valuable information for clinical research.

Today, the volume, variety and velocity of structured and unstructured data generated by clinical trials are outpacing traditional data management processes. The reality is that there is simply too much data coming from too many sources to be manageable by human teams alone. As a response to this, AI/ML technologies have proven in recent years to hold the remarkable potential to automate data standardization while ensuring quality control, in turn easing the burden on researchers with minimal manual intervention.

Once data collection and streamlining are consolidated within a single automated ecosystem, clinical trial leaders begin to benefit from faster and smarter insights driven by machine analysis. These include predictive and prescriptive insights that can help researchers and sites uncover best practices for future processes. Altogether, these capabilities can improve research outcomes, patients' experience and safety.

A look into compliance and privacy

When we think about the use of patient data, privacy and compliance adherence must be a consideration. The bar is set high for any technology being implemented into clinical trial execution.

Efforts must adhere to Good Clinical Practice (GCP) and to validation requirements that ensure an outcome is predictable and repeatable. Additionally, there must be transparency and explainability around how any AI algorithm makes decisions, to demonstrate correctness and the absence of bias. This is more essential than ever from a compliance perspective, as regulators increasingly consider algorithms as part of the basis for their approvals.

Keeping the h(uman) in healthcare

The goal of implementing AI/ML in clinical research is not to replace humans with digital tools but to increase their productivity through high-efficiency human augmentation and the automation of mundane tasks. Before the application of advanced technologies to clinical trials, there was an unmet need for an agile methodology where researchers and organizers could solely focus on critical requirements and the delivery of results.

The intelligent application of technology allows for human interaction with AI models to bring better outcomes to research, and even in its most advanced stage, data science technology never replaces the human data scientist. It does, however, provide a mutually beneficial arrangement in which augmented workflows ease the data burden on data scientists while AI models improve through human feedback. This continuous retraining of an AI model is typically delivered through Continuous Integration/Continuous Delivery (CI/CD) pipelines.

The integration of human capacity and technology results in accelerated efficiency, improved compliance and superb patient personalization. Furthermore, regardless of how efficient algorithms become, the decision-making power will always belong to humans.

Envisioning a bold future

AI/ML strategies are redefining the clinical development cycle like never before, and as the industry leaps into new frontiers, digital transformation is leading the way to advancements that will revolutionize the space. Leaders today have the opportunity to apply advanced technologies to solve historically complicated problems in the field.

Already, we've seen better site selection, more effective risk-based quality management, improved patient monitoring and safety, enhanced patient recruitment and engagement, and improved overall study quality, and this is just the beginning.

Photo: Blue Planet Studio, Getty Images

Continue reading here:
The role of AI and machine learning in revolutionizing clinical research - MedCity News

Posted in Machine Learning | Comments Off on The role of AI and machine learning in revolutionizing clinical research – MedCity News