

Category Archives: Machine Learning

Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research – Newcastle Herald



December 19 2021 - 4:30PM

Detection: Dr Raymond Chiong said "we can potentially get a very good picture of a person's mental health" with artificial intelligence. Picture: Simone De Peak

Artificial intelligence is being used to detect and predict depression in people in a University of Newcastle research project that aims to improve quality of life.

Associate Professor Raymond Chiong's research team has developed machine-learning models that "detect signs of depression using social media posts with over 98 per cent accuracy".

"We have used machine learning to analyse social media posts such as tweets, journal entries, as well as environmental factors such as demographic, social and economic information about a person," Dr Chiong said.

This was done to detect if people were suffering from depression and to "predict their likelihood of suffering from depression in the future".
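The article does not publish the team's code, but the setup it describes, text features from social media posts combined with demographic variables, can be sketched with a small scikit-learn pipeline. Everything below (column names, toy data, model choice) is an illustrative assumption, not the study's actual method:

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy labelled data; a real study would use thousands of examples.
df = pd.DataFrame({
    "post": ["feeling hopeless again", "great run this morning",
             "cannot sleep, everything is too much", "lunch with friends was fun"],
    "age": [34, 28, 45, 31],
    "employed": [0, 1, 0, 1],
    "label": [1, 0, 1, 0],  # 1 = depression indicators present
})

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "post"),                # word-level features from posts
    ("tabular", StandardScaler(), ["age", "employed"])  # demographic features
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(df[["post", "age", "employed"]], df["label"])

new = pd.DataFrame({"post": ["nothing feels worth doing"], "age": [40], "employed": [0]})
print(model.predict_proba(new)[0, 1])  # estimated probability of depression indicators

A real system would be trained and validated on large labelled datasets; the 98 per cent accuracy quoted above refers to the team's own models, not to a toy classifier like this.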

Dr Chiong said early detection of depression and poor mental health can "prevent self-harm, relapse or suicide, as well as improve the quality of life" of those affected.

"More than four million Australians suffer from depression every year and over 3000 die from suicide, with depression being a major risk factor," he said.

People often use social media to "express their feelings" and this can "identify multiple aspects of psychological concerns and human behaviour".

The next stage of the team's research will involve "detecting signs of depression by analysing physiological data collected from different kinds of devices".

"This should allow us to make more reliable and actionable predictions/detections of a person's mental health, even when all data sources are not available," he said.

"Data from wearable devices such as activity measurements, heart rate and sleeping patterns can be used for behaviour and physiological monitoring.

"By combining and analysing data from these sources, we can potentially get a very good picture of a person's mental health."

The goal is to make such tools available on a smartphone application, which will allow people to regularly monitor their mental health and seek help in the early stages of depression.

"Such an app will also build the ability of mental health and wellbeing providers to integrate digital technologies when monitoring their patients, by giving them a source of regular updates about the mental health status of their patients," he said.

"We want to use artificial intelligence and machine learning to develop tools that can detect signs of depression by utilising data from things we use on a regular basis, such as social media posts, or data from smartwatches or fitness devices."

The research team aims to develop smartphone apps that can be used by mental health professionals to better monitor their patients and help them provide more effective treatment.

The overarching goal of the research is to "improve quality of life".

"Depression can seriously impact one's enjoyment of life. It does not discriminate - anyone can suffer from it," Dr Chiong said.

"To live a high quality of life, one needs to be in good mental health. Good mental health helps people deal with environmental stressors, such as loss of a job or partner, illness and many other challenges in life."

The technology involved can help people monitor how well they are coping in challenging circumstances.

This can encourage them to seek help from family, friends and professionals in the early stages of declining mental health.

By doing so, professionals could help people prone to depression and other mental illnesses well before the situation becomes risky.

"They could also use this technology to get more information about their patients, in addition to what they can glean during consultation," he said.

This makes early interventions possible and "reduces the likelihood of self-harm or suicide attempts".

Depending on funding, the team plans to work on integrating people's health data from smart-fitness devices, such as heart rate, sleeping patterns and physical activity.

The intention is to work with Hunter New England mental health professionals on this stage of the research.

"Following this, our goal is to develop a smartphone app that can not only be used by clinical practitioners, but also everyday individuals to monitor their mental health status in real time."

He said machine learning models had shown "great potential in terms of learning from training data and making highly accurate predictions".

"For example, the application of machine learning/deep learning for image recognition is a major success story," he said.

Studies have shown that machine learning has "enormous potential in the field of mental health as well".

"The fact that we were able to obtain more than 98 per cent accuracy in detecting signs of ill mental health demonstrates that there is great potential for machine learning in this field."

However, he said the technology does face challenges before it can be applied in real-world scenarios.

"Some mobile apps have been developed that use machine learning to provide customised physical or other activities for their users, with the goal of helping them stay in good mental health," he said.

"However, our proposed app will be one of the first that allows users to monitor their mental health status in real time, by analysing their social media posts and health measurements."

Clinical practitioners could use this app to monitor their patients, but convincing them to use the technology will be one of the challenges.

See the original post:
Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research - Newcastle Herald

Posted in Machine Learning | Comments Off on Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research – Newcastle Herald

GeoMol: New deep learning model to predict the 3D shapes of a molecule – Tech Explorist

Dealing with molecules in their natural 3D structure is essential in cheminformatics and computational drug discovery: these 3D conformations determine a molecule's biological, chemical, and physical properties.

Determining the 3D shapes of a molecule helps researchers understand how it will attach to specific protein surfaces. But that is not an easy task; it is a time-consuming and expensive process.

MIT scientists have come up with a solution to ease this task: a deep learning model called GeoMol that predicts a molecule's 3D shapes. Because molecules are naturally represented as small graphs, GeoMol works directly from a 2D graph of the molecular structure.

Unlike other machine learning models, GeoMol processes molecules in only seconds and performs better, and it determines the 3D structure of each bond individually.

Usually, pharmaceutical companies need to test many molecules in lab experiments. According to the scientists, GeoMol could help those companies accelerate drug discovery by reducing the number of molecules that must be tested in the lab.

Lagnajit Pattanaik, a graduate student in the Department of Chemical Engineering and co-lead author of the paper, said, "When you are thinking about how these structures move in 3D space, there are really only certain parts of the molecule that are flexible: these rotatable bonds. One of the key innovations of our work is that we think about modeling conformational flexibility like a chemical engineer would. It is really about trying to predict the potential distribution of rotatable bonds in the structure."

GeoMol leverages a recent deep learning tool called a message passing neural network, which is specially designed to operate on graphs. By adapting it, the scientists could predict specific elements of molecular geometry.
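To make the idea concrete, here is a minimal, generic message-passing step over a toy molecular graph in NumPy. This illustrates the mechanism only; it is not GeoMol's architecture, and the weights would be learned in practice:

import numpy as np

np.random.seed(0)

# Toy molecular graph (ethanol heavy atoms, C-C-O): one feature row per atom.
atom_features = np.array([[1.0, 0.0],   # carbon
                          [1.0, 0.0],   # carbon
                          [0.0, 1.0]])  # oxygen
bonds = [(0, 1), (1, 2)]                # undirected edges

W_msg = 0.1 * np.random.randn(2, 2)     # message weights (learned in practice)
W_upd = 0.1 * np.random.randn(4, 2)     # update weights (learned in practice)

# Each atom gathers messages from its bonded neighbours...
messages = np.zeros_like(atom_features)
for i, j in bonds:
    messages[i] += atom_features[j] @ W_msg
    messages[j] += atom_features[i] @ W_msg

# ...then updates its state from its own features plus the aggregated messages.
updated = np.tanh(np.concatenate([atom_features, messages], axis=1) @ W_upd)
print(updated)  # per-atom embeddings encoding the local bonding environment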

The model first predicts the lengths of the chemical bonds between atoms and the angles of those individual bonds; the arrangement and connectivity of the atoms determine which bonds can rotate.

It then predicts the structure of each atom's surroundings individually, and assembles neighboring rotatable bonds by computing the torsion angles and aligning them.
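The assembly step can be illustrated with the standard natural extension reference frame (NeRF) construction, which places an atom in 3D from a bond length, bond angle, and torsion angle relative to three already-placed atoms. This is a common technique shown for intuition, not GeoMol's exact implementation:

import numpy as np

def place_atom(a, b, c, bond_length, bond_angle, torsion):
    """Place atom d given positions a, b, c, the c-d bond length, the b-c-d
    bond angle, and the a-b-c-d torsion angle (angles in radians).
    Assumes a, b, c are not collinear."""
    bc = c - b
    bc = bc / np.linalg.norm(bc)
    n = np.cross(b - a, bc)
    n = n / np.linalg.norm(n)
    m = np.cross(n, bc)
    # Coordinates of d in the local (bc, m, n) frame, then mapped back to 3D.
    d2 = bond_length * np.array([-np.cos(bond_angle),
                                 np.sin(bond_angle) * np.cos(torsion),
                                 np.sin(bond_angle) * np.sin(torsion)])
    return c + d2[0] * bc + d2[1] * m + d2[2] * n

# Place a fourth atom 1.52 angstroms from c, with a 109.5-degree bond angle
# and a 60-degree torsion (typical values for a carbon chain).
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.52, 0.0, 0.0])
c = np.array([2.0, 1.4, 0.0])
d = place_atom(a, b, c, 1.52, np.deg2rad(109.5), np.deg2rad(60.0))
print(d)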

Pattanaik said, "Here, the rotatable bonds can take a huge range of possible values. So, using these message passing neural networks allows us to capture a lot of the local and global environments that influence that prediction. The rotatable bond can take multiple values, and we want our prediction to be able to reflect that underlying distribution."

As mentioned above, the model determines each bond's structure individually; it explicitly defines chirality during the prediction process. Hence, there is no need for after-the-fact optimization.

Octavian-Eugen Ganea, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), said, "What we can do now is take our model and connect it end-to-end with a model that predicts this attachment to specific protein surfaces. Our model is not a separate pipeline. It is very easy to integrate with other deep learning models."

To test their model, the scientists used a dataset of molecules and the likely 3D shapes each could take, comparing GeoMol against other machine learning models and conventional methods on how well each captured those 3D structures. GeoMol outperformed the other models on every tested metric.

Pattanaik said, "We found that our model is super-fast, which was exciting to see. And importantly, as you add more rotatable bonds, you expect these algorithms to slow down significantly. But we didn't see that. The speed scales nicely with the number of rotatable bonds, which is promising for using these types of models down the line, especially for applications where you are trying to predict the 3D structures inside these proteins quickly."

Scientists are planning to use GeoMol in high-throughput virtual screening. This would help them determine small molecule structures that interact with a specific protein.

Link:
GeoMol: New deep learning model to predict the 3D shapes of a molecule - Tech Explorist

Posted in Machine Learning | Comments Off on GeoMol: New deep learning model to predict the 3D shapes of a molecule – Tech Explorist

VA Aims To Reduce Administrative Tasks With AI, Machine Learning – Nextgov

Officials at the Department of Veterans Affairs are looking to increase efficiency and optimize their clinicians' professional capabilities using advanced artificial intelligence and machine learning technologies.

In a November presolicitation, the VA seeks to gauge market readiness for advanced healthcare device manufacturing, ranging from prosthetic solutions and surgical instruments to personalized digital health assistant technology, as well as artificial intelligence and machine learning capabilities.

Dubbed Accelerating VA Innovation and Learning, or AVAIL, the program is looking to supplement and support agency health care operations, according to Amanda Purnell, an Innovation Specialist with the VA.

"What we are trying to do is utilize AI and machine learning to remove administrative burden of tasks," she told Nextgov.

The technology requested by the department will be tailored to areas where a computer can do a better, more efficient job than a human, and thereby give people back time to complete demanding tasks that require human judgement.

Some of the areas where the AI and machine learning technology could be implemented include surgical preplanning, manufacturing submissions, 3D printing, and injection molding to produce plastic medical devices and other equipment.

Purnell also said that the VA is looking for technology that can handle the bulk of document analysis. Using machine learning and natural language processing to scan and detect patterns in medical images, such as CT scans, MRIs and dermatology scans, is one of the ways the VA aims to digitize its administrative workload.

VA staff are currently tasked with looking through faxes and other clinical data to route them to the right place. AVAIL would use natural language processing to manage these operations, adding human review when necessary.

Purnell said that the forthcoming technology would emphasize "streamlining processes that are better and faster done by machines" and allowing humans to do something that is "more kind of human-meaningful," while also allowing clinicians "to operate to the top of their license."

She noted that machines are highly adept at scanning and analyzing images with AI. The VA procedure would likely have the AI technology do a preliminary scan, followed by a human clinician who makes an expert assessment based on the results.
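That scan-then-review workflow is a standard human-in-the-loop pattern, sketched below with invented names and thresholds; nothing here comes from the VA's actual solicitation:

from dataclasses import dataclass

@dataclass
class ScanResult:
    scan_id: str
    finding_prob: float  # model's estimated probability of a finding

def triage(results, low=0.05, high=0.95):
    """Split scans into auto-cleared, auto-flagged, and human-review queues."""
    auto_clear, auto_flag, human_review = [], [], []
    for r in results:
        if r.finding_prob >= high:
            auto_flag.append(r.scan_id)     # near-certain positive
        elif r.finding_prob <= low:
            auto_clear.append(r.scan_id)    # near-certain negative
        else:
            human_review.append(r.scan_id)  # uncertain: clinician decides
    return auto_clear, auto_flag, human_review

scans = [ScanResult("ct-001", 0.99), ScanResult("mri-002", 0.40),
         ScanResult("derm-003", 0.01)]
print(triage(scans))  # (['derm-003'], ['ct-001'], ['mri-002'])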

With machine learning handling the bulk of these processes along with other manufacturing and designing needs, clinicians and surgeons within the VA could focus more on applying their medical and surgical skills. Purnell used the example of a prosthetist getting more time to foster a human connection with a client rather than oversee other health care devices and manufacturing details.

"It is making sure humans are used to their best advantage, and that we're using technology to augment the human experience," she said.

The AVAIL program also stands to improve the ongoing modernization effort of the VA's beleaguered electronic health record (EHR) system, which has suffered deployment hiccups thanks to difficult interfaces and budget constraints.

The AI and machine learning technology outlined in the presolicitation could also support new EHR infrastructure and focus on user experience, mainly through an improved platform interface and other accessibility features.

Purnell underscored that having AI manage form processing and data sharing capabilities, including veteran claims and benefits, is another beneficial use case.

"We're alleviating that admin burden and increasing the experience both for veterans and our clinicians, in that veterans are getting more face time with our clinicians and clinicians are doing more of what they are trained to do," Purnell said.

Read this article:
VA Aims To Reduce Administrative Tasks With AI, Machine Learning - Nextgov

Posted in Machine Learning | Comments Off on VA Aims To Reduce Administrative Tasks With AI, Machine Learning – Nextgov

3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design – Newswise

Newswise, New Brunswick, N.J., December 7, 2021: Artificial Intelligence/Machine Learning (AI/ML) is the development of computer systems that are able to perform tasks that would normally require human intelligence. AI/ML is used by people every day, for example, while using smart home devices or digital voice assistants. The use of AI/ML is also rapidly growing in biomedical research and health care. In a recent viewpoint paper, investigators at Rutgers Cancer Institute of New Jersey and Rutgers New Jersey Medical School (NJMS) explored how AI/ML will complement existing approaches focused on genome-protein sequence information, including identifying mutations in human tumors.

Stephen K. Burley, MD, DPhil, co-program leader of the Cancer Pharmacology Research Program at Rutgers Cancer Institute, and university professor and Henry Rutgers Chair and Director of the Institute for Quantitative Biomedicine at Rutgers University, along with Renata Pasqualini, PhD, resident member of Rutgers Cancer Institute and chief of the Division of Cancer Biology, Department of Radiation Oncology at Rutgers NJMS, and Wadih Arap, MD, PhD, director of Rutgers Cancer Institute at University Hospital, co-program leader of the Clinical Investigations and Precision Therapeutics Research Program at Rutgers Cancer Institute, and chief of the Division of Hematology/Oncology, Department of Medicine at Rutgers NJMS, share more insight on the paper, published online December 2 in The New England Journal of Medicine (DOI: 10.1056/NEJMcibr2113027).

What is the potential of AI/ML in cancer research and clinical practice?

We foresee that the most immediate applications of computed structure modeling will focus on point mutations detected in human tumors (germline or somatic). Computed structure models of frequently mutated oncoproteins (e.g., Epidermal Growth Factor Receptor, EGFR, shown in Figure 2B of the paper) are already being used to help identify cancer-driver genes, enable therapeutics discovery, explain drug resistance, and inform treatment plans.

What are some of the biggest challenges for AI/ML in healthcare?

In the broadest terms, the essential challenges would likely include AI/ML research and development, technology validation, efficient/equitable deployment and coherent integration into the existing healthcare systems, and inherent issues related to the regulatory environment along with complex medical reimbursement issues.

How will this technology have an impact on vaccine design, especially with regard to SARS-CoV-2?

Going beyond 3D structure knowledge across entire proteomes (parts lists for biology and biomedicine), accurate computational modeling will enable analyses of clinically significant genetic changes manifested in 3D by individual proteins. For example, the SARS-CoV-2 Delta Variant of Concern spike protein carries 13 amino acid changes. Experimentally determined 3D structures of SARS-CoV-2 spike protein variants bound to various antibodies, all available open access from the Protein Data Bank (rcsb.org), can be used together with computed structure models of new Variant of Concern spike proteins to understand the potential impact of other amino acid changes. In currently ongoing work (as yet unpublished), we have used AI/ML approaches to understand the structure-function relationship of the SARS-CoV-2 Omicron Variant of Concern spike protein (with more than 30 amino acid changes), illustrating a practical and immediate application of this emerging technology.
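Those experimentally determined structures really are open access: the sketch below downloads one spike-protein entry from the Protein Data Bank using only the Python standard library. The entry ID is an example chosen for illustration, not one cited in the paper:

import urllib.request

pdb_id = "6VXX"  # one published SARS-CoV-2 spike glycoprotein entry (example)
url = f"https://files.rcsb.org/download/{pdb_id}.pdb"
with urllib.request.urlopen(url) as resp:
    structure = resp.read().decode()

# ATOM records carry the experimentally determined 3D coordinates.
atom_lines = [ln for ln in structure.splitlines() if ln.startswith("ATOM")]
print(f"{pdb_id}: {len(atom_lines)} atom records")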

What is the next step to better utilizing AI/ML in cancer research?

Development and equitable dissemination of user-friendly tools that cancer biologists can use to understand the three-dimensional structures of proteins implicated in human cancers, and how somatic mutations affect structure and function, leading to uncontrolled tumor cell proliferation.

###

Read the rest here:
3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design - Newswise

Posted in Machine Learning | Comments Off on 3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design – Newswise

Machines that see the world more like humans do – MIT News

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher: such systems have failed, for example, to detect emergency vehicles and pedestrians crossing the street.

To overcome these errors, MIT researchers have developed a framework that helps machines see the world more like humans do. Their new artificial intelligence system for analyzing scenes learns to perceive real-world objects from just a few images, and perceives scenes in terms of these learned objects.

The researchers built the framework using probabilistic programming, an AI approach that enables the system to cross-check detected objects against input data, to see if the images recorded from a camera are a likely match to any candidate scene. Probabilistic inference allows the system to infer whether mismatches are likely due to noise or to errors in the scene interpretation that need to be corrected by further processing.

This common-sense safeguard allows the system to detect and correct many errors that plague the deep-learning approaches that have also been used for computer vision. Probabilistic programming also makes it possible to infer probable contact relationships between objects in the scene, and use common-sense reasoning about these contacts to infer more accurate positions for objects.

"If you don't know about the contact relationships, then you could say that an object is floating above the table; that would be a valid explanation. As humans, it is obvious to us that this is physically unrealistic and the object resting on top of the table is a more likely pose of the object. Because our reasoning system is aware of this sort of knowledge, it can infer more accurate poses. That is a key insight of this work," says lead author Nishad Gothoskar, an electrical engineering and computer science (EECS) PhD student with the Probabilistic Computing Project.

In addition to improving the safety of self-driving cars, this work could enhance the performance of computer perception systems that must interpret complicated arrangements of objects, like a robot tasked with cleaning a cluttered kitchen.

Gothoskar's co-authors include recent EECS PhD graduate Marco Cusumano-Towner; research engineer Ben Zinberg; visiting student Matin Ghavamizadeh; Falk Pollok, a software engineer in the MIT-IBM Watson AI Lab; recent EECS master's graduate Austin Garrett; Dan Gutfreund, a principal investigator in the MIT-IBM Watson AI Lab; Joshua B. Tenenbaum, the Paul E. Newton Career Development Professor of Cognitive Science and Computation in the Department of Brain and Cognitive Sciences (BCS) and a member of the Computer Science and Artificial Intelligence Laboratory; and senior author Vikash K. Mansinghka, principal research scientist and leader of the Probabilistic Computing Project in BCS. The research is being presented at the Conference on Neural Information Processing Systems in December.

A blast from the past

To develop the system, called 3D Scene Perception via Probabilistic Programming (3DP3), the researchers drew on a concept from the early days of AI research, which is that computer vision can be thought of as the "inverse" of computer graphics.

Computer graphics focuses on generating images based on the representation of a scene; computer vision can be seen as the inverse of this process. Gothoskar and his collaborators made this technique more learnable and scalable by incorporating it into a framework built using probabilistic programming.

"Probabilistic programming allows us to write down our knowledge about some aspects of the world in a way a computer can interpret, but at the same time, it allows us to express what we don't know, the uncertainty. So, the system is able to automatically learn from data and also automatically detect when the rules don't hold," Cusumano-Towner explains.

In this case, the model is encoded with prior knowledge about 3D scenes. For instance, 3DP3 knows that scenes are composed of different objects, and that these objects often lie flat on top of one another, but they may not always be in such simple relationships. This enables the model to reason about a scene with more common sense.
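A toy version of that prior-plus-observation reasoning, written here as a plain Bayesian score comparison rather than in a real probabilistic programming language, shows how a contact prior can overrule a noisy "floating object" reading; all numbers are invented:

import math

def normal_logpdf(x, mean, std):
    return -0.5 * ((x - mean) / std) ** 2 - math.log(std * math.sqrt(2 * math.pi))

TABLE_TOP = 0.75     # metres: height of the table surface
observed_z = 0.79    # noisy depth-camera estimate of the bowl's base height
sensor_noise = 0.03

# Hypothesis 1: the bowl rests on the table (strong prior; pose pinned to surface).
resting = math.log(0.95) + normal_logpdf(observed_z, TABLE_TOP, sensor_noise)
# Hypothesis 2: the bowl floats exactly where the sensor says (weak prior).
floating = math.log(0.05) + normal_logpdf(observed_z, observed_z, sensor_noise)

# The contact prior outweighs the 4 cm discrepancy, so "resting" wins.
print("resting" if resting > floating else "floating")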

Learning shapes and scenes

To analyze an image of a scene, 3DP3 first learns about the objects in that scene. After being shown only five images of an object, each taken from a different angle, 3DP3 learns the object's shape and estimates the volume it would occupy in space.

"If I show you an object from five different perspectives, you can build a pretty good representation of that object. You'd understand its color, its shape, and you'd be able to recognize that object in many different scenes," Gothoskar says.

Mansinghka adds, "This is way less data than deep-learning approaches. For example, the Dense Fusion neural object detection system requires thousands of training examples for each object type. In contrast, 3DP3 only requires a few images per object, and reports uncertainty about the parts of each object's shape that it doesn't know."

The 3DP3 system generates a graph to represent the scene, where each object is a node and the lines that connect the nodes indicate which objects are in contact with one another. This enables 3DP3 to produce a more accurate estimation of how the objects are arranged. (Deep-learning approaches rely on depth images to estimate object poses, but these methods don't produce a graph structure of contact relationships, so their estimations are less accurate.)
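A minimal sketch of such a contact graph and the pose correction it enables follows; the object names, heights, and simple "snap to support" rule are illustrative assumptions, not 3DP3's inference procedure:

# Contact edges (support, supported), listed so that supports come first.
contacts = [("table", "plate"), ("table", "bowl"), ("plate", "fork")]

est_z = {"table": 0.75, "plate": 0.78, "bowl": 0.752, "fork": 0.80}  # noisy z (m)
offset = {"plate": 0.02, "bowl": 0.0, "fork": 0.01}  # resting height above support

# Snap each supported object onto its supporting surface, as the contact
# edge implies; this corrects small "floating" errors in the raw estimates.
for support, obj in contacts:
    est_z[obj] = est_z[support] + offset[obj]

print(est_z)  # {'table': 0.75, 'plate': 0.77, 'bowl': 0.75, 'fork': 0.78}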

Outperforming baseline models

The researchers compared 3DP3 with several deep-learning systems, all tasked with estimating the poses of 3D objects in a scene.

In nearly all instances, 3DP3 generated more accurate poses than other models and performed far better when some objects were partially obstructing others. And 3DP3 only needed to see five images of each object, while each of the baseline models it outperformed needed thousands of images for training.

When used in conjunction with another model, 3DP3 was able to improve its accuracy. For instance, a deep-learning model might predict that a bowl is floating slightly above a table, but because 3DP3 has knowledge of the contact relationships and can see that this is an unlikely configuration, it is able to make a correction by aligning the bowl with the table.

"I found it surprising to see how large the errors from deep learning could sometimes be, producing scene representations where objects really didn't match with what people would perceive. I also found it surprising that only a little bit of model-based inference in our causal probabilistic program was enough to detect and fix these errors. Of course, there is still a long way to go to make it fast and robust enough for challenging real-time vision systems, but for the first time, we're seeing probabilistic programming and structured causal models improving robustness over deep learning on hard 3D vision benchmarks," Mansinghka says.

In the future, the researchers would like to push the system further so it can learn about an object from a single image, or a single frame in a movie, and then be able to detect that object robustly in different scenes. They would also like to explore the use of 3DP3 to gather training data for a neural network. It is often difficult for humans to manually label images with 3D geometry, so 3DP3 could be used to generate more complex image labels.

"The 3DP3 system combines low-fidelity graphics modeling with common-sense reasoning to correct large scene interpretation errors made by deep learning neural nets. This type of approach could have broad applicability as it addresses important failure modes of deep learning. The MIT researchers' accomplishment also shows how probabilistic programming technology previously developed under DARPA's Probabilistic Programming for Advancing Machine Learning (PPAML) program can be applied to solve central problems of common-sense AI under DARPA's current Machine Common Sense (MCS) program," says Matt Turek, DARPA Program Manager for the Machine Common Sense Program, who was not involved in this research, though the program partially funded the study.

Additional funders include the Singapore Defense Science and Technology Agency collaboration with the MIT Schwarzman College of Computing, Intel's Probabilistic Computing Center, the MIT-IBM Watson AI Lab, the Aphorism Foundation, and the Siegel Family Foundation.

View post:
Machines that see the world more like humans do - MIT News

Posted in Machine Learning | Comments Off on Machines that see the world more like humans do – MIT News

METiS Therapeutics Launches With $86 Million Series A Financing to Transform Drug Discovery and Delivery With Machine Learning and Artificial…

Dec. 7, 2021 11:00 UTC

CAMBRIDGE, Mass.--(BUSINESS WIRE)-- METiS Therapeutics debuts today with $86 million Series A financing to harness artificial intelligence (AI) and machine learning to redefine drug discovery and delivery and develop optimal therapies for patients with serious diseases. PICC PE and China Life led the financing and were joined by Sequoia Capital China, Lightspeed, 5Y Capital, FreeS Fund and CMBI Zhaoxin Wuji Fund. The financing will be used to advance the company's pipeline of novel assets with high therapeutic potential and the continued development of its AI-driven drug discovery and delivery platform.

"METiS is well-positioned to change the drug discovery and delivery landscape with the creation of a proprietary predictive AI platform. We leverage machine learning, AI and quantum simulation to uncover novel drug candidates and to transform drug discovery and development, ultimately bringing the best therapies to patients in need," said Chris Lai, CEO and founder, METiS Therapeutics. "We are fortunate that our world-class roster of investors believes in our vision, and today's news represents the first of many significant milestones that we will be accomplishing throughout the next year."

The METiS platform (AiTEM) combines state-of-the-art AI data-driven algorithms, mechanism-driven quantum mechanics and molecular dynamics simulations to calculate Active Pharmaceutical Ingredient (API) properties, elucidate API-target and API-excipient interactions, and predict chemical, physical and pharmacokinetic properties of small molecule and nucleic acid therapeutics in specific microenvironments. This enables efficient lead optimization, candidate selection and formulation design. Founded by a team of MIT researchers, serial entrepreneurs and biotech industry veterans, METiS develops and in-licenses novel assets with high therapeutic potential that could benefit from its data-driven platform.

About METiS Therapeutics

METiS Therapeutics is a biotechnology company that aims to drive best-in-class therapies in a wide range of disease areas by integrating drug discovery and delivery with AI, machine learning, and quantum simulation. To learn more, visit http://www.metistx.com/.

View source version on businesswire.com: https://www.businesswire.com/news/home/20211207005197/en/

View post:
METiS Therapeutics Launches With $86 Million Series A Financing to Transform Drug Discovery and Delivery With Machine Learning and Artificial...

Posted in Machine Learning | Comments Off on METiS Therapeutics Launches With $86 Million Series A Financing to Transform Drug Discovery and Delivery With Machine Learning and Artificial…