The Future Of Nano Technology
Category Archives: Machine Learning
93% of security operations centers employing AI and machine learning tools to detect advanced threats – Security Magazine
Machine learning prediction for mortality of patients diagnosed with COVID-19: a nationwide Korean cohort study – DocWire News
This article was originally published here
Sci Rep. 2020 Oct 30;10(1):18716. doi: 10.1038/s41598-020-75767-2.
The rapid spread of COVID-19 has resulted in a shortage of medical resources, which necessitates accurate prognosis prediction to triage patients effectively. This study used a nationwide cohort of South Korea to develop a machine learning model that predicts prognosis based on sociodemographic and medical information. Of 10,237 COVID-19 patients, 228 (2.2%) died, 7772 (75.9%) recovered, and 2237 (21.9%) were still in isolation or being treated at the last follow-up (April 16, 2020). Cox proportional hazards regression analysis revealed that age > 70, male sex, moderate or severe disability, the presence of symptoms, nursing home residence, and comorbidities of diabetes mellitus (DM), chronic lung disease, or asthma were significantly associated with increased risk of mortality (p ≤ 0.047). For machine learning, the least absolute shrinkage and selection operator (LASSO), linear support vector machine (SVM), SVM with radial basis function kernel, random forest (RF), and k-nearest neighbors were tested. In prediction of mortality, LASSO and linear SVM demonstrated high sensitivities (90.7% [95% confidence interval: 83.3, 97.3] and 92.0% [85.9, 98.1], respectively) while maintaining high specificities (91.4% [90.3, 92.5] and 91.8% [90.7, 92.9], respectively), as well as high areas under the receiver operating characteristic curves (0.963 [0.946, 0.979] and 0.962 [0.945, 0.979], respectively). The most significant predictors for LASSO included old age and preexisting DM or cancer; for RF they were old age, infection route (cluster infection or infection from personal contact), and underlying hypertension. The proposed prediction model may be helpful for the quick triage of patients without having to wait for the results of additional tests such as laboratory or radiologic studies, during a pandemic when limited medical resources must be wisely allocated without hesitation.
PMID:33127965 | DOI:10.1038/s41598-020-75767-2
Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix – Diginomica
The extraordinary advances in machine learning that drive the increasing accuracy and reliability of artificial intelligence systems have been matched by a corresponding growth in malicious attacks by bad actors seeking to exploit a new breed of vulnerabilities designed to distort the results.
Microsoft reports it has seen a notable increase in attacks on commercial ML systems over the past four years. Other reports have also brought attention to this problem. Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that:
Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.
Training data poisoning happens when an adversary is able to introduce bad data into your model's training pool, and hence get it to learn things that are wrong. One approach is to target your ML's availability; the other targets its integrity (commonly known as "backdoor" attacks). Availability attacks aim to inject so much bad data into your system that whatever boundaries your model learns are basically worthless. Integrity attacks are more insidious because the developer isn't aware of them so attackers can sneak in and get the system to do what they want.
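The integrity-style poisoning described above can be illustrated with a toy sketch. The following is a hypothetical example, not any real attack from the article: an adversary injects mislabeled points into the training pool of a simple nearest-centroid classifier, dragging the "benign" centroid toward the malicious cluster so that a malicious probe point is misclassified. All data and labels are made up.

```python
# Hypothetical sketch of a label-flipping poisoning attack against a
# toy nearest-centroid classifier. Data, labels, and names are illustrative.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    # data: list of (features, label); returns one centroid per class
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

clean = [([0.0, 0.0], "benign"), ([0.2, 0.1], "benign"),
         ([5.0, 5.0], "malicious"), ([5.2, 4.9], "malicious")]

# The adversary injects points near the malicious cluster but labeled
# "benign", pulling the benign centroid toward malicious territory.
poison = [([4.8, 5.1], "benign"), ([5.1, 5.3], "benign"),
          ([4.9, 4.8], "benign")]

clean_model = train(clean)
poisoned_model = train(clean + poison)

probe = [2.6, 2.6]  # just on the malicious side of the clean boundary
print(predict(clean_model, probe))     # malicious
print(predict(poisoned_model, probe))  # now misclassified as benign
```

The decision boundary still looks plausible to the developer, which is what makes integrity attacks hard to spot.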
Model theft techniques are used to recover models or information about the data used during training. This is a major concern because AI models represent valuable intellectual property trained on potentially sensitive data, including financial trades, medical records, or user transactions. Adversaries aim to recreate AI models by querying the public API and using the results as a guide to refine their own model.
Adversarial examples are inputs to machine learning models that attackers have intentionally designed to cause the model to make a mistake. Basically, they are like optical illusions for machines.
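A minimal sketch of the idea, assuming a plain linear classifier with made-up weights (this is the classic fast-gradient-sign construction, not any specific attack from the article): each feature is nudged by a small amount in the direction that most lowers the model's score, flipping the prediction while barely changing the input.

```python
# Hypothetical FGSM-style adversarial example against a linear
# classifier (score = w·x + b). Weights and inputs are illustrative.
import math

w = [2.0, -1.5, 0.5]   # assumed model weights
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "positive" if score(x) > 0 else "negative"

x = [0.3, 0.5, 0.2]    # correctly classified input
eps = 0.5              # perturbation budget per feature

# Nudge each feature by eps against the sign of the gradient of the
# score (for a linear model, the gradient is just w).
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]

print(classify(x))      # positive
print(classify(x_adv))  # negative
```

For image classifiers the same trick produces perturbations small enough to be invisible to humans, which is why the "optical illusion" analogy fits.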
All of these methods are dangerous and growing in both volume and sophistication. As Ann Johnson, Corporate Vice President, SCI Business Development at Microsoft, wrote in a blog post:
Despite the compelling reasons to secure ML systems, Microsoft's survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don't have the right tools in place to secure their ML systems. What's more, they are explicitly looking for guidance. We found that preparation is not just limited to smaller organizations. We spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations.
Responding to the growing threat, last week Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems. Said Johnson:
Microsoft worked with MITRE to create the Adversarial ML Threat Matrix, because we believe the first step in empowering security teams to defend against attacks on ML systems, is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems. We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization's mission critical ML systems.
The Adversarial ML Threat Matrix, modeled after the MITRE ATT&CK Framework, aims to address the problem with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted to be effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action.
Techniques in the schema fall within one tactic and are illustrated by a series of case studies covering how well-known attacks such as the Microsoft Tay poisoning, the Proofpoint evasion attack, and other attacks could be analyzed using the Threat Matrix. Noted Charles Clancy, MITRE's chief futurist, senior vice president, and general manager of MITRE Labs:
Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.
Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE's Decision Science research programs, said that AI is now at the same stage the internet was in the late 1980s, when people were focused on getting the technology to work and not thinking much about the longer-term implications for security and privacy. That, he says, was a mistake that we can learn from.
The Adversarial ML Threat Matrix will allow security analysts to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning and to develop a common language that allows for better communications and collaboration.
Facebook has launched a multilingual machine learning translation model. Previous models tended to rely on English data as an intermediary. However, Facebook's many-to-many software, called M2M-100, can translate directly between any pair of 100 languages. The software is open source, with the model, raw data, training, and evaluation setup available on GitHub.
M2M-100, if it works correctly, provides a functional product with real-world applications, which can be built on by other developers. In a globalized world, accurate translation of a wide variety of languages is vital. It enables accurate communication between different communities, which is essential for multinational businesses. It also allows news articles and social media posts to be accurately portrayed, reducing instances of misinformation.
GlobalData's recent thematic report on AI suggests that years of bold proclamations by tech companies eager for publicity have resulted in AI becoming overhyped. The reality has often fallen short of the rhetoric. Principal Microsoft researcher Katja Hofmann argues that AI is transitioning to a new phase, in which breakthroughs occur but at a slower rate than previously suggested. The next few years will require practical uses of AI with tangible benefits, applying AI to specific use cases.
M2M-100 provides 2,200 translation combinations of 100 languages without relying on English data as a mediator. Among its main competitors, Amazon Translate and Microsoft Translator both support significantly fewer languages than Facebook. However, Google Translate supports 108 languages, both living and dead, having added five new languages in February 2020.
Google's and Facebook's products differ. Google uses BookCorpus and English Wikipedia as training data, whereas Facebook analyzes the language of its users. Facebook is, therefore, more suitable for conversational translation, while Google excels at academic-style web page translation. Google performs best when English is the target language, which correlates with the training data used. Facebook's multi-directional model claims there is no English bias, with translations functioning between 2,200 language pairs. Accurate conversational translations based on real-time data and multiple language pairs can fulfil global business needs, making Facebook a market leader.
Facebook's strength in this aspect of AI is unsurprising. GlobalData has given the company a thematic score of 5 out of 5 for machine learning, suggesting that this theme will significantly improve Facebook's future performance.
However, natural language processing (NLP) can be problematic, with language semantics making it hard for algorithms to provide accurate translations. In 2017, Facebook translated the phrase "good morning" in Arabic, posted on its platform by a Palestinian man, as "attack them" in Hebrew, resulting in the sender's arrest by Israeli police. The open-source nature of the software will help developers recognize pain points. It also allows innovation, enabling multilingual models to be advanced in the future by developers.
Language translation is a high-profile use case for AI due to its applications in conversational platforms like Amazon's Alexa, Google's Assistant, and Apple's Siri. The tech giants are racing to improve the performance of their virtual assistants. Facebook's M2M-100 announcement will raise the stakes in AI translation software, pushing the company's main competitors to respond.
In an interconnected, globalized world, accurate translation is essential. Facebook has used its global community and access to large datasets to progress machine learning and AI, creating a practical, real-world use case. Allowing access to the training data and models propels future developments, moving linguistic machine learning away from a traditionally Anglo-centric model.
Read the original here:
Facebook's machine learning translation software raises the stakes - Verdict
Machine learning and AI continue to reach further into IT services and complement applications developed by software engineers. IT teams need to sharpen their machine learning skills if they want to keep up.
Cloud computing services support an array of functionality needed to build and deploy AI and machine learning applications. In many ways, AI systems are managed much like other software that IT pros are familiar with in the cloud. But just because someone can deploy an application, that does not necessarily mean they can successfully deploy a machine learning model.
While the commonalities may partially smooth the transition, there are significant differences. Members of your IT teams need specific machine learning and AI knowledge, in addition to software engineering skills. Beyond the technological expertise, they also need to understand the cloud tools currently available to support their team's initiatives.
Explore the five machine learning skills IT pros need to successfully use AI in the cloud and get to know the products Amazon, Microsoft and Google offer to support them. There is some overlap in the skill sets, but don't expect one individual to do it all. Put your organization in the best position to utilize cloud-based machine learning by developing a team of people with these skills.
IT pros need to understand data engineering if they want to pursue any type of AI strategy in the cloud. Data engineering comprises a broad set of skills that requires data wrangling and workflow development, as well as some knowledge of software architecture.
These different areas of IT expertise can be broken down into different tasks IT pros should be able to accomplish. For example, data wrangling typically involves data source identification, data extraction, data quality assessments, data integration and pipeline development to carry out these operations in a production environment.
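Those wrangling steps can be sketched end to end in a few lines. The following is an illustrative toy pipeline, not any particular cloud service: it extracts typed records from raw strings, drops rows that fail a quality check, and integrates the survivors against a reference table. All field names and data are hypothetical.

```python
# Illustrative data-wrangling pipeline: extract, quality-check,
# and integrate records. Field names and data are made up.

raw_orders = [
    {"id": "1", "amount": "19.99", "region": "us"},
    {"id": "2", "amount": "",      "region": "eu"},  # fails quality check
    {"id": "3", "amount": "7.50",  "region": "eu"},
]
region_names = {"us": "United States", "eu": "Europe"}

def extract(records):
    # Parse raw strings into typed values, skipping unparseable rows.
    for r in records:
        try:
            yield {"id": int(r["id"]), "amount": float(r["amount"]),
                   "region": r["region"]}
        except ValueError:
            # A production pipeline would route bad rows to a
            # dead-letter queue rather than silently dropping them.
            continue

def integrate(records, lookup):
    # Join each record against a reference table.
    for r in records:
        yield {**r, "region_name": lookup.get(r["region"], "unknown")}

clean = list(integrate(extract(raw_orders), region_names))
print(len(clean))               # 2 rows survive the quality check
print(clean[0]["region_name"])  # United States
```

In a real deployment each stage would map onto a managed service (an ETL job, a stream processor, a warehouse load), but the shape of the work is the same.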
Data engineers should be comfortable working with relational databases, NoSQL databases and object storage systems. Python is a popular programming language that can be used with batch and stream processing platforms, like Apache Beam, and distributed computing platforms, such as Apache Spark. Even if you are not an expert Python programmer, having some knowledge of the language will enable you to draw from a broad array of open source tools for data engineering and machine learning.
Data engineering is well supported in all the major clouds. AWS has a full range of services to support data engineering, such as AWS Glue, Amazon Managed Streaming for Apache Kafka (MSK) and various Amazon Kinesis services. AWS Glue is a data catalog and extract, transform and load (ETL) service that includes support for scheduled jobs. MSK is a useful building block for data engineering pipelines, while Kinesis services are especially useful for deploying scalable stream processing pipelines.
Google Cloud Platform offers Cloud Dataflow, a managed Apache Beam service that supports batch and stream processing. For ETL processes, Google Cloud Data Fusion provides a Hadoop-based data integration service. Microsoft Azure also provides several managed data tools, such as Azure Cosmos DB, Data Catalog and Data Lake Analytics, among others.
Machine learning is a well-developed discipline, and you can make a career out of studying and developing machine learning algorithms.
IT teams use the data delivered by engineers to build models and create software that can make recommendations, predict values and classify items. It is important to understand the basics of machine learning technologies, even though much of the model building process is automated in the cloud.
As a model builder, you need to understand the data and business objectives. It's your job to formulate the solution to the problem and understand how it will integrate with existing systems.
Some products on the market include Google's Cloud AutoML, which is a suite of services that help build custom models using structured data as well as images, video and natural language without requiring much understanding of machine learning. Azure offers ML.NET Model Builder in Visual Studio, which provides an interface to build, train and deploy models. Amazon SageMaker is another managed service for building and deploying machine learning models in the cloud.
These tools can choose algorithms, determine which features or attributes in your data are most informative and optimize models using a process known as hyperparameter tuning. These kinds of services have expanded the potential use of machine learning and AI strategies. Just as you do not have to be a mechanical engineer to drive a car, you do not need a graduate degree in machine learning to build effective models.
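Hyperparameter tuning is easy to picture as a grid search: try every combination of candidate settings and keep the one with the best validation score. This toy sketch tunes a single made-up threshold parameter for a trivial classifier; the managed services above do the same thing at scale with smarter search strategies.

```python
# Minimal grid-search sketch for hyperparameter tuning. The "model"
# is a toy threshold classifier; data and parameter values are made up.
import itertools

val_data = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1), (0.55, 1), (0.35, 0)]

def accuracy(threshold, data):
    correct = sum((x >= threshold) == bool(y) for x, y in data)
    return correct / len(data)

grid = {"threshold": [0.3, 0.5, 0.7]}

best_params, best_score = None, -1.0
for combo in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), combo))
    score = accuracy(params["threshold"], val_data)
    if score > best_score:
        best_params, best_score = params, score

print(best_params, best_score)  # {'threshold': 0.5} 1.0
```

Services like SageMaker and AutoML replace the exhaustive loop with Bayesian or bandit-based search, but the contract is the same: candidate settings in, best validated settings out.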
Algorithms make decisions that directly and significantly impact individuals. For example, financial services use AI to make decisions about credit, which could be unintentionally biased against particular groups of people. This not only has the potential to harm individuals by denying credit but it also puts the financial institution at risk of violating regulations, like the Equal Credit Opportunity Act.
Detecting and mitigating this kind of bias is imperative when deploying AI and machine learning models. Doing so can require savvy statistical and machine learning skills but, as with model building, some of the heavy lifting can be done by machines.
FairML is an open source tool for auditing predictive models that helps developers identify biases in their work. Experience with detecting bias in models can also help inform the data engineering and model building process. Google Cloud leads the market with fairness tools that include the What-If Tool, Fairness Indicators and Explainable AI services.
Part of the model building process is to evaluate how well a machine learning model performs. Classifiers, for example, are evaluated in terms of accuracy, precision and recall. Regression models, such as those that predict the price at which a house will sell, are evaluated by measuring their average error rate.
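The classifier metrics mentioned above can be computed directly from a confusion of predicted and true labels. This worked sketch uses hypothetical predictions on a small labeled set:

```python
# Worked sketch of classifier evaluation: accuracy, precision and
# recall computed from hypothetical predictions on a small labeled set.

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found

print(accuracy, precision, recall)  # 0.75 0.75 0.75
```

Which metric matters depends on the cost of each error type: a fraud detector may tolerate low precision to keep recall high, while a spam filter usually wants the opposite.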
A model that performs well today may not perform as well in the future. The problem is not that the model is somehow broken, but that the model was trained on data that no longer reflects the world in which it is used. Even without sudden, major events, data drift can occur. It is important to evaluate models and continue to monitor them as long as they are in production.
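A crude but useful form of that monitoring is a drift check on the input features themselves. The sketch below is illustrative, with made-up data and an assumed three-standard-deviation tolerance: it compares a feature's recent mean against its training-time baseline and flags when the shift is too large.

```python
# Illustrative data-drift check: compare a feature's recent mean to its
# training-time baseline and flag large shifts. Data and the tolerance
# of three standard deviations are assumptions for the sketch.
import statistics

training_values = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
recent_values   = [12.0, 11.8, 12.3, 11.9, 12.1, 12.2]  # drifted upward

baseline = statistics.mean(training_values)
baseline_sd = statistics.stdev(training_values)

def drifted(values, tolerance_sds=3.0):
    shift = abs(statistics.mean(values) - baseline)
    return shift > tolerance_sds * baseline_sd

print(drifted(training_values))  # False
print(drifted(recent_values))    # True
```

Production monitoring tools apply richer tests (distribution distances rather than means), but the principle is the same: compare live data against the data the model was trained on.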
Services such as Amazon SageMaker, Azure Machine Learning Studio and Google Cloud AutoML include an array of model performance evaluation tools.
Domain knowledge is not specifically a machine learning skill, but it is one of the most important parts of a successful machine learning strategy.
Every industry has a body of knowledge that must be studied in some capacity, especially when building algorithmic decision-makers. Machine learning models are constrained to reflect the data used to train them. Humans with domain knowledge are essential to knowing where to apply AI and to assess its effectiveness.
Read the original here:
5 machine learning skills you need in the cloud - TechTarget
Mapping of the explanted human heart
Researchers have designed a new machine learning-based approach for detecting atrial fibrillation (AF) drivers, small patches of the heart muscle that are hypothesised to cause this most common type of cardiac arrhythmia. This approach may lead to more efficient targeted medical interventions to treat the condition, according to the authors of the paper published in the journal Circulation: Arrhythmia and Electrophysiology.
The mechanism behind AF is yet unclear, although research suggests it may be caused and maintained by re-entrant AF drivers, localised sources of repetitive rotational activity that lead to irregular heart rhythm. These drivers can be burnt via a surgical procedure, which can mitigate the condition or even restore the normal functioning of the heart.
To locate these re-entrant AF drivers for subsequent destruction, doctors use multi-electrode mapping, a technique that allows them to record multiple electrograms inside the heart using a catheter and build a map of electrical activity within the atria. However, clinical applications of this technique often produce many false negatives, when an existing AF driver is not found, and false positives, when a driver is detected where there really is none.
Recently, researchers have tapped machine learning algorithms for the task of interpreting ECGs to look for AF; however, these algorithms require data labelled with the true location of the driver, and the accuracy of multi-electrode mapping is insufficient for this. The authors of the new study, co-led by Dmitry Dylov from the Skoltech Center of Computational and Data-Intensive Science and Engineering (CDISE, Moscow, Russia) and Vadim Fedorov from the Ohio State University (Columbus, USA), used high-resolution near-infrared optical mapping (NIOM) to locate AF drivers and used it as the reference for training.
NIOM is based on well-penetrating infrared optical signals and therefore can record the electrical activity from within the heart muscle, whereas conventional clinical electrodes can only measure the signals on the surface. "Add to this trait the excellent optical resolution, and optical mapping becomes a no-brainer modality if you want to visualize and understand the electrical signal propagation through the heart tissue," said Dylov.
The team tested their approach on 11 explanted human hearts, all donated posthumously for research purposes. The researchers performed simultaneous optical and multi-electrode mapping of AF episodes induced in the hearts, and found that a machine learning model can indeed efficiently interpret electrograms from multi-electrode mapping to locate AF drivers, with an accuracy of up to 81%. They believe that larger training datasets, validated by NIOM, can improve machine learning-based algorithms enough for them to become complementary tools in clinical practice.
"The dataset of recordings from 11 human hearts is both priceless and too small. We realized that clinical translation would require a much larger sample size for representative sampling, yet we had to make sure we extracted every piece of available information from the still-beating explanted human hearts. The dedication and scrutiny of two of our PhD students must be acknowledged here: Sasha Zolotarev spent several months on an academic mobility trip to Fedorov's lab understanding the specifics of the imaging workflow and presented the pilot study at the HRS conference, the biggest arrhythmology meeting in the world, and Katya Ivanova took part in the frequency and visualization analysis from within the walls of Skoltech. These two young researchers have squeezed out everything one possibly could to train the machine learning model using optical measurements," Dylov notes.