

Category Archives: Machine Learning

Facebook’s machine learning translation software raises the stakes – Verdict

Facebook has launched a multilingual machine learning translation model. Previous models tended to rely on English data as an intermediary. However, Facebook's many-to-many software, called M2M-100, can translate directly between any pair of 100 languages. The software is open source, with the model, raw data, training, and evaluation setup available on GitHub.
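
For developers who want to try the released model, the weights have also been ported to the Hugging Face transformers library. The sketch below assumes that port is available (along with the transformers, sentencepiece, and torch packages) and shows a direct French-to-Chinese translation with no English pivot:

```python
# Minimal sketch of direct (non-English-pivoted) translation with M2M-100,
# via the Hugging Face transformers port of the released weights.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

# Translate French to Chinese directly, with no English intermediary.
tokenizer.src_lang = "fr"
encoded = tokenizer("La vie est belle.", return_tensors="pt")
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.get_lang_id("zh")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```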

M2M-100, if it works correctly, provides a functional product with real-world applications, which can be built on by other developers. In a globalized world, accurate translation of a wide variety of languages is vital. It enables accurate communication between different communities, which is essential for multinational businesses. It also allows news articles and social media posts to be rendered accurately across languages, reducing instances of misinformation.

GlobalData's recent thematic report on AI suggests that years of bold proclamations by tech companies eager for publicity have resulted in AI becoming overhyped. The reality has often fallen short of the rhetoric. Principal Microsoft researcher Katja Hofmann argues that AI is transitioning to a new phase, in which breakthroughs occur but at a slower rate than previously suggested. The next few years will require practical uses of AI with tangible benefits, applying AI to specific use cases.

M2M-100 provides 2,200 translation combinations across 100 languages without relying on English data as a mediator. Among its main competitors, Amazon Translate and Microsoft Translator both support significantly fewer languages than Facebook. However, Google Translate supports 108 languages, both living and dead, having added five new languages in February 2020.

Google's and Facebook's products differ in key respects. Google uses BookCorpus and English Wikipedia as training data, whereas Facebook analyzes the language of its users. Facebook is, therefore, more suitable for conversational translation, while Google excels at academic-style web page translation. Google performs best when English is the target language, which correlates with the training data used. Facebook's multi-directional model claims there is no English bias, with translations functioning between 2,200 language pairs. Accurate conversational translations based on real-time data and multiple language pairs can fulfil global business needs, making Facebook a market leader.

Facebook's strength in this aspect of AI is unsurprising. GlobalData has given the company a thematic score of 5 out of 5 for machine learning, suggesting that this theme will significantly improve Facebook's future performance.

However, natural language processing (NLP) can be problematic, with language semantics making it hard for algorithms to provide accurate translations. In 2017, Facebook translated the phrase "good morning" in Arabic, posted on its platform by a Palestinian man, as "attack them" in Hebrew, resulting in the sender's arrest by Israeli police. The open-source nature of the software will help developers recognize pain points. It also allows innovation, enabling developers to advance multilingual models in the future.

Language translation is a high-profile use case for AI due to its applications in conversational platforms like Amazon's Alexa, Google's Assistant, and Apple's Siri. The tech giants are racing to improve the performance of their virtual assistants. Facebook's M2M-100 announcement will raise the stakes in AI translation software, pushing the company's main competitors to respond.

In an interconnected, globalized world, accurate translation is essential. Facebook has used its global community and access to large datasets to progress machine learning and AI, creating a practical, real-world use case. Allowing access to the training data and models propels future developments, moving linguistic machine learning away from a traditionally Anglo-centric model.


Read the original here:
Facebook's machine learning translation software raises the stakes - Verdict

Posted in Machine Learning | Comments Off on Facebook’s machine learning translation software raises the stakes – Verdict

Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix – Diginomica


The extraordinary advances in machine learning that drive the increasing accuracy and reliability of artificial intelligence systems have been matched by a corresponding growth in malicious attacks from bad actors seeking to exploit a new breed of vulnerabilities and distort the results.

Microsoft reports it has seen a notable increase in attacks on commercial ML systems over the past four years. Other reports have also brought attention to this problem. Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that:

Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.

Training-data poisoning happens when an adversary is able to introduce bad data into a model's training pool and hence get it to learn things that are wrong. One approach targets the ML system's availability; the other targets its integrity (the latter commonly known as "backdoor" attacks). Availability attacks aim to inject so much bad data that whatever decision boundaries the model learns are basically worthless. Integrity attacks are more insidious: the developer isn't aware of them, so attackers can sneak in and get the system to do what they want.
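
To make the availability case concrete, here is a minimal, self-contained sketch (not drawn from any real incident) of how simple label-flipping poisoning degrades a model. It assumes scikit-learn and NumPy are installed; all names are illustrative:

```python
# Label-flipping poisoning: corrupting a fraction of the training pool
# visibly degrades the classifier's learned decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Availability-style attack: flip the labels of 30% of the training pool.
rng = np.random.default_rng(0)
poisoned = y_tr.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
dirty = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", dirty.score(X_te, y_te))
```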

Model theft techniques are used to recover models, or information about the data used during training, which is a major concern because AI models represent valuable intellectual property trained on potentially sensitive data, including financial trades, medical records, or user transactions. Adversaries aim to recreate AI models by querying the public API and using its outputs to refine a model of their own.
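
A sketch of that extraction idea: treat the victim's public prediction endpoint as a labeling oracle and fit a surrogate on its outputs. Here the locally trained `victim` stands in for a remote API, and every name is hypothetical:

```python
# Model extraction: synthesize queries, harvest the API's labels,
# and train a clone whose agreement with the victim approximates fidelity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=1)
victim = RandomForestClassifier(random_state=1).fit(X, y)  # the "private" model

queries = np.random.default_rng(1).normal(size=(5000, X.shape[1]))
stolen_labels = victim.predict(queries)          # one API call per row
surrogate = DecisionTreeClassifier().fit(queries, stolen_labels)

probe = np.random.default_rng(2).normal(size=(1000, X.shape[1]))
print("fidelity:", (surrogate.predict(probe) == victim.predict(probe)).mean())
```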

Adversarial examples are inputs to machine learning models that attackers have intentionally designed to cause the model to make a mistake. Basically, they are like optical illusions for machines.
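
The classic construction is the fast gradient sign method: nudge the input a small step along the sign of the loss gradient so it looks unchanged to a human but can flip the model's prediction. A minimal sketch assuming PyTorch, with a toy model standing in for any production classifier:

```python
# FGSM in miniature: perturb the input along the gradient sign.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "image"
y = torch.tensor([3])                             # its true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.1  # perturbation budget: small enough to look unchanged to a human
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()
print("prediction before:", model(x).argmax().item(),
      "after:", model(x_adv).argmax().item())
```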

All of these methods are dangerous and growing in both volume and sophistication. As Ann Johnson, Corporate Vice President, SCI Business Development at Microsoft, wrote in a blog post:

Despite the compelling reasons to secure ML systems, Microsoft's survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five out of the 28 businesses indicated that they don't have the right tools in place to secure their ML systems. What's more, they are explicitly looking for guidance. We found that preparation is not just limited to smaller organizations. We spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations.

Responding to the growing threat, last week Microsoft, the nonprofit MITRE Corporation, and 11 organizations including IBM, Nvidia, Airbus, and Bosch released the Adversarial ML Threat Matrix, an industry-focused open framework designed to help security analysts detect, respond to, and remediate threats against machine learning systems. Microsoft says it worked with MITRE to build a schema that organizes the approaches employed by malicious actors in subverting machine learning models, bolstering monitoring strategies around organizations' mission-critical systems. Said Johnson:

Microsoft worked with MITRE to create the Adversarial ML Threat Matrix, because we believe the first step in empowering security teams to defend against attacks on ML systems, is to have a framework that systematically organizes the techniques employed by malicious adversaries in subverting ML systems. We hope that the security community can use the tabulated tactics and techniques to bolster their monitoring strategies around their organization's mission critical ML systems.

The Adversarial ML Threat Matrix, modeled after the MITRE ATT&CK Framework, aims to address the problem with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE vetted to be effective against production systems. With input from researchers at the University of Toronto, Cardiff University, and the Software Engineering Institute at Carnegie Mellon University, Microsoft and MITRE created a list of tactics that correspond to broad categories of adversary action.

Techniques in the schema fall within one tactic and are illustrated by a series of case studies covering how well-known attacks such as the Microsoft Tay poisoning, the Proofpoint evasion attack, and other attacks could be analyzed using the Threat Matrix. Noted Charles Clancy, MITRE's chief futurist, senior vice president, and general manager of MITRE Labs:

Unlike traditional cybersecurity vulnerabilities that are tied to specific software and hardware systems, adversarial ML vulnerabilities are enabled by inherent limitations underlying ML algorithms. Data can be weaponized in new ways which requires an extension of how we model cyber adversary behavior, to reflect emerging threat vectors and the rapidly evolving adversarial machine learning attack lifecycle.

Mikel Rodriguez, a machine learning researcher at MITRE who also oversees MITRE's Decision Science research programs, said that AI is now at the same stage the internet was in the late 1980s, when people were focused on getting the technology to work and were not thinking much about the longer-term implications for security and privacy. That, he says, was a mistake we can learn from.

The Adversarial ML Threat Matrix will allow security analysts to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning and to develop a common language that allows for better communications and collaboration.

View post:
Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix - Diginomica

Posted in Machine Learning | Comments Off on Microsoft/MITRE group declares war on machine learning vulnerabilities with Adversarial ML Threat Matrix – Diginomica

5 machine learning skills you need in the cloud – TechTarget

Machine learning and AI continue to reach further into IT services and complement applications developed by software engineers. IT teams need to sharpen their machine learning skills if they want to keep up.

Cloud computing services support an array of functionality needed to build and deploy AI and machine learning applications. In many ways, AI systems are managed much like other software that IT pros are familiar with in the cloud. But just because someone can deploy an application, that does not necessarily mean they can successfully deploy a machine learning model.

While the commonalities may partially smooth the transition, there are significant differences. Members of your IT teams need specific machine learning and AI knowledge, in addition to software engineering skills. Beyond the technological expertise, they also need to understand the cloud tools currently available to support their team's initiatives.

Explore the five machine learning skills IT pros need to successfully use AI in the cloud and get to know the products Amazon, Microsoft and Google offer to support them. There is some overlap in the skill sets, but don't expect one individual to do it all. Put your organization in the best position to utilize cloud-based machine learning by developing a team of people with these skills.

IT pros need to understand data engineering if they want to pursue any type of AI strategy in the cloud. Data engineering comprises a broad set of skills spanning data wrangling and workflow development, as well as some knowledge of software architecture.

These different areas of IT expertise can be broken down into different tasks IT pros should be able to accomplish. For example, data wrangling typically involves data source identification, data extraction, data quality assessments, data integration and pipeline development to carry out these operations in a production environment.

Data engineers should be comfortable working with relational databases, NoSQL databases and object storage systems. Python is a popular programming language that can be used with batch and stream processing platforms, like Apache Beam, and distributed computing platforms, such as Apache Spark. Even if you are not an expert Python programmer, having some knowledge of the language will enable you to draw from a broad array of open source tools for data engineering and machine learning.
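
As a flavor of what those wrangling steps look like in practice, here is a short PySpark sketch of an extract-assess-integrate pipeline. The bucket paths and column names are hypothetical, and the job assumes pyspark is installed and configured for S3 access:

```python
# Extraction, quality assessment, and integration in one small Spark job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("wrangle").getOrCreate()

orders = spark.read.json("s3://example-bucket/raw/orders/")        # extraction
customers = spark.read.parquet("s3://example-bucket/dim/customers/")

# Quality assessment: drop rows missing keys, filter out-of-range amounts.
clean = (orders.dropna(subset=["order_id", "customer_id"])
               .filter(F.col("amount") > 0))

# Integration: join sources and land a curated table for downstream models.
curated = clean.join(customers, "customer_id", "left")
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```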

Data engineering is well supported in all the major clouds. AWS has a full range of services to support data engineering, such as AWS Glue, Amazon Managed Streaming for Apache Kafka (MSK) and various Amazon Kinesis services. AWS Glue is a data catalog and extract, transform and load (ETL) service that includes support for scheduled jobs. MSK is a useful building block for data engineering pipelines, while Kinesis services are especially useful for deploying scalable stream processing pipelines.

Google Cloud Platform offers Cloud Dataflow, a managed Apache Beam service that supports batch and stream processing. For ETL processes, Google Cloud Data Fusion provides a Hadoop-based data integration service. Microsoft Azure also provides several managed data tools, such as Azure Cosmos DB, Data Catalog and Data Lake Analytics, among others.

Machine learning is a well-developed discipline, and you can make a career out of studying and developing machine learning algorithms.

IT teams use the data delivered by engineers to build models and create software that can make recommendations, predict values and classify items. It is important to understand the basics of machine learning technologies, even though much of the model building process is automated in the cloud.

As a model builder, you need to understand the data and business objectives. It's your job to formulate the solution to the problem and understand how it will integrate with existing systems.
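
Even when a managed service automates the training itself, the underlying workflow looks roughly like this scikit-learn sketch: split the data, fit a model, and sanity-check it before integrating it with other systems. The dataset here is a stand-in:

```python
# The basic model-building loop: split, fit, check held-out performance.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```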

Some products on the market include Google's Cloud AutoML, which is a suite of services that help build custom models using structured data as well as images, video and natural language without requiring much understanding of machine learning. Azure offers ML.NET Model Builder in Visual Studio, which provides an interface to build, train and deploy models. Amazon SageMaker is another managed service for building and deploying machine learning models in the cloud.

These tools can choose algorithms, determine which features or attributes in your data are most informative and optimize models using a process known as hyperparameter tuning. These kinds of services have expanded the potential use of machine learning and AI strategies. Just as you do not have to be a mechanical engineer to drive a car, you do not need a graduate degree in machine learning to build effective models.
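
Hyperparameter tuning is the same search these managed services run at scale. A minimal local illustration using scikit-learn's GridSearchCV, with an illustrative parameter grid:

```python
# Exhaustive search over a small hyperparameter grid with cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [4, 8, None]},
    cv=5,
)
search.fit(X, y)
print("best params:  ", search.best_params_)
print("best CV score:", round(search.best_score_, 3))
```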

Algorithms make decisions that directly and significantly impact individuals. For example, financial services use AI to make decisions about credit, which could be unintentionally biased against particular groups of people. This not only has the potential to harm individuals by denying credit but it also puts the financial institution at risk of violating regulations, like the Equal Credit Opportunity Act.

Such bias checks are imperative to trustworthy AI and machine learning models. Detecting bias in a model can require savvy statistical and machine learning skills but, as with model building, some of the heavy lifting can be done by machines.

FairML is an open source tool for auditing predictive models that helps developers identify biases in their work. Experience with detecting bias in models can also help inform the data engineering and model building process. Google Cloud leads the market with fairness tools that include the What-If Tool, Fairness Indicators and Explainable AI services.
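
As a toy illustration of the kind of check these tools automate (this is not FairML's API), one can compare outcome rates across a protected attribute. The column names below are hypothetical, and the sketch assumes pandas:

```python
# Demographic-parity check: compare approval rates across groups.
import pandas as pd

scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

rates = scored.groupby("group")["approved"].mean()
print(rates)

# A large gap warrants investigating the training data and features
# before the model reaches production.
print("parity gap:", abs(rates["A"] - rates["B"]))
```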

Part of the model building process is to evaluate how well a machine learning model performs. Classifiers, for example, are evaluated in terms of accuracy, precision and recall. Regression models, such as those that predict the price at which a house will sell, are evaluated by measuring their average error, for example the mean absolute error of their predictions.
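
A quick sketch of those evaluation metrics, assuming scikit-learn; the predictions and house prices are invented:

```python
# Classification metrics for a classifier, mean absolute error for a regressor.
from sklearn.metrics import (accuracy_score, mean_absolute_error,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))

house_true = [310_000, 455_000, 210_000]   # actual sale prices
house_pred = [298_000, 470_000, 205_000]   # model's predictions
print("average error:", mean_absolute_error(house_true, house_pred))
```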

A model that performs well today may not perform as well in the future. The problem is not that the model is somehow broken, but that the model was trained on data that no longer reflects the world in which it is used. Even without sudden, major events, data drift can occur. It is important to evaluate models and continue to monitor them as long as they are in production.
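
One simple, widely used drift check is to compare a feature's training-time distribution against what the production model is currently seeing, for example with a two-sample Kolmogorov-Smirnov test. A sketch assuming SciPy, with synthetic data standing in for both populations:

```python
# Drift check: a small p-value suggests the live data no longer matches
# the training distribution and the model may need retraining.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # at train time
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)      # in production

stat, p_value = ks_2samp(training_feature, live_feature)
print(f"KS statistic={stat:.3f}, p-value={p_value:.3g}")
```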

Services such as Amazon SageMaker, Azure Machine Learning Studio and Google Cloud AutoML include an array of model performance evaluation tools.

Domain knowledge is not specifically a machine learning skill, but it is one of the most important parts of a successful machine learning strategy.

Every industry has a body of knowledge that must be studied in some capacity, especially when building algorithmic decision-makers. Machine learning models are constrained to reflect the data used to train them. Humans with domain knowledge are essential to knowing where to apply AI and to assess its effectiveness.

Read the original here:
5 machine learning skills you need in the cloud - TechTarget

Posted in Machine Learning | Comments Off on 5 machine learning skills you need in the cloud – TechTarget

Machine learning approach could detect drivers of atrial fibrillation – Cardiac Rhythm News

Mapping of the explanted human heart

Researchers have designed a new machine learning-based approach for detecting atrial fibrillation (AF) drivers, small patches of the heart muscle that are hypothesised to cause this most common type of cardiac arrhythmia. This approach may lead to more efficient targeted medical interventions to treat the condition, according to the authors of the paper published in the journal Circulation: Arrhythmia and Electrophysiology.

The mechanism behind AF is as yet unclear, although research suggests it may be caused and maintained by re-entrant AF drivers: localised sources of repetitive rotational activity that lead to irregular heart rhythm. These drivers can be ablated (burnt away) in a surgical procedure, which can mitigate the condition or even restore the normal functioning of the heart.

To locate these re-entrant AF drivers for subsequent destruction, doctors use multi-electrode mapping, a technique that allows them to record multiple electrograms inside the heart using a catheter and build a map of electrical activity within the atria. However, clinical applications of this technique often produce a lot of false negatives, when an existing AF driver is not found, and false positives, when a driver is detected where there really is none.

Recently, researchers have tapped machine learning algorithms for the task of interpreting ECGs to look for AF; however, these algorithms require data labelled with the true location of the driver, and the accuracy of multi-electrode mapping is insufficient for that. The authors of the new study, co-led by Dmitry Dylov from the Skoltech Center of Computational and Data-Intensive Science and Engineering (CDISE, Moscow, Russia) and Vadim Fedorov from the Ohio State University (Columbus, USA), used high-resolution near-infrared optical mapping (NIOM) to locate AF drivers and adopted it as the reference for training.

NIOM is based on well-penetrating infrared optical signals and therefore can record the electrical activity from within the heart muscle, whereas conventional clinical electrodes can only measure the signals on the surface. "Add to this trait the excellent optical resolution, and optical mapping becomes a no-brainer modality if you want to visualize and understand the electrical signal propagation through the heart tissue," said Dylov.

The team tested their approach on 11 explanted human hearts, all donated posthumously for research purposes. The researchers performed simultaneous optical and multi-electrode mapping of AF episodes induced in the hearts. The study showed that a machine learning model can indeed efficiently interpret electrograms from multi-electrode mapping to locate AF drivers, with an accuracy of up to 81%. The team believes that larger training datasets, validated by NIOM, can improve machine learning-based algorithms enough for them to become complementary tools in clinical practice.
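
The paper's own pipeline is more involved, but the evaluation setup it describes reduces to training a classifier on electrogram-derived features against NIOM-validated labels and cross-validating. A purely illustrative sketch with synthetic data, not the authors' code; feature names and shapes are invented:

```python
# Generic illustration: classify electrogram feature vectors against
# optically validated "driver / no driver" labels. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))    # e.g., dominant frequency, cycle length, ...
y = rng.integers(0, 2, size=500)  # NIOM-validated driver labels (stand-in)

clf = RandomForestClassifier(random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```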

"The dataset of recordings from 11 human hearts is both priceless and too small. We realised that clinical translation would require a much larger sample size for representative sampling, yet we had to make sure we extracted every piece of available information from the still-beating explanted human hearts. The dedication and scrutiny of two of our PhD students must be acknowledged here: Sasha Zolotarev spent several months on an academic mobility trip to Fedorov's lab understanding the specifics of the imaging workflow and presented the pilot study at the HRS conference, the biggest arrhythmology meeting in the world, and Katya Ivanova partook in the frequency and visualization analysis from within the walls of Skoltech. These two young researchers have squeezed out everything one possibly could to train the machine learning model using optical measurements," Dylov notes.

Read the original:
Machine learning approach could detect drivers of atrial fibrillation - Cardiac Rhythm News

Posted in Machine Learning | Comments Off on Machine learning approach could detect drivers of atrial fibrillation – Cardiac Rhythm News

Amwell CMO: Google partnership will focus on AI, machine learning to expand into new markets – FierceHealthcare

Amwell is looking to evolve virtual care beyond just imitating in-person care.

To do that, the telehealth company expects to use its latest partnership with Google Cloud to enable it to tap into artificial intelligence and machine learning technologies to create a better healthcare experience, according to Peter Antall, M.D., Amwell's chief medical officer.

"We have a shared vision to advance universal access to care thats cost-effective. We have a shared vision to expand beyond our borders to look at other markets. Ultimately, its a strategic technology collaboration that were most interested in," Antall said of the company's partnership with the tech giant during a STATvirtual event Tuesday.


"What we bring to the table is that we can help provide applications for those technologiesthat will have meaningful effects on consumers and providers," he said.

The use of AI and machine learning can improve bot-based interactions or decision support for providers, he said. The two companies also want to explore the use of natural language processing and automated translation to provide more "value to clients and consumers," he said.

Joining a rush of healthcare technology IPOs in 2020, Amwell went public in August, raising $742 million. Google Cloud and Amwell also announced a multiyear strategic partnership aimed at expanding access to virtual care, accompanied by a $100 million investment from Google.

During an HLTH virtual event earlier this month, Google Cloud director of healthcare solutions Aashima Gupta said cloud and artificial intelligence will "revolutionize telemedicine as we know it."


"There's a collective realization in the industry that the future will not look like the past," said Gupta during the HTLH panel.

During the STAT event, Antall said Amwell is putting a big focus on virtual primary care, which has become an area of interest for health plans and employers.

"It seems to be the next big frontier. Weve been working on it for three years, and were very excited. So much of healthcare is ongoing chronic conditions and so much of the healthcare spend is taking care ofchronic conditionsandtaking care of those conditions in the right care setting and not in the emergency department," he said.

The company works with 55 health plans, which support over 36,000 employers and collectively represent more than 80 million covered lives, as well as 150 of the nation's largest health systems. To date, Amwell says it has powered over 5.6 million telehealth visits for its clients, including more than 2.9 million in the six months ended June 30, 2020.

Amwell is interested in interacting with patients beyond telehealth visits through what Antall called "nudges" and synchronous communication to encourage compliance with healthy behaviors, he said.


It's an area where Livongo, recently acquired by Amwell competitor Teladoc, has become the category leader by using digital health tools to help with chronic condition management.

"Were moving into similar areas, but doing it in a slightly different matter interms of how we address ongoing continuity of care and how we address certain disease states and overall wellness," Antallsaid, in reference to Livongo's capabilities.

The telehealth company also wants to expand into home healthcare through the integration of telehealth and remote care devices.

Virtual care companies have been actively pursuing deals to build out their service and product lines as the use of telehealth soars. To this end, Amwell recently deepened its relationship with remote device company Tyto Care. Through the partnership, the TytoHome handheld examination device, which allows patients to examine their heart, lungs, skin, ears, abdomen, and throat at home, is now paired with Amwell's telehealth platform.

Looking forward, there is the potential for patients to get lab testing, diagnostic testing, and virtual visits with physicians all at home, Antall said.

"I think were going to see a real revolution in terms ofhow much more we can do in the home going forward," he said.


Amwell also is exploring the use of televisions in the home to interact with patients, he said.

"We've done work with some partners and we're working toward a future where, if it's easier for you to click your remote and initiate a telehealth visit that way, thats one option. In some populations, particularly the elderly, a TV could serve as a remote patient device where a doctor or nurse could proactively 'ring the doorbell' on the TV and askto check on the patient," Antall said.

"Its video technology that'salready there in most homes, you just need a camera to go with it and a little bit of software.Its one part of our strategy to be available for the whole spectrum of care and be able to interact in a variety of ways," he said.

See the original post:
Amwell CMO: Google partnership will focus on AI, machine learning to expand into new markets - FierceHealthcare

Posted in Machine Learning | Comments Off on Amwell CMO: Google partnership will focus on AI, machine learning to expand into new markets – FierceHealthcare

Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding – MarkTechPost

Microsoft has released Lobe, a free desktop application that lets Windows and Mac users create customized AI models without writing any code. Several customers are already using the app for tracking tourist activity around coral reefs, the company said.

Lobe is available on Windows and Mac as a desktop app. Presently it supports only image classification, assigning a single label to each image. Microsoft says new releases will support other types of neural networks in the near future.

To create an AI model in Lobe, a user first imports a collection of images, which serve as the dataset used to train the application. Lobe analyzes the input images and sifts through a built-in library of neural network architectures to find the most suitable model for the dataset. It then trains that model on the provided data, producing an AI model optimized to scan images for the user's specific object or action.
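
Lobe exposes none of this as code, but the workflow it automates resembles transfer learning on a pretrained image backbone. A hedged Keras sketch of that general idea, with a hypothetical image folder laid out one subfolder per label; this illustrates the technique, not Lobe's internals:

```python
# Transfer learning in the spirit of what Lobe automates: reuse a pretrained
# backbone and fit only a small classification head on the user's images.
import tensorflow as tf

# One subfolder per label, mirroring Lobe's image-import step (path invented).
data = tf.keras.utils.image_dataset_from_directory(
    "my_images/", image_size=(224, 224)
)
num_classes = len(data.class_names)

base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
base.trainable = False  # keep pretrained features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 wants [-1, 1]
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(data, epochs=5)
```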

AutoML is a technology that can automate many parts of the machine learning development workflow, reducing development costs. Microsoft has made AutoML features available to enterprises in its Azure public cloud, but the existing AI tools in Azure target advanced projects. Lobe, being free, easy to access, and convenient to use, can support even the simple use cases that those tools do not adequately address.

The Nature Conservancy, a nonprofit environmental organization, used Lobe to create an AI model that analyzes pictures taken by tourists in the Caribbean to identify where and when visitors interact with coral reefs. A Seattle auto marketing firm, Sincro LLC, has developed an AI model that scans vehicle images in online ads to filter out pictures that are less appealing to customers.

GitHub: https://github.com/lobe

Website: https://lobe.ai/


Continue reading here:
Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding - MarkTechPost

Posted in Machine Learning | Comments Off on Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding – MarkTechPost