

Category Archives: Machine Learning

Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its IHD Software – AiThority

Panalgo's new Data Science module seamlessly integrates machine-learning techniques to identify new insights for patient care

Panalgo, a leading healthcare analytics company, announced the launch of its new Data Science module for Instant Health Data (IHD), which allows data scientists and researchers to leverage machine learning to uncover novel insights from the growing volume of healthcare data.

Panalgo's flagship IHD Analytics software streamlines the analytics process by removing complex programming from the equation and allows users to focus on what matters most: turning data into insights. IHD Analytics supports the rapid analysis of a wide range of healthcare data sources, including administrative claims, electronic health records, registry data and more. The software, which is purpose-built for healthcare, includes the most extensive library of customizable algorithms and automates documentation and reporting for transparent, easy collaboration.


Panalgo's new IHD Data Science module is fully integrated with IHD Analytics, and allows for analysis of large, complex healthcare datasets using a wide variety of machine-learning techniques. The IHD Data Science module provides an environment to easily train, validate and test models against multiple datasets.
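
The IHD environment itself is proprietary, but the train/validate/test workflow described above can be illustrated with a minimal scikit-learn sketch. The file name, column names and claims-style dataset below are hypothetical placeholders for illustration, not part of the IHD API.

```python
# Minimal sketch of a train/validate/test workflow on a claims-style dataset.
# The CSV file and column names are hypothetical placeholders, not the IHD API.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

claims = pd.read_csv("claims_cohort.csv")      # one row per patient (assumed file)
X = claims.drop(columns=["had_outcome"])       # features: diagnoses, drugs, age, ...
y = claims["had_outcome"]                      # binary outcome label

# Hold out a test set, then carve a validation set out of the remainder.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```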

"Healthcare organizations are increasingly using machine-learning techniques as part of their everyday workflow. Developing datasets and applying machine-learning methods can be quite time-consuming," said Jordan Menzin, Chief Technology Officer of Panalgo. "We created the Data Science module as a way for users to leverage IHD for all of the work necessary to apply the latest machine-learning methods, and to do so using a single system."

"Our new IHD Data Science product release is part of our mission to leverage our deep domain knowledge to build flexible, intuitive software for the healthcare industry," said Joseph Menzin, PhD, Chief Executive Officer of Panalgo. "We are excited to empower our customers to answer their most pressing questions faster, more conveniently, and with higher quality."


The IHD Data Science module provides advanced analytics to better predict patient outcomes, uncover reasons for medication non-adherence, identify diseases earlier, and much more. The results from these analyses can be used by healthcare stakeholders to improve patient care.

Research abstracts using Panalgo's IHD Data Science module are being presented at this week's International Conference on Pharmacoepidemiology and Therapeutic Risk Management, including "Identifying Comorbidity-based Subtypes of Type 2 Diabetes: An Unsupervised Machine Learning Approach" and "Identifying Predictors of a Composite Cardiovascular Outcome Among Diabetes Patients Using Machine Learning."


More:
Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its IHD Software - AiThority


Microchip Partners with Machine-Learning (ML) Software Leaders to Simplify AI-at-the-Edge Design Using its 32-Bit Microcontrollers (MCUs) – EE Journal

Cartesiam, Edge Impulse and Motion Gestures integrate their machine-learning (ML) offerings into Microchip's MPLAB X Integrated Development Environment

CHANDLER, Ariz., September 15, 2020 - Microchip Technology (Nasdaq: MCHP) today announced it has partnered with Cartesiam, Edge Impulse and Motion Gestures to simplify ML implementation at the edge using the company's ARM Cortex-based 32-bit microcontrollers and microprocessors in its MPLAB X Integrated Development Environment (IDE). Bringing the interface to these partners' software and solutions into its design environment uniquely positions Microchip to support customers through all phases of their AI/ML projects, including data gathering, training the models and inference implementation.

"Adoption of our 32-bit MCUs in AI-at-the-edge applications is growing rapidly and now these designs are easy for any embedded system developer to implement," said Fanie Duvenhage, vice president of Microchip's human machine interface and touch function group. "It is also easy to test these solutions using our ML evaluation kits such as the EV18H79A or EV45Y33A."

About the Partner Offerings

Cartesiam, founded in 2016, is a software publisher specializing in artificial intelligence development tools for microcontrollers. NanoEdge AI Studio, Cartesiam's patented development environment, allows embedded developers, without any prior knowledge of AI, to rapidly develop specialized machine learning libraries for microcontrollers. Devices leveraging Cartesiam's technology are already in production at hundreds of sites throughout the world.

Edge Impulse is the end-to-end developer platform for embedded machine learning, enabling enterprises in industrial, enterprise and wearable markets. The platform is free for developers, providing dataset collection, DSP and ML algorithms, testing and highly efficient inference code generation across a wide range of sensor, audio and vision applications. Get started in just minutes thanks to integrated Microchip MPLAB X and evaluation kit support.

Motion Gestures, founded in 2017, provides powerful embedded AI-based gesture recognition software for different sensors, including touch, motion (i.e. IMU) and vision. Unlike conventional solutions, the company's platform does not require any training data collection or programming and uses advanced machine learning algorithms. As a result, gesture software development time and costs are reduced by 10x while gesture recognition accuracy is increased to nearly 100 percent.

See Demonstrations During Embedded Vision Summit

The MPLAB X IDE ML implementations will be featured during the Embedded Vision Summit 2020 virtual conference, September 15-17. Attendees can see video demonstrations at the company's virtual exhibit, which will be staffed each day from 10:30 a.m. to 1 p.m. PDT.

Please let us know if you would like to speak to a subject matter expert on Microchip's enhanced MPLAB X IDE for ML implementations, or the use of 32-bit microcontrollers in AI-at-the-edge applications. For more information, visit microchip.com/ML. Customers can get a demo by contacting a Microchip sales representative.

Microchip's offering of ML development kits now includes:

EV18H79A: SAMD21 ML Evaluation Kit with TDK 6-axis MEMS

EV45Y33A: SAMD21 ML Evaluation Kit with BOSCH IMU

SAMC21 xPlained Pro evaluation kit (ATSAMC21-XPRO) plus its QT8 xPlained Pro Extension Kit (AC164161): available for evaluating the Motion Gestures solution.

VectorBlox Accelerator Software Development Kit (SDK): enables developers to create low-power, small-form-factor AI/ML applications on Microchip's PolarFire FPGAs.

About Microchip Technology

Microchip Technology Inc. is a leading provider of smart, connected and secure embedded control solutions. Its easy-to-use development tools and comprehensive product portfolio enable customers to create optimal designs which reduce risk while lowering total system cost and time to market. The company's solutions serve more than 120,000 customers across the industrial, automotive, consumer, aerospace and defense, communications and computing markets. Headquartered in Chandler, Arizona, Microchip offers outstanding technical support along with dependable delivery and quality. For more information, visit the Microchip website at www.microchip.com.


Here is the original post:
Microchip Partners with Machine-Learning (ML) Software Leaders to Simplify AI-at-the-Edge Design Using its 32-Bit Microcontrollers (MCUs) - EE Journal


Etihad trials computer vision and machine learning to reduce food waste – Future Travel Experience

Etihad is testing Lumitics' Insight Lite technology to track unconsumed meals from a plane after it lands.

Etihad Airways has partnered with Singapore-based startup Lumitics to trial the use of computer vision and machine learning in order to reduce food wastage on Etihad flights.

The partnership will see Etihad and Lumitics track unconsumed Economy class meals from Etihad's flights, with the collated data used to highlight food consumption and wastage patterns across the network. Analysis of the results will help to reduce food waste, improve meal planning and reduce operating costs.

Mohammad Al Bulooki, Chief Operating Officer, Etihad Aviation Group, said: "Etihad Airways started the pilot with Lumitics earlier this year before global flying was impacted by COVID-19, and as the airline scales up the flight operations again, it is exciting to restart the project and continue the work that had begun. Etihad remains committed to driving innovation and sustainability through all aspects of the airline's operations, and we believe that this project will have the potential to support the drive to reduce food wastage and, at the same time, improve guest experience by enabling Etihad to plan inflight catering in a more relevant, effective and efficient way."

Lumitics' product Insight Lite will track unconsumed meals from a plane after it lands. Using artificial intelligence (AI) and image recognition, Insight Lite is able to differentiate and identify the types and quantity of unconsumed meals based on the design of the meal foils, without requiring manual intervention.
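
Lumitics has not published implementation details, but the kind of image-recognition counting described here can be sketched with a generic image classifier. The sketch below assumes a hypothetical Keras model already trained on meal-foil photos and simply tallies predictions per meal type; it illustrates the idea, not the actual Insight Lite system.

```python
# Illustrative only: count unconsumed meal types from tray photos using a
# hypothetical Keras model trained on meal-foil images (not Lumitics' system).
from collections import Counter
from pathlib import Path

import numpy as np
import tensorflow as tf

MEAL_TYPES = ["chicken", "vegetarian", "fish"]                  # assumed label order
model = tf.keras.models.load_model("meal_foil_classifier.h5")   # hypothetical model file

counts = Counter()
for photo in Path("galley_photos").glob("*.jpg"):               # assumed photo folder
    img = tf.keras.utils.load_img(photo, target_size=(224, 224))
    batch = np.expand_dims(tf.keras.utils.img_to_array(img) / 255.0, axis=0)
    probs = model.predict(batch, verbose=0)[0]
    counts[MEAL_TYPES[int(np.argmax(probs))]] += 1              # tally the predicted type

print("Unconsumed meals by type:", dict(counts))
```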

Lumitics Co-founder and Chief Executive Rayner Loi said: "Tackling food waste is one of the largest cost saving opportunities for any business producing and serving food. Not only does it make business sense, it is also good for the environment. We are excited to be working with Etihad Airways to help achieve its goals in reducing food waste."

See the article here:
Etihad trials computer vision and machine learning to reduce food waste - Future Travel Experience


How Machine Learning is Set to Transform the Online Gaming Community – Techiexpert.com – TechiExpert.com

We often equate machine learning to fictional scenarios such as those presented in films including the Terminator franchise and 2001: A Space Odyssey. While these are all entertaining stories, the fact of the matter is that this type of artificial intelligence is not nearly as threatening. On the contrary, it has helped to dramatically enhance the overall user experience (UX) and to streamline many online functions (such as common search results) that we take for granted. Machine learning is also making its presence known within the digital gaming community. Without becoming overly technical, what transformations can we expect to witness and how will these impact the experience of the average gaming enthusiast?

Although games such as Pong and Super Mario Bros. were entertaining for their time, they were also quite predictable. This is why so many users have uploaded speed runs onto websites such as YouTube. However, what if a game actually learned from your previous actions? It is obvious that the platform itself would be much more challenging. This concept is now becoming a reality.

Machine learning can also apply to numerous scenarios. It may be used to provide a greater sense of realism when interacting with a role-playing game. It could be employed to offer speech recognition and to recognise voice commands. Machine learning may also be implemented to create more realistic non-playable characters (NPCs).

Whether referring to fast-paced MMORPGs or traditional forms of entertainment, including slot games offered by websites such as scandicasino.vip, there is no doubt that machine learning will soon make its presence known.

We can clearly see that the technical benefits associated with machine learning will certainly be leveraged by game developers. However, it is just as important to mention that this very same technology will have a pronounced impact upon the players themselves. This is largely due to how games can be personalised based around the needs of the player.

We are not only referring to common options such as the ability to modify avatars and skins in this case. Instead, games are evolving to the point that they will base their recommendations on the behaviours of the players themselves. For example, a plot may change as a result of how a player interacts with other characters. The difficulty of a specific level may be automatically adjusted in accordance with the skill of the player. As machine learning and AI both have the ability to model extremely complex systems, the sheer attention to graphical detail within the games (such as character features and backgrounds) will also become vastly enhanced.
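
As a toy illustration of that last point about difficulty, the sketch below keeps a running estimate of a player's recent success rate and nudges a difficulty multiplier toward a target challenge level. The thresholds and update rule are arbitrary assumptions for illustration, not any studio's actual system.

```python
# Toy dynamic-difficulty sketch: an exponentially weighted estimate of the
# player's recent success rate drives the difficulty multiplier up or down.
class DifficultyTuner:
    def __init__(self, target_success=0.6, smoothing=0.1):
        self.target = target_success   # desired fraction of attempts the player wins
        self.alpha = smoothing         # how quickly the estimate reacts to new attempts
        self.success_rate = target_success
        self.difficulty = 1.0          # multiplier applied to enemy health, speed, etc.

    def record_attempt(self, won: bool) -> float:
        # Update the running success estimate, then push difficulty toward the target.
        self.success_rate += self.alpha * ((1.0 if won else 0.0) - self.success_rate)
        self.difficulty *= 1.05 if self.success_rate > self.target else 0.95
        return self.difficulty

tuner = DifficultyTuner()
for outcome in [True, True, True, False, True]:   # a string of mostly-won encounters
    level = tuner.record_attempt(outcome)
print(f"suggested difficulty multiplier: {level:.2f}")
```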

We can see that the future of gaming looks extremely bright thanks to the presence of machine learning. While such systems might appear to have little impact upon traditional platforms such as solitaire, there is no doubt that they will still be felt across numerous other genres. So, get ready for a truly amazing experience in the months and years to come!

View post:
How Machine Learning is Set to Transform the Online Gaming Community - Techiexpert.com - TechiExpert.com


PODCAST: NVIDIA’s Director of Data Science Talks Machine Learning for Airlines and Aerospace – Aviation Today

Geoffrey Levene is the Director of Global Business Development for Data Science and Space at NVIDIA.

On this episode of the Connected Aircraft Podcast, we learn how airlines and aerospace manufacturers are adopting the use of data science workstations to develop task-specific machine learning models with Geoffrey Levene, Director, Global Business Development for Data Science and Space at NVIDIA.

In a May 7 blog, NVIDIA, one of the world's largest suppliers of graphics processing units and computer chips to the video gaming, automotive and other industries, explained how American Airlines is using its data science workstations to integrate machine learning into its air cargo operations planning. During this interview, Levene expands on other airline and aerospace uses of those same workstations and how they are creating new opportunities for efficiency.

Have suggestions or topics we should focus on in the next episode? Email the host, Woodrow Bellamy, at wbellamy@accessintel.com, or drop him a line on Twitter @WbellamyIIIAC.

Listen to this episode below, or check it out on iTunes or Google Play. If you like the show, subscribe on your favorite podcast app to get new episodes as soon as they're released.

Read more:
PODCAST: NVIDIA's Director of Data Science Talks Machine Learning for Airlines and Aerospace - Aviation Today


The tensions between explainable AI and good public policy – Brookings Institution

Democratic governments and agencies around the world are increasingly relying on artificial intelligence. Police departments in the United States, United Kingdom, and elsewhere have begun to use facial recognition technology to identify potential suspects. Judges and courts have started to rely on machine learning to guide sentencing decisions. In the U.K., one in three British local authorities are said to be using algorithms or machine learning (ML) tools to make decisions about issues such as welfare benefit claims. These government uses of AI are widespread enough to wonder: Is this the age of government by algorithm?

Many critics have expressed concerns about the rapidly expanding use of automated decision-making in sensitive areas of policy such as criminal justice and welfare. The most often voiced concern is the issue of bias: When machine learning systems are trained on biased data sets, they will inevitably embed in their models the data's underlying social inequalities. The data science and AI communities are now highly sensitive to data bias issues, and as a result have started to focus far more intensely on the ethics of AI. Similarly, individual governments and international organizations have published statements of principle intended to govern AI use.

A common principle of AI ethics is explainability. The risk of producing AI that reinforces societal biases has prompted calls for greater transparency about algorithmic or machine learning decision processes, and for ways to understand and audit how an AI agent arrives at its decisions or classifications. As the use of AI systems proliferates, being able to explain how a given model or system works will be vital, especially for those used by governments or public sector agencies.

Yet explainability alone will not be a panacea. Although transparency about decision-making processes is essential to democracy, it is a mistake to think this represents an easy solution to the dilemmas algorithmic decision-making will present to our societies.

There are two reasons why. First, with machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better. Unfortunately, for complex situations with many interacting influences, which is true of many key areas of policy, machine learning will often be more useful the more of a black box it is. As a result, holding such systems accountable will almost always be a matter of post hoc monitoring and evaluation. If it turns out that a given machine learning algorithm's decisions are significantly biased, for example, then something about the system or (more likely) the data it is trained on needs to change. Yet even post hoc auditing is easier said than done. In practice, there is surprisingly little systematic monitoring of policy outcomes at all, even though there is no shortage of guidance about how to do it.
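
The post hoc auditing mentioned above can start with something as simple as comparing a model's decision rates and error rates across groups on held-out data. The sketch below computes a demographic-parity gap and per-group false positive rates; the column names and toy data are assumptions for illustration, not a complete audit methodology.

```python
# Minimal post hoc audit sketch: compare decision rates and false positive rates
# across a protected attribute on held-out predictions. Data are illustrative.
import pandas as pd

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B"],
    "predicted": [1, 0, 1, 0, 0, 1],     # the model's decisions on the hold-out set
    "actual":    [1, 0, 0, 0, 0, 0],     # observed outcomes
})

decision_rates = audit.groupby("group")["predicted"].mean()
print("decision rate by group:\n", decision_rates)
print("demographic parity gap:", abs(decision_rates["A"] - decision_rates["B"]))

# False positive rate: how often each group is flagged when the outcome was negative.
negatives = audit[audit["actual"] == 0]
print("false positive rate by group:\n", negatives.groupby("group")["predicted"].mean())
```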

The second reason is due to an even more significant challenge. The aim of many policies is often not made explicit, typically because the policy emerged as a compromise between people pursuing different goals. These necessary compromises in public policy present a challenge when algorithms are tasked with implementing policy decisions. A compromise in public policy is not always a bad thing; it allows decision makers to resolve conflicts as well as avoid hard questions about the exact outcomes desired. Yet this is a major problem for algorithms, as they need clear goals to function. An emphasis on greater model explainability will never be able to resolve this challenge.

Consider the recent use of an algorithm to produce U.K. high school grades in the absence of examinations during the pandemic, which provides a remarkable example of just how badly algorithms can function in the absence of well-defined goals. British teachers had submitted their assessment of individual pupils' likely grades and ranked their pupils within each subject and class. The algorithm significantly downgraded many thousands of these assessed results, particularly in state schools in low-income areas. Star pupils with conditional university places consequently failed to attain the level they needed, causing much heartbreak, not to mention pandemonium in the centralized system for allocating students to universities.

After a few days of uproar, the U.K. government abandoned the results, instead awarding everyone the grades their teachers had predicted. When the algorithm was finally published, it turned out to have placed most weight on matching the distribution of grades the same school had received in previous years, penalizing the best pupils at typically poorly performing schools. However, small classes were omitted as having too few observations, which meant affluent private schools with small class sizes escaped the downgrading.

Of course, the policy intention was never to increase educational inequality, but to prevent grade inflation. This aim had not been stated publicly beforehand, or statisticians might have warned of the unintended consequences. The objectives of no grade inflation, school by school, and of individual fairness were fundamentally in conflict. Injustice to some pupils, those who had worked hardest to overcome unfavorable circumstances, was inevitable.

For government agencies and offices that increasingly rely on AI, the core problem is that machine learning algorithms need to be given a precisely specified objective. Yet in the messy world of human decision-making and politics, it is often possible and even desirable to avoid spelling out conflicting aims. By balancing competing interests, compromise is essential to the healthy functioning of democracies.

This is true even in the case of what might at first glance seem a more straightforward example, such as keeping criminals who are likely to reoffend behind bars rather than granting them bail or parole. An algorithm using past data to find patterns will, given the historically higher likelihood that people from low-income or minority communities will have been arrested or imprisoned, predict that similar people are more likely to offend in future. Perhaps judges can stay alert for this data bias and override the algorithm when sentencing particular individuals.

But there is still an ambiguity about what would count as a good outcome. Take bail decisions. About a third of the U.S. prison population is awaiting trial. Judges make decisions every day about who will await trial in jail and who will be bailed, but an algorithm can make a far more accurate prediction than a human about who will commit an offense if they are bailed. According to one model, if bail decisions were made by algorithm, the prison population in the United States would be 40% smaller, with the same recidivism rate as when the decisions are made by humans. Such a system would reduce prison populations, an apparent improvement on current levels of mass incarceration. But given that people of color make up the great majority of the U.S. prison population, the algorithm may also recommend a higher proportion of people from minority groups are denied bail, which seems to perpetuate unfairness.

Some scholars have argued that exposing such trade-offs is a good thing. Algorithms or ML systems can then be set more specific aims, for instance, to predict recidivism subject to a rule requiring that equal proportions of different groups get bail, and still do better than humans. What's more, this would enforce transparency about the ultimate objectives.
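
One crude way to encode such a rule, that equal proportions of each group get bail, is to pick a separate risk-score threshold per group at the same quantile, so the same fraction of each group is released. The sketch below does this over hypothetical risk scores; it illustrates the constraint, not any deployed system.

```python
# Sketch: enforce equal bail rates across groups by choosing per-group risk
# thresholds at the same quantile of predicted risk. Scores are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["A"] * 6 + ["B"] * 6,
    "risk":  [0.1, 0.2, 0.3, 0.5, 0.7, 0.9,      # model's predicted recidivism risk
              0.2, 0.4, 0.5, 0.6, 0.8, 0.9],
})

bail_fraction = 0.5   # policy choice: release the lower-risk half of each group

# Within each group, bail goes to defendants at or below that group's risk quantile.
thresholds = df.groupby("group")["risk"].quantile(bail_fraction)
df["bail"] = df.apply(lambda r: r["risk"] <= thresholds[r["group"]], axis=1)

print(df.groupby("group")["bail"].mean())   # equal rates by construction
```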

But this is not a technical problem about how to write computer code. Perhaps greater transparency about objectives could eventually be healthy for our democracies, but it would certainly be uncomfortable. Compromises work by politely ignoring inconvenient contradictions. Should government assistance for businesses hit by the pandemic go to those with most employees or to those most likely to repay? There is no need to answer this question about ultimate aims in order to set specific criteria for an emergency loan scheme. But to automate the decision requires specifying an objective: save jobs, maximize repayments, or perhaps weight each equally. Similarly, people might disagree about whether the aim of the justice system is retribution or rehabilitation and yet agree on sentencing guidelines.

Dilemmas about objectives do not crop up in many areas of automated decisions or predictions, where the interests of those affected and those running the algorithm are aligned. Both the bank and its customers want to prevent frauds, both the doctor and her patient want an accurate diagnosis or radiology results. However, in most areas of public policy there are multiple overlapping and sometimes competing interests.

There is often a trust deficit too, particularly in criminal justice and policing, or in welfare policies which bring the power of the state into people's family lives. Even many law-abiding citizens in some communities do not trust the police and judiciary to have their best interests at heart. It is naïve to believe that algorithmically enforced transparency about objectives will resolve political conflicts in situations like these. The first step, before deploying machines to make decisions, is not to insist on algorithmic explainability and transparency, but to restore the trustworthiness of institutions themselves. Algorithmic decision-making can sometimes assist good government but can never make up for its absence.

Diane Coyle is professor of public policy and co-director of the Bennett Institute at the University of Cambridge.

Go here to see the original:
The tensions between explainable AI and good public policy - Brookings Institution
