

Category Archives: Machine Learning

4 tips to upgrade your programmatic advertising with Machine Learning – Customer Think

Lomit Patel, VP of growth at IMVU and best-selling author of Lean AI, shares lessons learned and practical advice for app marketers to unlock open budgets and sustainable growth with machine learning.

The first step in the automation journey is to identify where you and your team stand. In his book Lean AI: How Innovative Startups Use Artificial Intelligence to Grow, Lomit introduces the Lean AI Autonomy Scale, which ranks companies from 0 to 5 based on their level of AI & automation adoption.

A lot of companies aren't fully relying on AI and automation to power their growth strategies. In fact, on the Lean AI Autonomy Scale from 0 to 5, most companies are at stage 2 or 3, where they rely on the AI of some of their partners without fully harnessing the potential of these tools.

Here's how app marketers can start working their way up to level 5:

Put your performance strategy to the test by setting the right indicators. Marketers' KPIs should be geared towards measuring growth. Identify the metrics that show what's driving more quality user conversions and revenue, such as:

Analyzing data is a critical step towards measuring success through the right KPIs. When getting data ready to be automated and processed with AI, marketers should make sure:

The better the data, the more effective the decisions it will allow you to make. By aggregating data, marketers gain a comprehensive view of their efforts, which in turn leads to a better understanding of success metrics.

"You've got to make sure that you're giving them [partners] the right data so that their algorithms can optimize towards your outcomes, and clearly define what success is." – Lomit Patel.

The role of AI is not to replace jobs or people, but to replace tasks that people do, letting them focus on the things they are good at.

With Lean AI, the machine does a lot of the heavy lifting, allowing marketers to process data and surface insights in a way that wasn't possible before, and with more data, the accuracy rate continues to go up.

It can be used to:

"With our AI machine, we're constantly testing different audiences, creatives, bids, budgets, and moving all of those different dials. On average, we're generally running about ten thousand experiments at scale. A majority of those are based on creatives; it's become a much bigger lever for us." – Lomit Patel.

There's a reason why growth partners have been around for a long time. For a lot of companies, the hassle of taking all marketing operations in-house doesn't make sense. At first, building a huge in-house data science team might seem like a great way to start leveraging AI, but:

Performance partners bring experience from working with multiple players across a number of verticals, making it easier to identify and implement the most effective automation strategy for each marketer. Their knowledge about industry benchmarks and best practices goes a long way in helping marketers outscore their competitors.

Last but not least, once you find the right partners, set them up for success by sharing the right data.

These recommendations are the takeaways from the first episode of App Marketers Unplugged. Created by Jampp, this video podcast series connects industry leaders and influencers to discuss challenges and trends with their peers.

Watch the full App Marketers Unplugged session with Lomit Patel to learn more about how Lean AI can help you gain user insights more efficiently and what marketers need to sail through the automation journey.

Read more here:
4 tips to upgrade your programmatic advertising with Machine Learning - Customer Think


What is machine learning? Here's what you need to know – Business Insider

Machine learning is a fast-growing and successful branch of artificial intelligence. In essence, machine learning is the process of allowing a computer system to teach itself how to perform complex tasks by analyzing large sets of data, rather than being explicitly programmed with a particular algorithm or solution.

In this way, machine learning enables a computer to learn how to perform a task on its own and to continue to optimize its approach over time, without direct human input.

In other words, it's the computer that is creating the algorithm, not the programmers, and often these algorithms are sufficiently complicated that programmers can't explain how the computer is solving the problem. Humans can't trace the computer's logic from beginning to end; they can only determine if it's finding the right solution to the assigned problem, which is output as a "prediction."

There are several different approaches to training expert systems that rely on machine learning, specifically "deep" learning that functions through the processing of computational nodes. Here are the most common forms:

Supervised learning is a model in which computers are given data that has already been structured by humans. For example, computers can learn from databases and spreadsheets in which the data has already been organized, such as financial data or geographic observations recorded by satellites.

Unsupervised learning uses databases that are mostly or entirely unstructured. This is common in situations where the data is collected in a way that humans can't easily organize or structure it. A common example of unsupervised learning is spam detection, in which a computer is given access to enormous quantities of emails and learns on its own to distinguish between wanted and unwanted mail.

Reinforcement learning is when humans monitor the output of the computer system and help guide it toward the optimal solution through trial and error. One way to visualize reinforcement learning is to view the algorithm as being "rewarded" for achieving the best outcome, which helps it determine how to interpret its data more accurately.
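The "reward" idea behind reinforcement learning can be sketched in a few lines of code. The toy below (invented for illustration, not from the article) is a minimal tabular Q-learning agent that learns, purely from a terminal reward, that moving right along a five-cell line reaches the goal:

```python
import random

# Minimal tabular Q-learning sketch: an agent on a five-cell line learns,
# purely from a terminal reward, that moving right reaches the goal (cell 4).
N_STATES = 5
ACTIONS = (-1, +1)                     # move left / move right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for _ in range(200):                   # training episodes
    s = 0
    while s != N_STATES - 1:
        if random.random() < epsilon:  # explore occasionally ...
            a = random.choice(ACTIONS)
        else:                          # ... otherwise act greedily
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0        # "reward" at the goal only
        target = r + gamma * max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (target - q[(s, a)])     # the "rewarded" update
        s = s2

# After training, the learned policy prefers "move right" in every state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

No one ever tells the agent that "right" is correct; the preference emerges from trial and error guided by the reward, which is the point of the paradigm described above.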

The field of machine learning is very active right now, with many common applications in business, academia, and industry. Here are a few representative examples:

Recommendation engines use machine learning to learn from previous choices people have made. For example, machine learning is commonly used in software like video streaming services to suggest movies or TV shows that users might want to watch based on previous viewing choices, as well as "you might also like" recommendations on retail sites.

Banks and insurance companies rely on machine learning to detect and prevent fraud through subtle signals of strange behavior and unexpected transactions. Traditional methods for flagging suspicious activity are usually very rigid and rules-based, which can miss new and unexpected patterns, while also overwhelming investigators with false positives. Machine learning algorithms can be trained with real-world fraud data, allowing the system to classify suspicious fraud cases far more accurately.
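The contrast between a rigid hand-written rule and a rule learned from labeled data can be made concrete with a toy sketch (all amounts synthetic; real systems use many features, not one):

```python
import random

# Toy contrast between a hand-written rule and a rule *learned* from labeled
# data. Amounts are synthetic; real fraud systems use many features, not one.
random.seed(7)
normal = [random.gauss(80, 40) for _ in range(900)]    # legitimate amounts ($)
fraud = [random.gauss(900, 200) for _ in range(100)]   # fraudulent amounts ($)
labeled = [(a, 0) for a in normal] + [(a, 1) for a in fraud]

# "Training" here just picks the threshold that best separates the labeled
# populations -- a one-feature decision stump, the simplest learned classifier.
def accuracy(threshold):
    return sum((a > threshold) == bool(y) for a, y in labeled) / len(labeled)

best = max(range(0, 1200, 10), key=accuracy)
print(best, round(accuracy(best), 3))
```

A fixed rule like "flag anything over $500" stays wrong as behavior shifts, while the learned threshold moves wherever the labeled data says the separation actually is.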

Inventory optimization, a part of the retail workflow, is increasingly performed by systems trained with machine learning. Machine learning systems can analyze vast quantities of sales and inventory data to find patterns that elude human inventory planners. These computer systems can produce more accurate probabilistic forecasts of customer demand.

Machine automation increasingly relies on machine learning. For example, self-driving car technology is deeply indebted to machine learning algorithms for the ability to detect objects on the road, classify those objects, and make accurate predictions about their potential movement and behavior.

View post:
What is machine learning? Here's what you need to know - Business Insider


U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations – BroadbandBreakfast.com

December 11, 2020 – In today's digital environment, winning wars requires more than boots on the ground. It also requires computer algorithms and artificial intelligence.

The United States Special Operations Command is currently playing a critical role advancing the employment of AI and machine learning in the fight against the country's current and future adversaries, through Project Maven.

To discuss the initiatives taking place as part of the project, General Richard Clarke, who currently serves as the Commander of USSOCOM, and Richard Shultz, who has served as a security consultant to various U.S. government agencies since the mid-1980s, joined the Hudson Institute for a virtual discussion on Monday.

Among other objectives, Project Maven aims to "develop and integrate computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that the Department of Defense collects every day in support of counterinsurgency and counterterrorism operations," according to Clarke.

When troops carry out military site exploitation, or raids, they bring back copious amounts of computers, papers, and hard drives filled with potential evidence. To manage enormous quantities of information in real time and achieve strategic objectives, the Algorithmic Warfare Cross-Functional Team, launched in April 2017, began utilizing AI to help.

"We had to find a way to put all of this data into a common database," said Clarke. Over the last few years, humans were tasked with sorting through this content, watching every video and reading every detainee report. A human cannot sort and sift through this data quickly and deeply enough, he said.

AI and machine learning have demonstrated that algorithmic warfare can aid military operations.

Project Maven initiatives helped increase the frequency of raid operations from 20 raids a month to 300 raids a month, said Shultz. AI technology increases both the number of decisions that can be made and their scale. Faster, more effective decisions on your part are going to give enemies more issues.

Project Maven initiatives have also increased the accuracy of bomb targeting. "Instead of hundreds of people working on these initiatives, today it is tens of people," said Clarke.

AI has also been used to counter adversary propaganda. "I now spend over 70 percent of my time in the information environment. If we don't influence a population first, ISIS will get information out more quickly," said Clarke.

AI and machine learning tools enable USSOCOM to understand what an enemy is sending and receiving, what are false narratives, what are bots, and more; detecting these allows decision makers to make faster and more accurate calls.

Military use of machine learning for precision raids and bomb strikes naturally raises concerns. In 2018, more than 3,000 Google employees signed a petition in protest against the company's involvement with Project Maven.

In an open letter addressed to CEO Sundar Pichai, Google employees expressed concern that the U.S. military could weaponize AI and apply the technology towards refining drone strikes and other kinds of lethal attacks. "We believe that Google should not be in the business of war," the letter read.

Go here to read the rest:
U.S. Special Operations Command Employs AI and Machine Learning to Improve Operations - BroadbandBreakfast.com


Information gathering: A WebEx talk on machine learning – Santa Fe New Mexican

We're long past the point of questioning whether machines can learn. The question now is how do they learn? Machine learning, a subset of artificial intelligence, is the study of computer algorithms that improve automatically through experience. That means a machine can learn, independent of human programming. Los Alamos National Laboratory staff scientist Nga Thi Thuy Nguyen-Fotiadis is an expert on machine learning, and at 5:30 p.m. on Monday, Dec. 14, she hosts the virtual presentation Deep focus: Techniques for image recognition in machine learning, as part of the Bradbury Science Museum's (1350 Central Ave., Los Alamos, 505-667-4444, lanl.gov/museum) Science on Tap lecture series. Nguyen-Fotiadis is a member of LANL's Information Sciences Group, whose Computer, Computational, and Statistical Sciences division studies fields that are central to scientific discovery and innovation. Learn about the differences between LANL's Trinity supercomputer and the human brain, and how algorithms determine recommendations for your nightly viewing pleasure on Netflix and the like. The talk is a free WebEx virtual event. Follow the link from the Bradbury's event page at lanl.gov/museum/events/calendar/2020/12/calendar-sot-nguyen-fotaidis.php to register.

Here is the original post:
Information gathering: A WebEx talk on machine learning - Santa Fe New Mexican


LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence

SANTA CLARA, Calif.--(BUSINESS WIRE)--LeanTaaS, Inc., a Silicon Valley software innovator that increases patient access and transforms operational performance for healthcare providers, announced a $130 million Series D funding round led by Insight Partners with participation from Goldman Sachs. The funds will be used to invest in building out the existing suite of products (iQueue for Operating Rooms, iQueue for Infusion Centers and iQueue for Inpatient Beds), scaling the engineering, product, and go-to-market teams, and expanding the iQueue platform to include new products.

"LeanTaaS is uniquely positioned to help hospitals and health systems across the country face the mounting operational and financial pressures exacerbated by the coronavirus. This funding will allow us to continue to grow and expand our impact while helping healthcare organizations deliver better care at a lower cost," said Mohan Giridharadas, founder and CEO of LeanTaaS. "Our company momentum over the past several years - including greater than 50% revenue growth in 2020 and negative churn despite a difficult macro environment - reflects the increasing demand for scalable predictive analytics solutions that optimize how health systems increase operational utilization and efficiency. It also highlights how we've been able to develop and maintain deep partnerships with 100+ health systems and 300+ hospitals in order to keep them resilient and agile in the face of uncertain demand and supply conditions."

With this investment, LeanTaaS has raised more than $250 million in aggregate, including more than $150 million from Insight Partners. As part of the transaction, Insight Partners' Jeff Horing and Jon Rosenbaum and Goldman Sachs' Antoine Munfa will join LeanTaaS' Board of Directors.

"Healthcare operations in the U.S. are increasingly complex and under immense pressure to innovate; this has only been exacerbated by the prioritization of unique demands from the current pandemic," said Jeff Horing, co-founder and Managing Director at Insight Partners. "Even under these unprecedented circumstances, LeanTaaS has demonstrated the effectiveness of its ML-driven platform in optimizing how hospitals and health systems manage expensive, scarce resources like infusion center chairs, operating rooms, and inpatient beds. After leading the company's Series B and C rounds, we have formed a deep partnership with Mohan and team. We look forward to continuing to help LeanTaaS scale its market presence and customer impact."

Although health systems across the country have invested in cutting-edge medical equipment and infrastructure, they cannot maximize the use of such assets and increase operational efficiencies to improve their bottom lines with human-based scheduling or unsophisticated tools. LeanTaaS develops specialized software that increases patient access to medical care by optimizing how health systems schedule and allocate the use of expensive, constrained resources. By using LeanTaaS' product solutions, healthcare systems can harness the power of sophisticated, AI/ML-driven software to improve operational efficiencies, increase access, and reduce costs.

"We continue to be impressed by the LeanTaaS team. As hospitals and health systems begin to look toward a post-COVID-19 world, the agility and resilience LeanTaaS' solutions provide will be key to restoring and growing their operations," said Antoine Munfa, Managing Director of Goldman Sachs Growth.

LeanTaaS' solutions have now been deployed in more than 300 hospitals across the U.S., including five of the 10 largest health networks and 12 of the top 20 hospitals in the U.S. according to U.S. News & World Report. These hospitals use the iQueue platform to optimize capacity utilization in infusion centers, operating rooms, and inpatient beds. iQueue for Infusion Centers is used by 7,500+ chairs across 300+ infusion centers, including 70 percent of the National Comprehensive Cancer Network and more than 50 percent of National Cancer Institute hospitals. iQueue for Operating Rooms is used by more than 1,750 ORs across 34 health systems to perform more surgical cases during business hours, increase competitiveness in the marketplace, and improve the patient experience.

"I am excited about LeanTaaS' continued growth and market validation. As healthcare moves into the digital age, iQueue overcomes the inherent deficiencies in capacity planning and optimization found in EHRs. We are very excited to partner with LeanTaaS and implement iQueue for Operating Rooms," said Dr. Rob Ferguson, System Medical Director, Surgical Operations, Intermountain Healthcare.

Concurrent with the funding, LeanTaaS announced that Niloy Sanyal, the former CMO at Omnicell and GE Digital, would be joining as its new Chief Marketing Officer. Also, Sanjeev Agrawal has been designated LeanTaaS' Chief Operating Officer in addition to his current role as President. "We are excited to welcome Niloy to LeanTaaS. His breadth and depth of experience will help us accelerate our growth as the industry evolves to a more data-driven way of making decisions," said Agrawal.

About LeanTaaS

LeanTaaS provides software solutions that combine lean principles, predictive analytics, and machine learning to transform hospital and infusion center operations. The company's software is used by over 100 health systems across the nation, all of which rely on the iQueue cloud-based solutions to increase patient access, decrease wait times, reduce healthcare delivery costs, and improve revenue. LeanTaaS is based in Santa Clara, California, and Charlotte, North Carolina. For more information about LeanTaaS, please visit https://leantaas.com/, and connect on Twitter, Facebook and LinkedIn.

About Insight Partners

Insight Partners is a leading global venture capital and private equity firm investing in high-growth technology and software ScaleUp companies that are driving transformative change in their industries. Founded in 1995, Insight Partners has invested in more than 400 companies worldwide and has raised, through a series of funds, more than $30 billion in capital commitments. Insight's mission is to find, fund, and work successfully with visionary executives, providing them with practical, hands-on software expertise to foster long-term success. Across its people and its portfolio, Insight encourages a culture around a belief that ScaleUp companies and growth create opportunity for all. For more information on Insight and all its investments, visit insightpartners.com or follow us on Twitter @insightpartners.

About Goldman Sachs Growth

Founded in 1869, The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm. Goldman Sachs' Merchant Banking Division (MBD) is the primary center for the firm's long-term principal investing activity. As part of MBD, Goldman Sachs Growth is the dedicated growth equity team within Goldman Sachs, with over 25 years of investing history, over $8 billion of assets under management, and 9 offices globally.

LeanTaaS and iQueue are trademarks of LeanTaaS. All other brand names and product names are trademarks or registered trademarks of their respective companies.

Read the original here:
LeanTaaS Raises $130 Million to Strengthen Its Machine Learning Software Platform to Continue Helping Hospitals Achieve Operational Excellence


What are the roles of artificial intelligence and machine learning in GNSS positioning? – Inside GNSS

For decades, artificial intelligence and machine learning have advanced at a rapid pace. Today, there are many ways artificial intelligence and machine learning are used behind the scenes to impact our everyday lives, such as social media, shopping recommendations, email spam detection, speech recognition, self-driving cars, UAVs, and so on.

These systems simulate human intelligence, programmed to think like humans and mimic our actions to achieve a specific goal. In our own field, machine learning has also changed the way we solve navigation problems and will take on a significant role in advancing PNT technologies in the future.

LI-TA HSU, HONG KONG POLYTECHNIC UNIVERSITY

Q: Can machine learning replace conventional GNSS positioning techniques?

Actually, it makes no sense to use ML when the exact physics/mathematical models of GNSS positioning are known; moreover, using machine learning (ML) techniques over any appreciable area, collecting extensive data and training a network to estimate receiver locations, would be an impractically large undertaking. We, human beings, designed the satellite navigation systems based on the laws of physics we discovered. For example, we use Kepler's laws to model the position of satellites in an orbit. We use the spread-spectrum technique to model the satellite signal, allowing us to acquire very weak signals transmitted from medium-Earth orbit. We understand the Doppler effect and design tracking loops to track the signal and decode the navigation message. We finally make use of trilateration to model the positioning and use least squares to estimate the location of the receiver. Through the efforts of GNSS scientists and engineers over the past several decades, GNSS can now achieve centimeter-level positioning. The problem is: if everything is so perfect, why don't we have perfect GNSS positioning?

The answer, for me as an ML specialist, is that the assumptions made are not always valid in all contexts and applications! In trilateration, we assume the satellite signal is always transmitted in direct line-of-sight (LOS). However, different layers in the atmosphere can diffract the signal. Luckily, remote-sensing scientists studied the troposphere and ionosphere and came up with sophisticated models to mitigate the ranging error caused by transmission delay. But the multipath effects and non-line-of-sight (NLOS) receptions caused by buildings and obstacles on the ground are much harder to deal with due to their high nonlinearity and complexity.
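The trilateration-plus-least-squares step described above can be sketched numerically. This is an illustrative toy (invented satellite and receiver coordinates, noiseless pseudoranges, receiver clock bias omitted for brevity) that iterates a Gauss-Newton linearization of the range equations:

```python
import numpy as np

# Toy trilateration by iterative least squares (Gauss-Newton). Coordinates
# are invented; a real GNSS solution also estimates the receiver clock bias.
sats = np.array([[15e6, 0.0, 20e6],         # satellite positions (m)
                 [0.0, 18e6, 20e6],
                 [-12e6, 5e6, 21e6],
                 [4e6, -16e6, 19e6]])
truth = np.array([1.2e6, -0.8e6, 0.3e6])    # the "unknown" receiver position
rho = np.linalg.norm(sats - truth, axis=1)  # noiseless pseudoranges (m)

x = np.zeros(3)                             # initial guess: Earth's center
for _ in range(10):                         # Gauss-Newton iterations
    pred = np.linalg.norm(sats - x, axis=1)
    H = (x - sats) / pred[:, None]          # Jacobian of range w.r.t. position
    dx, *_ = np.linalg.lstsq(H, rho - pred, rcond=None)
    x = x + dx                              # update the position estimate
print(np.round(x))
```

The loop linearizes the nonlinear range model around the current estimate and solves the small least-squares problem each pass, which is exactly the LOS assumption the answer above calls out: a reflected (NLOS) range would silently bias this solution.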

Q: What are the challenges of GNSS and how can machine learning help with it?

GNSS performs very differently under different contexts. Context means what and where: for example, a pedestrian walking in an urban canyon, or a pedestrian sitting in a car driving on a highway. The notorious multipath and NLOS effects play major roles in degrading the performance of a GNSS receiver under different contexts. If we follow the same logic as the ionospheric research to deal with the multipath effect, we need to study 3D building models, since buildings are the main cause of the reflections. Drawing on our previous research, the right of Figure 1 is simulated based on an LOD1 building model and a single-reflection ray-tracing algorithm. It reveals that the positioning error caused by multipath and NLOS is highly site-dependent. In other words, the nonlinearity and complexity of multipath and NLOS are very high.

Generally speaking, ML derives a model based on data. What exactly does ML do best?

Phenomena we simply do not know how to model by explicit laws of physics/math, for example, contexts and semantics.

Phenomena with high complexity, time variance and nonlinearity.

Looking at the challenges of GNSS multipath and the potential of ML, it becomes straightforward to apply artificial intelligence to mitigate multipath and NLOS. One mainstream idea is to use ML to train models to classify LOS, multipath, and NLOS measurements. This idea is illustrated in Figure 2. Three steps are required: data labeling, classifier training, and classifier evaluation. In fact, there are also challenges in each step.

Are we confident in our labeling?

In our work, we use 3D city models and ray-tracing simulation to label the measurements we received from the GNSS receiver. The label may not be 100% correct since the 3D models are not conclusive enough to represent the real world. Trees and dynamic objects (vehicles and pedestrians) are not included. In addition, the multiple reflected signals are very hard to trace and the 3D models could have errors.

What are the classes and features?

For the classes, popular selections are the presence (binary) of multipath or NLOS and their associated pseudorange errors. The features are selected based on the variables that are affected by multipath, including carrier-to-noise ratio, pseudorange residual, DOP, etc. If we can access a step deeper, into the correlator, the shapes of the code and carrier correlators are also excellent features. Our study compares the different levels (correlator, RINEX, and NMEA) of features for the GNSS classifier and reveals that the rawer the feature, the better the classification accuracy that can be obtained. Finally, methods of exploratory data analysis, such as principal component analysis, can better select the features that are most representative of the class.
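As a hedged sketch of the classification step described above (synthetic feature values invented for illustration, nothing like real receiver data), a nearest-centroid classifier on two of the named features, carrier-to-noise ratio and pseudorange residual, already separates LOS from NLOS:

```python
import math
import random

# Synthetic LOS vs NLOS classification on two features from the text:
# carrier-to-noise ratio (dB-Hz) and pseudorange residual (m). The
# distributions are invented; real labels would come from 3D-model
# ray-tracing as described above.
random.seed(3)
def los():  return (random.gauss(45, 3), random.gauss(0, 2))    # strong, unbiased
def nlos(): return (random.gauss(30, 4), random.gauss(15, 8))   # weak, biased

train = [(los(), "LOS") for _ in range(200)] + [(nlos(), "NLOS") for _ in range(200)]

def centroid(cls):
    pts = [f for f, c in train if c == cls]
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(2))

cents = {c: centroid(c) for c in ("LOS", "NLOS")}

def classify(feat):                    # nearest-centroid decision rule
    return min(cents, key=lambda c: math.dist(feat, cents[c]))

# Training accuracy only; step 3 above (classifier evaluation) would
# of course use held-out data.
acc = sum(classify(f) == c for f, c in train) / len(train)
print(round(acc, 3))
```

Swapping in rawer correlator-level features, as the study above suggests, would amount to changing the feature tuple while keeping the same three-step pipeline.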

Are we confident that the data we used to train the classifier are representative enough for the general application cases?

Overfitting of the data is always a challenge for ML. Multipath and NLOS effects differ greatly between cities. For example, the architecture in Europe and Asia is very different, producing different multipath effects. Classifiers trained using data from Hong Kong do not necessarily perform well in London. The categorization of cities or urban areas in terms of their effects on GNSS multipath and NLOS is still an open question.

Q: What are the challenges of integrated navigation systems and how can machine learning help with them?

Seamless positioning has always been the ultimate goal. However, each sensor performs differently in different areas. Table 1 gives a rough picture. Inertial sensors seem to perform stably in most areas, but a MEMS INS suffers from drift and is highly affected by random noise caused by temperature variations. Naturally, integrated navigation is a solution. Sensor integration, in fact, should be considered in both the long term and the short term.

Long-term Sensor Selection

In the long term, the sensors available for positioning are generally more than enough. The question to ask is how to determine the best subset of sensors to integrate. Consider an example of seamless positioning for a city dweller travelling from home to the office:

Walking on a street to the subway station (GNSS+IMU)

Walking in a subway station (Wi-Fi/BLE+IMU)

Traveling on a subway (IMU)

Walking in an urban area to the office (VPS+GNSS+Wi-Fi/BLE+IMU)

This example clearly shows that seamless positioning should integrate different sensors. The selection of the sensors can be done heuristically or by maximizing the observability of the sensors. If the sensors are selected heuristically, we must have the ability to know what context the system is operating under. This is one of the best angles for ML to cut in; in fact, the classification of scenarios or contexts is exactly what ML does best. A recently published journal paper demonstrates how to detect different contexts using smartphone sensors for context-adaptive navigation (Gao and Groves 2020). Sensors in smartphones are used in models trained by supervised ML to determine not only the environment but also the behavior (transportation modes such as static, walking, and sitting in a car or on a subway).

According to their results, the state-of-the-art detection algorithm can achieve over 95% accuracy for pedestrians under indoor, intermediate, and outdoor scenarios. This finding encourages the use of ML to intelligently select the right navigation systems for an integrated navigation system in different areas. The same methodology can easily be extended to vehicular applications with proper modification of the selections of features, classes, and machine learning algorithms.
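As a deliberately simplified illustration of context detection (invented numbers, far cruder than the multi-sensor supervised ML of Gao and Groves), even the variance of a short accelerometer window separates "static" from "walking":

```python
import math
import random

# Toy behavior detection from a simulated accelerometer window (m/s^2).
# Walking adds a large periodic component on top of gravity; standing
# still leaves only sensor noise. All magnitudes are invented.
random.seed(5)
def window(moving):
    base = 9.81                                    # gravity
    amp = 2.0 if moving else 0.05                  # walking "shake" amplitude
    return [base + amp * math.sin(0.5 * i) + random.gauss(0, 0.05)
            for i in range(100)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

THRESHOLD = 0.5   # a real pipeline would learn this from labeled windows
def context(xs):
    return "walking" if variance(xs) > THRESHOLD else "static"

print(context(window(True)), context(window(False)))
```

A real context-adaptive system classifies many such features from many sensors, then uses the detected context to switch the sensor subset, as described above.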

Short-term Sensor Weighting

Technically speaking, an optimal integrated solution can be obtained if the uncertainty of each sensor can be optimally described. Presumably, a sensor's uncertainty remains unchanged in a given environment. As a result, most sensors' uncertainties are carefully calibrated before use in integration systems.

However, the problem is that the environment can change rapidly within a short period of time: for example, a car driving through an urban area with several viaducts, or a car driving under open sky with a canopy of foliage. These scenarios greatly affect the performance of GNSS, yet the affected periods are too short to justify excluding GNSS from the subset of sensors used. The best solution against these unexpected, transient effects is de-weighting the affected sensors in the system.

Due to the complexity of these effects, adaptive tuning of the uncertainty based on ML is growing popular. Our team demonstrated this potential with an experiment on a loosely coupled GNSS/INS integration. This experiment took place in an urban canyon with a commercial GNSS receiver and a MEMS INS. Different ML algorithms are used to classify the GNSS positioning errors into four classes: healthy, slightly shifted, inaccurate, and dangerous. These are represented as 1 to 4 at the bottom of Figure 4. The top and bottom of the figure show the error of the commercial GNSS solution and the classes predicted by different ML algorithms. It clearly shows that ML can do a very good job predicting the class of the GNSS solution, enabling the integrated system to allocate a proper weighting to GNSS. Table 2 shows the improvement made by the ML-aided integration system.
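A minimal sketch of the de-weighting idea, with an invented class-to-sigma mapping (the actual mapping and filter used in the experiment are not shown here): the class predicted by the ML stage inflates the GNSS measurement variance before a minimum-variance fusion with the INS prediction:

```python
# Sketch of ML-aided de-weighting: a (hypothetical) classifier has graded a
# GNSS fix as class 1 ("healthy") .. 4 ("dangerous"); the filter inflates the
# GNSS measurement sigma accordingly. The sigma values are illustrative only.
CLASS_SIGMA = {1: 2.0, 2: 5.0, 3: 15.0, 4: 50.0}   # metres per error class

def fuse(ins_pos, ins_sigma, gnss_pos, gnss_class):
    """1-D minimum-variance fusion of an INS prediction and a GNSS fix."""
    gnss_sigma = CLASS_SIGMA[gnss_class]
    w = ins_sigma**2 / (ins_sigma**2 + gnss_sigma**2)  # weight given to GNSS
    return ins_pos + w * (gnss_pos - ins_pos)

# A "healthy" fix pulls the solution strongly; a "dangerous" one barely moves it.
print(fuse(100.0, 3.0, 120.0, 1))
print(fuse(100.0, 3.0, 120.0, 4))
```

In a full loosely coupled filter the same inflation would be applied to the measurement covariance matrix R at each epoch, so a transient NLOS burst is absorbed instead of corrupting the state.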

This is just an example to give a preliminary demonstration of the potential of ML in estimating/predicting sensor uncertainty. The methodology can also be applied to other sensor integrations such as Wi-Fi/BLE/IMU. The challenge is that the trained classifier may be too specific to a certain area due to over-fitting of the data. This remains an open research question in the field.

Q: Machine Learning or Deep Learning for Navigation Systems?

Based on research in object recognition in computer science, deep learning (DL) is currently the mainstream method because it generally outperforms ML when two conditions are fulfilled: abundant data and computation. The trained model of DL is completely data-driven, while ML trains models to fit assumed (known) mathematical models. A rule of thumb for selecting ML or DL is the availability of the data in hand. If extensive and conclusive data are available, DL achieves excellent performance due to its superiority in data fitting. In other words, DL can automatically discover the features that determine the classes. However, a model trained by ML is much more comprehensible than one trained by DL; the DL model becomes a black box. In addition, the nodes and layers of convolutions in DL are used to extract features, and the selection of the number of layers and nodes is still very hard to determine, so trial-and-error approaches are widely adopted. These are the major challenges in DL.

If a DL-trained neural network could be perfectly designed for the integrated navigation system, it should consider both long-term and short-term challenges. Figure 5 shows this idea: several hidden layers would be designed to predict the environments (or contexts), and the others to predict the sensor uncertainty. The idea is straightforward, whereas these challenges remain:

Are we confident that the data used to train the classifier are representative enough for general application cases?

What are the classes?

What are the features?

How many layers, and how many nodes per layer, should be used?
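The two-headed design of Figure 5 can be sketched as a tiny forward pass. Everything below is an assumption for illustration: the feature count, the four context classes, the layer sizes, and the random (untrained) weights; a real system would learn the weights end to end from labelled navigation data.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights; stand-ins for parameters a real system would train.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Shared trunk plus two heads, mirroring Figure 5: one head predicts the
# environment/context class, the other the sensor uncertainty.
W1, b1 = layer(6, 16)          # input: 6 assumed sensor features
W_ctx, b_ctx = layer(16, 4)    # 4 assumed contexts (open sky, urban, indoor, tunnel)
W_unc, b_unc = layer(16, 1)    # 1 uncertainty output

def forward(x):
    h = np.tanh(x @ W1 + b1)                      # shared hidden layer
    logits = h @ W_ctx + b_ctx
    ctx = np.exp(logits - logits.max())
    ctx /= ctx.sum()                              # softmax over context classes
    sigma = np.log1p(np.exp(h @ W_unc + b_unc)).item()  # softplus keeps sigma > 0
    return ctx, sigma
```

The point of the sketch is structural: both outputs share the same hidden representation, so context prediction and uncertainty prediction can reinforce each other during training.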

Q: How does machine learning affect the field of navigation?

ML will accelerate the development of seamless positioning. With the presence of ML in the navigation field, a perfect INS is no longer the only solution. These AI technologies facilitate the selection of the appropriate sensors or raw measurements (with appropriate trust) against complex navigation challenges. This transient selection of sensors (well known as plug-and-play) will affect the integration algorithm. Integration R&D engineers in navigation have long worked on the Kalman filter and its variants; however, the structure of the Kalman filter makes it hard to accommodate plug-and-play sensors. Graph optimization, which is widely used in the robotics field, could be a very strong candidate for integrating sensors for navigation purposes.
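A minimal 1-D example can show why graph-style optimization suits plug-and-play: each sensor contributes independent factors, so "unplugging" a sensor is just omitting its factors before solving. The toy linear least-squares version below (invented measurements and noise levels) is a stand-in for a real factor-graph solver, not a production method.

```python
import numpy as np

N = 4  # pose states x_0..x_3 along a 1-D trajectory

def solve(factors):
    """Weighted linear least squares over a list of (row, measurement, sigma)."""
    A = np.array([f[0] for f in factors])
    b = np.array([f[1] for f in factors])
    w = 1.0 / np.array([f[2] for f in factors])
    return np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]

def gnss_factor(i, z, sigma=2.0):
    row = np.zeros(N); row[i] = 1.0              # absolute: x_i = z
    return (row, z, sigma)

def odom_factor(i, dz, sigma=0.1):
    row = np.zeros(N); row[i] = -1.0; row[i + 1] = 1.0  # relative: x_{i+1} - x_i = dz
    return (row, dz, sigma)

# Two GNSS fixes plus odometry between consecutive epochs (made-up values).
factors = [gnss_factor(0, 0.1), gnss_factor(3, 3.2)]
factors += [odom_factor(i, 1.0) for i in range(3)]
x = solve(factors)

# "Unplugging" the GNSS fix at epoch 3 (e.g. detected as NLOS) is one list edit;
# no filter redesign is needed, unlike restructuring a Kalman filter's models.
x_no_g3 = solve([f for f in factors if f is not factors[1]])
```

Removing or adding a sensor only changes which factors enter the solve, which is exactly the flexibility plug-and-play integration needs.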

Beyond GNSS and the integrated navigation systems mentioned above, the visual positioning system (VPS) recently developed by Google could replace visual corner-point detection with semantic information detected by ML. Consider how we navigated before GNSS: we compared visual landmarks with our memory (a database) to infer where we were and where we were heading. ML can segment and classify images taken by a camera into different classes, including building, foliage, road and curb, and compare the distribution of this semantic information with that stored in a database on a cloud server. If they match, the position and orientation tag associated with the database entry can be regarded as the user location.
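The matching step can be sketched as a simple histogram comparison. The database entries, the four-class set, and the cosine similarity measure below are assumptions chosen for illustration; Google's actual VPS pipeline is far more sophisticated.

```python
import math

# Toy database: each entry stores a position/heading tag plus the fraction
# of image pixels in each semantic class (building, foliage, road, curb).
# All numbers are invented.
DB = [
    {"pose": (22.30, 114.18, 90.0),  "hist": [0.55, 0.10, 0.25, 0.10]},
    {"pose": (22.31, 114.17, 180.0), "hist": [0.20, 0.50, 0.25, 0.05]},
    {"pose": (22.29, 114.19, 0.0),   "hist": [0.35, 0.05, 0.50, 0.10]},
]

def cosine(a, b):
    """Cosine similarity between two class-fraction histograms."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def locate(query_hist):
    """Return the pose tag of the best-matching database entry."""
    return max(DB, key=lambda e: cosine(e["hist"], query_hist))["pose"]
```

A query image whose semantic breakdown best matches a stored entry inherits that entry's position and orientation tag, mirroring the landmark-versus-memory comparison described above.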

AI technologies are coming, and they will influence navigation research and development. In my opinion, the best we can do is mobilize AI to tackle the challenges for which we currently lack solutions. It is highly probable that future advances in navigation will depend greatly on ML's development and achievements in the field.

References

(1) Groves PD, Challenges of Integrated Navigation, ION GNSS+ 2018, Miami, Florida, pp. 3237-3264.

(2) Gao H, Groves PD. (2020) Improving environment detection by behavior association for context-adaptive navigation. NAVIGATION, 67:4360. https://doi.org/10.1002/navi.349

(3) Sun R., Hsu L.T., Xue D., Zhang G., Washington Y.O., (2019) GPS Signal Reception Classification Using Adaptive Neuro-Fuzzy Inference System, Journal of Navigation, 72(3): 685-701.

(4) Hsu L.T. GNSS Multipath Detection Using a Machine Learning Approach, IEEE ITSC 2017, Yokohama, Japan.

(5) Yozevitch R., and Moshe BB. (2015) A robust shadow matching algorithm for GNSS positioning. NAVIGATION, 62.2: 95-109.

(6) Chen P.Y., Chen H., Tsai M.H., Kuo H.K., Tsai Y.M., Chiou T.Y., Jau P.H. Performance of Machine Learning Models in Determining the GNSS Position Usage for a Loosely Coupled GNSS/IMU System, ION GNSS+ 2020, virtually, September 21-25, 2020.

(7) Suzuki T., Nakano, Y., Amano, Y. NLOS Multipath Detection by Using Machine Learning in Urban Environments, ION GNSS+ 2017, Portland, Oregon, pp. 3958-3967.

(8) Xu B., Jia Q., Luo Y., Hsu L.T. (2019) Intelligent GPS L1 LOS/Multipath/NLOS Classifiers Based on Correlator-, RINEX- and NMEA-Level Measurements, Remote Sensing 11(16):1851.

(9) Chiu H.P., Zhou X., Carlone L., Dellaert F., Samarasekera S., and Kumar R., Constrained Optimal Selection for Multi-Sensor Robot Navigation Using Plug-and-Play Factor Graphs, IEEE ICRA 2014, Hong Kong, China.

(10) Zhang G., Hsu L.T. (2018) Intelligent GNSS/INS Integrated Navigation System for a Commercial UAV Flight Control System, Aerospace Science and Technology, 80:368-380.

(11) Kumar R., Samarasekera S., Chiu H.P., Trinh N., Dellaert F., Williams S., Kaess M., Leonard J., Plug-and-Play Navigation Algorithms Using Factor Graphs, Joint Navigation Conference (JNC), 2012.

Read more here:
What are the roles of artificial intelligence and machine learning in GNSS positioning? - Inside GNSS
