
Category Archives: Machine Learning

Machine learning can give healthcare workers a ‘superpower’ – Healthcare IT News

With healthcare organizations around the world leveraging cloud technologies for key clinical and operational systems, the industry is building toward digitally enhanced, data-driven healthcare.

And unstructured healthcare data, within clinical documents and summaries, continues to remain an important source of insights to support clinical and operational excellence.

But these countless nuggets of important unstructured data do not lend themselves to manual search and manipulation by clinicians. This is where automation comes in.

Arun Ravi, senior product leader at Amazon Web Services, is co-presenting a HIMSS20 Digital presentation on unstructured healthcare data and machine learning, "Accelerating Insights from Unstructured Data, Cloud Capabilities to Support Healthcare."

"There is a huge shift from volume- to value-based care: 54% of hospital CEOs see the transition from volume to value as their biggest financial challenge, and two-thirds of the IT budget goes toward keeping the lights on," Ravi explained.

"Machine learning has this really interesting role to play where we're not necessarily looking to replace the workflows, but give essentially a superpower to people in healthcare and allow them to do their jobs a lot more efficiently."

In terms of how this affects health IT leaders, with value-based care there is a lot of data being created. When a patient goes through the various stages of care, there is a lot of documentation (a lot of data) created.

"But how do you apply the resources that are available to make it much more streamlined, to create that perfect longitudinal view of the patient?" Ravi asked. "A lot of the current IT models lack that agility to keep pace with technology. And again, it's about giving the people in this space a superpower to help them bring the right data forward and use that in order to make really good clinical decisions."

This requires responding to a very new model that has come into play. And this model requires focus on differentiating a healthcare organization's ability to do this work in real time and do it at scale.

"How [do] you incorporate these new technologies into care delivery in a way that not only is scalable but actually reaches your patients and also makes sure your internal stakeholders are happy with it?" Ravi asked. "And again, you want to reduce the risk, but overall, how do you manage this data well in a way that is easy for you to scale and easy for you to deploy into new areas as the care model continues to shift?"

So why is machine learning important in healthcare?

"If you look at the amount of unstructured data that is created, it is increasing exponentially," said Ravi. "And a lot of that remains untapped. There are 1.2 billion unstructured clinical documents that are actually created every year. How do you extract the insights that are valuable for your application without applying manual approaches to it?"

"Automating all of this really helps a healthcare organization reduce the expense and the time that is spent trying to extract these insights," he said. And this creates a unique opportunity, not just to innovate, but also to build new products, he added.

Ravi and his co-presenter, Paul Zhao, senior product leader at AWS, offer an in-depth look into gathering insights from all of this unstructured healthcare data via machine learning and cloud capabilities in their HIMSS20 Digital session. To attend the session, click here.

Twitter: @SiwickiHealthIT. Email the writer: bill.siwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Read this article:
Machine learning can give healthcare workers a 'superpower' - Healthcare IT News

Posted in Machine Learning | Comments Off on Machine learning can give healthcare workers a ‘superpower’ – Healthcare IT News

Get over 65 hours of Big Data and Machine Learning training for less than $40 – Boing Boing

Even in horrible economic times, a few simple rules hold unshakably true. And one of those rules is that if you possess an in-demand skill, you'll always find work, and often at a top market salary, to boot.

If you understand Big Data and how to find order from the chaos of massive stockpiles of raw information, you can land a six-figure salary. Even now. And if you know how to program machines to think and respond for themselves, you're in an even better position to make a very comfortable living.

If you're unsure about your career future or just want to change your tax bracket, the training in The Complete 2020 Big Data and Machine Learning Bundle can hand you everything you need to start down the path toward life as a Big Data analyst or machine learning engineer.

Across 10 courses hosting almost 70 hours of content, this instruction explains the ins and outs of these exploding job fields, even for those who have no experience with statistics or advanced technology.

Half of the courses here look deeply into the process of using big data, the vast amounts of structured and unstructured information that most businesses collect on a daily basis. Of course, you'll never get on top of that tidal wave with your eyes and a ream of spreadsheets, so these courses examine the key analytical tools and languages data experts use to organize findings and extract meaning from all that unprocessed data.

The training covers industry-leading processes and software like Scala, Hadoop, Elasticsearch, MapReduce and Apache Spark, all valuable means to unlock the secrets hidden inside that mountain of numbers.

The other half of the coursework focuses on machine learning, as the Machine Learning for Absolute Beginners - Level 1 course offers newbies a real understanding of what machine learning, artificial intelligence, and deep learning really mean.

Helping computers determine how to assess information and adjust their behavior on their own isn't science fiction. Training in the Python coding language at the heart of these fields, as well as in tools like Tensorflow and Keras, not only makes it all relatable but can put you in a position to get hired as a machine learning expert with the paycheck to match.

This course package usually retails for almost $1,300, but your path to a new career in Big Data and machine learning can start now for a whole lot less: only $39.90.


Read the original:
Get over 65 hours of Big Data and Machine Learning training for less than $40 - Boing Boing

Posted in Machine Learning | Comments Off on Get over 65 hours of Big Data and Machine Learning training for less than $40 – Boing Boing

Big data and machine learning are growing at massive rates. This training explains why – The Next Web

TLDR: The Complete 2020 Big Data and Machine Learning Bundle breaks down understanding and getting started in two of the tech era's biggest new growth sectors.

It's instructive to know just how big Big Data really is. And the reality is that it's now so big that the word "big" doesn't even effectively do it justice anymore. Right now, humankind is creating 2.5 quintillion bytes of data every day. And it's growing exponentially, with 90 percent of all data created in just the past two years. By 2023, the big data industry will be worth about $77 billion, and that's despite the fact that unstructured data is identified as a problem by 95 percent of all businesses.

Meanwhile, data analysis is also the backbone of other emerging fields, like the explosion of machine learning projects that have companies like Apple scooping up machine learning upstarts.

The bottom line is that if you understand Big Data, you can effectively write your own ticket salary-wise. You can jump into this fascinating field the right way with the training in The Complete 2020 Big Data and Machine Learning Bundle, on sale now for $39.90, over 90 percent off, from TNW Deals.

This collection includes 10 courses featuring 68 hours of instruction covering the basics of big data, the tools data analysts need to know, how machines are being taught to think for themselves, and the career applications for learning all this cutting-edge technology.

Everything starts with getting a handle on how data scientists corral mountains of raw information. Six of these courses focus on big data training, including close exploration of the essential industry-leading tools that make it possible. If you don't know what Hadoop, Scala or Elasticsearch do, or that Spark Streaming is a quickly developing technology for processing mass data sets in real time, you will after these courses.

Meanwhile, the remaining four courses center on machine learning, starting with a Machine Learning for Absolute Beginners Level 1 course that helps first-timers get a grasp on the foundations of machine learning, artificial intelligence and deep learning. Students also learn about the Python coding language's role in machine learning, as well as how tools like Tensorflow and Keras impact that learning.

It's a training package valued at almost $1,300, but you can start turning Big Data and machine learning into a career with this instruction for just $39.90.

Prices are subject to change.

Read next: The 'average' Robinhood trader is no match for the S&P 500, just like Buffett

Read our daily coverage on how the tech industry is responding to the coronavirus and subscribe to our weekly newsletter Coronavirus in Context.

For tips and tricks on working remotely, check out our Growth Quarters articles here or follow us on Twitter.

See the article here:
Big data and machine learning are growing at massive rates. This training explains why - The Next Web

Posted in Machine Learning | Comments Off on Big data and machine learning are growing at massive rates. This training explains why – The Next Web

Massey University’s Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and…

This week's Top 5 comes from Teo Susnjak, a computer scientist specialising in machine learning. He is a Senior Lecturer in Information Technology at Massey University and is the developer behind GDPLive.

As always, we welcome your additions in the comments below or via email to david.chaston@interest.co.nz.

And if you're interested in contributing the occasional Top 5yourself, contact gareth.vaughan@interest.co.nz.

1. Covid-19 broke machine learning.

As the Covid-19 crisis started to unfold, we started to change our buying patterns. All of a sudden, some of the top purchasing items became: antibacterial soap, sanitiser, face masks, yeast and of course, toilet paper. As the demand for these unexpected items exploded, retail supply chains were disrupted. But they weren't the only ones affected.

Artificial intelligence systems began to break too. The MIT Technology Review reports:

Machine-learning models that run behind the scenes in inventory management, fraud detection, and marketing rely on a cycle of normal human behavior. But what counts as normal has changed, and now some are no longer working.

How bad the situation is depends on whom you talk to. According to Pactera Edge, a global AI consultancy, automation is in tailspin. Others say they are keeping a cautious eye on automated systems that are just about holding up, stepping in with a manual correction when needed.

What's clear is that the pandemic has revealed how intertwined our lives are with AI, exposing a delicate codependence in which changes to our behavior change how AI works, and changes to how AI works change our behavior. This is also a reminder that human involvement in automated systems remains key. "You can never sit and forget when you're in such extraordinary circumstances," says Cline.

Image source: MIT Technology Review

The extreme data capturing a previously unseen collapse in consumer spending, which feeds the real-time GDP predictor at GDPLive.net, also broke our machine learning algorithms.

2. Extreme data patterns.

The eminent economics and finance historian, Niall Ferguson (not to be confused with Neil Ferguson who also likes to create predictive models) recently remarked that the first month of the lockdown created conditions which took a full year to materialise during the Great Depression.

The chart below shows the consumption data falling off the cliff, generating inputs that broke econometrics and machine learning models.

What we want to see is a rapid V-shaped recovery in consumer spending. The chart below shows the most up-to-date consumer spending trends. Consumer spending has now largely recovered, but is still lower than that of the same period in 2019. One of the key questions will be whether or not this partial rebound will be temporary until the full economic impacts of the 'Great Lockdown' take effect.

Paymark tracks consumer spending on their new public dashboard. Check it out here.

3. Wealth and income inequality.

As the current economic crisis unfolds, GDP will take centre-stage again and all other measures which attempt to quantify wellbeing and social inequalities will likely be relegated until economic stability returns.

When the conversation does return to this topic, AI might have something to contribute.

Effectively addressing income inequality is a key challenge in economics, with taxation being the most useful tool. Although taxation can lead to greater equality, over-taxation discourages work and entrepreneurship, and motivates tax avoidance. Ultimately this leaves fewer resources to redistribute. Striking an optimal balance is not straightforward.

The MIT Technology Review reports that AI researchers at the US business technology company Salesforce implemented machine learning techniques that identify optimal tax policies for a simulated economy.

In one early result, the system found a policy that, in terms of maximising both productivity and income equality, was 16% fairer than a state-of-the-art progressive tax framework studied by academic economists. The improvement over current US policy was even greater.

Image source: MIT Technology Review

It is unlikely that AI will have anything meaningful to contribute towards tackling wealth inequality though. If Walter Scheidel, author of The Great Leveller and professor of ancient history at Stanford is correct, then the only historically effective levellers of inequality are: wars, revolutions, state collapses and...pandemics.

4. Bots and propaganda.

Over the coming months, arguments over what has caused this crisis, whether it was the pandemic or the over-reactive lockdown policies, will occupy much of social media. According to The MIT Technology Review, bots are already being weaponised to fight these battles.

Nearly half of Twitter accounts pushing to reopen America may be bots. Bot activity has become an expected part of Twitter discourse for any politicized event. Across US and foreign elections and natural disasters, their involvement is normally between 10 and 20%. But in a new study, researchers from Carnegie Mellon University have found that bots may account for between 45 and 60% of Twitter accounts discussing covid-19.

To perform their analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).

They discovered more than 100 types of inaccurate Covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters.

Image source: MIT Technology Review

How confident are you that you can tell the difference between a human and a bot? You can test yourself out here. BTW, I failed.

5. Primed to believe bad predictions.

This has been a particularly uncertain time. We humans don't like uncertainty, especially once it reaches a given threshold. We have an amazing brain that is able to perform complex pattern recognition, enabling us to predict what's around the corner. When we do this, we resolve uncertainty and our brain releases dopamine, making us feel good. When we cannot make sense of the data and the uncertainty remains unresolved, stress kicks in.

Writing on this in Forbes, John Jennings points out:

Research shows we dislike uncertainty so much that if we have to choose between a scenario in which we know we will receive electric shocks versus a situation in which the shocks will occur randomly, we'll select the more painful option of certain shocks.

The article goes on to highlight how we tend to react in uncertain times. Aversion to uncertainty drives some of us to try to resolve it immediately through simple answers that align with our existing worldviews. For others, there will be a greater tendency to cluster around like-minded people with similar worldviews as this is comforting. There are some amongst us who are information junkies and their hunt for new data to fill in the knowledge gaps will go into overdrive - with each new nugget of information generating a dopamine hit. Lastly, a number of us will rely on experts who will use their crystal balls to find for us the elusive signal in all the noise, and ultimately tell us what will happen.

The last one is perhaps the most pertinent right now. Since we have a built-in drive that seeks to avoid ambiguity, in stressful times such as this our biology makes us susceptible to accepting bad predictions about the future as gospel, especially if they are generated by experts.

Experts at predicting the future do not have a strong track record considering how much weight is given to them. Their predictive models failed to see the Global Financial Crisis coming, they overstated the economic fallout of Brexit, the climate change models and their forecasts are consistently off-track, and now we have the pandemic models.

Image source: drroyspencer.com

The author suggests that this time "presents the mother of all opportunities to practice learning to live with uncertainty". I would also add that a good dose of humility on the side of the experts, and a good dose of scepticism in their ability to accurately predict the future both from the public and decision makers, would also serve us well.

Read the original post:
Massey University's Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and...

Posted in Machine Learning | Comments Off on Massey University’s Teo Susnjak on how Covid-19 broke machine learning, extreme data patterns, wealth and income inequality, bots and propaganda and…

2020 Current trends in Machine Learning in Education Market Share, Growth, Demand, Trends, Region Wise Analysis of Top Players and Forecasts – Cole of…

Machine Learning in Education Market 2020: Inclusive Insight

Los Angeles, United States, May 2020: The report titled Global Machine Learning in Education Market is one of the most comprehensive and important additions to the Alexa Reports archive of market research studies. It offers detailed research and analysis of key aspects of the global Machine Learning in Education market. The market analysts authoring this report have provided in-depth information on leading growth drivers, restraints, challenges, trends, and opportunities to offer a complete analysis of the global Machine Learning in Education market. Market participants can use the analysis of market dynamics to plan effective growth strategies and prepare for future challenges beforehand. Each trend of the global Machine Learning in Education market is carefully analyzed and researched by the market analysts.

Machine Learning in Education Market competition by top manufacturers/ Key player Profiled: IBM, Microsoft, Google, Amazon, Cognizan, Pearson, Bridge-U, DreamBox Learning, Fishtree, Jellynote, Quantum Adaptive Learning

Get a PDF sample copy of the report to understand the structure of the complete report (including full TOC, list of tables & figures, and charts): https://www.alexareports.com/report-sample/849042

The global Machine Learning in Education market is estimated to reach xxx million USD in 2020 and is projected to grow at a CAGR of xx% during 2020-2026. According to the latest report added to the online repository of Alexa Reports, the Machine Learning in Education market has witnessed unprecedented growth up to 2020. The extrapolated future growth is expected to continue at higher rates through 2026.

Machine Learning in Education Market Segment by Type covers: Cloud-Based, On-Premise

Machine Learning in Education Market Segment by Application covers: Intelligent Tutoring Systems, Virtual Facilitators, Content Delivery Systems, Interactive Websites

After reading the Machine Learning in Education market report, readers get insight into:

* Major drivers and restraining factors, opportunities and challenges, and the competitive landscape
* New, promising avenues in key regions
* New revenue streams for all players in emerging markets
* Focus and changing role of various regulatory agencies in bolstering new opportunities in various regions
* Demand and uptake patterns in key industries of the Machine Learning in Education market
* New research and development projects in new technologies in key regional markets
* Changing revenue share and size of key product segments during the forecast period
* Technologies and business models with disruptive potential

Based on region, the global Machine Learning in Education market has been segmented into the Americas (North America (the U.S. and Canada) and Latin America), Europe (Western Europe (Germany, France, Italy, Spain, the UK and the Rest of Europe) and Eastern Europe), Asia Pacific (Japan, India, China, Australia & South Korea, and the Rest of Asia Pacific), and the Middle East & Africa (Saudi Arabia, UAE, Kuwait, Qatar, South Africa, and the Rest of the Middle East & Africa).

Key questions answered in the report:

What will be the growth rate of the Machine Learning in Education market?
What are the key factors driving the global Machine Learning in Education market size?
Who are the key manufacturers in the Machine Learning in Education market space?
What are the market opportunities, market risks and market overview of the Machine Learning in Education market?
What are the sales, revenue, and price analysis of top manufacturers of the Machine Learning in Education market?
Who are the distributors, traders, and dealers of the Machine Learning in Education market?
What are the Machine Learning in Education market opportunities and threats faced by the vendors in the global Machine Learning in Education industries?
What are the sales, revenue, and price analysis by types and applications of the Machine Learning in Education market?
What are the sales, revenue, and price analysis by regions of the Machine Learning in Education industries?

Get an exclusive discount on this report now at https://www.alexareports.com/check-discount/849042

Table of Contents
Section 1 Machine Learning in Education Product Definition
Section 2 Global Machine Learning in Education Market Manufacturer Share and Market Overview
2.1 Global Manufacturer Machine Learning in Education Shipments
2.2 Global Manufacturer Machine Learning in Education Business Revenue
2.3 Global Machine Learning in Education Market Overview
2.4 COVID-19 Impact on Machine Learning in Education Industry
Section 3 Manufacturer Machine Learning in Education Business Introduction
3.1 IBM Machine Learning in Education Business Introduction
3.1.1 IBM Machine Learning in Education Shipments, Price, Revenue and Gross profit 2014-2019
3.1.2 IBM Machine Learning in Education Business Distribution by Region
3.1.3 IBM Interview Record
3.1.4 IBM Machine Learning in Education Business Profile
3.1.5 IBM Machine Learning in Education Product Specification
3.2 Microsoft Machine Learning in Education Business Introduction
3.2.1 Microsoft Machine Learning in Education Shipments, Price, Revenue and Gross profit 2014-2019
3.2.2 Microsoft Machine Learning in Education Business Distribution by Region
3.2.3 Interview Record
3.2.4 Microsoft Machine Learning in Education Business Overview
3.2.5 Microsoft Machine Learning in Education Product Specification
3.3 Google Machine Learning in Education Business Introduction
3.3.1 Google Machine Learning in Education Shipments, Price, Revenue and Gross profit 2014-2019
3.3.2 Google Machine Learning in Education Business Distribution by Region
3.3.3 Interview Record
3.3.4 Google Machine Learning in Education Business Overview
3.3.5 Google Machine Learning in Education Product Specification
3.4 Amazon Machine Learning in Education Business Introduction
3.5 Cognizan Machine Learning in Education Business Introduction
3.6 Pearson Machine Learning in Education Business Introduction
Section 4 Global Machine Learning in Education Market Segmentation (Region Level)
4.1 North America Country
4.1.1 United States Machine Learning in Education Market Size and Price Analysis 2014-2019
4.1.2 Canada Machine Learning in Education Market Size and Price Analysis 2014-2019
4.2 South America Country
4.2.1 South America Machine Learning in Education Market Size and Price Analysis 2014-2019
4.3 Asia Country
4.3.1 China Machine Learning in Education Market Size and Price Analysis 2014-2019
4.3.2 Japan Machine Learning in Education Market Size and Price Analysis 2014-2019
4.3.3 India Machine Learning in Education Market Size and Price Analysis 2014-2019
4.3.4 Korea Machine Learning in Education Market Size and Price Analysis 2014-2019
4.4 Europe Country
4.4.1 Germany Machine Learning in Education Market Size and Price Analysis 2014-2019
4.4.2 UK Machine Learning in Education Market Size and Price Analysis 2014-2019
4.4.3 France Machine Learning in Education Market Size and Price Analysis 2014-2019
4.4.4 Italy Machine Learning in Education Market Size and Price Analysis 2014-2019
4.4.5 Europe Machine Learning in Education Market Size and Price Analysis 2014-2019
4.5 Other Country and Region
4.5.1 Middle East Machine Learning in Education Market Size and Price Analysis 2014-2019
4.5.2 Africa Machine Learning in Education Market Size and Price Analysis 2014-2019
4.5.3 GCC Machine Learning in Education Market Size and Price Analysis 2014-2019
4.6 Global Machine Learning in Education Market Segmentation (Region Level) Analysis 2014-2019
4.7 Global Machine Learning in Education Market Segmentation (Region Level) Analysis
Section 5 Global Machine Learning in Education Market Segmentation (Product Type Level)
5.1 Global Machine Learning in Education Market Segmentation (Product Type Level) Market Size 2014-2019
5.2 Different Machine Learning in Education Product Type Price 2014-2019
5.3 Global Machine Learning in Education Market Segmentation (Product Type Level) Analysis
Section 6 Global Machine Learning in Education Market Segmentation (Industry Level)
6.1 Global Machine Learning in Education Market Segmentation (Industry Level) Market Size 2014-2019
6.2 Different Industry Price 2014-2019
6.3 Global Machine Learning in Education Market Segmentation (Industry Level) Analysis
Section 7 Global Machine Learning in Education Market Segmentation (Channel Level)
7.1 Global Machine Learning in Education Market Segmentation (Channel Level) Sales Volume and Share 2014-2019
7.2 Global Machine Learning in Education Market Segmentation (Channel Level) Analysis
Section 8 Machine Learning in Education Market Forecast 2019-2024
8.1 Machine Learning in Education Segmentation Market Forecast (Region Level)
8.2 Machine Learning in Education Segmentation Market Forecast (Product Type Level)
8.3 Machine Learning in Education Segmentation Market Forecast (Industry Level)
8.4 Machine Learning in Education Segmentation Market Forecast (Channel Level)
Section 9 Machine Learning in Education Segmentation Product Type
9.1 Cloud-Based Product Introduction
9.2 On-Premise Product Introduction
Section 10 Machine Learning in Education Segmentation Industry
10.1 Intelligent Tutoring Systems Clients
10.2 Virtual Facilitators Clients
10.3 Content Delivery Systems Clients
10.4 Interactive Websites Clients
Section 11 Machine Learning in Education Cost of Production Analysis
11.1 Raw Material Cost Analysis
11.2 Technology Cost Analysis
11.3 Labor Cost Analysis
11.4 Cost Overview
Section 12 Conclusion

Do Inquiry About The Report Here: https://www.alexareports.com/send-an-enquiry/849042

About Us: Alexa Reports is a globally celebrated premium market research service provider, with a strong legacy of empowering businesses with years of experience. We help our clients by implementing a decision support system through progressive statistical surveying, in-depth market analysis, and reliable forecast data.

Contact Us: Alexa Reports | Ph no: +1-408-844-4624 | Email: [emailprotected] | Site: https://www.alexareports.com

More:
2020 Current trends in Machine Learning in Education Market Share, Growth, Demand, Trends, Region Wise Analysis of Top Players and Forecasts - Cole of...

Posted in Machine Learning | Comments Off on 2020 Current trends in Machine Learning in Education Market Share, Growth, Demand, Trends, Region Wise Analysis of Top Players and Forecasts – Cole of…

Artificial Intelligence, Machine Learning and the Future of Graphs – BBN Times


I am a skeptic of machine learning. There, I've said it. I say this not because I think that machine learning is a poor technology (it's actually quite powerful for what it does), but because machine learning by itself is only half a solution.

To explain this (and the relationship that graphs have to machine learning and AI), it's worth spending a bit of time exploring what exactly machine learning does and how it works. Machine learning isn't actually one particular algorithm or piece of software, but rather the use of statistical algorithms to analyze large amounts of data and from that construct a model that can, at a minimum, classify the data consistently. If it's done right, the reasoning goes, it should then be possible to use that model to classify new information so that it's consistent with what's already known.

Many such systems make use of clustering algorithms - they take a look at data as vectors that can be described in an n-dimensional space. That is to say, there are n different facets that describe a particular thing, such as a thing's color, shape (morphology), size, texture, and so forth. Some of these attributes can be identified by a single binary (does the thing have a tail or not), but in most cases the attributes usually range along a spectrum, such as "does the thing have an exclusively protein-based diet (an obligate carnivore), or does its diet consist of a certain percentage of grains or other plants?". In either case, this means that it is possible to use the attribute as a means to create a number between zero and one (what mathematicians would refer to as a normal orthogonal vector).
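To make this concrete, here is a rough sketch in Python of what that encoding looks like. The animals, attributes and value ranges are invented purely for illustration; the point is simply that each thing becomes a point in an n-dimensional space, with every facet squeezed into the range from zero to one.

```python
# A minimal sketch of the encoding idea: each animal becomes a vector of
# attributes normalized to [0, 1]. Features and values here are made up.

def normalize(value, lo, hi):
    """Map a raw attribute value onto [0, 1] given its expected range."""
    return (value - lo) / (hi - lo)

# Raw attributes: (body mass in kg, fraction of diet that is meat)
raw_animals = {
    "bear":     (250.0, 0.7),
    "cat":      (4.5,   1.0),
    "rabbit":   (2.0,   0.0),
    "squirrel": (0.5,   0.05),
}

# Encode each animal as a point in a 2-dimensional feature space.
animal_vectors = {
    name: (normalize(mass, 0.0, 600.0), meat_fraction)
    for name, (mass, meat_fraction) in raw_animals.items()
}

print(animal_vectors["bear"])   # roughly (0.4166, 0.7)
```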

Orthogonality is an interesting concept. In mathematics, two vectors are considered orthogonal if there exists some coordinate system in which you cannot express any information about one vector using the other. For instance, if two vectors are at right angles to one another, then there is one coordinate system where one vector aligns with the x-axis and the other with the y-axis. I cannot express any part of the length of a vector along the y axis by multiplying the length of the vector on the x-axis. In this case they are independent of one another.

This independence is important. Mathematically, there is no correlation between the two vectors - they represent different things, and changing one vector tells me nothing about any other vector. When vectors are not orthogonal, one bleeds a bit (or more than a bit) into another. When two vectors are parallel to one another, they are fully correlated - one vector can be expressed as a multiple of the other. A vector in two dimensions can always be expressed as the "sum" of two orthogonal vectors, a vector in three dimensions can always be expressed as the "sum" of three orthogonal vectors, and so forth.
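A quick sketch of that decomposition, again in plain Python with made-up vectors: the dot product is zero exactly when two vectors are orthogonal, and any vector can be rebuilt as the sum of its projections onto an orthogonal set.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_orthogonal(u, v, tol=1e-9):
    # Two vectors are orthogonal when their dot product is (numerically) zero:
    # neither carries any component of the other.
    return abs(dot(u, v)) < tol

def project(u, onto):
    # The component of u that "bleeds into" the direction of `onto`.
    scale = dot(u, onto) / dot(onto, onto)
    return [scale * b for b in onto]

x_axis = [1.0, 0.0]
y_axis = [0.0, 1.0]
v = [3.0, 4.0]

print(is_orthogonal(x_axis, y_axis))           # True: fully independent
print(project(v, x_axis), project(v, y_axis))  # [3.0, 0.0] and [0.0, 4.0]
# v is exactly the sum of its two orthogonal components, as described above.
```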

If you can express a thing as a vector consisting of weighted values, this creates a space where related things will generally be near one another in an n-dimensional space. Cats, dogs, and bears are all carnivores, so in a model describing animals, they will tend to be clustered in a different group than rabbits, voles, and squirrels based upon their dietary habits. At the same time cats, dogs and bears will each tend to cluster in different groups based upon size, as even a small adult bear will always be larger than the largest cat and almost all dogs. In a two dimensional space, it becomes possible to carve out a region where you have large carnivores, medium-sized carnivores, small carnivores, large herbivores and so forth.

Machine learning (at its simplest) would recognize that when you have a large carnivore, given a minimal dataset, you're likely to classify that as a bear, because, based upon the two vectors (size and diet), every time you are at the upper end of the vectors for those two values, everything you've already seen (your training set) is a bear, while no vectors outside of this range are classified in this way.
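Here is what that minimal classifier might look like as code. This is a toy nearest-neighbour sketch over the same two invented features (size and carnivory), not a real machine learning pipeline, but it shows why anything near the upper end of both vectors comes back labelled "bear".

```python
import math

# Toy training set: (size, carnivory) pairs in [0, 1], with labels. Invented values.
training_set = [
    ((0.45, 0.70), "bear"),
    ((0.01, 1.00), "cat"),
    ((0.05, 0.90), "dog"),
    ((0.004, 0.00), "rabbit"),
]

def classify(point):
    # Label a new point with the label of its nearest training example.
    nearest = min(training_set, key=lambda item: math.dist(item[0], point))
    return nearest[1]

print(classify((0.50, 0.75)))  # a large carnivore: classified as "bear"
```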

A predictive model with only two independent vectors is going to be pretty useless as a classifier for more than a small set of items. A fox and a dog will be indistinguishable in this model, and for that matter, a small dog such as a Shih Tzu vs. a Maine Coon cat will confuse the heck out of such a classifier. On the flip side, the more variables that you add, the harder it is to ensure orthogonality, and the more difficult it then becomes to determine what exactly is the determining factor(s) for classification, consequently increasing the chances of misclassification. A panda bear is, anatomically and genetically, a bear. Yet because of a chance genetic mutation it is only able to reasonably digest bamboo, making it a herbivore.

You'd need to go to a very fine-grained classifier, one capable of identifying genomic structures, to identify a panda as a bear. The problem here is not in the mathematics but in the categorization itself. Categorizations are ultimately linguistic structures. Normalization functions are themselves arbitrary, and how you normalize will ultimately impact the kind of clustering that forms. When the number of dimensions in the model (even assuming that they are independent, which gets harder to determine with more variables) gets too large, then the size of the hulls for clustering becomes too small, and interpreting what those hulls actually signify becomes too complex.

This is one reason that I'm always dubious when I hear about machine learning models that have thousands or even millions of dimensions. As with attempting to do linear regressions on curves, there are typically only a handful of parameters that drive most of the significant curve fitting, which is ultimately just looking for adequate clustering to identify meaningful patterns - and typically once these patterns are identified, they are encoded and indexed.

Facial recognition, for instance, is considered a branch of machine learning, but for the most part it works because human faces exist within a skeletal structure that limits the variations of light and dark patterns of the face. This makes it easy to identify the ratios involved between eyes, nose, and mouth, chin and cheekbones, hairlines and other clues, and from that reduce this information to a graph in which the edges reflect relative distances between those parts. This can, in turn, be hashed as a unique number, in essence encoding a face as a graph in a database. Note this pattern. Because the geometry is consistent, rotating a set of vectors to present a consistent pattern is relatively simple (especially for modern GPUs).

Facial recognition then works primarily due to the ability to hash (and consequently compare) graphs in databases. This is the same way that most biometric scans work, taking a large enough sample of datapoints from unique images to encode ratios, then using the corresponding key to retrieve previously encoded graphs. Significantly, there's usually very little actual classification going on here, save perhaps in using courser meshes to reduce the overall dataset being queried. Indeed, the real speed ultimately is a function of indexing.
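As a hedged illustration of that indexing idea (the ratios, bucket size and identifiers below are all invented, and real biometric systems are far more sophisticated), the essential move is to quantize the landmark ratios, hash them, and use the hash as a database key:

```python
import hashlib

def face_signature(ratios, bucket=0.05):
    # Quantize each ratio so small measurement noise maps to the same bucket,
    # then hash the quantized tuple into a fixed-length key. (A real system
    # would handle bucket boundaries and use many more measurements.)
    quantized = tuple(round(r / bucket) for r in ratios)
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

face_index = {}  # signature -> person identifier

# Enroll a face described by a few invented landmark ratios.
face_index[face_signature((0.42, 0.61, 0.37))] = "person-001"

# A later scan with slightly different measurements lands in the same buckets,
# so the lookup is a plain index hit rather than a fresh classification.
print(face_index.get(face_signature((0.41, 0.60, 0.37))))  # "person-001"
```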

This is where the world of machine learning collides with that of graphs. I'm going to make an assertion here, one that might get me into trouble with some readers. Right now there's a lot of argument about the benefits and drawbacks of property graphs vs. knowledge graphs. I contend that this argument is moot - it's a discussion about optimization strategies, and the sooner that we get past that argument, the sooner that graphs will make their way into the mainstream.

Ultimately, we need to recognize that the principal value of a graph is to index information so that it does not need to be recalculated. One way to do this is to use machine learning to classify, and semantics to bind that classification to the corresponding resource (as well as to the classifier as an additional resource). If I have a phrase that describes a drink as being nutty or fruity, then these should be identified as classifications that apply to drinks (specifically to coffees, teas or wines). If I come across flavors such as hazelnut, cashew or almond, then these should be correlated with nuttiness, and again stored in a semantic graph.

The reason for this is simple - machine learning without memory is pointless and expensive. Machine learning is fast facing a crisis in that it requires a lot of cycles to train, classify and report. Tie machine learning into a knowledge graph, and you don't have to relearn all the time, and you can also reduce the overall computational costs dramatically. Furthermore, you can make use of inferencing: rules that can make use of generalization and faceting in ways that are difficult to pull off in a relational data system. Something is bear-like if it is large, has thick fur, does not have opposable thumbs, has a muzzle, is capable of extended bipedal movement and is omnivorous.
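A minimal sketch of what that binding might look like, using an in-memory set of triples and the bear-like heuristic from the paragraph above. The predicate names are mine, invented for illustration, not drawn from any particular ontology or triple store.

```python
# Toy semantic graph: a set of (subject, predicate, object) triples.
triples = set()

def assert_fact(subject, predicate, obj):
    triples.add((subject, predicate, obj))

for fact in [
    ("grizzly", "size", "large"),
    ("grizzly", "fur", "thick"),
    ("grizzly", "diet", "omnivore"),
    ("grizzly", "movement", "bipedal-capable"),
]:
    assert_fact(*fact)

def bear_like(entity):
    # The heuristic is just a template of edges the entity must have.
    required = {
        ("size", "large"),
        ("fur", "thick"),
        ("diet", "omnivore"),
        ("movement", "bipedal-capable"),
    }
    return all((entity, p, o) in triples for p, o in required)

if bear_like("grizzly"):
    # Store the inferred classification back into the graph so it never has
    # to be recomputed - the "index it, don't relearn it" point above.
    assert_fact("grizzly", "classifiedAs", "bear-like")

print(("grizzly", "classifiedAs", "bear-like") in triples)  # True
```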

What's more, the heuristic itself is a graph, and as such is a resource that can be referenced. This is something that most people fail to understand about both SPARQL and SHACL. They are each essentially syntactic sugar on top of graph templates. They can be analyzed, encoded and referenced. When a new resource is added into a graph, the ingestion process can and should run against such templates to see if they match, then insert or delete corresponding additional metadata as the data is folded in.

Additionally, one of those pieces of metadata may very well end up being an identifier for the heuristic itself, creating what's often termed a reverse query. Reverse queries are significant because they make it possible to determine which family of classifiers was used to make decisions about how an entity is classified, and from that ascertain the reasons why a given entity was classified a certain way in the first place.
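Continuing the same toy example, a reverse query is easy to sketch: when the heuristic fires, record an identifier for the heuristic itself alongside the classification, and provenance becomes just another query over the graph. The rule identifier and predicates below are, again, purely illustrative.

```python
# Toy semantic graph with provenance: record which rule produced a classification.
triples = set()

RULE_ID = "rule:bear-like-v1"   # the heuristic is itself an addressable resource

def apply_rule(entity, facts):
    triples.update(facts)
    required = {("size", "large"), ("fur", "thick"), ("diet", "omnivore")}
    if all((entity, p, o) in triples for p, o in required):
        triples.add((entity, "classifiedAs", "bear-like"))
        triples.add((entity, "classifiedBy", RULE_ID))

apply_rule("grizzly", {("grizzly", "size", "large"),
                       ("grizzly", "fur", "thick"),
                       ("grizzly", "diet", "omnivore")})

# Reverse query: which entities were classified by this particular heuristic?
classified_by_rule = [s for (s, p, o) in triples
                      if p == "classifiedBy" and o == RULE_ID]
print(classified_by_rule)  # ['grizzly']
```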

This gets back to one of the biggest challenges seen in both AI and machine learning - understanding why a given resource was classified. When you have potentially thousands of facets that may have potentially been responsible for a given classification, the ability to see causal chains can go a long way towards making such a classification system repeatable and determining whether the reason for a given classification was legitimate or an artifact of the data collection process. This is not something that AI by itself is very good at, because it's a contextual problem. In effect, semantic graphs (and graphs in general) provide a way of making recommendations self-documenting, and hence making it easier to trust the results of AI algorithms.

One of the next major innovations that I see in graph technology is actually a mathematical change. Most graphs that exist right now can be thought of as collections of fixed vectors, entities connected by properties with fixed values. However, it is possible (especially when using property graphs) to create properties that are essentially parameterized over time (or other variables) or that may be passed as functional results from inbound edges. This is, in fact, an alternative approach to describing neural networks (both physical and artificial), and it has the effect of being able to make inferences based upon changing conditions over time.

This approach can be seen as one form of modeling everything from the likelihood of events happening given other events (Bayesian trees) to complex cost-benefit relationships. This can be facilitated even today with some work, but the real value will come with standardization, as such graphs (especially when they are closed network circuits) can in fact act as trainable neuron circuits.
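As a rough sketch of a parameterized property (the graph shape and the decay function here are invented, not any standard), an edge can carry a function rather than a fixed weight, and a traversal evaluates it for whatever time it cares about:

```python
import math

class Edge:
    """An edge whose weight is a function of time rather than a fixed value."""
    def __init__(self, source, target, weight_fn):
        self.source = source
        self.target = target
        self.weight_fn = weight_fn   # weight as a function of time

    def weight(self, t):
        return self.weight_fn(t)

# Hypothetical: the influence of "lockdown" on "consumer_spending", decaying over time.
edge = Edge("lockdown", "consumer_spending",
            weight_fn=lambda t: 0.9 * math.exp(-0.1 * t))

for t in (0, 6, 12):   # months since the event
    print(t, round(edge.weight(t), 3))
# 0 0.9, 6 0.494, 12 0.271 - the same edge yields different inferences over time.
```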

It is also likely that graphs will play a central role in Smart Contracts, "documents" that not only specify partners and conditions but also can update themselves transactionally, can trigger events and can spawn other contracts and actions. These do not specifically fall within the mandate of "artificial intelligence" per se, but the impact that smart contracts play in business and society, in general, will be transformative at the very least.

It's unlikely that this is the last chapter on graphs, either (though it is the last in the series about the State of the Graph). Graphs, ultimately, are about connections and context. How do things relate to one another? How are they connected? What do people know, and how do they know it? They underlie contracts and news, research and entertainment, history and how the future is shaped. Graphs promise a means of generating knowledge, creating new models, and even learning. They remind us that, even as forces try to push us apart, we are all ultimately only a few hops from one another in many, many ways.

I'm working on a book called Context, hopefully out by Summer 2020. Until then, stay connected.

View original post here:
Artificial Intelligence, Machine Learning and the Future of Graphs - BBN Times

Posted in Machine Learning | Comments Off on Artificial Intelligence, Machine Learning and the Future of Graphs – BBN Times