

Category Archives: Machine Learning

Manchester Digital unveils 72% growth for digital businesses in the region – Education Technology

Almost three-quarters of Greater Manchester's digital tech businesses have experienced significant growth in the last 12 months

New figures from Manchester Digital, the independent trade body for digital and tech businesses in Greater Manchester, have revealed that 72% of businesses in the region have experienced growth in the last year, up from 54% in 2018.

Despite such prosperous results, companies are still calling out for talent, with developer roles standing out as the most in-demand for the seventh consecutive year. The other most sought-after skills in the next three years include data science (15%), UX (15%), and AI and machine learning (11%).

In the race to acquire top talent, almost 25% of Manchester vacancies advertised in the last 12 months remained unfilled, largely due to a lack of suitable candidates and inflated salary demands.

Unveiled at Manchester Digital's annual Skills Festival last week, the Annual Skills Audit, which evaluates data from 250 digital and tech companies and employees across the region, also analysed the various professional pathways into the sector.

The majority (77%) of candidates entering the sector hold a degree of some sort; however, of the respondents who possessed a degree, almost a quarter claimed it was not relevant to tech, while a further 22% reported moving into the sector from another career.

In other news: Jisc report calls for an end to pen and paper exams by 2025

On top of this, almost one in five respondents said they had self-taught or upskilled their way into the sector, a positive step towards boosting diversity in terms of both the people and experience pools entering the sector.

"It's positive to see a higher number of businesses reporting growth this year, particularly from SMEs. While the political and economic landscape is by no means settled, it seems that businesses have strategies in place to help them navigate through this uncertainty," said Katie Gallagher, managing director of Manchester Digital.

"What's particularly interesting in this year's audit are the data sets around pathways into the tech sector," added Gallagher. "While a lot of people still do report having degrees, and we'd like to see more variation here in terms of more people taking up apprenticeships, work experience placements and so on, it's interesting to see that a fair percentage are retraining, self-training or moving to the sector with a degree that's not directly related. Only by creating a talent pool from a wide and diverse range of people and backgrounds can we ensure that the sector continues to grow and thrive sustainably."

When asked what they liked about working for their current employer, employees across the region mentioned flexible work as the number one perk they value (40%). Career progression was also a crucial factor to those aged 18-21, with these respondents also identifying brand prestige as a reason to choose a particular employer.

"For the first time this year, we've expanded the Skills Audit to include opinions from employees, as well as businesses. With the battle for talent still one of the biggest challenges employers face, we're hoping that this part of the data set provides some valuable insights into why people choose employers and what they value most, and consequently helps businesses set successful recruitment and retention strategies," Gallagher concluded.

See the original post here:
Manchester Digital unveils 72% growth for digital businesses in the region - Education Technology

Posted in Machine Learning | Comments Off on Manchester Digital unveils 72% growth for digital businesses in the region – Education Technology

Overview of causal inference in machine learning – Ericsson

In a major operator's network control center, complaints are flooding in. The network is down across a large US city; calls are getting dropped and critical infrastructure is slow to respond. Pulling up the system's event history, the manager sees that new 5G towers were installed in the affected area today.

Did installing those towers cause the outage, or was it merely a coincidence? In circumstances such as these, being able to answer this question accurately is crucial for Ericsson.

Most machine learning-based data science focuses on predicting outcomes, not understanding causality. However, some of the biggest names in the field agree it's important to start incorporating causality into our AI and machine learning systems.

Yoshua Bengio, one of the world's most highly recognized AI experts, explained in a recent Wired interview: "It's a big thing to integrate [causality] into AI. Current approaches to machine learning assume that the trained AI system will be applied on the same kind of data as the training data. In real life it is often not the case."

Yann LeCun, a recent Turing Award winner, shares the same view, tweeting: "Lots of people in ML/DL [deep learning] know that causal inference is an important way to improve generalization."

Causal inference and machine learning can address one of the biggest problems facing machine learning today: that a lot of real-world data is not generated in the same way as the data that we use to train AI models. This means that machine learning models often aren't robust enough to handle changes in the input data type, and can't always generalize well. By contrast, causal inference explicitly overcomes this problem by considering what might have happened when faced with a lack of information. Ultimately, this means we can use causal inference to make our ML models more robust and generalizable.

When humans rationalize the world, we often think in terms of cause and effect: if we understand why something happened, we can change our behavior to improve future outcomes. Causal inference is a statistical tool that enables our AI and machine learning algorithms to reason in similar ways.

Let's say we're looking at data from a network of servers. We're interested in understanding how changes in our network settings affect latency, so we use causal inference to proactively choose our settings based on this knowledge.

The gold standard for inferring causal effects is randomized controlled trials (RCTs) or A/B tests. In RCTs, we can split a population of individuals into two groups: treatment and control, administering treatment to one group and nothing (or a placebo) to the other and measuring the outcome of both groups. Assuming that the treatment and control groups aren't too dissimilar, we can infer whether the treatment was effective based on the difference in outcome between the two groups.

However, we can't always run such experiments. Flooding half of our servers with lots of requests might be a great way to find out how response time is affected, but if they're mission-critical servers, we can't go around performing DDoS attacks on them. Instead, we rely on observational data, studying the differences between servers that naturally get a lot of requests and those with very few requests.

There are many ways of answering this question. One of the most popular approaches is Judea Pearl's technique for using statistics to make causal inferences. In this approach, we'd take a model or graph that includes measurable variables that can affect one another, as shown below.

To use this graph, we must assume the Causal Markov Condition. Formally, it says that, conditional on the set of all its direct causes, a node is independent of all the variables which are not direct causes or direct effects of that node. Simply put, it is the assumption that this graph captures all the real relationships between the variables.

Another popular method for inferring causes from observational data is Donald Rubin's potential outcomes framework. This method does not explicitly rely on a causal graph, but still assumes a lot about the data, for example, that there are no additional causes besides the ones we are considering.

For simplicity, our data contains three variables: a treatment x, an outcome y, and a covariate z. We want to know if having a high number of server requests affects the response time of a server.

In our example, the number of server requests is determined by the memory value: a higher memory usage means the server is less likely to get fed requests. More precisely, the probability of having a high number of requests is equal to 1 minus the memory value (i.e. P(x=1) = 1 - z, where P(x=1) is the probability that x is equal to 1). The response time of our system is determined by the equation (or hypothetical model):

y = 1x + 5z + ε

Where ε is the error, that is, the deviation from the expected value of y given the values of x and z, which depends on other factors not included in the model. Our goal is to understand the effect of x on y via observations of the memory value, number of requests, and response times of a number of servers, with no access to this equation.
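To make this setup concrete, the data-generating process can be simulated. This is a sketch under assumptions the article does not state: the memory value z is drawn uniformly from [0, 1], and the error ε is Gaussian with a small standard deviation. With access to both x and z, an ordinary least-squares fit recovers the true coefficients; the causal-inference problem below is precisely that in practice we do not get to regress on the full true model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical server data: memory value z, request load x, response time y.
z = rng.uniform(0, 1, n)           # covariate: memory value (assumed uniform)
x = rng.binomial(1, 1 - z)         # treatment: P(x=1) = 1 - z
eps = rng.normal(0, 0.1, n)        # error term (assumed Gaussian)
y = 1 * x + 5 * z + eps            # response time: y = 1x + 5z + eps

# Sanity check: regressing y on (1, x, z) recovers coefficients close to
# (0, 1, 5), since we generated the data ourselves.
X = np.column_stack([np.ones(n), x, z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))
```

The simulation only provides ground truth to check our estimates against; the rest of the article asks what we can conclude when z's role in the model is not handed to us.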

There are two possible assignments (treatment and control) and an outcome. Given a random group of subjects and a treatment, each subject i has a pair of potential outcomes: Y_i(0) and Y_i(1), the outcomes under control and treatment respectively. However, only one outcome is observed for each subject, the outcome under the actual treatment received: Y_i = x·Y_i(1) + (1 - x)·Y_i(0). The opposite potential outcome is unobserved for each subject and is therefore referred to as a counterfactual.

For each subject, the effect of treatment is defined to be Y_i(1) - Y_i(0). The average treatment effect (ATE) is defined as the average difference in outcomes between the treatment and control groups:

E[Y_i(1) - Y_i(0)]

Here, E denotes an expectation over values of Y_i(1) - Y_i(0) for each subject i, which is the average value across all subjects. In our network example, a correct estimate of the average treatment effect would lead us to 1, the coefficient in front of x in the response-time equation above.

If we try to estimate this by directly subtracting the average response time of servers with x=0 from the average response time of our hypothetical servers with x=1, we get an estimate of the ATE of 0.177. This happens because our treatment and control groups are not directly comparable. In an RCT, we know that the two groups are similar because we chose them ourselves. When we have only observational data, the other variables (such as the memory value in our case) may affect whether or not one unit is placed in the treatment or control group. We need to account for this difference in the memory value between the treatment and control groups before estimating the ATE.

One way to correct this bias is to compare individual units in the treatment and control groups with similar covariates. In other words, we want to match subjects that are equally likely to receive treatment.

The propensity score e_i for subject i is defined as:

e_i = P(x=1 | z=z_i), z_i ∈ [0,1]

or the probability that x is equal to 1 (the unit receives treatment) given that we know its covariate is equal to the value z_i. Creating matches based on the probability that a subject will receive treatment is called propensity score matching. To find the propensity score of a subject, we need to predict how likely the subject is to receive treatment based on their covariates.

The most common way to calculate propensity scores is through logistic regression, which models the propensity as e_i = 1/(1 + exp(-(b_0 + b_1 z_i))) and fits the coefficients b_0 and b_1 to predict treatment from the covariate.

Now that we have calculated propensity scores for each subject, we can do basic matching on the propensity score and calculate the ATE exactly as before. Running propensity score matching on the example network data gets us an estimate of 1.008!
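The whole procedure can be sketched end to end: simulate the hypothetical server data, compute the biased naive difference in means, fit a small logistic regression for the propensity scores, and match each treated server to the control server with the nearest score. The distributional choices (uniform z, Gaussian noise, sample size) are assumptions of this sketch, so the exact numbers will not match the article's 0.177 and 1.008, but the pattern is the same: the naive estimate is badly biased, while the matched estimate lands near the true ATE of 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated servers (assumed distributions): memory z, request load x, latency y.
z = rng.uniform(0, 1, n)
x = rng.binomial(1, 1 - z)                  # P(x=1) = 1 - z
y = 1 * x + 5 * z + rng.normal(0, 0.1, n)   # true ATE = 1

# Naive estimate: difference in group means, biased because z differs by group.
naive = y[x == 1].mean() - y[x == 0].mean()

# Propensity scores e_i = P(x=1 | z) via logistic regression (gradient descent).
X = np.column_stack([np.ones(n), z])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - x) / n            # gradient step on the log-loss
e = 1 / (1 + np.exp(-X @ w))

# Match each treated unit to the control unit with the closest propensity score.
controls = np.where(x == 0)[0]
order = controls[np.argsort(e[controls])]   # controls sorted by propensity
treated = np.where(x == 1)[0]
idx = np.searchsorted(e[order], e[treated])
lo = np.clip(idx - 1, 0, len(order) - 1)
hi = np.clip(idx, 0, len(order) - 1)
nearest = np.where(
    np.abs(e[order[lo]] - e[treated]) <= np.abs(e[order[hi]] - e[treated]),
    order[lo], order[hi])
ate = (y[treated] - y[nearest]).mean()      # matched pairs share similar z

print(f"naive: {naive:.3f}  matched ATE: {ate:.3f}")
```

Because matched pairs have nearly equal memory values, the 5z confounding term cancels within each pair and only the treatment effect remains, which is why the matched average sits close to 1.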

We were interested in understanding the causal effect of the binary treatment variable x on the outcome y. If we find that the ATE is positive, this means an increase in x results in an increase in y. Similarly, a negative ATE says that an increase in x will result in a decrease in y.

This could help us understand the root cause of an issue or build more robust machine learning models. Causal inference gives us tools to understand what it means for some variables to affect others. In the future, we could use causal inference models to address a wider scope of problems both in and out of telecommunications so that our models of the world become more intelligent.

Special thanks to the other team members of GAIA working on causality analysis: Wenting Sun, Nikita Butakov, Paul Mclachlan, Fuyu Zou, Chenhua Shi, Lule Yu and Sheyda Kiani Mehr.

If you're interested in advancing this field with us, join our worldwide team of data scientists and AI specialists at GAIA.

In this Wired article, Turing Award winner Yoshua Bengio shares why deep learning must begin to understand the "why" before it can replicate true human intelligence.

In this technical overview of causal inference in statistics, find out what's needed to evolve AI from traditional statistical analysis to causal analysis of multivariate data.

This journal essay from 1999 offers an introduction to the Causal Markov Condition.

Originally posted here:
Overview of causal inference in machine learning - Ericsson


The 17 Best AI and Machine Learning TED Talks for Practitioners – Solutions Review

The editors at Solutions Review curated this list of the best AI and machine learning TED talks for practitioners in the field.

TED Talks are influential videos from expert speakers in a variety of verticals. TED began in 1984 as a conference where Technology, Entertainment and Design converged, and today covers almost all topics, from business to technology to global issues, in more than 110 languages. TED is building a clearinghouse of free knowledge from the world's top thinkers, and their library of videos is expansive and rapidly growing.

Solutions Review has curated this list of AI and machine learning TED talks to watch if you are a practitioner in the field. Talks were selected based on relevance, ability to add business value, and individual speaker expertise. We've also curated TED talk lists for topics like data visualization and big data.

Erik Brynjolfsson is the director of the MIT Center for Digital Business and a research associate at the National Bureau of Economic Research. He asks how IT affects organizations, markets and the economy. His books include Wired for Innovation and Race Against the Machine. Brynjolfsson was among the first researchers to measure the productivity contributions of information and communication technology (ICT) and the complementary role of organizational capital and other intangibles.

In this talk, Brynjolfsson argues that machine learning and intelligence are not the end of growth; it's simply the growing pains of a radically reorganized economy. He makes a riveting case for why big innovations are ahead of us if we think of computers as our teammates. Be sure to watch the opposing viewpoint from Robert Gordon.

Jeremy Howard is the CEO of Enlitic, an advanced machine learning company in San Francisco. Previously, he was the president and chief scientist at Kaggle, a community and competition platform of over 200,000 data scientists. Howard is a faculty member at Singularity University, where he teaches data science. He is also a Young Global Leader with the World Economic Forum, and spoke at the World Economic Forum Annual Meeting 2014 on Jobs for the Machines.

Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis.

Nick Bostrom is a professor at Oxford University, where he heads the Future of Humanity Institute, a research group of mathematicians, philosophers and scientists tasked with investigating the big picture for the human condition and its future. Bostrom was honored as one of Foreign Policy's 2015 Global Thinkers. His book Superintelligence advances the ominous idea that the first ultraintelligent machine is the last invention that man need ever make.

In this talk, Nick Bostrom calls machine intelligence the last invention that humanity will ever need to make. Bostrom asks us to think hard about the world we're building right now, driven by thinking machines. Will our smart machines help to preserve humanity and our values, or will they have values of their own?

Li's work with neural networks and computer vision (with Stanford's Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations. Fei-Fei was honored as one of Foreign Policy's 2015 Global Thinkers.

This talk digs into how computers are getting smart enough to identify simple elements. Computer vision expert Fei-Fei Li describes the state of the art, including the database of 15 million photos her team built to teach a computer to understand pictures, and the key insights yet to come.

Anthony Goldbloom is the co-founder and CEO of Kaggle. Kaggle hosts machine learning competitions, where data scientists download data and upload solutions to difficult problems. Kaggle has a community of over 600,000 data scientists. In 2011 and 2012, Forbes named Anthony one of the 30 under 30 in technology; in 2013 the MIT Tech Review named him one of the top 35 innovators under the age of 35, and the University of Melbourne awarded him an Alumni of Distinction Award.

This talk by Anthony Goldbloom describes some of the current use cases for machine learning, far beyond simple tasks like assessing credit risk and sorting mail.

Tufekci is a contributing opinion writer at the New York Times, an associate professor at the School of Information and Library Science at the University of North Carolina, Chapel Hill, and a faculty associate at Harvard's Berkman Klein Center for Internet and Society. Her book, Twitter and Tear Gas, was published in 2017 by Yale University Press.

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns, and in ways we won't expect or be prepared for.

In his book The Business Romantic, Tim Leberecht invites us to rediscover romance, beauty and serendipity by designing products, experiences, and organizations that make us fall back in love with our work and our life. The book inspired the creation of the Business Romantic Society, a global collective of artists, developers, designers and researchers who share the mission of bringing beauty to business.

In this talk, Tim Leberecht makes the case for a new radical humanism in a time of artificial intelligence and machine learning. For the self-described business romantic, this means designing organizations and workplaces that celebrate authenticity instead of efficiency and questions instead of answers. Leberecht proposes four principles for building beautiful organizations.

Grady Booch is Chief Scientist for Software Engineering as well as Chief Scientist for Watson/M at IBM Research, where he leads IBM's research and development for embodied cognition. Having originated the term and the practice of object-oriented design, he is best known for his work in advancing the fields of software engineering and software architecture.

Grady Booch allays our worst (sci-fi induced) fears about superintelligent computers by explaining how well teach, not program, them to share our human values. Rather than worry about an unlikely existential threat, he urges us to consider how artificial intelligence will enhance human life.

Tom Gruber is a product designer, entrepreneur, and AI thought leader who uses technology to augment human intelligence. He was co-founder, CTO, and head of design for the team that created the Siri virtual assistant. At Apple for over 8 years, Tom led the Advanced Development Group that designed and prototyped new capabilities for products that bring intelligence to the interface.

This talk introduces the idea of humanistic AI. Gruber shares his vision for a future where AI helps us achieve superhuman performance in perception, creativity and cognitive function, from turbocharging our design skills to helping us remember everything we've ever read. The idea of an AI-powered personal memory also extends to relationships, with the machine helping us reflect on our interactions with people over time.

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.

His talk centers around the question of whether we can harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover. As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.

Dr. Pratik Shah's research creates novel intersections between engineering, medical imaging, machine learning, and medicine to improve health and diagnose and cure diseases. Research topics include: medical imaging technologies using unorthodox artificial intelligence for early disease diagnoses; novel ethical, secure and explainable artificial intelligence based digital medicines and treatments; and point-of-care medical technologies for real-world data and evidence generation to improve public health.

TED Fellow Pratik Shah is working on a clever system to do just that. Using an unorthodox AI approach, Shah has developed a technology that requires as few as 50 images to develop a working algorithm, and can even use photos taken on doctors' cell phones to provide a diagnosis. Learn more about how this new way to analyze medical information could lead to earlier detection of life-threatening illnesses and bring AI-assisted diagnosis to more health care settings worldwide.

Margaret Mitchell's research involves vision-language and grounded language generation, focusing on how to evolve artificial intelligence towards positive goals. Her work combines computer vision, natural language processing, and social media with many statistical methods and insights from cognitive science. Before Google, Mitchell was a founding member of Microsoft Research's Cognition group, focused on advancing artificial intelligence, and a researcher in Microsoft Research's Natural Language Processing group.

Margaret Mitchell helps develop computers that can communicate about what they see and understand. She tells a cautionary tale about the gaps, blind spots and biases we subconsciously encode into AI and asks us to consider what the technology we create today will mean for tomorrow.

Kriti Sharma is the Founder of AI for Good, an organization focused on building scalable technology solutions for social good. Sharma was recently named in the Forbes 30 Under 30 list for advancements in AI. She was appointed a United Nations Young Leader in 2018 and is an advisor to both the United Nations Technology Innovation Labs and the UK Government's Centre for Data Ethics and Innovation.

AI algorithms make important decisions about you all the time like how much you should pay for car insurance or whether or not you get that job interview. But what happens when these machines are built with human bias coded into their systems? Technologist Kriti Sharma explores how the lack of diversity in tech is creeping into our AI, offering three ways we can start making more ethical algorithms.

Matt Beane does field research on work involving robots to help us understand the implications of intelligent machines for the broader world of work. Beane is an Assistant Professor in the Technology Management Program at the University of California, Santa Barbara and a Research Affiliate with MIT's Institute for the Digital Economy. He received his PhD from the MIT Sloan School of Management.

The path to skill around the globe has been the same for thousands of years: train under an expert and take on small, easy tasks before progressing to riskier, harder ones. But right now, we're handling AI in a way that blocks that path, sacrificing learning in our quest for productivity, says organizational ethnographer Matt Beane. Beane shares a vision that flips the current story into one of distributed, machine-enhanced mentorship that takes full advantage of AI's amazing capabilities while enhancing our skills at the same time.

Leila Pirhaji is the founder of ReviveMed, an AI platform that can quickly and inexpensively characterize large numbers of metabolites from the blood, urine and tissues of patients. This allows for the detection of molecular mechanisms that lead to disease and the discovery of drugs that target these disease mechanisms.

Biotech entrepreneur and TED Fellow Leila Pirhaji shares her plan to build an AI-based network to characterize metabolite patterns, better understand how disease develops and discover more effective treatments.

Janelle Shane is the owner of AIweirdness.com. Her book, You Look Like a Thing and I Love You, uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining.

The danger of artificial intelligence isn't that it's going to rebel against us, but that it's going to do exactly what we ask it to do, says AI researcher Janelle Shane. Sharing the weird, sometimes alarming antics of AI algorithms as they try to solve human problems, like creating new ice cream flavors or recognizing cars on the road, Shane shows why AI doesn't yet measure up to real brains.

Sylvain Duranton is the global leader of BCG GAMMA, a unit dedicated to applying data science and advanced analytics to business. He manages a team of more than 800 data scientists and has implemented more than 50 custom AI and analytics solutions for companies across the globe.

In this talk, business technologist Sylvain Duranton advocates for a "Human plus AI" approach, using AI systems alongside humans, not instead of them, and shares the specific formula companies can adopt to successfully employ AI while keeping humans in the loop.

For more AI and machine learning TED talks, browse TED's complete topic collection.

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.

See original here:
The 17 Best AI and Machine Learning TED Talks for Practitioners - Solutions Review


Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – Lexology

Introduction

This article is the first of a five-part series dealing with what patentability of machine learning looks like in 2019. It begins the series by describing the USPTO's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) in the context of the U.S. patent system. Then, this article and the four that follow will each describe one of five cases in which Examiners' rejections under Section 101 were reversed by the PTAB under this new 2019 PEG. Each of the five cases deals with machine-learning patents, and may provide some insight into how the 2019 PEG affects the patentability of machine learning, as well as software more broadly.

Patent Eligibility Under the U.S. Patent System

The U.S. patent laws are set out in Title 35 of the United States Code (35 U.S.C.). Section 101 of Title 35 addresses, among other things, whether an invention is classified as patent-eligible subject matter. As a general rule, an invention is considered patent-eligible subject matter if it falls within one of the four enumerated categories of patentable subject matter recited in 35 U.S.C. 101 (i.e., process, machine, manufacture, or composition of matter).[1] This, on its own, is an easy hurdle to overcome. However, there are exceptions ("judicial exceptions"): (1) laws of nature; (2) natural phenomena; and (3) abstract ideas. If the subject matter of the claimed invention fits into any of these judicial exceptions, it is not patent-eligible, and a patent cannot be obtained. The machine-learning and software aspects of a claim face Section 101 issues based on the abstract-idea exception, not the other two.

Section 101 is applied by Examiners at the USPTO in determining whether patents should be issued; by district courts in determining the validity of existing patents; by the Patent Trial and Appeal Board (PTAB) in appeals from Examiner rejections, in post-grant-review (PGR) proceedings, and in covered-business-method-review (CBM) proceedings; and by the Federal Circuit on appeal. The PTAB is part of the USPTO, and may hear an appeal of an Examiner's rejection of the claims of a patent application when the claims have been rejected at least twice.

In determining whether a claim fits into the abstract idea category at the USPTO, the Examiners and the PTAB must apply the 2019 PEG, which is described in the following section of this paper. In determining whether a claim is patent-ineligible as an abstract idea in the district courts and the Federal Circuit, however, the courts apply the Alice/Mayo test; and not the 2019 PEG. The definition of abstract idea was formulated by the Alice and Mayo Supreme Court cases. These two cases have been interpreted by a number of Federal Circuit opinions, which has led to a complicated legal framework that the USPTO and the district courts must follow.[2]

The 2019 PEG

The USPTO, which governs the issuance of patents, decided that it needed a more practical, predictable, and consistent method for its over 8,500 patent examiners to apply when determining whether a claim is patent-ineligible as an abstract idea.[3] Previously, the USPTO synthesized and organized, for its examiners to compare to an applicants claims, the facts and holdings of each Federal Circuit case that deals with section 101. However, the large and still-growing number of cases, and the confusion arising from similar subject matter [being] described both as abstract and not abstract in different cases,[4] led to issues. Accordingly, the USPTO issued its 2019 Revised Patent Subject Matter Eligibility Guidance on January 7, 2019 (2019 PEG), which shifted from the case-comparison structure to a new examination structure.[5] The new examination structure, described below, is more patent-applicant friendly than the prior structure,[6] thereby having the potential to result in a higher rate of patent issuances. The 2019 PEG does not alter the federal statutory law or case law that make up the U.S. patent system.

The 2019 PEG has a structure consisting of four parts: Step 1, Step 2A Prong 1, Step 2A Prong 2, and Step 2B. Step 1 refers to the statutory categories of patent-eligible subject matter, while Step 2 refers to the judicial exceptions. In Step 1, the Examiners must determine whether the subject matter of the claim is a process, machine, manufacture, or composition of matter. If it is, the Examiner moves on to Step 2.

In Step 2A, Prong 1, the Examiners are to determine whether the claim recites a judicial exception, including laws of nature, natural phenomena, and abstract ideas. For abstract ideas, the Examiners must determine whether the claim falls into at least one of three enumerated categories: (1) mathematical concepts (mathematical relationships, mathematical formulas or equations, mathematical calculations); (2) certain methods of organizing human activity (fundamental economic principles or practices, commercial or legal interactions, managing personal behavior or relationships or interactions between people); and (3) mental processes (concepts performed in the human mind, encompassing acts people can perform using their mind, or using pen and paper). These three enumerated categories are not mere examples, but are fully encompassing. The Examiners are directed that [i]n the rare circumstance in which they believe[] a claim limitation that does not fall within the enumerated groupings of abstract ideas should nonetheless be treated as reciting an abstract idea, they are to follow a particular procedure involving providing justifications and getting approval from the Technology Center Director.

Next, if the claim limitation recites one of the enumerated categories of abstract ideas under Prong 1 of Step 2A, the Examiner is instructed to proceed to Prong 2 of Step 2A. In Step 2A, Prong 2, the Examiners are to determine if the claim is directed to the recited abstract idea. In this step, the claim does not fall within the exception, despite reciting the exception, if the exception is integrated into a practical application. The 2019 PEG provides a non-exhaustive list of examples for this, including, among others: (1) an improvement in the functioning of a computer; (2) a particular treatment for a disease or medical condition; and (3) an application of the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

Finally, even if the claim recites a judicial exception under Step 2A Prong 1, and the claim is directed to the judicial exception under Step 2A Prong 2, it might still be patent-eligible if it satisfies the requirement of Step 2B. In Step 2B, the Examiner must determine if there is an inventive concept: that the additional elements recited in the claims provide[] significantly more than the recited judicial exception. This step attempts to distinguish between whether the elements combined to the judicial exception (1) add[] a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field; or alternatively (2) simply append[] well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality. Furthermore, the 2019 PEG indicates that where an additional element was insignificant extra-solution activity, [the Examiner] should reevaluate that conclusion in Step 2B. If such reevaluation indicates that the element is unconventional . . . this finding may indicate that an inventive concept is present and that the claim is thus eligible.

In summary, the 2019 PEG provides an approach for the Examiners to apply, involving steps and prongs, to determine if a claim is patent-ineligible based on being an abstract idea. Conceptually, the 2019-PEG method begins with categorizing the type of claim involved (process, machine, etc.); proceeds to determining if an exception applies (e.g., abstract idea); then, if an exception applies, proceeds to determining if an exclusion applies (i.e., practical application or inventive concept). Interestingly, the PTAB not only applies the 2019 PEG in appeals from Examiner rejections, but also applies the 2019 PEG in its other Section-101 decisions, including CBM review and PGRs.[7] However, the 2019 PEG only applies to the Examiners and PTAB (the Examiners and the PTAB are both part of the USPTO), and does not apply to district courts or to the Federal Circuit.
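The four-part structure summarized above is effectively a decision procedure, and can be sketched as one. The sketch below is purely illustrative: the boolean fields stand in for the legal judgments an Examiner actually makes at each step, and are not part of the 2019 PEG itself.

```python
def eligible_under_2019_peg(claim: dict) -> bool:
    """Illustrative sketch of the 2019 PEG flow (Step 1, Step 2A
    Prongs 1 and 2, Step 2B). Each field is a stand-in for a legal
    determination, not a mechanical test."""
    # Step 1: is the claim a process, machine, manufacture,
    # or composition of matter?
    if not claim["statutory_category"]:
        return False
    # Step 2A, Prong 1: does the claim recite a judicial exception
    # (law of nature, natural phenomenon, or abstract idea)?
    if not claim["recites_judicial_exception"]:
        return True
    # Step 2A, Prong 2: is the exception integrated into a
    # practical application?
    if claim["practical_application"]:
        return True
    # Step 2B: do the additional elements supply an inventive concept
    # (significantly more than the exception itself)?
    return claim["inventive_concept"]
```

Note how a claim that recites an abstract idea but integrates it into a practical application short-circuits at Step 2A, Prong 2 and is eligible without the analysis ever reaching Step 2B.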

Case 1: Appeal 2018-007443[8] (Decided October 10, 2019)

This case involves the PTAB reversing the Examiner's Section 101 rejections of claims of the 14/815,940 patent application. This patent application relates to applying AI classification technologies and combinational logic to predict whether machines need to be serviced, and whether there is likely to be equipment failure in a system. The Examiner contended that the claims fit into the judicial exception of abstract idea because monitoring the operation of machines is a fundamental economic practice. The Examiner explained that the limitations in the claims that set forth the abstract idea are: a method for reading data; assessing data; presenting data; classifying data; collecting data; and tallying data. The PTAB disagreed with the Examiner. The PTAB stated:

Specifically, we do not find monitoring the operation of machines, as recited in the instant application, is a fundamental economic principle (such as hedging, insurance, or mitigating risk). Rather, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning.

As explained in the previous section of this paper, the 2019 PEG set forth three possible categories of abstract ideas: mathematical concepts, certain methods of organizing human activity, and mental processes. Here, the PTAB addressed the second of these categories. The PTAB found that the claims do not recite a fundamental economic principle (one method of organizing human activity) because the claims recite AI components like neural networks in the context of monitoring machines. Clearly, economic principles and AI components are not always mutually exclusive concepts.[9] For example, there may be situations where these algorithms are applied directly to mitigating business risks. Accordingly, the PTAB was likely focusing on the distinction between monitoring machines and mitigating risk; and not solely on the recitation of the AI components. However, the recitation of the AI components did not seem to hurt.

Then, moving on to another category of abstract ideas, the PTAB stated:

Claims 1 and 8 as recited are not practically performed in the human mind. As discussed above, the claims recite monitoring operation of machines using neural networks, logic decision trees, confidence assessments, fuzzy logic, smart agent profiling, and case-based reasoning. . . . [Also,] claim 8 recites an output device that transforms the composite prediction output into human-readable form.

. . . .

In other words, the classifying steps of claims 1 and modules of claim 8 when read in light of the Specification, recite a method and system difficult and challenging for non-experts due to their computational complexity. As such, we find that one of ordinary skill in the art would not find it practical to perform the aforementioned classifying steps recited in claim 1 and function of the modules recited in claim 8 mentally.

In the language above, the PTAB addressed the third category of abstract ideas: mental processes. The PTAB provided that the claim does not recite a mental process because the AI algorithms, based on the context in which they are applied, are computationally complex.

The PTAB also addressed the first of the three categories of abstract ideas (mathematical concepts), and found that it does not apply because the specific mathematical algorithm or formula is not explicitly recited in the claims. Requiring that a mathematical concept be explicitly recited seems to be a narrow interpretation of the 2019 PEG. The 2019 PEG does not require that the recitation be explicit, and leaves the math category open to relationships, equations, or calculations. From this, the PTAB might have meant that the claims list a mathematical concept (the AI algorithm) by its name, as a component of the process, rather than trying to claim the steps of the algorithm itself. Clearly, the names of the algorithms are explicitly recited; the steps of the AI algorithms, however, are not recited in the claims.

Notably, reciting only the name of an algorithm, rather than reciting the steps of the algorithm, seems to indicate that the claims are not directed to the algorithms (i.e., the claims have a practical application for the algorithms). It indicates that the claims include an algorithm, but that there is more going on in the claim than just the algorithm. However, instead of determining that there is a practical application of the algorithms, or an inventive concept, the PTAB determined that the claim does not even recite the mathematical concepts.

Additionally, the PTAB found that even if the claims had been classified as reciting an abstract idea, as the Examiner had contended, the claims are not directed to that abstract idea because the abstract idea is integrated into a practical application. The PTAB stated:

Appellant's claims address a problem specifically using several artificial intelligence classification technologies to monitor the operation of machines and to predict preventative maintenance needs and equipment failure.

The PTAB seems to say that because the claims solve a problem using the abstract idea, they are integrated into a practical application. The PTAB did not specify why the additional elements are sufficient to integrate the invention. The opinion actually does not even specifically mention that there are additional elements. Instead, the PTAB's conclusion might have been that, based on a totality of the circumstances, it believed that the claims are not directed to the algorithms, but actually just apply the algorithms in a meaningful way. The PTAB could have fit this reasoning into the 2019 PEG structure through one of the Step 2A, Prong 2 examples (e.g., that the claim applies additional elements in some other meaningful way), but did not expressly do so.

Conclusion

This case illustrates:

(1) the monitoring of machines was held not to be an abstract idea, in this context; (2) the recitation of AI components such as neural networks in the claims did not seem to hurt for arguing any of the three categories of abstract ideas; (3) complexity of the algorithms implemented can help with the mental processes category of abstract ideas; and (4) the PTAB might not always explicitly state how the rule for practical application applies, but seems to apply it consistently with the examples from the 2019 PEG.

The next four articles will build on this background, and will provide different examples of how the PTAB approaches reversing Examiner 101-rejections of machine-learning patents under the 2019 PEG. Stay tuned for the analysis and lessons of the next case, which includes methods for overcoming rejections based on the mental processes category of abstract ideas, on an application for a probabilistic programming compiler that performs the seemingly 101-vulnerable function of generat[ing] data-parallel inference code.

Read more:
Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 - Lexology

Posted in Machine Learning | Comments Off on Machine Learning Patentability in 2019: 5 Cases Analyzed and Lessons Learned Part 1 – Lexology

Artnome Wants to Predict the Price of a Masterpiece. The Problem? There’s Only One. – Built In

Buying a Picasso is like buying a mansion.

There's not that many of them, so it can be hard to know what a fair price should be. In real estate, if a house last sold in 2008, right before the lending crisis devastated the real estate market, basing today's price on the last sale doesn't make sense.

Paintings are also affected by market conditions and a lack of data. Kyle Waters, a data scientist at Artnome, explained to us how his Boston-area firm is addressing this dilemma, and how, in doing so, it aims to do for the art world what Zillow did for real estate.

"If only 3 percent of houses are on the market at a time, we only see the prices for those 3 percent. But what about the rest of the market?" Waters said. "We want to price the entire market and give transparency."

Artnome is building the world's largest database of paintings by blue-chip artists like Georgia O'Keeffe, including her super famous works, lesser-known items, privately held pieces, and publicly displayed artworks. Waters is tinkering with the data to create a machine learning model that predicts how much people will pay for these works at auctions. Because this model includes an artist's entire collection, and not just those works that have been publicly sold before, Artnome claims its machine learning model will be more accurate than the auction industry's previous practice of simply basing current prices on previous sales.

The company's goal is to bring transparency to the auction house industry. But Artnome's new model faces an old problem: its machine learning system performs poorly on the works that typically sell for the most, the ones that people are most interested in, since it's hard to predict the price of a one-of-a-kind masterpiece.

"With a limited dataset, it's just harder to generalize," Waters said.

We talked to Waters about how he compiled and cleaned the data behind Artnome's machine learning model for predicting auction prices, which launched in late January.

Most of the information about artists included in Artnome's model comes from the dusty basement libraries of auction houses, where they store their catalogues raisonnés, books that serve as complete records of an artist's work. Artnome is compiling and digitizing these records, representing the first time these books have ever been brought online, Waters said.

Artnome's model currently includes information from about 5,000 artists whose works have been sold over the last 15 years. Prices in the dataset range from $100 at the low end to Leonardo da Vinci's record-breaking Salvator Mundi, a painting that sold for $450.3 million in 2017, making it the most expensive work of art ever sold.

How hard was it to predict what da Vinci's 500-year-old Mundi would sell for? Before the sale, Christie's auction house estimated his portrait of Jesus Christ was worth around $100 million, less than a quarter of the final price.

"It was unbelievable," Alex Rotter, chairman of Christie's postwar and contemporary art department, told The Art Newspaper after the sale. Rotter reported the winning phone bid.

"I tried to look casual up there, but it was very nerve-wracking. All I can say is, the buyer really wanted the painting and it was very adrenaline-driven."

A piece like Salvator Mundi could come to market in 2017 and then not go up for auction again for 50 years. And because a machine learning model is only as good as the quality and quantity of the data it is trained on, market conditions and changes in availability make it hard to predict a future price for a painting.

These variables are categorized into two types of data: structured and unstructured. And cleaning all of it represents a major challenge.

Structured data includes information like which artist painted which painting, on what medium, and in which year.

Waters intentionally limited the types of structured information he included in the model to keep the system from becoming too unruly to work with. But defining paintings as solely two-dimensional works on only certain mediums proved difficult, since there are so many different types of paintings (Salvador Dalí famously painted on a cigar box, after all). Artnome's problem represents an issue of high cardinality, Waters said, since there are so many different categorical variables he could include in the machine learning system.
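A standard way to tame a high-cardinality categorical variable like medium, and one possible reading of the trade-off Waters describes, is to collapse rarely seen categories into a single bucket before encoding. The function, threshold, and labels below are illustrative, not Artnome's actual pipeline:

```python
from collections import Counter

def collapse_rare_categories(labels, min_count=2, other="other"):
    """Fold labels seen fewer than min_count times into one
    'other' bucket, cutting cardinality before encoding."""
    counts = Counter(labels)
    return [m if counts[m] >= min_count else other for m in labels]

mediums = ["oil on canvas", "oil on canvas", "watercolor",
           "watercolor", "oil on cigar box"]
print(collapse_rare_categories(mediums))
# the one-off "oil on cigar box" is folded into "other"
```

A higher `min_count` keeps the model coarse enough to avoid overfitting on one-off mediums; a lower one preserves the nuances between really specific mediums.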

"You want the model to be narrow enough so that you can figure out the nuances between really specific mediums, but you also don't want it to be so narrow that you're going to overfit," Waters said, adding that large models also become more unruly to work with.

Other structured data focuses on the artist, denoting details like when the creator was born or whether they were alive at the time of auction. Waters also built a natural language processing system that analyzes the type and frequency of the words an artist used in their paintings' titles, noting trends like Georgia O'Keeffe using the word "white" in many of her famous works.
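A minimal version of that kind of title analysis is a word-frequency count across an artist's titles. The sketch below is illustrative, not Artnome's system, and the titles are sample inputs:

```python
import re
from collections import Counter

def title_word_frequencies(titles):
    """Count how often each lowercased word appears across
    a set of painting titles."""
    words = []
    for title in titles:
        words.extend(re.findall(r"[a-z']+", title.lower()))
    return Counter(words)

titles = ["White Flower", "White Iris", "Black Iris"]
freq = title_word_frequencies(titles)
print(freq["white"])  # 2
```

Counts like these become ordinary numeric features once joined to the rest of the structured data.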

Including information on market conditions, like current stock prices or real estate data, was important from a structured perspective too.

"How popular is an artist? Are they exhibiting right now? How many people are interested in this artist? What's the state of the market?" Waters said. "Really getting those trends and quantifying those could be just as important as more data."

Another type of data included in the model is unstructured data, which, as the name might suggest, is a little less concrete than the structured items. This type of data is mined from the actual painting, and includes information like the artwork's dominant color, number of corner points and whether faces are pictured.

Waters used a pre-trained convolutional neural network to look for these variables, modeling the project after the ResNet-50 model, which famously won the ImageNet Large Scale Visual Recognition Challenge in 2015 by classifying the challenge's images with record-low error.

Including unstructured data helps quantify the complexity of an image, Waters said, giving it what he called an "edge score."
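The article doesn't spell out how the edge score is computed, but a crude version of the idea, counting how many pixels sit next to a sharp intensity jump, can be sketched in a few lines. This is an illustration of the concept, not Artnome's implementation:

```python
def edge_score(image, threshold=50):
    """Fraction of pixels whose right or down neighbor differs in
    intensity by more than `threshold` -- a rough complexity proxy."""
    h, w = len(image), len(image[0])
    edges = 0
    for y in range(h):
        for x in range(w):
            right = abs(image[y][x] - image[y][x + 1]) if x + 1 < w else 0
            down = abs(image[y][x] - image[y + 1][x]) if y + 1 < h else 0
            if max(right, down) > threshold:
                edges += 1
    return edges / (h * w)

flat = [[100] * 4 for _ in range(4)]  # uniform canvas: no edges
checker = [[0 if (x + y) % 2 == 0 else 255 for x in range(4)]
           for y in range(4)]         # maximal contrast everywhere
print(edge_score(flat), edge_score(checker))  # 0.0 0.9375
```

(The bottom-right checkerboard pixel has no right or down neighbor, which is why its score is 15/16 rather than 1.0.) A real system would compute something like this with proper gradient filters over the full image.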

An edge score helps the machine learning system quantify the subjective points of a painting that seem intuitive to humans, Waters said. An example might be Vincent Van Gogh's series of paintings of red-haired men posing in front of a blue background. When you're looking at the painting, it's not hard to see you're looking at self-portraits of Van Gogh, by Van Gogh.

Including unstructured data in Artnome's system helps the machine spot visual cues that suggest images are part of a series, which has an impact on their value, Waters said.

"Knowing that that's a self-portrait would be important for that artist," Waters said. "When you start interacting with different variables, then you can start getting into more granular details that, for some paintings by different artists, might be more important than others."

Artnome's convolutional neural network is good at analyzing paintings for data that tells a deeper story about the work. But sometimes, there are holes in the story being told.

In its current iteration, Artnome's model includes paintings both with and without frames; it doesn't specify which work falls into which category. Not identifying the frame could affect the dominant color the system discovers, Waters said, adding an error to its results.

"That could maybe skew your results and say, like, the dominant color was yellow when really the painting was a landscape and it was green," Waters said.

The model also lacks information on the condition of the painting, which, again, could impact the artwork's price. If the model can't detect a crease in the painting, it might overestimate its value. Also missing is data on an artwork's provenance, or its ownership history. Some evidence suggests that paintings that have been displayed by prominent institutions sell for more. There's also the issue of popularity. Waters hasn't found a concrete way to tell the system that people like the work of Georgia O'Keeffe more than the paintings by artist and actor James Franco.

"I'm trying to think of a way to come up with a popularity score for these very popular artists," Waters said.

An auctioneer hits the hammer to indicate a sale has been made. But the last price the bidder shouts isn't what they actually pay.

Buyers also must pay the auction house a commission, which varies between auction houses and has changed over time. Waters has had to dig up the commission rates for these outlets over the years and add them to the listed sales prices. He's also had to make sure all sales prices are listed in dollars, converting those listed in other currencies. Standardizing each sale ensures the predictions the model makes are accurate, Waters said.
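The normalization Waters describes amounts to a simple transformation of each recorded sale. A sketch, with purely illustrative commission and exchange rates:

```python
def normalized_price_usd(hammer_price, commission_rate, fx_to_usd=1.0):
    """Price actually paid, in USD: hammer price plus the buyer's
    premium, converted from the sale currency."""
    return hammer_price * (1.0 + commission_rate) * fx_to_usd

# A 2,000,000 GBP hammer price with a 20% buyer's premium,
# at an assumed rate of 1.30 USD per GBP:
print(round(normalized_price_usd(2_000_000, 0.20, fx_to_usd=1.30), 2))
# 3120000.0
```

Applying one such function uniformly is what keeps commission-inclusive and commission-free records from being mixed in the training data.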

"You'd introduce a lot of bias into the model if some things didn't have the commission, but some things did," Waters said. "It would be clearly wrong to start comparing the two."

Once Artnome's data has been gleaned and cleaned, the information is input into the machine learning system, which Waters structured as a random forest model, an algorithm that builds and merges multiple decision trees to arrive at an accurate prediction. Waters said using a random forest model keeps the system from overfitting paintings into one category, and also offers a level of explainability through its permutation score, a metric that identifies the most important aspects of a painting.

Waters doesn't weight the data he puts into the model. Instead, he lets the machine learning system tell him what's important, with the model weighting factors like today's S&P prices more heavily than the dominant color of a work.

"That's kind of one way to get the feature importance, for kind of a black-box estimator," Waters said.
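Permutation importance itself is straightforward to state: shuffle one feature column, re-score the model, and see how much the error grows. A self-contained toy version (not Artnome's random forest; the stand-in model and data are invented for illustration) might look like:

```python
import random

def mse(model, X, y):
    """Mean squared error of a prediction function over a dataset."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=10, seed=0):
    """Average increase in MSE after shuffling one feature column:
    the bigger the increase, the more the model relies on it."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    increases = []
    for _ in range(n_repeats):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        increases.append(mse(model, X_perm, y) - baseline)
    return sum(increases) / n_repeats

# Toy estimator: "price" depends only on feature 0,
# so shuffling feature 1 should change nothing.
model = lambda row: 3 * row[0]
X = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
y = [3.0, 6.0, 9.0, 12.0]
print(permutation_importance(model, X, y, 0) > 0)   # True
print(permutation_importance(model, X, y, 1) == 0)  # True
```

This is the sense in which a permutation score opens up a black-box estimator: it ranks features by how much the model's accuracy depends on them, without inspecting the model's internals.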

Although Artnome has been approached by private collectors, gallery owners and startups in the art tech world interested in its machine learning system, Waters said it's important this dataset and model remain open to the public.

His aim is for Artnome's machine learning model to eventually function like Zillow's Zestimate, which estimates real estate prices for homes on and off the market, and act as a general starting point for those interested in finding out the price of an artwork.

"We might not catch a specific genre, or era, or point in the art history movement," Waters said. "I don't think it'll ever be perfect. But when it gets to the point where people see it as a respectable starting point, then that's when I'll be really satisfied."

See the original post here:
Artnome Wants to Predict the Price of a Masterpiece. The Problem? There's Only One. - Built In

Twitter says AI tweet recommendations helped it add millions of users – The Verge

Twitter had 152 million daily users during the final months of 2019, and it says the latest spike was thanks in part to improved machine learning models that put more relevant tweets in people's timelines and notifications. The figure was released in Twitter's Q4 2019 earnings report this morning.

Daily users grew from 145 million the prior quarter and 126 million during the same period a year earlier. Twitter says this was primarily driven by product improvements, such as the increased relevance of what people are seeing in their main timeline and their notifications.

By default, Twitter shows users an algorithmic timeline that highlights what it thinks they'll be most interested in; for users following few accounts, it also surfaces likes and replies by the people they follow, giving them more to scroll through. Twitter's notifications will also highlight tweets that are being liked by people you follow, even if you missed that tweet on your timeline.

Twitter has continually been trying to reverse concerns about its user growth. The service's monthly user count shrank for a full year going into 2019, leading it to stop reporting that figure altogether. Instead, it now shares daily users, a metric that looks much rosier.

Compared to many of its peers, though, Twitter still has an enormous amount of room to grow. Snapchat, for comparison, reported 218 million daily users during its final quarter of 2019. Facebook reported 1.66 billion daily users over the same time period.

Twitter also announced a revenue milestone this quarter: it brought in more than $1 billion in quarterly revenue for the first time. The total was $1.01 billion, just over the milestone, up from $909 million in the same quarter the prior year.

Last quarter, Twitter said that its ad revenue took a hit due to bugs that limited its ability to target ads and share advertising data with partners. At the time, the company said it had taken steps to remediate the issue, but it didn't say whether it was resolved. In this quarter's update, Twitter says it has since shipped remediations to those issues.

View post:
Twitter says AI tweet recommendations helped it add millions of users - The Verge
