
Category Archives: Machine Learning

Current and future regulatory landscape for AI and machine learning in the investment management sector – Lexology

On Tuesday this week, Mark Lewis, senior consultant in IT, fintech and outsourcing at Macfarlanes, took part in an event hosted by The Investment Association covering some of the use cases, successes and challenges faced when implementing AI and machine learning (AIML) in the investment management industry.

Mark led the conversation on the current regulatory landscape for AIML and on the future direction of travel for the regulation of AIML in the investment management sector. He identified several challenges posed by the current regulatory framework, including those caused by the lack of a standard definition of AI generally and for regulatory purposes. This creates the risk of a fragmented regulatory landscape (an expression used recently by the World Federation of Exchanges in the context of the lack of a standard taxonomy for fintech globally) as different regulators tend to use different definitions of AIML. This results in the risk of over- or under-regulating AIML and is thought to be inhibiting firms from adopting new AI systems. While the UK Financial Conduct Authority (FCA) and the Bank of England seem to have settled, at least for now, on a working definition of AI as the use of a machine to perform tasks normally requiring human intelligence, and of ML as a subset of AI in which a machine teaches itself to perform tasks without being explicitly programmed, these working definitions are too generic to be of serious practical use in approaching regulation.

The current raft of legislation and other regulation that can apply to AI systems is uncertain, vast and complex, particularly within the scope of regulated financial services. Part of the challenge is that, for now, there is very little specific regulation directly applicable to AIML (exceptions include GDPR and, for algorithmic high-frequency trading, MiFID II). The lack of understanding of new AIML systems, combined with an uncertain and complex regulatory environment, also has an impact internally within businesses as they attempt to implement these systems. Those responsible for compliance are reluctant to engage where sufficient evidence is not available on how the systems will operate and how great the compliance burden will be. Improvements in explanations from technologists may go some way to assisting in this area. Overall, this means that regulated firms are concerned about whether their current systems and governance processes for technology, digitisation and related services deployments remain fit for purpose when extended to AIML. They are seeking reassurance from their regulators that this is the case. Firms are also looking for informal, discretionary regulatory advice on specific AIML concerns, such as required disclosures to customers about the use of chatbots.

Aside from the sheer volume of regulation that could apply to AIML development and deployment, there is complexity in the sources of regulation. For example, firms must also have regard to AIML ethics and ethical standards and policies. In this context, Mark noted that, this year, the FCA and The Alan Turing Institute launched a collaboration on transparency and explainability of AI in the UK financial services sector, which will lead to the publication of ethical standards and expectations for firms deploying AIML. He also referred to the role of the UK government's Centre for Data Ethics and Innovation (CDEI) in the UK's regulatory framework for AI and, in particular, to the CDEI's AI Barometer Report (June 2020), which has clearly identified several key areas that will most likely require regulatory attention, some of them with significant urgency. These include:

In the absence of significant guidance, Mark provided a practical, 10-point governance plan to assist firms in developing and deploying AI in the current regulatory environment. He highlighted the importance of firms keeping watch on regulatory developments, including what regulators and their representatives say about AI, as this may provide an indication of direction in the absence of formal advice. He also warned that firms ignore ethics considerations at their peril, as these will be central to any regulation going forward. In particular, for the reasons given above, he advised keeping up to date with reports from the CDEI. Other topics discussed in the session included lessons learnt for best practice in the fintech industry and how AI has been used to solve business challenges in financial markets.

See the article here:
Current and future regulatory landscape for AI and machine learning in the investment management sector - Lexology

How do we know AI is ready to be in the wild? Maybe a critic is needed – ZDNet

Mischief can happen when AI is let loose in the world, just like any technology. The examples of AI gone wrong are numerous, the most vivid in recent memory being the disastrously bad performance of Amazon's facial recognition technology, Rekognition, which erroneously matched members of some ethnic groups with criminal mugshots at a disproportionate rate.

Given the risk, how can society know if a technology has been adequately refined to a level where it is safe to deploy?

"This is a really good question, and one we are actively working on, "Sergey Levine, assistant professor with the University of California at Berkeley's department of electrical engineering and computer science, told ZDNet by email this week.

Levine and colleagues have been working on an approach to machine learning where the decisions of a software program are subjected to a critique by another algorithm within the same program that acts adversarially. The approach is known as conservative Q-Learning, and it was described in a paper posted on the arXiv preprint server last month.

ZDNet reached out to Levine this week after he posted an essay on Medium describing the problem of how to safely train AI systems to make real-world decisions.

Levine has spent years at Berkeley's robotic artificial intelligence and learning lab developing AI software to direct how a robotic arm moves within carefully designed experiments; carefully designed because you don't want something to get out of control when a robotic arm can do actual, physical damage.

Robotics often relies on a form of machine learning called reinforcement learning. Reinforcement learning algorithms are trained by testing the effect of decisions and continually revising a policy of action depending on how each action affects the state of affairs.
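
To make that loop concrete, here is a minimal Python sketch of the trial-and-error cycle the article describes. The one-state environment, the two actions and the learning rate are invented for illustration; this is not code from Levine's lab, just the bare shape of a policy being revised from observed rewards.

```python
import random

# Toy illustration of the online RL loop: act, observe a reward, and nudge
# the policy toward actions that worked. The environment, actions, and
# learning rate below are made-up stand-ins.

ACTIONS = ["left", "right"]
action_values = {a: 0.0 for a in ACTIONS}   # the agent's current estimates
LEARNING_RATE = 0.1

def environment_step(action: str) -> float:
    """Hypothetical environment: 'right' is usually better, but noisy."""
    return random.gauss(1.0 if action == "right" else 0.0, 0.5)

def choose_action(epsilon: float = 0.1) -> str:
    """Epsilon-greedy policy: mostly exploit, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(action_values, key=action_values.get)

for step in range(1000):
    action = choose_action()
    reward = environment_step(action)   # trial and error in the live environment
    # Revise the value estimate toward the observed reward.
    action_values[action] += LEARNING_RATE * (reward - action_values[action])

print(action_values)  # 'right' should end up with the higher estimate
```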

But there's the danger: Do you want a self-driving car to be learning on the road, in real traffic?

In his Medium post, Levine proposes developing "offline" versions of RL. In the offline world, RL could be trained using vast amounts of data, like any conventional supervised learning AI system, to refine the system before it is ever sent out into the world to make decisions.

"An autonomous vehicle could be trained on millions of videos depicting real-world driving," he writes. "An HVAC controller could be trained using logged data from every single building in which that HVAC system was ever deployed."

To boost the value of reinforcement learning, Levine proposes moving from the strictly "online" scenario to an "offline" period of training, in which algorithms are fed masses of labeled data, more like traditional supervised machine learning.
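
As a rough illustration of the difference, the sketch below runs the same kind of value update, but driven entirely by a fixed log of past transitions rather than live interaction. The tiny hand-written log and the tabular Q-table are assumptions for the sake of the example, not Levine's setup.

```python
from collections import defaultdict

# "Offline" setup: no live environment, just a fixed log of
# (state, action, reward, next_state) transitions, as might be distilled
# from driving videos or HVAC logs. This grid-world log is invented.

logged_transitions = [
    ("s0", "right", 0.0, "s1"),
    ("s1", "right", 1.0, "s2"),
    ("s0", "left", -1.0, "s0"),
    # ... in practice, millions of logged transitions
]

GAMMA, LR = 0.9, 0.1
ACTIONS = ["left", "right"]
Q = defaultdict(float)  # Q[(state, action)] value estimates

for epoch in range(200):
    for state, action, reward, next_state in logged_transitions:
        # Standard Q-learning backup, but driven entirely by logged data.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        target = reward + GAMMA * best_next
        Q[(state, action)] += LR * (target - Q[(state, action)])
```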

Levine uses the analogy of childhood development. Children receive many more signals from the environment than just the immediate results of actions.

"In the first few years of your life, your brain processed a broad array of sights, sounds, smells, and motor commands that rival the size and diversity of the largest datasets used in machine learning," Levine writes.

That comes back to the original question: after all that offline development, how does one know when an RL program is sufficiently refined to go "online," to be used in the real world?

That's where conservative Q-learning comes in. Conservative Q-learning builds on the widely studied Q-learning, which is itself a form of reinforcement learning. The idea is to "provide theoretical guarantees on the performance of policies learned via offline RL," Levine explained to ZDNet. Those guarantees will block the RL system from carrying out bad decisions.

Imagine you had a long, long history kept in persistent memory of what actions are good actions that prevent chaos. And imagine your AI algorithm had to develop decisions that didn't violate that long collective memory.

"This seems like a promising path for us toward methods with safety and reliability guarantees in offline RL," says UC Berkeley assistant professor Sergey Levine, of the work he and colleagues are doing with "conservative Q-learning."

In a typical RL system, a value function is computed based on how much a certain choice of action will contribute to reaching a goal. That informs a policy of actions.

In the conservative version, the value function places a higher value on that past data in persistent memory about what should be done. In technical terms, everything a policy wants to do is discounted, so that there's an extra burden of proof to say that the policy has achieved its optimal state.

A struggle ensues, Levine told ZDNet, making an analogy to generative adversarial networks, or GANs, a type of machine learning.

"The value function (critic) 'fights' the policy (actor), trying to assign the actor low values, but assign the data high values." The interplay of the two functions makes the critic better and better at vetoing bad choices. "The actor tries to maximize the critic," is how Levine puts it.

Through the struggle, a consensus emerges within the program. "The result is that the actor only does those things for which the critic 'can't deny' that they are good (because there is too much data that supports the goodness of those actions)."
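
A heavily simplified, tabular sketch of that actor-critic "fight" appears below. It is not the neural-network method from the Kumar et al. paper, only an illustration of the conservative idea: the critic's ordinary Bellman update is combined with a penalty that pushes down the values of actions the actor currently prefers and pushes up the values of actions actually present in the logged data, so the actor can only settle on choices the data supports. The dataset, penalty weight and learning rates are all made up.

```python
from collections import defaultdict

ACTIONS = ["left", "right"]
Q = defaultdict(float)   # critic: value estimates per (state, action)
policy = {}              # actor: state -> preferred action
ALPHA = 0.5              # strength of the conservative penalty
GAMMA, LR = 0.9, 0.1

# Invented logged dataset of (state, action, reward, next_state) transitions.
dataset = [("s0", "right", 1.0, "s1"), ("s1", "left", 0.0, "s0")]
states_in_data = {s for s, _, _, _ in dataset}

for _ in range(500):
    # Actor step: greedily maximize the current critic.
    for s in states_in_data:
        policy[s] = max(ACTIONS, key=lambda a: Q[(s, a)])

    # Critic step: ordinary Bellman backup plus the conservative term.
    for s, a, r, s_next in dataset:
        target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += LR * (target - Q[(s, a)])
        # The "fight": push down what the actor wants, push up what the data
        # shows. The two terms cancel whenever the actor agrees with the data,
        # so only out-of-data choices get discounted.
        Q[(s, policy[s])] -= LR * ALPHA
        Q[(s, a)] += LR * ALPHA
```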

There are still some major areas that need refinement, Levine told ZDNet. The program at the moment has some hyperparameters that have to be designed by hand rather than being arrived at from the data, he noted.

"But so far this seems like a promising path for us toward methods with safety and reliability guarantees in offline RL," said Levine.

In fact, conservative Q-learning suggests there are ways to incorporate practical considerations into the design of AI from the start, rather than waiting till after such systems are built and deployed.

The fact that it is Levine carrying out this inquiry should give the approach of conservative Q-learning added significance. With a firm grounding in real-world applications of robotics, Levine and his team are in a position to validate the actor-critic in direct experiments.

Indeed, the conservative Q-Learning paper, lead-authored by Aviral Kumar of Berkeley in collaboration with Google Brain, contains numerous examples of robotics tests in which the approach showed improvements over other kinds of offline RL.

There is also a blog post authored by Google if you want to learn more about the effort.

Of course, any system that relies on amassed data offline for its development will be relying on the integrity of that data. A successful critique of the kind Levine envisions will necessarily involve broader questions about where that data comes from, and what parts of it represent good decisions.

Some aspects of what is good and bad may be a discussion society has to have that cannot be automated.

See the article here:
How do we know AI is ready to be in the wild? Maybe a critic is needed - ZDNet

Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its Instant Health Data (IHD) Software – PRNewswire

BOSTON, Sept. 15, 2020 /PRNewswire/ -- Panalgo, a leading healthcare analytics company, today announced the launch of its new Data Science module for Instant Health Data (IHD), which allows data scientists and researchers to leverage machine learning to uncover novel insights from the growing volume of healthcare data.

Panalgo's flagship IHD Analytics software streamlines the analytics process by removing complex programming from the equation and allows users to focus on what matters most: turning data into insights. IHD Analytics supports the rapid analysis of a wide range of healthcare data sources, including administrative claims, electronic health records, registry data and more. The software, which is purpose-built for healthcare, includes the most extensive library of customizable algorithms and automates documentation and reporting for transparent, easy collaboration.

Panalgo's new IHD Data Science module is fully integrated with IHD Analytics, and allows for analysis of large, complex healthcare datasets using a wide variety of machine-learning techniques. The IHD Data Science module provides an environment to easily train, validate and test models against multiple datasets.
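
Panalgo has not published the IHD Data Science interface in this announcement, so the short scikit-learn sketch below is only a generic stand-in for the train/validate/test workflow the module is described as supporting; the file name, columns and outcome variable are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical claims-based cohort extract; not Panalgo data or API.
claims = pd.read_csv("claims_cohort.csv")
X = claims.drop(columns=["readmitted_30d"])   # features from claims history
y = claims["readmitted_30d"]                  # outcome to predict

# Hold out a test set, then split the remainder into train and validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation AUC:", roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```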

"Healthcare organizations are increasingly using machine-learning techniques as part of their everyday workflow. Developing datasets and applying machine-learning methods can be quite time-consuming," said Jordan Menzin, Chief Technology Officer of Panalgo. "We created the Data Science module as a way for users to leverage IHD for all of the work necessary to apply the latest machine-learning methods, and to do so using a single system."

"Our new IHD Data Science product release is part of our mission to leverage our deep domain knowledge to build flexible, intuitive software for the healthcare industry," said Joseph Menzin, PhD, Chief Executive Officer of Panalgo. "We are excited to empower our customers to answer their most pressing questions faster, more conveniently, and with higher quality."

The IHD Data Science module provides advanced analytics to better predict patient outcomes, uncover reasons for medication non-adherence, identify diseases earlier, and much more. The results from these analyses can be used by healthcare stakeholders to improve patient care.

Research abstracts using Panalgo's IHD Data Science module are being presented at this week's International Conference on Pharmacoepidemiology and Therapeutic Risk Management, including: "Identifying Comorbidity-based Subtypes of Type 2 Diabetes: An Unsupervised Machine Learning Approach," and "Identifying Predictors of a Composite Cardiovascular Outcome Among Diabetes Patients Using Machine Learning."

About Panalgo: Panalgo, formerly BHE, provides software that streamlines healthcare data analytics by removing complex programming from the equation. Our Instant Health Data (IHD) software empowers teams to generate and share trustworthy results faster, enabling more impactful decisions. To learn more, visit us at https://www.panalgo.com. To request a demo of our IHD software, please contact us at [emailprotected].

SOURCE Panalgo

See the original post here:
Panalgo Brings the Power of Machine-Learning to the Healthcare Industry Via Its Instant Health Data (IHD) Software - PRNewswire

New Optimizely and Amazon Personalize Integration Provides More – AiThority

With experimentation and Amazon Personalize, customers can drive greater customer engagement and revenue

Optimizely, the leader in progressive delivery and experimentation, announced the launch of Optimizely for Amazon Personalize, a machine learning (ML) service from Amazon Web Services (AWS) that makes it easy for companies to create personalized recommendations for their customers at every digital touchpoint. The new integration will enable customers to use experimentation to determine the most effective machine learning algorithms to drive greater customer engagement and revenue.

Optimizely for Amazon Personalize enables software teams to A/B test and iterate on different variations of Amazon Personalize models using Optimizely's progressive delivery and experimentation platform. Once a winning model has been determined, users can roll out that model using Optimizely's feature flags without a code deployment. With real-time results and statistical confidence, customers are able to offer more touchpoints powered by Amazon Personalize, and continually monitor and optimize them to further improve those experiences.

Until now, developers needed to go through a slow and manual process to analyze each machine learning model. Now, with Optimizely for Amazon Personalize, development teams can easily segment and test different models with their customer base and get automated results and statistical reporting on the best performing models. Using the business KPIs with the new statistical reports, developers can now easily roll out the best performing model. With a faster process, users can test and learn more quickly to improve key business metrics and deliver more personalized experiences to their customers.

"Successful personalization powered by machine learning is now possible," says Byron Jones, VP of Product and Partnerships at Optimizely. "Customers often have multiple Amazon Personalize models they want to use at the same time, and Optimizely can provide the interface to make their API and algorithms come to life. Models need continual tuning and testing. Now, with Optimizely, you can test one Amazon Personalize model against another to iterate and provide optimal real-time personalization and recommendation for users."
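
As a rough sketch of the kind of test the integration enables, the Python below routes each user to one of two Amazon Personalize campaigns and records which variant served them. The campaign ARNs are hypothetical, and the hash-based bucketing is a simple stand-in for Optimizely's own assignment and statistics engine rather than a call into its SDK; only the boto3 get_recommendations call reflects a real AWS API.

```python
import hashlib
import boto3

# Route each user to one of two Personalize campaigns and log the variant.
personalize = boto3.client("personalize-runtime")

CAMPAIGNS = {  # hypothetical ARNs for two competing models
    "model_a": "arn:aws:personalize:us-east-1:123456789012:campaign/model-a",
    "model_b": "arn:aws:personalize:us-east-1:123456789012:campaign/model-b",
}

def assign_variant(user_id: str) -> str:
    """Deterministic 50/50 split so a user always sees the same model."""
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    return "model_a" if bucket == 0 else "model_b"

def recommend(user_id: str, num_results: int = 10):
    variant = assign_variant(user_id)
    response = personalize.get_recommendations(
        campaignArn=CAMPAIGNS[variant],
        userId=user_id,
        numResults=num_results,
    )
    # Log the variant alongside downstream engagement to compare the models.
    return variant, [item["itemId"] for item in response["itemList"]]
```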

Go here to see the original:
New Optimizely and Amazon Personalize Integration Provides More - AiThority

Machine Learning as a Service (MLaaS) Market Industry Trends, Size, Competitive Analysis and Forecast 2028 – The Daily Chronicle

The Global Machine Learning as a Service (MLaaS) Market is anticipated to rise at a considerable rate over the estimated period between 2016 and 2028. The Global Machine Learning as a Service (MLaaS) Market Industry Research Report is an exhaustive study and a detailed examination of the recent scenario of the Global Machine Learning as a Service (MLaaS) industry.

The market study examines the global Machine Learning as a Service (MLaaS) Market by top players/brands, region, type, and end client. The analysis likewise examines various factors that are impacting market development and discloses insights on key players, market overview, recent trends, size, and types, with regional analysis and forecasts.

Click here to get a sample of the premium report: https://www.quincemarketinsights.com/request-sample-50032?utm_source= DC/hp

The Machine Learning as a Service (MLaaS) Market analysis offers an outline with an assessment of the market sizes of different segments and countries. The Machine Learning as a Service (MLaaS) Market study is designed to incorporate both quantitative aspects and qualitative analysis of the industry with respect to countries and regions involved in the study. Furthermore, the Machine Learning as a Service (MLaaS) Market analysis also provides thorough information about drivers and restraining factors and the crucial aspects which will enunciate the future growth of the Machine Learning as a Service (MLaaS) Market.

Machine Learning as a Service (MLaaS) Market

The market analysis covers the current global Machine Learning as a Service (MLaaS) Market and outlines the key players/manufacturers: Microsoft, IBM Corporation (International Business Machines), Amazon Web Services, Google, BigML, FICO, Hewlett-Packard Enterprise Development, AT&T, Fuzzy.ai, Yottamine Analytics, Ersatz Labs, Inc., and Sift Science Inc.

The market study also concentrates on the leading industry players in the Global Machine Learning as a Service (MLaaS) Market, offering information such as product picture, company profiles, specifications, production, capacity, price, revenue, cost, and contact information. The analysis also focuses on the global Machine Learning as a Service (MLaaS) Market volume, trends, and value at the regional, global, and company level. From a global perspective, it represents the overall global Machine Learning as a Service (MLaaS) Market size by analyzing future prospects and historical data.

Get the ToC for an overview of the premium report: https://www.quincemarketinsights.com/request-toc-50032?utm_source=DC/hp

On the basis of Market Segmentation, the global Machine Learning as a Service (MLaaS) Market is segmented as By Type (Special Services and Management Services), By Organization Size (SMEs and Large Enterprises), By Application (Marketing & Advertising, Fraud Detection & Risk Analytics, Predictive Maintenance, Augmented Reality, Network Analytics, and Automated Traffic Management), By End User (BFSI, IT & Telecom, Automobile, Healthcare, Defense, Retail, Media & Entertainment, and Communication)

Further, the report provides niche insights for a decision about every possible segment, helping in the strategic decision-making process and market size estimation of the Machine Learning as a Service (MLaaS) market on a regional and global basis. Unique research designed for market size estimation and forecast is used for the identification of major companies operating in the market with related developments. The report has an exhaustive scope to cover all the possible segments, helping every stakeholder in the Machine Learning as a Service (MLaaS) market.

Speak to an analyst before buying this report: https://www.quincemarketinsights.com/enquiry-before-buying-50032?utm_source=DC/hp

This Machine Learning as a Service (MLaaS) Market Analysis Research Report Comprises Answers to the following Queries

ABOUT US:

QMI has the most comprehensive collection of market research products and services available on the web. We deliver reports from virtually all major publications and refresh our list regularly to provide you with immediate online access to the world's most extensive and up-to-date archive of professional insights into global markets, companies, goods, and patterns.

Contact:

Quince Market Insights

Office No- A109

Pune, Maharashtra 411028

Phone: APAC +91 706 672 4848 / US +1 208 405 2835 / UK +44 1444 39 0986

Email: [emailprotected]

Web: https://www.quincemarketinsights.com

Read the original post:
Machine Learning as a Service (MLaaS) Market Industry Trends, Size, Competitive Analysis and Forecast 2028 - The Daily Chronicle

How Amazon Automated Work and Put Its People to Better Use – Harvard Business Review

Executive Summary

Replacing people with AI may seem tempting, but it's also likely a mistake. Amazon's "hands off the wheel" initiative might be a model for how companies can adopt AI to automate repetitive jobs but keep employees on the payroll by transferring them to more creative roles where they can add more value to the company. Amazon's choice to eliminate jobs but retain the workers and move them into new roles allowed the company to be more nimble and find new ways to stay ahead of competitors.

At an automation conference in late 2018, a high-ranking banking official looked up from his buffet plate and stated his objective without hesitation: "I'm here," he told me, "to eliminate full-time employees." I was at the conference because, after spending months researching how Amazon automates work at its headquarters, I was eager to learn how other firms thought about this powerful technology. After one short interaction, it was clear that some have it completely wrong.

For the past decade, Amazon has been pushing to automate office work under a program now known as Hands off the Wheel. The purpose was not to eliminate jobs but to automate tasks so that the company could reassign people to build new products: to do more with the people on staff, rather than doing the same with fewer people. The strategy appears to have paid off: At a time when it's possible to start new businesses faster and cheaper than ever before, Hands off the Wheel has kept Amazon operating nimbly, propelled it ahead of its competitors, and shown that automating in order to fire can mean missing big opportunities. As companies look at how to integrate increasingly powerful AI capabilities into their businesses, they'd do well to consider this example.

The animating idea behind Hands off the Wheel originated at Amazon's South Lake Union office towers, where the company began automating work in the mid-2010s under an initiative some called Project Yoda. At the time, employees in Amazon's retail management division spent their days making deals and working out product promotions as well as determining what items to stock in its warehouses, in what quantities, and for what price. But with two decades' worth of retail data at its disposal, Amazon's leadership decided to use the force (machine learning) to handle the formulaic processes involved in keeping warehouses stocked. "When you have actions that can be predicted over and over again, you don't need people doing them," Neil Ackerman, an ex-Amazon general manager, told me.

The project began in 2012, when Amazon hired Ralf Herbrich as its director of machine learning and made the automation effort one of his launch projects. Getting the software to be good at inventory management and pricing predictions took years, Herbrich told me, because his team had to account for low-volume product orders that befuddled its data-hungry machine-learning algorithms. By 2015, the team's machine-learning predictions were good enough that Amazon's leadership placed them in employees' software tools, turning them into a kind of copilot for human workers. But at that point the humans could override the suggestions, and many did, setting back progress.

Eventually, though, automation took hold. "It took a few years to slowly roll it out, because there was training to be done," Herbrich said. If the system couldn't make its own decisions, he explained, it couldn't learn. Leadership required employees to automate a large number of tasks, though that varied across divisions. "In 2016, my goals for Hands off the Wheel were 80% of all my activity," one ex-employee told me. By 2018 Hands off the Wheel was part of business as usual. Having delivered on his project, Herbrich left the company in 2020.

The transition to Hands off the Wheel wasn't easy. The retail division employees were despondent at first, recognizing that their jobs were transforming. "It was a total change," the former employee mentioned above said. "Something that you were incentivized to do, now you're being disincentivized to do." Yet in time, many saw the logic. "When we heard that ordering was going to be automated by algorithms, on the one hand, it's like, OK, what's happening to my job?" another former employee, Elaine Kwon, told me. "On the other hand, you're also not surprised. You're like, OK, as a business this makes sense."

Although some companies might have seen an opportunity to reduce head count, Amazon assigned the employees new work. The company's retail division workers largely moved into product and program manager jobs, fast-growing roles within Amazon that typically belong to professional inventors. Product managers oversee new product development, while program managers oversee groups of projects. "People who were doing these mundane repeated tasks are now being freed up to do tasks that are about invention," Jeff Wilke, Amazon's departing CEO of Worldwide Consumer, told me. "The things that are harder for machines to do."

Had Amazon eliminated those jobs, it would have made its flagship business more profitable but most likely would have caused itself to miss its next new businesses. Instead of automating to milk a single asset, it set out to build new ones. Consider Amazon Go, the company's checkout-free convenience store. Go was founded, in part, by Dilip Kumar, an executive once in charge of the company's pricing and promotions operations. While Kumar spent two years acting as a technical adviser to CEO Jeff Bezos, Amazon's machine learning engineers began automating work in his old division, so he took a new lead role in a project aimed at eliminating the most annoying part of shopping in real life: checking out. Kumar helped dream up Go, which is now a pillar of Amazon's broader strategy.

If Amazon is any indication, businesses that reassign employees after automating their work will thrive. Those that don't risk falling behind. In shaky economic times, the need for cost-cutting could make it tempting to replace people with machines, but I'll offer a word of warning: Think twice before doing that. It's a message I wish I had shared with the banker.

Read the original here:
How Amazon Automated Work and Put Its People to Better Use - Harvard Business Review
