Financial services firms have been increasingly incorporating Artificial Intelligence (AI) into their strategies to drive operational and cost efficiencies. Firms must ensure effective governance of any use of AI. The Financial Conduct Authority (FCA) is active in this area, currently collaborating with The Alan Turing Institute to examine a potential framework for transparency in the use of AI in financial markets.
In simple terms, AI involves algorithms that can make human-like decisions, often on the basis of large volumes of data, but typically at a much faster and more efficient rate. In 2019, the FCA and the Bank of England (BoE) issued a survey to almost 300 firms, including banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms, to understand the extent to which they were using Machine Learning (ML), a sub-category of AI. While AI is a broad concept, ML involves a methodology whereby a computer programme learns to recognise patterns in data without being explicitly programmed.
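To illustrate the distinction drawn above, the following sketch (entirely hypothetical, and not drawn from the survey) shows "learning from data" in miniature: rather than hand-coding a rule, the model infers group boundaries from labelled examples and classifies new cases by similarity.

```python
# Illustrative sketch only: a minimal nearest-centroid classifier.
# The "rule" separating the groups is never written by a programmer;
# it is derived from the labelled training data -- the essence of ML.

def train_centroids(samples):
    """Compute the mean (centroid) of each labelled group of 2-D points."""
    centroids = {}
    for label, points in samples.items():
        n = len(points)
        centroids[label] = (sum(x for x, _ in points) / n,
                            sum(y for _, y in points) / n)
    return centroids

def classify(centroids, point):
    """Assign a point to the label of the nearest centroid."""
    def dist_sq(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(centroids, key=lambda lbl: dist_sq(centroids[lbl]))

# Hypothetical transaction data: (amount, hour of day), labelled by outcome.
training = {
    "normal":     [(10, 9), (25, 14), (15, 11)],
    "suspicious": [(900, 2), (1200, 3), (800, 1)],
}
model = train_centroids(training)
print(classify(model, (1000, 2)))   # nearest centroid: "suspicious"
```

A production system would use far richer features and a trained statistical model, but the principle is the same: the decision rule is a product of the data, which is why data quality and governance matter so much in what follows.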
The key findings included:
The use cases for ML identified by the FCA and BoE were largely focused around the following areas:
Anti-money laundering and countering the financing of terrorism
Financial institutions have to analyse customer data continuously from a wide range of sources in order to comply with their AML obligations. The FCA and BoE found that ML was being used at several stages within the process to:
Firms were increasingly using Chatbots, which enable customers to contact firms without having to go through human agents via call centres or customer support. Chatbots can reduce the time and resources needed to resolve consumer queries.
ML can facilitate faster identification of user intent and recommend associated content, which can help address consumers' issues. For more complex matters which cannot be addressed by the Chatbot, the system will transfer the consumer to a human agent who should be better placed to deal with the query.
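The intent-identification and hand-off flow described above can be sketched as follows. This is a hypothetical simplification: a real chatbot would use a trained NLP model to detect intent, whereas keyword matching stands in for it here, and the intents and responses are invented for illustration.

```python
# Illustrative sketch only: simplified intent routing with human escalation.
# Hypothetical intents; keyword matching stands in for a trained NLP model.

INTENT_RESPONSES = {
    "balance": "Your current balance is available in the app under 'Accounts'.",
    "card": "You can freeze or replace your card from the 'Cards' menu.",
}

def handle_query(message):
    """Return an automated answer if an intent is recognised;
    otherwise escalate the query to a human agent."""
    text = message.lower()
    for intent, response in INTENT_RESPONSES.items():
        if intent in text:
            return ("bot", response)
    # Complex or unrecognised queries are transferred to a human agent.
    return ("human", "Transferring you to an agent who can help further.")

print(handle_query("What is my card limit?")[0])           # handled by bot
print(handle_query("I want to dispute a transaction")[0])  # escalated to human
```

The explicit fallback branch is the point of the design: the automated path only handles queries it can confidently match, so unrecognised matters reach a human rather than receiving a wrong automated answer.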
Sales and trading
The FCA and BoE reported that ML use cases in sales and trading broadly fell under three categories ranging from client-facing to pricing and execution:
The majority of respondents in the insurance sector used ML to price general insurance products, including motor, marine, flight, building and contents insurance. In particular, ML applications were used for:
Insurance claims management
Of the respondents in the general insurance sector, 83% used ML for claims management in the following scenarios:
ML currently appears to play only a supporting role in the asset management sector. Systems are often used to provide suggestions to fund managers (applying equally to portfolio decision-making and execution-only trades):
All of these applications have back-up systems and human-in-the-loop safeguards. They are aimed at providing fund managers with suggestions, with a human in charge of the decision making and trade execution.
Although there is no overarching legal framework which governs the use of AI in financial services, Principle 3 of the FCA's Principles for Businesses makes clear that firms must take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems. If regulated activities conducted by firms are increasingly dependent on ML or, more broadly, AI, firms will need to ensure that there is effective governance around the use of AI, and that systems and controls adequately ensure that the use of ML and AI is not causing harm to consumers or the markets.
There are a number of risks in adopting AI: for example, algorithmic bias caused by insufficient or inaccurate data (note that the main barrier to widespread adoption of AI is the availability of data), and a lack of training of systems and AI users, either of which could lead to poor decisions being made. It is therefore imperative that firms fully understand the design of the ML, have stress-tested the technology prior to its roll-out in business areas, and have effective quality assurance and system feedback measures in place to detect and prevent poor outcomes.
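As a concrete (and entirely hypothetical) example of the kind of pre-roll-out check alluded to above, a firm might compare outcome rates across customer groups to surface possible algorithmic bias. The data, group labels and disparity threshold below are invented for illustration; real fairness testing involves many more metrics and legal considerations.

```python
# Illustrative sketch only: a basic disparity check on model decisions.
# Hypothetical groups, data and threshold -- not a compliance standard.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, max_gap=0.2):
    """Flag if the gap between best- and worst-treated groups exceeds max_gap."""
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap

# Hypothetical decision log: (customer group, approved?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)   # A: ~0.67, B: ~0.33
print(flag_disparity(rates))     # gap of ~0.33 exceeds 0.2, so flagged
```

Checks of this kind are one input into the quality assurance and feedback measures the text describes; a flagged disparity would prompt human investigation of the model and its training data, not an automatic conclusion.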
Clear records should be kept of the data used by the ML, the decision making around the use of ML and how systems are trained and tested. Ultimately, firms should be able to explain how the ML reached a particular decision.
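One way to make such record-keeping concrete is sketched below. The record structure, model name and threshold are assumptions for illustration, not a regulatory template: the point is simply that each automated decision is logged with the inputs, model version and score that produced it, so the firm can later explain how a particular decision was reached.

```python
# Illustrative sketch only: an auditable per-decision record.
# Field names, model version and threshold are hypothetical.

import datetime
import json

def record_decision(log, model_version, inputs, score, threshold):
    """Append one explainable decision record to an audit log."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,          # the data the model actually saw
        "score": score,
        "threshold": threshold,
        "outcome": "flagged" if score >= threshold else "cleared",
    }
    log.append(entry)
    return entry

audit_log = []
entry = record_decision(audit_log, "aml-screen-1.2",
                        {"amount": 9500, "country": "XX"}, 0.87, 0.75)
print(json.dumps(entry, indent=2))
```

Because every record captures the model version alongside the inputs and threshold, a decision remains explainable even after the model is retrained or replaced.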
Where firms outsource to AI service providers, they retain the regulatory risk if things go wrong. As such, the regulated firm should carry out sufficient due diligence on the service provider and understand the underlying decision-making process of the service provider's AI. Where the AI services are important in the context of the firm's regulated business, the firm should also ensure that the contract includes adequate monitoring and oversight mechanisms, as well as appropriate termination provisions.
The FCA announced in July 2019 that it is working with The Alan Turing Institute on a year-long collaboration on AI transparency, in which they will propose a high-level framework for thinking about transparency needs concerning uses of AI in financial markets. The Alan Turing Institute has already completed a project on explainable AI with the Information Commissioner in the context of data protection. A recent blog published by the FCA stated:
"the need or desire to access information about a given AI system may be motivated by a variety of reasons ... there are a diverse range of concerns that may be addressed through transparency measures ... one important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems ... transparency may [also] enable customers to understand and where appropriate challenge the basis of particular outcomes."
Read the original post:
Using Machine Learning in Financial Services and the regulatory implications - Lexology