Legal issues with robo-advice, bots, machine learning and algorithms in financial services
April 7th, 2017 / Innovate Finance / Featured Insights
By Bird & Bird
In our increasingly interconnected, data-rich, digital world, the ability of AI and machine-learning algorithms to provide personalised services is expanding. For financial services, this opens up new opportunities for efficient, customised customer service and widens access to financial advice. However, automating decision-making, IP creation and the processing of personal data presents several legal issues.
Areas where automated decision-making might give rise to legal disputes include incorrect advice leading to financial harm and potentially mis-selling, discrimination in areas such as credit-risk assessment or product pricing, and misuse of personal data in breach of data protection laws. If an AI can draw on internet sources, how is the veracity or validity of that information tested, and how is infringement of third-party intellectual property avoided?
Current technology is quite a way off a robo-adviser “thinking” for itself. Instead, the advice it provides is only as good as the algorithms it runs on and the data it receives. Errors could arise at different points in the supply chain, but if a “bad” decision is made, is it the original developer, the operator of the service or the end consumer who bears the risk? This is one of the areas being considered by the parliamentary inquiry into the use of algorithms in decision-making.
Removing the human adviser increases the possibility of the customer misunderstanding the advice given, or challenging the automated decision made. Appeals processes will need to be able to access evidence of, and make sense of, the decision-making process undertaken by the algorithm or bot. What counts as negligence by a financial institution when dealing with consumers via bots, robo-advisers and other algorithms? What element of human quality assurance will be required?
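One practical way for operators to support such an appeals process is to capture an auditable record of each automated decision at the point it is made: the inputs the algorithm actually saw, the model version that produced the advice, and the outcome. The sketch below illustrates the idea in Python; the field names and structure are our own illustrative assumptions, not a prescribed regulatory format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative sketch only: field names are assumptions, not a regulatory standard.
@dataclass
class AdviceAuditRecord:
    customer_ref: str     # pseudonymised customer identifier
    model_version: str    # version of the algorithm that produced the advice
    inputs: dict          # the data the algorithm actually received
    recommendation: str   # the advice or decision given
    timestamp: str        # when the decision was made (UTC)

def record_decision(customer_ref, model_version, inputs, recommendation):
    """Capture a snapshot of one automated decision as a JSON string,
    so a later appeal can reconstruct what was decided, on what data,
    and by which version of the algorithm."""
    record = AdviceAuditRecord(
        customer_ref=customer_ref,
        model_version=model_version,
        inputs=inputs,
        recommendation=recommendation,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record), sort_keys=True)

# Example: log one robo-advice recommendation
entry = record_decision(
    "cust-0001", "risk-model-2.3",
    {"age": 42, "risk_appetite": "low"},
    "recommend: cautious portfolio",
)
```

A store of such records, written at decision time rather than reconstructed after a complaint, gives both the firm and any appeals body the evidence the paragraph above contemplates.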
Understandably, whilst the authorities and regulators are keen to encourage FinTech innovations that bring competition into retail banking and better inform consumers of their options, the new risks for consumers, for financial institutions and, ultimately, for financial stability are a concern. Customers need to understand the risk profile of dealing with emerging models, and FinTech organisations need to understand how their models fit into regulatory regimes, particularly whether their model is providing advice or merely guidance.
The FCA has introduced its Regulatory Sandbox and Advice Unit as part of Project Innovate as a way to navigate some of the risks identified above. These initiatives support the development of automated advice tools and allow them to be tested in a controlled environment. If successful, the innovations can then be brought within the full regulatory regime.
Communication with consumers and regulators, clear consents and legal terms, and sensible control, security and oversight systems will all play an important part in determining the extent to which services built on AI and machine learning become sustainable and embedded into mainstream financial services.