Fintech Awards 2018

…human interface, based on data input by the customer regarding means, wants and needs, and measured against product models and performance data to find appropriate investments. Automating these processes with AI offers the ability to drive down the cost of servicing a given market while potentially eliminating rogue variables caused by human fallibility. AI could thereby help to make financial services products more accessible, enabling them to be offered at a price affordable to a greater section of the public.

However, we cannot forget the potential risks. What if an insurance pricing algorithm becomes so keenly aligned to risk that a segment of higher-risk, and potentially vulnerable, customers is effectively priced out of the market? How can an algorithm be held accountable if a customer feels that a decision about their credit card application was wrong? And what if the questions about investment intentions are too focused on what customers say they want, missing the nuances of a customer’s wishes and fears that an experienced human adviser would know to pick up on and pursue?

What could regulators do to address these potential risks, and the consumer detriment that would ensue if they materialised? One option, and likely only part of any solution, is to ensure that firms are mindful of the consumer and market protection outcomes and objectives at the root of the regulations with which they must comply, and that they will be held accountable when their products and services fail to deliver those outcomes. For example, the UK’s Financial Conduct Authority (FCA) requires firms providing services to consumers to treat their customers fairly, and to be clear, fair and not misleading. The onus is then on firms to ensure that, whatever new developments they introduce, these outcomes are consistently achieved.
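To make this concrete, such outcomes can be built into the automation itself rather than bolted on afterwards. The sketch below, in Python, shows one hypothetical way a firm might embed two of the safeguards discussed here into an automated pricing flow: an affordability cap on the computed premium, and a record of the factors behind each decision so it can later be explained and reviewed. All names, figures and rules are illustrative assumptions, not any firm’s or regulator’s actual requirements.

```python
# Hypothetical sketch: embedding consumer-protection outcomes in an
# automated pricing flow. Figures and rules are illustrative only.
from dataclasses import dataclass


@dataclass
class Quote:
    premium: float
    factors: dict  # inputs the decision was based on, kept for explainability


MAX_PREMIUM = 2000.0  # assumed affordability ceiling set by the firm's policy


def price_policy(base_rate: float, risk_multiplier: float, factors: dict) -> Quote:
    """Compute a premium, capped so high-risk customers are not priced out."""
    raw = base_rate * risk_multiplier
    premium = min(raw, MAX_PREMIUM)  # pricing-threshold safeguard
    # Record the inputs and whether the cap applied, so the decision
    # can later be explained to the customer or reviewed by a human.
    audit = dict(factors, raw_premium=raw, capped=(raw > MAX_PREMIUM))
    return Quote(premium=premium, factors=audit)


quote = price_policy(500.0, 5.0, {"age_band": "70+", "postcode_risk": "high"})
# quote.premium is 2000.0 here: the uncapped figure of 2500.0 breached the
# ceiling, and quote.factors records that fact for later review.
```

The point of the sketch is not the arithmetic but the design choice: the threshold and the decision record live inside the pricing function itself, so the regulatory outcome is enforced on every quote rather than checked after the fact.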
For the insurance firm described above, this could involve paying close attention to the parameters and design of the algorithm, to ensure that, for example, a certain pricing threshold is not breached. For the credit card firm, it could mean ensuring that if a customer’s application is declined, they are told how that decision was reached and what factors it was based upon. For the robo-adviser proposition, it could involve a periodic review of investments and portfolios by a human adviser.

Practically, regulators will need to work with firms to ensure that the need to deliver such outcomes does not block development. Since 2016, the FCA has offered firms a regulatory ‘sandbox’: a ‘safe’ environment in which to develop new ideas, containing the risk of customer detriment while products are in development and helping to identify appropriate consumer protection safeguards that can be built into new products and services. The FCA is now exploring expanding the sandbox to a global stage, working with other regulators around the world to support firms that may offer their products in more than one regulatory jurisdiction. The FCA has also been meeting organisations working to expand the current boundaries and applications of the technology, at specialist events around the UK such as the FinTech North 2018 series of conferences, which raise the profile of FinTech capability in the North of England.

By working together to balance potentially competing factors such as technological development and consumer protection, regulators and the industry may be able to provide a stable platform for developing AI, while overcoming, or at least assuaging, the fears of its target audience. In 2001: A Space Odyssey, the conflict between AI and humans was resolved only by the ‘death’ of the AI. Let’s hope that in real life a way of co-existing can be found instead.
For more information, please contact: Roseyna Jahangir, Associate at Womble Bond Dickinson (UK) LLP Email: [email protected] Tel: 0207 788 2377
