Polina Baranova
The adoption of innovative processes in a highly regulated industry needs to take the regulatory side of the equation into account. For this reason, Polina Baranova, Quantitative Research Associate at JP Morgan, argues that explainable artificial intelligence is needed if artificial intelligence and machine learning are to take off in quantitative finance.
John Hull, Maple Financial Professor of Derivatives & Risk Management, Joseph L. Rotman School of Management, University of Toronto: "data science will affect pretty much all aspects of finance"
Machine learning (ML) and artificial intelligence (AI) techniques bring insights where the assumptions of classical theory do not hold or classical methods are computationally burdensome, and they are particularly powerful in automating decision-making. In spite of these inspiring results, one challenge remains unresolved: interpretability.
According to a recent McKinsey survey[1], only 26% of respondents from the finance industry have adopted AI in their products so far, making finance the second-to-last industry in terms of AI adoption. One obstacle to broader AI uptake is the 'lack of interpretability' of advanced models and the difficulty of explaining their outcomes. Because of these issues, traditional methods or simpler ML techniques are often preferred in quantitative finance. Interpretability becomes crucial once a model or technique is ready to be embedded into financial products, since it provides the evidence regulators and risk-takers need in order to rely on the model.
Therefore, the next big thing in quantitative finance is the development of explainable artificial intelligence (XAI) tools to enable broader adoption of AI. XAI focuses not only on building interpretations of existing models, but also on developing more interpretable models without sacrificing accuracy, aiming to resolve the long-standing trade-off between interpretability and precision.
XAI Concept, DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence
The problem of interpretability was raised as early as the 1970s and has recently received renewed attention. One example is the development of model-agnostic approaches such as LIME and SHAP, presented at the NIPS conference in 2016 and 2017 respectively. The authors proposed these feature-based approaches as a generic way to improve the interpretability of predictions. The Defense Advanced Research Projects Agency (DARPA) runs the XAI project, which aims to develop explainable models, using a combination of deep and non-deep learning, that can explain themselves. An important part of explainable models is the so-called explanation interface, which interacts directly with humans and collects performance metrics.
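The core idea behind the LIME approach mentioned above can be sketched in a few lines: perturb the input around the point of interest, weight the perturbed samples by their proximity to that point, and fit a weighted linear model whose coefficients serve as the local explanation. The sketch below, in standard-library Python, uses a hypothetical `black_box` function as a stand-in for a real model; all names and parameters are illustrative assumptions, not part of the actual LIME library.

```python
import math
import random

# Hypothetical black-box model standing in for a complex classifier:
# it fires when a linear score of two features crosses a threshold.
def black_box(x1, x2):
    return 1.0 if 2.0 * x1 + x2 > 1.0 else 0.0

def lime_sketch(x, model, n_samples=2000, width=0.5, seed=0):
    """LIME-style local surrogate: perturb x, weight each sample by an
    RBF proximity kernel, and fit w0 + w1*z1 + w2*z2 by weighted least
    squares. The returned coefficients are the local explanation."""
    rng = random.Random(seed)
    k = len(x) + 1                          # intercept + one weight per feature
    rows, ys, ws = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0.0, 1.0) for xi in x]
        d2 = sum((zi - xi) ** 2 for zi, xi in zip(z, x))
        ws.append(math.exp(-d2 / (2.0 * width ** 2)))   # proximity weight
        rows.append([1.0] + z)
        ys.append(model(*z))
    # Weighted normal equations: (A^T W A) w = A^T W y.
    n = n_samples
    ata = [[sum(ws[i] * rows[i][a] * rows[i][b] for i in range(n))
            for b in range(k)] for a in range(k)]
    aty = [sum(ws[i] * rows[i][a] * ys[i] for i in range(n))
           for a in range(k)]
    # Solve the small k x k system by Gaussian elimination with pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            f = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    w = [0.0] * k
    for r in range(k - 1, -1, -1):
        w[r] = (aty[r] - sum(ata[r][c] * w[c] for c in range(r + 1, k))) / ata[r][r]
    return w  # [intercept, weight on feature 1, weight on feature 2]

coefs = lime_sketch([0.8, 0.1], black_box)
# Locally, feature 1 should carry roughly twice the weight of feature 2.
```

The point of the surrogate is that it need only be faithful near the chosen input; globally the black box may be arbitrarily complex.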
One of several possible applications for such methods in quantitative finance could be the automation of inventory management. Due to the high dimensionality of this problem, only sophisticated models are likely to achieve sufficient accuracy to automate decisions on borrowing, lending, and reinvesting. At the same time, trading desks should be comfortable delegating such decisions, or, in other words, able to 'rely on the model'. By applying SHAP, for instance, inventory management functions can obtain transparency on model decisions and features: which external and/or internal factors contributed to them, and how each feature influenced the decision at this particular point in time for a given instrument (e.g. whether borrow rates, inventory levels or client activity patterns were the main drivers). In addition, such models often reveal deep insights into the data, which may entirely change the decision-making process. Similar considerations could be applied by allocators to analyse the performance of fund managers, to size investments, and so forth.
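The SHAP attributions described above are grounded in Shapley values from cooperative game theory: a feature's contribution is its marginal effect on the model output, averaged over all orderings in which features could be 'revealed'. A minimal exact computation for a toy inventory decision is sketched below; `lend_score`, the feature names, and all numbers are hypothetical illustrations invented for this sketch (real SHAP implementations approximate this average efficiently rather than enumerating orderings).

```python
from itertools import permutations

FEATURES = ["borrow_rate", "inventory_level", "client_activity"]

# Hypothetical scoring model: a higher score favours lending the instrument.
def lend_score(x):
    return (-40.0 * x["borrow_rate"]         # expensive borrows discourage lending
            + 2.0 * x["inventory_level"]     # spare inventory encourages it
            + 1.5 * x["client_activity"]
            + 0.5 * x["inventory_level"] * x["client_activity"])  # interaction

def model_on_subset(x, baseline, present):
    """Evaluate the model with features outside `present` held at baseline."""
    z = {f: (x[f] if f in present else baseline[f]) for f in FEATURES}
    return lend_score(z)

def shapley_values(x, baseline):
    """Exact Shapley values: each feature's marginal contribution,
    averaged over every order in which features are revealed."""
    phi = {f: 0.0 for f in FEATURES}
    orders = list(permutations(FEATURES))
    for order in orders:
        present = set()
        for f in order:
            before = model_on_subset(x, baseline, present)
            present.add(f)
            after = model_on_subset(x, baseline, present)
            phi[f] += (after - before) / len(orders)
    return phi

x = {"borrow_rate": 0.02, "inventory_level": 5.0, "client_activity": 3.0}
base = {"borrow_rate": 0.01, "inventory_level": 2.0, "client_activity": 1.0}
phi = shapley_values(x, base)
# Efficiency property: the attributions sum to f(x) - f(baseline).
assert abs(sum(phi.values()) - (lend_score(x) - lend_score(base))) < 1e-9
```

A desk reading `phi` sees exactly how much each driver moved the score away from the baseline instrument, which is the kind of evidence that makes delegating the decision defensible.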
XAI is still being formalised and its research consolidated; however, it is already being considered in various industries, including telecommunications and defence[2]. There have also been advances in using and explaining AI in credit ratings[3]. Quantitative finance should adopt and invest in XAI to unlock advanced AI and ML techniques, improving services and decision-making practices.
The specificity of computational intelligence (CI) lies in the fact that it comprises a set of "nature-inspired" computational techniques applicable to complex real-world problems for which traditional mathematical modelling does not work, either because they are mathematically too complex or because they are stochastic in nature. CI methods attempt to mimic the human way of reasoning, i.e. using inexact and incomplete information, with the declared aim of generating controlled and adaptive actions.
[1] Michael Chui, Sankalp Malhotra, 2018, "AI adoption advances, but foundational barriers remain", McKinsey & Company. Available at: https://www.mckinsey.com/featured-insights/artificial-intelligence (Accessed: 1 April 2019)
[2] D. Gunning, "Explainable artificial intelligence (XAI)", Defense Advanced Research Projects Agency. Available at: http://www.darpa.mil/program/explainable-artificial-intelligence (Accessed: 1 April 2019)
[3] Equifax, 2018, "Equifax Launches NeuroDecision® Technology". Available at: https://investor.equifax.com/news-and-events/news/2018/03-26-2018-143044126 (Accessed: 1 April 2019)
Opinions and estimates constitute the author’s judgement of the Article above as of the date of this Material, are provided for your informational purposes only (and are not an independent verification of the Article) and are subject to change without notice. Neither J.P. Morgan Securities plc nor its affiliates and / or subsidiaries (collectively J.P. Morgan) warrant its completeness or accuracy. This Material is not the product of J.P. Morgan’s Research Department and therefore, has not been prepared in accordance with legal requirements to promote the independence of research, including but not limited to, the prohibition on the dealing ahead of the dissemination of investment research. It is not a research report and is not intended as such. This Material is not intended as research, a recommendation, advice, offer or solicitation for the purchase or sale of any financial product or service, or to be used in any way for evaluating the merits of participating in any transaction. Please consult your own advisors regarding legal, tax, accounting or any other aspects including suitability implications for your particular circumstances. J.P. Morgan disclaims any responsibility or liability whatsoever for the quality, accuracy or completeness of the information herein, and for any reliance on, or use of this material in any way. This Material is provided on a confidential basis and may not be reproduced, redistributed or disseminated, in whole or in part, without the prior written consent of J.P. Morgan. Any unauthorized use is strictly prohibited. Important disclosures at: www.jpmorgan.com/disclosures. © 2018 JPMorgan Chase & Co. All rights reserved.