Model risk management in the age of AI
Snapshot summary from RiskMinds International 2024
Learn how model risk management and related regulations are evolving in banks and other financial institutions.
The approach banks take to running their model risk management programmes, including the application of Three Lines of Defense (3LOD) to manage model risk in today’s new AI paradigm, was a key topic of discussion among risk officers at RiskMinds International 2024.
Over the past decade, banks have had to adapt to a wider set of risks beyond credit risk, operational risk, market risk, and financial crime. This has made it harder for risk officers to determine which risk models are fit for purpose and able to protect banks' interests on the global stage.
In terms of the arc of model risk's evolution, for the last three decades or more risk models were inherently predictive, regression-type models. Even when machine learning techniques began to be used, it was still with a view to developing forecasting capabilities to manage risk. Much of the regulation focused on this use, to determine whether pricing models were fit for purpose and capable of producing forecasts of the required quality.
However, such models have inherent limitations. Gaussian pricing models fail to capture the fat tails of asset return distributions, while Gaussian copula models are static and do not account for the dynamic nature of financial markets. During the 2007-08 global financial crisis, being able to risk manage fat tails was paramount, yet users were unaware of these limitations.
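To make the fat-tail point concrete, here is a minimal, illustrative sketch (not from the conference discussion) comparing the probability of a four-sigma daily loss under a Gaussian model with the same probability under a fat-tailed Student-t distribution with three degrees of freedom, both scaled to unit variance:

```python
import numpy as np
from scipy import stats

# Probability of a daily return worse than -4 standard deviations
z = -4.0
p_gauss = stats.norm.cdf(z)

# Student-t with 3 degrees of freedom, rescaled to unit variance
# (raw t_3 has variance df / (df - 2) = 3)
df = 3
t_scale = np.sqrt((df - 2) / df)
p_t = stats.t.cdf(z / t_scale, df)

print(f"P(return < -4 sigma), Gaussian : {p_gauss:.2e}")
print(f"P(return < -4 sigma), Student-t: {p_t:.2e}")
print(f"Underestimation factor: {p_t / p_gauss:.0f}x")
```

Under the Gaussian model the four-sigma loss looks like roughly a once-in-a-century event; the fat-tailed model makes it on the order of a hundred times more likely, which is exactly the blind spot that mattered in 2007-08.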
On April 4th, 2011, the Federal Reserve and the Office of the Comptroller of the Currency formally introduced the "Supervisory Guidance on Model Risk Management", or SR 11-7, to improve the quality of models, ensure they were adequately tested and validated, and improve transparency in their use.
In this summary:
Explore what we can learn from SR 11-7's success
Understand the risks of AI washing and model misrepresentation
Discover how the industry approaches 'artificial brain' monitoring
Chart the future of AI models with Ratul Ahmed, Group Head of Model Validation at Commerzbank
Explore neural networks with Cécile Auger, Head of Model Risk Management at Murex
This regulation has proven to be remarkably robust. But will it stand up to the new AI paradigm, as global financial institutions increasingly adopt AI tools to handle the complexities of today’s risk landscape?
We are now at a pivotal point where models are becoming omnipresent
“We are starting to see models pop up in areas that have nothing to do with traditional forecasting areas, becoming part of utility tools and utility-enhancing tools, such as the use of AI to help with model risk. So the question is, how do you manage those tools? I think that's now the big change that we're facing in the industry.”
Currently, there is no globally accepted definition of AI for financial regulatory purposes, although there is alignment around the OECD definition, which the OECD updated in November 2023. It states that an AI system is: "A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The latest example of this evolution is SS1/23 in the UK, which goes somewhat further than SR 11-7 in certain respects. The SS1/23 model risk update, introduced by the Bank of England, seeks to enhance model risk management frameworks for UK banks. It broadens the definition of a "model" to include deterministic quantitative methods and emphasises governance, independent validation, and comprehensive documentation.
“The truth of the matter is that the US has close to 15 years of an edge in this space over Europe,” commented the CRO of a US bank group. In their view, model risk has been a bit of a regulatory blind spot for the region. “The regulatory push, as long as it's done in a balanced manner, is very important.”
The longevity of SR 11-7 lies in the fact that the Fed wrote it as a principles-based rather than a prescriptive document. Key to this was avoiding a narrow definition of models in favour of a generic one. This meant the regulation required forecasts to be properly validated and risk managed, especially where no econometric data was used and various assumptions were being made.
There is a perception in the financial industry that a model is something that requires data and an econometric methodology applied to that data. That completely overlooks structural models, where you are making assumptions about the world even if you do not necessarily have the data to calibrate parameters.
AI models are relevant in this context, as they may be used to enhance risk frameworks even when data isn't available. Take climate risk, for example. There is no historical data set that explains climate change. No one knows what's going to happen. Assumptions have to be made.
To quote the famous German philosopher Hegel:
We learn from history that we do not learn from history.
Watch this interview with Ratul Ahmed, Group Head of Model Validation at Commerzbank, from RiskMinds International 2024, covering:
the biggest risks of adopting AI technology
AI's impact on model risk, data quality, fairness, and regulatory compliance
methodologies for assessing AI models
the importance of AI literacy
how technology has transformed risk management practices over the years
During the discussion on "Efficient model risk audit and validation in new paradigms", it was noted that AI models are "still at their teenage stage" and that, while their explainability remains difficult, a solution will be found.
A recent white paper authored by the Financial Stability Institute points out that AI use may heighten some existing risks, such as model risk due to a lack of explainability of AI models.
“One of the difficulties we have with all of these different technologies is to identify the right AI models to use. Many people are getting the source code or putting models in production directly with the IT guys without identifying them,” remarked the head of model risk audit at a large European bank.
Banks have to consider how best to avoid blind spots in AI adoption within the Three Lines of Defense (3LOD) model, in particular the third line of defense: internal audit. Evaluating the processes deployed at the first and second lines, and identifying deficiencies to detect any potential flaws in model risk management practices, are key elements of internal audit. Risks could therefore arise if, as remarked above, AI models cannot be identified and verified.
The SEC has already begun to charge firms that make false claims over how they use artificial intelligence. Toronto-based Delphia was charged for claiming to use AI and ML to analyse clients' spending and social media data for investment decisions, which was not the case. Global Predictions was another firm charged by the SEC for misrepresentation.
Speakers emphasised the need for transparency and accountability in the use of AI models.
We’ve heard of greenwashing. Now we have ‘AI washing’. We must make sure that even if it's inconvenient, we are disclosing to clients when we are using AI and show it to them
Model validation is an important aspect of model risk management, but of equal importance is how the model is deployed, what the controls are, and what monitoring looks like.
In an AI context, this takes on greater importance. Many AI models are being used for productivity enhancement and non-forecasting purposes. As such, the emphasis should not be on conceptual soundness and validation, as it is with traditional predictive models, but on monitoring performance. This will require the creation of model categories unique to AI and its novel uses, and therefore updates or enhancements to banks' existing model risk frameworks.
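As a purely illustrative sketch of what ongoing performance monitoring might look like in practice (the function, data, and thresholds below are hypothetical, not drawn from the conference), a team could track a standard drift metric such as the population stability index (PSI) between a model's validation-time outputs and its live outputs:

```python
import numpy as np

def population_stability_index(baseline, live, n_bins=10):
    """Compare live model outputs against a baseline distribution.

    PSI is a common drift metric; roughly <0.1 is stable, 0.1-0.25
    warrants watching, >0.25 warrants investigation (rules of thumb,
    not regulatory thresholds).
    """
    # Bin edges taken from the baseline distribution (deciles)
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values

    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)

    # Floor the proportions to avoid division by zero / log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)

    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Example: live scores have drifted upwards since validation
rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)
live_scores = rng.normal(0.3, 1.1, 2_000)
psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f} -> "
      f"{'investigate' if psi > 0.25 else 'watch' if psi > 0.1 else 'stable'}")
```

The point of a check like this is that it says nothing about the model's internal assumptions; it simply flags when behaviour in production departs from behaviour at validation, which is exactly the monitoring-over-validation shift described above.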
During RiskMinds International 2024, 29% of delegates confirmed they were using AI (deep learning) for customer interaction: i.e. chatbots, robot advisors, profile-based offer/pricing.
A lower share (17%) said they were using AI for compliance risk: AML, sanctions, market abuse, and internal/external fraud detection models.
In a conversation at RiskMinds International 2024, Cécile Auger, Head of Model Risk Management at Murex, explores:
the impact of neural networks on calculations and infrastructure
the challenges of managing complex portfolios of derivatives
the unexpected advantages of implementing neural networks
Generative AI models have exploded in popularity over the last two years. These are not, however, predictive tools but rather content generation tools built on large language models. Such models are, essentially, artificial brains. Figuring out how they generate content is fraught with complexity; to use a human analogy, it would be like performing brain surgery.
In the traditional world of model risk management you are probably going to fail because it's almost like you're trying to figure out how an artificial brain is handling this or that problem. It's a new paradigm. And we need to figure out how to address it
Monitoring programmes are expected to become increasingly important for risk management and internal audit teams. AI models will need to be treated like staff and subjected to regular performance reviews.
Risk executives expect this to lead to a wider set of monitoring tools, output control tools, and checks that models stay within conceptual boundaries, rather than attempts to understand and validate the assumptions being made. Such frameworks will need continual access to the data being used by generative AI models, to ensure monitoring is as safe and accurate as possible.
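As one hedged illustration of an "output control" tool (a hypothetical sketch; the patterns and limits below are invented for the example, not taken from any bank's framework), every generative response could be screened against simple conceptual boundaries before release:

```python
import re

# Illustrative boundaries only: real controls would be far richer
BLOCKED_PATTERNS = [r"\bguaranteed returns?\b", r"\binsider\b"]
MAX_RESPONSE_CHARS = 2_000

def within_conceptual_boundaries(response: str) -> tuple[bool, str]:
    """Screen a generative model's output before it reaches a client."""
    if len(response) > MAX_RESPONSE_CHARS:
        return False, "response exceeds length boundary"
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return False, f"response matches blocked pattern: {pattern}"
    return True, "ok"

ok, reason = within_conceptual_boundaries("Our fund offers guaranteed returns.")
print(ok, reason)  # False, blocked pattern
```

The design choice here mirrors the speakers' point: rather than validating how the artificial brain reached its answer, the control constrains what it is allowed to say.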
With respect to model validation, banks will need to assess whether using a particular AI model offers more benefits than risks before it reaches the internal audit level.
I'm here to make sure that the process has been followed, that the tests have been completed fully, that the information is correct and not to see whether we should or not use the model. This is a board governance decision
A three-step plan was outlined as a suggested option for banks to consider in order to build an efficient methodology for model risk audit and validation.
1. Optimise planning between the three lines of defense, to avoid separate reviews across different lines happening simultaneously.
2. Widen the scope of the mission to include the ratings system. Risk regulations consistently expect the model validation function to have a view on the ratings system being used, and covering it will increase oversight, especially when several models are used within a single IT system.
3. Develop reliance between the lines. This is possible only once the two steps above, on improving planning and model coverage, have been incorporated. When a review is performed at the first, second, or third line of defense, that reliance helps teams focus on what matters, with a greater level of efficiency.
Regardless of how risk models continue to evolve, with or without the influence of generative AI tools, the onus will still be on humans to make the right judgment calls to protect banks' interests. Risk executives are broadly enthusiastic about the ways AI could improve model risk management, but, as was mentioned several times during RiskMinds International 2024, it will also require regulatory pragmatism and a strong internal risk culture.