Generative AI: A winning strategy for banks’ risk and compliance functions
Andreas Kremer, Angela Luget and Anke Raufuss, McKinsey & Company
Generative AI – with its ability to create content including audio, code, images, text, simulations, and videos – has been described as the biggest transformational shift since digitisation.
As a result, the race is on. More than 80% of AI research is now focused on gen AI, and in the first quarter of 2023 alone, venture capital investment in the technology more than doubled the total for all of 2022 – itself a record year.
Generative AI is a quantum leap. From modelling and automating manual tasks to posing new challenges, gen AI, through constant learning and self-improvement, is already changing how institutions manage risks and stay compliant with regulations.
While it is imperative for risk and compliance functions to manage risks that come with this new technology, gen AI can also drive significant gains for risk management in terms of productivity, efficiency, and effectiveness. As a result, organisations are understandably moving fast to adopt gen AI.
Whether you are considering gen AI or have already taken steps to implement the technology, there are significant benefits to be gained by risk and compliance functions.
Generative AI has the potential to revolutionise the way risks are managed in banks during the next 3-5 years. This could lead to the creation of a "Risk Intelligence Center" powered by AI and gen AI that serves all lines of defence (LODs). The center would provide automated reporting, improved risk transparency, higher efficiency in risk-related decision-making, and partial automation in drafting and updating policies and procedures to reflect changing regulatory requirements.
Generative AI is expected to fundamentally shift the way risk and compliance organisations operate, moving away from task-oriented activities towards partnering with lines of business on strategic risk prevention and control-by-design in new customer journeys, often referred to as a 'shift left'. Generative AI will not replace humans, but it will enable them to perform tasks significantly faster at the same level of quality. This frees up capacity for risk professionals to advise businesses on new product development and strategic business decisions, explore emerging risk trends and scenarios, strengthen resilience, and proactively improve risk and controls processes.
The Risk Intelligence Center will act as a reliable and efficient source of information, enabling risk managers to make informed decisions swiftly and accurately. For instance, we at McKinsey have started using our enterprise gen AI virtual expert – Lilli. Lilli is a conversational AI tool with access to a carefully selected corpus of McKinsey knowledge. Lilli can provide tailored answers to questions posed to it by McKinsey colleagues and enable them to access and synthesise our proprietary information and assets. Similarly, risk and compliance functions and their stakeholders can use their own virtual experts to influence risk decisioning by gathering additional information (e.g., transactions with other banks, potential red flags, market news scans, asset prices), or to collect data and conduct climate risk assessments to answer counterparty questions.
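The corpus-grounded virtual expert described above follows a common retrieval-then-prompt pattern: find the passages in a curated knowledge base most relevant to a question, then supply them to a language model as context. The sketch below illustrates that pattern only; the corpus entries, the keyword-overlap scoring, and the prompt format are illustrative assumptions, and a production system such as Lilli would use embedding-based retrieval and an actual LLM call, both omitted here.

```python
# Minimal sketch of a corpus-grounded "virtual expert" for risk questions.
# Assumption: a small in-memory corpus and toy keyword-overlap retrieval
# stand in for a real document store, embeddings, and an LLM.

CORPUS = {
    "counterparty_limits": "Counterparty exposure limits are reviewed quarterly "
                           "against internal ratings and collateral positions.",
    "climate_risk": "Climate risk assessments combine physical risk scores with "
                    "transition risk scenarios for each counterparty sector.",
    "fraud_controls": "Fraud controls flag transactions that match known "
                      "phishing and social-engineering patterns.",
}

def keyword_score(question: str, text: str) -> int:
    """Toy relevance score: number of words shared between question and text."""
    return len(set(question.lower().split()) & set(text.lower().split()))

def retrieve(question: str, corpus: dict, k: int = 2) -> list:
    """Return the names of the k documents that best match the question."""
    ranked = sorted(corpus,
                    key=lambda name: keyword_score(question, corpus[name]),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, corpus: dict) -> str:
    """Assemble the retrieved passages and the question into one LLM prompt."""
    passages = "\n".join(corpus[name] for name in retrieve(question, corpus))
    return ("Answer using only the context below.\n\n"
            f"Context:\n{passages}\n\nQuestion: {question}")

prompt = build_prompt("How are climate risk assessments done for a counterparty?",
                      CORPUS)
```

Grounding answers in a vetted corpus, rather than the model's open-ended training data, is what makes such a tool defensible in a risk and compliance setting: every answer can be traced back to the retrieved passages.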
CROs will also need to consider the potential risks, a critical dimension of any initial gen AI exploration. All leaders need to be aware of the risks associated with this new technology; these can be broadly divided into eight categories.
1. Impaired fairness, such as output of a gen AI model that is inherently biased against a particular group of users.
2. IP infringement, such as copyright violations or plagiarism incidents as foundation models typically leverage internet-based data.
3. Privacy concerns, such as unauthorised public disclosure of personal or sensitive information using gen AI models.
4. Malicious use, such as dissemination of false content or use of gen AI by criminals to create false identities, orchestrate phishing attacks or scam customers.
5. Security threats, such as vulnerabilities within gen AI systems that can be breached or exploited.
6. Performance and explainability risks, such as models hallucinating factually incorrect answers or providing outdated information.
7. Strategic risk, such as non-compliance with ESG standards or regulations, or societal and reputational risk.
8. Third-party risk, such as leakage of proprietary data in the public realm due to use of third-party tools.
Organisations that extract value from gen AI take a focused, top-down approach to starting the journey. Scaling the applications then requires building a gen AI ecosystem spanning a strategic roadmap, risk and governance, technology, operating model, talent and data.
In a future state, we expect the entire risk and compliance function to be empowered by gen AI. This implies a profound culture change, requiring all risk and compliance professionals to be conversant with the new technology, its capabilities, its limitations and how to mitigate those limitations.
In the end, gen AI represents a significant shift for organisations, and it offers outsized promise for risk and compliance functions, even though the technology is still developing. CROs will lead the delicate balance of harnessing the power of gen AI to drive institution-wide growth while managing the risks this new technology poses, to ensure responsible adoption.
The authors would like to thank Stephan Beitz, Rahul Agarwal and Claudia Satrústegui of McKinsey & Company for their contribution to this article.