Adam Muhtar and Dragos Gorduza
Imagine a world where machines can assist humans in navigating across complex financial rules. What was once far-fetched is rapidly becoming reality, particularly with the emergence of a class of deep learning models based on the Transformer architecture (Vaswani et al (2017)), representing a whole new paradigm to language modelling in recent times. These models form the bedrock of revolutionary technologies like large language models (LLMs), opening up new ways for regulators, such as the Bank of England, to analyse text data for prudential supervision and regulation.
Analysing text data forms a core part of regulators’ day-to-day work. For instance, prudential supervisors receive large amounts of documents from regulated firms, where they meticulously review these documents to triangulate the various requirements of financial regulations, such as ensuring compliance and identifying areas of risk. As another example, prudential regulation policy makers regularly produce documents such as policy guidelines and reporting requirement directives, which also require reference to financial regulations to ensure consistency and clear communication. This frequent cross-referencing and retrieving information across document sets can be a laborious and time-consuming task, a task in which the proposed machine learning model in this article could potentially assist.
Traditional keyword search methods often fall short in addressing the variability, ambiguity, and complexity inherent in natural language. This is where the latest generation of language models comes into play. Transformer-based models utilise a novel ‘self-attention mechanism’ (Vaswani et al (2017)), enabling machines to map the inherent relationships between words in a given text and therefore capture the underlying meaning of natural language in a more sophisticated way. This machine learning approach to mapping how language works could potentially be applied in regulatory and policy contexts, functioning as an automated system to assist supervisors and policymakers in sifting through documents and retrieving relevant information based on the user’s needs. In this article, we explore how we could leverage this technology and apply it to a niche and complex domain such as financial regulations.
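To make the self-attention mechanism concrete, the sketch below implements single-head scaled dot-product attention in NumPy. The projection matrices and token embeddings are random toy values for illustration only; real models learn these during training.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention (Vaswani et al (2017)).

    X: (n_tokens, d_model) matrix of token embeddings.
    Returns a matrix of the same shape where each output row is a
    weighted mix of all token values, with weights reflecting how
    strongly tokens relate to one another.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token affinities
    # Row-wise softmax turns affinities into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy example: 3 tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (3, 4): one contextualised vector per token
```

Because every token attends to every other token, each output vector encodes context from the whole text, which is what lets these models capture meaning beyond individual keywords.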
Transforming financial supervision with Transformers
Transformer-based models come in three different variants: encoders, decoders, and sequence-to-sequence (we will focus on the first two in this article). Many of the well-known LLMs such as the Llama, Gemini, or GPT models, are decoder models, trained on text obtained from the internet and built for generic text generation. While impressive, they are susceptible to generating inaccurate information, a phenomenon known as ‘model hallucination’, when used on highly technical, complex, and specialised domains such as financial regulations.
A solution to model hallucination is to anchor an LLM’s response by providing the model real and accurate facts about the subject via a technique called ‘Retrieval Augmented Generation’ (RAG). This is where Transformer encoders play a useful role. Encoder models can be likened to a knowledgeable guide: with the appropriate training, encoders are able to group texts with similar inherent meaning into numerical representations of those texts (known in the field as ‘embeddings’) that are clustered together. These embeddings allow us to perform mathematical operations on natural language, such as indexing and searching through embeddings for the closest match to a given query of interest.
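The idea of searching embeddings for the closest match can be sketched as follows. The vectors below are hand-made toy embeddings; in practice they would come from an encoder model.

```python
import numpy as np

# Hypothetical embeddings for illustration only: a real encoder would
# produce high-dimensional vectors from the regulatory text itself.
corpus = {
    "liquidity coverage ratio": np.array([0.9, 0.1, 0.0]),
    "operational risk capital": np.array([0.1, 0.9, 0.2]),
    "market risk standardised approach": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: how closely two embeddings point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_search(query_vec, corpus):
    """Rank corpus entries by cosine similarity to the query embedding."""
    return sorted(corpus, key=lambda k: cosine(query_vec, corpus[k]),
                  reverse=True)

# Embedding of a query such as "LCR requirements" sits near the
# liquidity entry in this toy space, so it is retrieved first.
query = np.array([0.85, 0.15, 0.05])
best = semantic_search(query, corpus)[0]
print(best)  # → "liquidity coverage ratio"
```

The same indexing-and-ranking logic scales to thousands of regulatory paragraphs when backed by an approximate nearest-neighbour index.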
Figure 1: Semantic search using Transformer encoder models (depiction of encoder based on Vaswani et al (2017))
A RAG framework would first utilise an encoder to run a semantic search for the relevant information, and then pass the outputs on to a decoder like GPT to generate an appropriate response grounded in the retrieved material. The use of Transformer encoders opens up new possibilities for more context-aware applications.
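The retrieve-then-generate loop can be sketched in a few lines. Here `jaccard` is a deliberately crude stand-in for embedding similarity, and `generate` is a placeholder for a decoder LLM call; both are assumptions for illustration, not part of any named library.

```python
def jaccard(a, b):
    """Toy similarity: word overlap between two texts (a stand-in for
    the embedding-based similarity an encoder would provide)."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def rag_answer(question, corpus, generate, top_k=1, sim=jaccard):
    """Minimal retrieve-then-generate loop: rank passages by similarity
    to the question, then hand the best matches to the decoder so its
    answer is anchored in retrieved facts."""
    ranked = sorted(corpus, key=lambda p: sim(question, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return generate(f"Context:\n{context}\n\nQuestion: {question}")

corpus = [
    "Banks must hold high-quality liquid assets against net outflows.",
    "Operational risk capital is set by the standardised approach.",
]
# Stub decoder that simply echoes its prompt, so we can see what
# context the retrieval step selected.
answer = rag_answer("What assets must banks hold for liquidity?",
                    corpus, generate=lambda prompt: prompt)
print("liquid assets" in answer)  # True: the relevant rule was retrieved
```

The division of labour is the key design point: the encoder decides *what* the decoder is allowed to see, which is what constrains hallucination.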
Gaps in the intersection of AI and financial regulations
Building this regulatory knowledge-aware guide requires a Transformer encoder model that is trained on a corpus of text from the relevant field in question. However, most open-source encoder models are trained on general domain texts (eg BERT, RoBERTa, XLNet, MPNet), and are unlikely to have a deep understanding of financial regulations. There are also models like FinBERT that are trained on financial news text and fine-tuned for finance. However, these models still lack the depth of technical understanding, owing to the absence of domain-specific financial regulation text during model training. A new type of fine-tuned model, trained directly on regulations, is needed to allow a comprehensive understanding of regulations.
Financial regulations are complex texts from the standpoint of their vocabulary, syntax, and interconnected network of citations. This complexity poses significant challenges when adapting language models for prudential supervision. Another hurdle is the lack of readily available machine-readable data sets of important financial regulations, such as the Basel Framework. Producing this data set is, in itself, a valuable research output that could help drive future innovation in this field, as well as potentially being an integral foundation for building other domain-adapted models for financial regulation.
PRET: Prudential Regulation Embeddings Transformers
Currently, a pioneering effort is under way to fill this gap by developing a domain-adapted model known as Prudential Regulation Embeddings Transformer (PRET), specifically tailored for financial supervision. PRET is an initiative to enhance the precision of semantic information retrieval within the field of financial regulations. PRET’s novelty lies in its training data set: web-scraped rules and regulations from the Basel Framework, pre-processed and transformed into a machine-readable corpus, coupled with LLM-generated synthetic text. This targeted approach provides PRET with a deep and nuanced understanding of the Basel Framework language, which broader models overlook.
In our exploration of leveraging AI for financial supervision, we are mindful that our approach with PRET is experimental. An important component in the development of PRET is a model fine-tuning step to optimise performance on a specific task: information retrieval. This step employs a technique known as generative pseudo labelling (as described in Wang et al (2022)), which involves:
- Creating a synthetic entry – ie LLM-generated text such as a question, summary, or statement – relating to a given financial rule, of the kind a user might hypothetically ask.
- Treating the financial rule in question as the ‘correct’ answer by default, relative to the synthetically generated text.
- Coupling each such pair with ‘wrong’ answers – ie unrelated rules from other chapters – in order to train the model to discern right answers from wrong ones.
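The three steps above can be sketched as the assembly of (query, positive, negative) triples. The helper `generate_synthetic_query` below is a hypothetical stand-in for the LLM call that writes the synthetic entry; it is not part of any named library.

```python
import random

def build_triples(rules_by_chapter, generate_synthetic_query, seed=0):
    """Assemble training triples for generative pseudo labelling:
    the LLM-written text is the query, the rule it was written from
    is the positive, and a rule from a different chapter is the
    negative ('wrong') answer."""
    rng = random.Random(seed)
    triples = []
    for chapter, rules in rules_by_chapter.items():
        # Candidate negatives: every rule outside this chapter.
        others = [r for c, rs in rules_by_chapter.items()
                  if c != chapter for r in rs]
        for rule in rules:
            triples.append({
                "query": generate_synthetic_query(rule),  # synthetic entry
                "positive": rule,                         # 'correct' answer
                "negative": rng.choice(others),           # unrelated rule
            })
    return triples

# Toy corpus: one rule per chapter, and a stub in place of the LLM.
rules = {"LCR": ["Rule on liquid asset buffers."],
         "OpRisk": ["Rule on operational loss data."]}
triples = build_triples(rules, lambda r: f"What does this require? {r}")
print(len(triples))  # 2: one triple per rule
```

Drawing negatives from other chapters is a simple heuristic for "unrelated"; harder negatives (similar-looking rules) would make the discrimination task more demanding.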
As there are no human-generated question-answer data sets of sufficient size to train this model, we rely on existing LLMs to synthetically generate these data sets. The training objective of our model is to form a mapping between the various inputs a user could potentially ask and the correct information relevant to that input, ie a semantic search model. To do this, the model aims to minimise the difference between the synthetically generated ‘query’ and the ‘positive’ while maximising the difference between the ‘query’ and the ‘negative’, as illustrated in Figure 2. Visually, this corresponds to aligning the query and the positive as closely as possible, while pushing the query and the negative as far apart as possible.
Figure 2: Fine-tuning training objective
This is a sophisticated way to train our model to (i) distinguish between closely related pieces of information and (ii) ensure it can effectively match queries with the correct parts of the regulatory text. Maximising performance relative to this objective allows PRET to connect the dots between regulatory text and related summaries, questions, or statements. This model fine-tuning process not only enhances the model’s capability to comprehend financial terminology, but also aims to improve its effectiveness in accurately identifying and accessing the requisite information.
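One common form of the objective described above is a triplet loss; the sketch below is illustrative of the pull-together/push-apart idea in Figure 2, not necessarily the exact loss used for PRET.

```python
import numpy as np

def triplet_loss(q, pos, neg, margin=0.5):
    """Contrastive objective in the spirit of Figure 2: reward the model
    when the query is closer (by cosine similarity) to the positive than
    to the negative by at least `margin`, penalise it otherwise."""
    q, pos, neg = (v / np.linalg.norm(v) for v in (q, pos, neg))
    sim_pos = q @ pos  # cosine similarity on unit vectors
    sim_neg = q @ neg
    return max(0.0, margin - sim_pos + sim_neg)

# Toy 2-D embeddings to show the behaviour of the loss.
q   = np.array([1.0, 0.0])
pos = np.array([0.9, 0.1])   # nearly aligned with the query
neg = np.array([0.0, 1.0])   # orthogonal to the query
loss = triplet_loss(q, pos, neg)
print(loss)  # 0.0: positive is close and negative is far, so no penalty
```

During fine-tuning, gradients of this loss move the encoder’s parameters so that embeddings of matching rule-query pairs line up and unrelated pairs drift apart.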
AI and the future of prudential supervision and regulation
The potential rewards of such systems – increased efficiency and the ability to quickly navigate through complex regulatory texts – paint a promising picture for the future. Nonetheless, we are mindful of the long road ahead, which includes the difficulty of evaluating whether the interpretation of such models is a ‘shallow’ one (ie surface-level mapping of the rules) or a ‘deep’ one (ie grasping the underlying principles that give rise to these rules). The distinction is critical; while AI systems such as these can assist humans through scale and speed, their capacity to understand the fundamental concepts anchoring modern financial regulatory frameworks remains a subject of intense study and debate. In addition, any AI-based tools developed to assist supervisors and policymakers will be subject to appropriate and rigorous testing prior to use in real-world scenarios.
Developing PRET is a first step towards building models that are domain-adapted for central banking and regulatory use-cases, which we can expand across more document sets such as other financial regulation texts, policy papers, and regulatory returns, to name a few. Through efforts like these, we hope to leverage recent technological developments to assist and amplify the capabilities of supervisors and policymakers. In this journey, PRET is both a milestone and a starting point, paving the way towards a future where machines can assist regulators in a complex and niche field like prudential supervision and regulation.
Adam Muhtar works in the Bank’s RegTech, Data and Innovation Division and Dragos Gorduza is a PhD student at Oxford University.
If you want to get in touch, please email us at bankunderground@bankofengland.co.uk or leave a comment below.
Comments will only appear once approved by a moderator, and are only published where a full name is supplied. Bank Underground is a blog for Bank of England staff to share views that challenge – or support – prevailing policy orthodoxies. The views expressed here are those of the authors, and are not necessarily those of the Bank of England, or its policy committees.