The financial industry is increasingly embracing Artificial Intelligence (AI), anticipating a 3.5-fold growth in the use of AI and machine learning within the next three years. While ethical concerns, such as bias and fairness, are typically associated with issues of conduct and consumer protection, it’s essential to recognize that AI’s impact extends beyond these domains. As outlined in DP5/22, AI introduces the potential for financial and monetary stability risks.

AI models are built on algorithms: sets of mathematical instructions designed to solve complex problems. They apply a quantitative approach, combining various theories, techniques, and assumptions, to turn input data into meaningful outputs. Traditional financial models typically operate on fixed, explicit rules and parameters; AI models, by contrast, learn from data and adapt iteratively, allowing their behaviour to evolve over time.
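To make the distinction concrete, the following minimal sketch (in Python with scikit-learn; the features, threshold, and data are hypothetical) contrasts a fixed-rule decision with a model whose decision boundary is learned from data and therefore changes as the data changes.

```python
# Illustrative only: a fixed-rule model vs. a model that learns from data.
# The features, threshold, and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

def traditional_rule(income, existing_debt):
    """Traditional model: an explicit, fixed rule set by the institution."""
    return income > 30_000 and existing_debt / income < 0.4

print(traditional_rule(45_000, 9_000))      # the rule never changes unless someone rewrites it

# AI-style model: the decision boundary is inferred from historical outcomes,
# so it shifts whenever the training data shifts.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 2))             # e.g. scaled income and debt ratio
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000)) > 0

learned_model = LogisticRegression().fit(X, y)
print(learned_model.coef_)                  # parameters learned from data, not set by hand
```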

AI offers numerous advantages within the financial sector, empowering institutions to provide consumers with a better understanding of their financial behaviour and tailored options. For instance, AI can automate actions that serve the best interests of customers, such as automatically transferring funds between accounts to prevent overdraft fees. This not only enhances customer experiences but also promotes financial well-being. However, it is crucial to remain vigilant and address potential biases and ethical concerns to mitigate financial stability and conduct risks associated with AI deployment in the financial industry.

How AI can produce or amplify bias

Artificial intelligence has seen remarkable advances, with machines now capable of performing complex tasks without human intervention. However, the increasing use of AI models has also raised concerns about bias in their outputs. Biases in AI can emerge from various sources, including the data used to train the models and the structure of the algorithms themselves. These biases have far-reaching implications: they can lead to discriminatory decisions and outcomes that disproportionately affect particular demographic groups.

One prominent example of AI bias emerged in the insurance sector, where a healthcare algorithm used to predict patients’ health risk scores exhibited significant bias. Because the algorithm was trained on healthcare cost data, and historically less had been spent on Black patients with the same level of need, it systematically underestimated the severity of health conditions in Black patients compared to their White counterparts. The result was that Black patients received inadequate healthcare services, highlighting the harm that biased AI models can cause.

In recent times, there has been a surge in the use of generative AI models, which are deep-learning models designed to generate statistically probable outputs based on raw data. This proliferation of AI models has attracted significant media attention, particularly regarding their propensity to amplify existing biases.

The finance and insurance industries, in particular, rely heavily on AI algorithms to make decisions related to risk assessment, such as determining creditworthiness or evaluating geographical risk exposure to natural disasters. These algorithms must operate in an unbiased manner, as biased decisions can have severe economic and social consequences.

AI bias can be categorized into two main types: data bias and societal bias. Data bias originates from the biased nature of the data used to train AI models. When AI models are trained on data that reflects societal biases, they tend to perpetuate and amplify these biases at scale. An alarming example of data bias was brought to light by Joy Buolamwini, who found that several facial recognition systems exhibited markedly higher error rates for minority ethnic individuals, especially women, than for White individuals. The root cause was imbalanced training data consisting predominantly of White male subjects; as a result, the models performed best on the kinds of faces they had seen most often, illustrating the profound impact that training data has on the behaviour of AI algorithms.
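One practical way this kind of data bias surfaces is through disaggregated evaluation: measuring error rates separately for each demographic group rather than relying on a single aggregate figure. The sketch below (Python; the groups, labels, and predictions are entirely synthetic) shows how a respectable overall error rate can conceal a large gap between groups.

```python
# Illustrative sketch: disaggregated error rates can reveal bias that an
# overall accuracy figure hides. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
# Imbalanced data: 90% of examples come from group A, 10% from group B.
group = np.where(rng.random(n) < 0.9, "A", "B")
y_true = rng.integers(0, 2, size=n)

# Hypothetical model that is far more accurate on the overrepresented group.
correct = np.where(group == "A", rng.random(n) < 0.95, rng.random(n) < 0.70)
y_pred = np.where(correct, y_true, 1 - y_true)

print(f"overall error rate: {np.mean(y_pred != y_true):.2%}")   # looks acceptable
for g in ("A", "B"):
    mask = group == g
    print(f"group {g} error rate: {np.mean(y_pred[mask] != y_true[mask]):.2%}")
```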

Simply removing protected characteristics from the input data does not guarantee the elimination of bias. AI models are sophisticated enough to uncover underlying correlations, so seemingly neutral features can act as proxies for protected characteristics and still drive biased decision-making. For instance, practices like redlining in insurance and mortgage lending, which historically involved denying credit or offering worse terms to residents of predominantly minority ethnic neighbourhoods, may have left discriminatory patterns in historical data. If AI models are trained on such biased data, they can learn and replicate these discriminatory behaviours, potentially leading to unjust decisions.
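The proxy effect can be demonstrated in a few lines. In the hypothetical sketch below, the protected attribute is deliberately excluded from the model’s inputs, yet a correlated feature (a synthetic stand-in for something like a postcode) lets the model reproduce much of the historical disparity.

```python
# Illustrative sketch: a model trained WITHOUT the protected attribute can
# still produce disparate outcomes when another input acts as a proxy.
# All variables and correlations here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
protected = rng.integers(0, 2, size=n)                         # protected characteristic (not an input)
postcode = protected * 0.8 + rng.normal(scale=0.3, size=n)     # strongly correlated proxy feature
income = rng.normal(loc=1.0, scale=0.5, size=n)

# Historical outcomes already disadvantage the protected group (e.g. redlining-era data).
approved = (income - 0.6 * protected + rng.normal(scale=0.3, size=n)) > 0.5

X = np.column_stack([postcode, income])                        # protected attribute deliberately excluded
model = LogisticRegression().fit(X, approved)
preds = model.predict(X)

for g in (0, 1):
    print(f"approval rate, group {g}: {preds[protected == g].mean():.1%}")   # the gap persists via the proxy
```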

AI models are designed to maximize their overall prediction accuracy based on their training data. As a consequence, if certain demographic groups are overrepresented in the training data, the AI model may optimize for those groups, inadvertently favouring them over others. For example, statistically trained systems like Google Translate may default to masculine pronouns because there is a higher occurrence of masculine pronouns in their training data. The biased translations created by one AI model then become part of the training data for the next, potentially creating a feedback loop that perpetuates and amplifies biases.
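The underlying mechanism is that a single model is usually fit to minimize average error over the whole training set, so the group contributing the most examples dominates that average. A purely illustrative sketch, with synthetic data and an assumed 90/10 group split:

```python
# Illustrative sketch: when one group dominates the training data, a model
# fit for overall accuracy can perform noticeably worse on the minority group.
# The data and group proportions are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_major, n_minor = 9_000, 1_000

# The two groups follow different feature-label relationships.
X_major = rng.normal(size=(n_major, 2))
y_major = X_major[:, 0] > 0
X_minor = rng.normal(size=(n_minor, 2))
y_minor = X_minor[:, 1] > 0            # for the minority group, the signal lives in a different feature

X = np.vstack([X_major, X_minor])
y = np.concatenate([y_major, y_minor])
group = np.array(["major"] * n_major + ["minor"] * n_minor)

model = LogisticRegression().fit(X, y)  # optimized for overall accuracy
for g in ("major", "minor"):
    mask = group == g
    acc = (model.predict(X[mask]) == y[mask]).mean()
    print(f"{g} group accuracy: {acc:.1%}")
```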

Societal bias in AI refers to the influence of societal norms and legacies on the behaviour of AI algorithms. This type of bias is often the result of the AI model being trained on historical data that reflects societal norms and practices. An illustrative example is the case of a recruitment algorithm developed by Amazon. This algorithm negatively evaluated female applicants because it was trained on resumes submitted to the company over ten years, a period during which the technology industry was predominantly male. The algorithm learned to favour candidates who described themselves using verbs more commonly found in male engineers’ resumes, such as “executed” and “captured.” Conversely, it penalized resumes that included phrases like “women’s,” as in “women’s chess club captain.” Importantly, this gender bias was a blind spot for the initial reviewers and validators of the algorithm’s outputs, indicating the subtlety of societal bias and its potential to go unnoticed.

In conclusion, the use of AI models, driven by data and structure, can introduce and amplify biases, potentially leading to discriminatory outcomes. The consequences of AI bias are far-reaching, affecting various sectors, from healthcare and finance to recruitment and translation services. Recognizing and addressing the multifaceted nature of AI bias is crucial to ensure that AI technology is developed and deployed in a responsible and unbiased manner. Efforts to reduce AI bias must encompass careful curation of training data, rigorous evaluation of model outputs, and continuous monitoring and improvement of AI systems to make them more equitable and fair. As AI continues to play an increasingly prominent role in our lives, addressing bias is essential to promote inclusivity and fairness in decision-making processes.

Bias and financial stability

The growing influence of AI in finance has raised concerns about its potential impact on financial stability. If multiple firms employ opaque or “black box” models in their trading strategies, it becomes challenging for both these firms and regulatory authorities to predict the market consequences of AI-driven actions. The Financial Stability Board has warned that widespread use of such models by financial services firms could introduce macro-level risks.

Issues of fairness in AI algorithms are not only a matter of individual concern but can also exacerbate financial stability risks. Trust is a fundamental component of financial stability, and during periods of low trust or heightened panic, financial institutions experience increased instability. This can manifest in various forms, including market turbulence or bank runs. The De Nederlandsche Bank emphasizes that while fairness primarily pertains to conduct risk, it is crucial for society’s trust in the financial sector that AI applications employed by financial firms, either individually or collectively, do not unintentionally disadvantage specific customer groups.

Research by Bartlett et al. (2019) reveals that, while FinTech algorithms exhibit 40% less discrimination than face-to-face lenders, Latinx and African-American groups paid 5.3 basis points more for purchase mortgages and 2.0 basis points more for refinance mortgages compared to their White counterparts. These disparities highlight that although AI has made progress in addressing discrimination observed in traditional lending decisions, some degree of bias persists within AI systems. Such biases can erode trust, particularly among the affected groups, potentially contributing to financial instability concerns. Addressing fairness in AI is not only a matter of ethics but also a critical component of ensuring trust and financial stability in the industry.

The concept of trust is paramount, not only for the overall financial system but also for individual institutions’ stability. When financial institutions employ biased or unfair AI, they expose themselves to reputational and legal risks. Prudential regulators, while setting capital requirements, often take these risks into account. While AI-related risks may not seem substantial in isolation, their cumulative effect, combined with other risks, can impact capital positions and potentially lead to significant losses.

Although we have not yet witnessed a major incident stemming from these risks, the concerns are becoming more evident. One illustrative example is the credit card application algorithm used by Apple and Goldman Sachs, which was reported to offer smaller credit lines to women than to men. Notably, the algorithm did not take gender as an explicit input, prompting concerns that it had developed proxies for gender and was making lending decisions biased by sex. In this case, the New York State Department of Financial Services found no violation of fair lending requirements. However, it emphasized that the incident had brought the issue of equal credit access to public attention, sparking vigorous conversations about sex-based bias in lending, the risks of using algorithms and machine learning to set credit terms, and the heavy reliance on credit scores to evaluate applicants’ creditworthiness. Although no regulatory violations were identified in this instance, a future event with a different outcome, potentially resulting in adverse regulatory findings, could cause reputational damage for firms employing such algorithms and erode trust.

AI, if not implemented conscientiously, can introduce bias and ethical concerns in financial services and other sectors. Beyond the apparent issues of bias, fairness, and ethics, these concerns could pose stability risks for financial institutions and the entire financial system. Given the expected continued adoption and acceleration of AI, central banks must assess the significance of these risks related to bias, fairness, and other ethical considerations. They need to determine whether AI usage poses a threat to financial stability and, if so, devise appropriate risk management strategies. Such considerations are vital for preserving trust and stability in the financial sector.
