Finance Meets AI: Considerations for Public Authorities



Imagine a world where a single click approves a loan, insurance claims are settled in seconds, and financial advisors aren’t human, but highly intelligent machines. This is not science fiction anymore; it is the reality of today’s financial landscape. As AI continues to revolutionize the financial sector, it brings both unprecedented opportunities and unique challenges. How can public authorities support the financial sector to harness the power of AI while ensuring it operates within safe and ethical boundaries? Following our recent blog on the global AI regulatory landscape, we propose three key considerations that could enable regulatory authorities to proactively support the responsible use of AI in finance: increased coordination across broader policy domains, iterative engagement with multiple stakeholders, and enhanced adaptive regulation. We think this comprehensive approach could help mitigate AI risks and unleash its potential for financial inclusion.

Balancing opportunities and risks 

AI is being used across the financial sector to analyze vast amounts of data about consumers and determine what products they qualify for. These sophisticated tools are used for customer onboarding, credit scoring, insurance underwriting, claims processing, virtual help desks, robo-advice, trading, portfolio and risk management, fraud detection, cybersecurity, and Anti-Money Laundering/Counter Financing of Terrorism (AML/CFT) compliance. For financial service providers, using AI can unleash efficiency gains and economies of scale. For consumers, it can help deliver tailored experiences and hyper-personalized financial products. However, AI has the potential to amplify existing financial and non-financial risks, compromise market integrity, and cause consumer harm.

The benefits and risks of AI largely depend on its use case. We identified three elements to consider in a structured assessment of the benefits and risks of using AI in the financial sector: the input data, the model itself, and the outputs it generates. Each of these elements carries associated opportunities and risks. While we do not intend to present an exhaustive list, Figure 1 summarizes our analysis.

One risk often highlighted when assessing AI is algorithmic bias. This issue can arise from many sources, including input data that is unrepresentative, incorrect, or incomplete for training the model, or that reflects historical biases. Such biases can also be embedded during the modelling stage, when protected characteristics, such as race, gender, or religion, may be included directly or inferred through proxies (e.g., zip code). This can produce biased outputs, preventing low-income and excluded individuals and businesses from accessing affordable financial products. This is one of the key issues that needs to be carefully addressed so that AI doesn't create new inequalities or exacerbate existing ones that hinder financial inclusion.
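The proxy mechanism can be illustrated with a toy sketch (the applicants, zip codes, and approval rule below are all invented for illustration): even when a scoring rule never reads a protected attribute, a correlated proxy such as zip code can reproduce the same disparity in outcomes.

```python
# Hypothetical illustration of proxy bias: zip code stands in for a
# protected group, so excluding the group attribute does not remove the bias.
applicants = [
    {"zip": "10001", "income": 42, "group": "A"},
    {"zip": "10001", "income": 58, "group": "A"},
    {"zip": "20002", "income": 42, "group": "B"},
    {"zip": "20002", "income": 58, "group": "B"},
]

# A "fair-looking" rule that never reads `group` -- but does read zip code.
HIGH_RISK_ZIPS = {"10001"}  # hypothetical historically disadvantaged area

def approve(applicant):
    # The protected attribute is excluded, yet decisions still split by group,
    # because in this data the zip code perfectly encodes group membership.
    return applicant["income"] >= 50 and applicant["zip"] not in HIGH_RISK_ZIPS

# Approval rate per group: group A is shut out entirely despite equal incomes.
rates = {}
for a in applicants:
    rates.setdefault(a["group"], []).append(approve(a))
for group, outcomes in sorted(rates.items()):
    print(group, sum(outcomes) / len(outcomes))  # prints: A 0.0 / B 0.5
```

Auditing model outputs by group, rather than only checking which inputs the model uses, is one way such disparities are detected in practice.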

Considerations for public authorities

While the use of AI comes with risks, it is important to consider that in many cases these are amplifications of existing risks, which may already be covered by existing rules. Even so, specific guidance on how existing regulation can be applied to the use of AI would be beneficial. We have identified three key areas to be considered by public authorities.

1) Consider broader policy domains and stakeholders

AI in finance covers more than just financial and securities acts – it includes other policy domains such as data privacy, data protection, consumer protection, competition law, operational resilience, recovery planning, and cybersecurity. To ensure a holistic response, financial authorities can consider:

  • Harmonizing definitions and ethical principles. A global AI taxonomy could align terminology and frameworks, reducing regulatory arbitrage. Authorities could provide guidance on the implications of adopting ethical benchmarks (e.g., safe, fair, ethical, trustworthy) for input data, algorithm training, and output data.
  • Promoting cross-sectoral and multi-stakeholder coordination. Authorities could foster knowledge exchange of use cases with financial firms, technology providers, regulators, government, and civil society.
  • Incentivizing data-sharing schemes. Data fuels AI. Authorities could strengthen data infrastructure, promote data-sharing through open data frameworks that encourage AI innovation, and establish governance structures for data-sharing, while putting the necessary data protection mechanisms in place.
  • Ensuring data protection compliance. This would include empowering customers to have more control over their data and enforcing the right to “be forgotten” when input data is no longer needed.
  • Assessing AI under competition law. Authorities could evaluate whether AI algorithms could lead to tacit collusive pricing practices that violate antitrust laws or foster unhealthy competition.
  • Encouraging the application of existing consumer protection rules. AI can power sophisticated forms of financial fraud and cyber threats and raise data privacy concerns, which makes it critical for authorities to ensure consistent treatment under the law for similar products, services, and activities.

2) Assess the suitability of existing regulation

Even though financial regulation is “technology neutral”, AI brings new challenges and amplifies financial and non-financial risks, due to its complexity, potential for autonomous decision-making, and governance issues. Some areas for authorities to consider when evaluating the suitability and clarity of their existing rules to mitigate AI-related risks include: 

  • Financial stability and contagion risks: Identify whether risk management frameworks quantify and mitigate AI-driven market fluctuations, including potential herding or contagion effects that magnify market booms and busts.
  • Third-party and outsourcing risk: Assess the implications of outsourcing of critical infrastructure and processes, including AI tools. Consider the potential concentration of third-party risk (especially the risks associated with relying on a small number of dominant cloud providers) and its implications for systemic and reputational risk whenever unintended data leakages occur.
  • Data and model risk: Examine whether existing risk management frameworks account for AI-specific issues such as algorithmic bias and model hallucinations.
  • Cybersecurity risk: Clarify the responsibility of AI providers, developers, and users in protecting personal data if data breaches or misappropriation occur.
  • Explainability and transparency: Update disclosure requirements to help consumers and investors understand AI-generated outcomes (e.g., credit scoring decisions or investment decisions). 

For each of these risks, it is crucial that authorities define clear monitoring metrics and periodically update them to keep pace with the rapidly changing AI landscape, assessing their accuracy and relevance for gauging potential losses and harms.

3) Conduct targeted consultations to inform regulation

The debate on AI is often informed by hypothetical concerns without adequate evidence. Conducting public consultations to gather industry feedback can offer regulators actionable and up-to-date intelligence. As an example, financial authorities in the EU, Japan, New Zealand, the UK, and the U.S. have conducted public consultations to identify the most pressing AI risks for market participants. In addition, authorities can consider implementing consumer advisory panels to facilitate a nuanced understanding of consumer protection and customer experience issues with AI-powered tools, and ultimately disseminate findings around consumer and conduct risks.  


Regulating AI is only one piece of the puzzle. The financial industry as a whole needs to have a conversation about how AI can be used to foster financial inclusion and create positive outcomes for everyone. As the use of AI continues to grow, CGAP is working to better understand how AI can positively impact financial inclusion objectives and how the benefits for traditionally excluded and underserved customers can be materialized while mitigating risks.    



