Artificial intelligence (AI) is not a novel concept in finance, so what is driving renewed interest in addressing regulatory gaps around the technology? A straightforward answer is that the rapid pace of AI development demands both proactive and adaptive responses from national authorities and intergovernmental bodies. Jurisdictions currently vary in the scope and approach of their existing regulations, as well as in determining what additional aspects should be regulated and how.
Many institutions have been following developments in AI regulation. Standard-setting bodies and international organizations have released regulatory trackers (OECD and the EU Commission), readiness indicators (IMF and UNESCO), comprehensive regulatory overviews (FSB, 2024; FSI, 2021, 2024; IAIS, 2023; IMF, 2024; OECD, 2021a, 2021b, 2023, 2024; World Bank, 2025), and consultations with financial institutions on use-cases and regulatory constraints (EU Commission, 2024; IAIS, 2024; IOSCO, 2025).
CGAP identified five key global developments in AI regulation through desk research covering more than a hundred jurisdictions.
1) AI has become a strategic priority worldwide
By February 2025, at least 116 jurisdictions (see Figure 1) had taken decisive steps to promote AI through national strategies such as India’s #AIforall, the Mexican Agenda AI 2030, Singapore’s NAIS2.0, and Zambia’s AI strategy for job creation. Whether or not these strategies are legally binding, they signal a long-term commitment to leveraging AI for growth, innovation, and productivity. Establishing a time-bound agenda can encourage AI development, deployment, adoption, and regulation in strategic sectors such as agriculture, education, energy, finance, health, industry, and technology.
Figure 1
2) There is no “gold standard” definition of AI
Standard-setting bodies and authorities conceptualize AI differently (see Figure 2). Common features include: (i) the performance of human tasks involving reasoning, learning, and decision-making; and (ii) the ability to collect and interpret vast amounts of information.
Figure 2
3) Hard and soft cross-sectoral regulation coexist
Hard regulation, or legally binding rules applicable to a wide range of sectors, has been enacted in 31 jurisdictions, including China, the EU, Peru, and South Korea. Some 17 more jurisdictions are proposing a bill or assessing the need to draft “sector-agnostic” regulation (e.g., Brazil, Ghana, Indonesia, Switzerland, Thailand, and Uruguay; see Figure 3). Enacted regulation ranges from generic to specific. For example, the EU, an early regulatory adopter, established a risk-based approach to ensure trust and human oversight throughout the AI lifecycle, applying broadly across AI applications. By contrast, China regulates concrete AI applications, including through a centralized algorithm registry, requirements to disclose training data sources, and specific recommendations for generative AI.
Figure 3
Similarly, 85 jurisdictions (see Figure 4), including those with binding requirements, have introduced soft regulation (non-binding guidelines such as codes of conduct or high-level principles). This can be done through self-regulation (e.g., the Australian Watermarking of AI Safety Standard), codes of practice (e.g., the Canadian AIDA), or ethical frameworks (e.g., Hong Kong’s Ethical AI Framework). In practice, endorsing benchmark principles, including those adopted in intergovernmental fora (ASEAN, the Bletchley Declaration, the G7, the G20, the OECD, the UN, and UNESCO), helps regulators set ethical boundaries without immediately resorting to legislative action. Alternatively, these principles are embedded directly into law.
Figure 4
4) The ideal governance model for AI oversight is still undetermined
Given the cross-sectoral nature of AI, governance approaches vary from self-regulation (e.g., Australia’s labeling AI safety standard v2) to dedicated AI agencies and registries (e.g., Peru’s Specialized Authority for AI, China’s Cyberspace Administration (CAC)), to attaching oversight functions to existing authorities depending on the specific use-case, including financial supervisors and data protection agencies (e.g., Indonesia, Mauritius, the EU). However, the jury is still out on the optimal governance structure for AI, which remains context-specific.
5) Specific guidance for AI applications in finance is still at a nascent stage
At least 50 jurisdictions (see Figure 5) have released AI-specific guidelines for financial institutions. We identified four types of tools, which typically follow a soft approach except when used to address heavily regulated activities or to ensure compliance with existing rules.
Figure 5
(i) Ethical principles
Principles such as accountability, fairness, soundness, and transparency are increasingly endorsed by financial authorities. The DNB’s SAFEST, HKMA’s high-level principles on AI, Indonesia’s OJK AI Guideline, Korea’s FSC guideline, and Singapore’s MAS FEAT exemplify ethical boundaries for AI use-cases in finance, aligned with the national and supranational commitments reported above.
(ii) Targeted consultations
These consultations actively seek feedback from financial institutions on AI adoption rates, exposure to third-party risk, the degree of autonomous decision-making, and potential regulatory barriers and gaps (e.g., Japan’s BoJ, New Zealand’s FMA, Sweden’s Finansinspektionen, the UK’s FCA, the U.S. Treasury). Interestingly, these surveys have uncovered the need to harmonize existing regulatory requirements at both national and supranational levels. Specifically, U.S. respondents raised issues with conflicting state laws, while UK firms highlighted diverging regulatory practices with the EU that could result in regulatory arbitrage.
(iii) Supervisory views and guidance
Authorities are disclosing their views on opportunities and risks of adopting AI in finance (e.g., Luxembourg, Nigeria) and prioritizing supervisory activities due to emerging contagion and concentration risks (e.g., Austria’s FMA). Efforts are underway to: (i) ensure compliance with existing requirements (e.g., ESMA on AI implications for MiFID II); (ii) clarify how existing rules can be translated into AI-specific features, including model risk due to bias and hallucinations (e.g., Canada, EBA, Germany, Malaysia, and the UK); and (iii) inform financial consumers about the inherent risks of AI-powered tools (e.g., ESMA warning on the use of AI for investing, and FINRA warning on investment fraud using GenAI).
(iv) AI-tailored rules
Mauritius introduced the “Robotic and AI Enabled Advisory License” to oversee providers leveraging AI for automated financial advice. In Colombia, customers receiving robo-advice can request supplemental advice from a certified human advisor. Qatar has also launched AI-tailored rules requiring financial institutions that use, develop, or deploy AI to strengthen their risk governance frameworks, obtain approval before launching an AI tool, and inform customers about how AI-assisted decision-making may affect them.
Overall, our research suggests that regulations governing the use of AI in finance are still in the early stages
This could be attributed to three factors: (i) the prevailing consensus around the principle of technology neutrality in financial regulation; (ii) the fact that existing frameworks already address relevant financial and non-financial risks, now potentially amplified by AI; and (iii) the cross-sectoral nature of AI, which spans multiple regulatory domains. While regulation alone is not a silver bullet to mitigate risks and harness the potential of AI, the financial industry is increasingly seeking guidance on its responsible use. Our next blog in this series builds on this landscaping exercise to inform the discussion on considerations for regulatory authorities seeking to effectively govern the use of AI in finance.
N/A: Includes countries for which we could not find any information online in a searchable format, or cases where information about initiatives released by regulatory authorities is unavailable. In some cases, it also includes jurisdictions where authorities have mentioned their intention to release a strategy or regulation, but there is no formal record of these initiatives. The boundaries, colors, and any other information shown on the map do not imply, on the part of CGAP and the WBG, any judgment on the legal status of any territory or any endorsement or acceptance of such boundaries.