AI and Responsible Finance: A Double-Edged Sword

Artificial intelligence (AI) is rapidly revolutionizing our lives, interactions, and finances. It offers a transformative opportunity to boost financial inclusion, especially in emerging markets and developing economies (EMDEs). AI enables financial institutions to operate more efficiently and cost-effectively, improving performance and unlocking capabilities beyond human limits, particularly in terms of speed and precision. For example, AI-powered digital credit now serves millions of consumers with digital data trails who previously lacked access to formal credit. AI can also streamline customer onboarding, personalize financial products, and assist in daily financial management.

But AI’s application in finance is also accelerating and exacerbating a range of risks for the financial sector, including risks for consumers who increasingly use digital financial services (DFS). If not properly addressed, these risks could undermine the progress AI aims to achieve and derail efforts to build more Responsible Digital Finance Ecosystems. As part of CGAP’s agenda to help digital finance become more responsible, we have started exploring how AI exacerbates risks for consumers and how it can help different ecosystem actors, especially authorities and providers, manage those same risks.

Using CGAP’s DFS consumer risk typology, established in 2022, we found that AI can exacerbate four main types of risk.

1) Fraud

AI is making fraudsters more “intelligent” by enabling them to personalize their scams. Generative AI tools such as large language models (LLMs) are used for sophisticated phishing scams and identity theft. The ease of creating deepfakes, cracking passwords, and manipulating text poses a significant threat. No-code AI tools lower the barrier for cybercriminals, making it easier to create malware and automate attacks. At a FinCoNet conference last November, participants discussed several cases of central bank officials being impersonated using deepfakes, including Romania’s central bank governor.

2) Consumer data misuse

The rapid deployment of LLMs introduces poorly understood risks, including the exposure of sensitive data through system prompts. Data misuse is exacerbated by the lack of transparency in many AI systems. While AI can unlock financial services for the unbanked, algorithms trained on biased data, or on data that does not represent traditionally excluded people, can amplify societal biases. In the UK, the head of the Financial Conduct Authority has warned that the use of AI in the insurance sector could leave some vulnerable consumers uninsurable. This raises critical questions for inclusive finance.

3) Lack of transparency

The “black box” nature of many AI models makes it difficult to identify and address such biases, with potentially harmful consequences for vulnerable consumers. For example, an unscrupulous financial service provider can analyze consumers’ transactions and behavior to detect an urgent need to borrow, then exploit that vulnerability by charging an unfairly high price.

The complexity of AI algorithms may exacerbate the lack of transparency in the financial sector, as it can be difficult to identify and challenge unfair or discriminatory outcomes. As highlighted by the OECD, AI can also amplify consumer misinformation and disinformation at scale. This risk may erode consumers’ trust in the financial sector and threaten progress in financial inclusion.

4) Inadequate redress mechanisms 

The shift to AI-driven customer service, while potentially improving efficiency, can also hinder access to redress. Poorly designed chatbots may fail to adequately address complex complaints, leaving consumers frustrated and without effective recourse, and may give them wrong or incomplete answers. As the CFPB has stated, “Even when chatbots can identify that a dispute is being made by the customer, there may be technical limitations to their ability to research and resolve that dispute.” Social norms around complaining, particularly among vulnerable consumers, can further exacerbate this issue.

We believe that a combination of the above risks can drive over-indebtedness, with negative consequences for consumers’ financial health. For example, both the GSMA and CGAP have found concerning levels of digital credit-related debt stress in West Africa.

Thankfully, AI also brings new solutions to build more Responsible Digital Finance Ecosystems. If consistently used by consumers, financial service providers, and authorities, the following solutions could result in DFS risk reduction and better outcomes for consumers. 

AI can help consumers improve their digital and financial literacy 

This includes education on fraud prevention, data privacy, and how to recognize and report discriminatory practices. It could be a game changer for vulnerable consumers, such as people with disabilities, who represent about 16% of the global population. As noted in a UN report, “AI makes communication possible through eye-tracking and voice-recognition software, enabling persons with disabilities to access information and education.” There is also great potential for AI to help consumers make savvier decisions and strengthen their daily financial management, thus improving their financial health.

AI brings new solutions to financial service providers for fraud prevention

Our desk research, conducted with initial inputs from Caribou Digital, found that many fintech firms offer AI-powered solutions that help financial service providers better protect themselves and their consumers from financial fraud. These include real-time user behavior analysis, fraudulent document detection, SIM swap identification, anti-phishing measures, and proactive warnings to users. Global research found that fraud detection is the most common use of AI in the financial industry. Additionally, AI can play a significant role in helping financial service providers and consumers avoid social engineering attacks. For example, liveness detection tools can help prevent identity theft and the use of synthetic IDs. AI-driven nudges could also help providers better train their employees and agents to protect consumers against fraud and other risks.
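To make the first of these concrete, the sketch below shows, in highly simplified form, how real-time user behavior analysis might flag a suspicious transaction with an unsupervised anomaly detector. The features, values, and threshold are illustrative assumptions, not any provider’s actual system.

```python
# Minimal sketch of behavior-based fraud flagging; all features and values
# below are hypothetical and chosen purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-transaction features:
# [amount, hour of day, minutes since previous transaction, new-device flag]
historical = np.array([
    [20.0, 10, 1440, 0],
    [35.0, 14,  720, 0],
    [18.0,  9, 1500, 0],
    [50.0, 19,  600, 0],
    [22.0, 11, 1300, 0],
])

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(historical)

# A large transfer at 3 a.m. from a new device, minutes after the last one.
incoming = np.array([[900.0, 3, 5, 1]])
score = model.decision_function(incoming)[0]  # lower = more anomalous
if model.predict(incoming)[0] == -1:          # -1 = flagged as anomaly
    print(f"Transaction flagged for review (anomaly score {score:.3f})")
```

In practice, providers combine many more signals and typically route flagged cases to human review rather than blocking transactions outright.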

AI can enable financial service providers to responsibly use consumer data 

This allows financial service providers (FSPs) to personalize financial services and improve customer relationship management (CRM). It can also help them assess consumer needs and offer solutions that correspond to their aspirations and capabilities. For example, AI could help providers identify consumers who are at risk of over-indebtedness. AI chatbots, coupled with human intervention, could ensure a smooth financial journey for consumers. Providers can also use AI to better assess consumer risk profiles and ensure the right outcomes for them. Some digital credit providers use mobile phone data to help identify and segment their customers’ risk profiles. This model can work well if the sector is well regulated and supervised.
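As a simplified illustration of how risk-profile segmentation from behavioral data might work, the sketch below scores a hypothetical applicant for repayment stress. The features, training data, and threshold are invented for illustration and do not reflect any actual provider’s model.

```python
# Minimal sketch of risk segmentation from behavioral signals; the
# features, data, and cut-off are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per borrower:
# [avg. weekly airtime top-up, number of active loans, repayment-to-income ratio]
X = np.array([
    [5.0, 1, 0.10],
    [2.0, 4, 0.55],
    [8.0, 0, 0.05],
    [1.5, 3, 0.60],
    [6.0, 2, 0.20],
    [2.5, 5, 0.70],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = later showed signs of repayment stress

model = LogisticRegression().fit(X, y)

applicant = np.array([[3.0, 3, 0.45]])
risk = model.predict_proba(applicant)[0, 1]
print(f"Estimated over-indebtedness risk: {risk:.0%}")
if risk > 0.5:  # hypothetical cut-off
    print("Offer a smaller loan size or refer to human review.")
```

Used responsibly, such a score would trigger protective actions, such as smaller limits or human review, rather than simply maximizing approvals.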

AI-powered suptech can greatly support financial sector authorities

Indeed, several standard-setting bodies, such as the BIS, IAIS, and IOSCO, are already documenting the opportunity for AI to identify and mitigate risks, particularly in the areas of anti-money laundering / countering the financing of terrorism (AML/CFT) and fraud detection. Our research in India, conducted in collaboration with the Reserve Bank of India Innovation Hub and Decodis, showed that LLMs could help authorities better understand the nature of the risks faced by digital credit borrowers, including debt collection practices and the misuse of consumers’ data. There is also a well-documented case of an AI-powered chatbot used by the Bangko Sentral ng Pilipinas (BSP) that interacts with consumers in real time.
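As a rough illustration of how an authority might use language models to sort borrower complaints into risk categories, the sketch below tags invented complaint excerpts with an off-the-shelf zero-shot classifier. The classifier here is a generic stand-in for whatever LLM a supervisor actually deploys, and the categories and complaints are fabricated examples, not data from the India study.

```python
# Minimal suptech sketch: tagging complaint excerpts with consumer-risk
# categories using a generic zero-shot text classifier (a stand-in for a
# supervisory LLM). All text below is invented for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

risk_categories = [
    "aggressive debt collection",
    "misuse of personal data",
    "hidden fees or pricing",
    "fraud or impersonation",
]

complaints = [
    "The agent kept calling my relatives and threatened to share my photos.",
    "I was charged a fee that was never mentioned when I took the loan.",
]

for text in complaints:
    result = classifier(text, candidate_labels=risk_categories)
    top_label, top_score = result["labels"][0], result["scores"][0]
    print(f"[{top_label} | {top_score:.2f}] {text}")
```

Aggregated over thousands of complaints, such tags could help a supervisor spot which risks are trending before deciding where to investigate further.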

AI holds immense potential for driving financial inclusion in EMDEs, but the jury is still out on whether it can make digital finance ecosystems more responsible. In the coming months, we will deepen our understanding of the AI-enabled solutions available to key actors in the ecosystem, especially authorities and providers, to curb the rising risks that AI creates for consumers using digital finance. Identifying and implementing these solutions will require an ecosystem-wide approach, with increased collaboration between fintechs, DFS providers, authorities, and consumer representatives, as no single actor has the means to fully mitigate AI-related risks.
