The U.S. Department of the Treasury (“Treasury”) has released a Request for Information on the Uses, Opportunities, and Risks of Artificial Intelligence (“AI”) in the Financial Services Sector (“RFI”). Written comments are due by August 12, 2024.
AI is a broad topic and the term is sometimes used indiscriminately; as the RFI suggests, most AI systems being used or contemplated in the financial services sector involve machine learning, which is a subset of AI. The RFI implicitly concedes that Treasury is playing “catch-up” and needs to learn quickly about AI and how industry is using it. The RFI discusses a vast array of complex issues, including anti-money laundering (“AML”) and anti-fraud compliance, as well as fair lending and consumer protection concerns, particularly those pertaining to bias.
The Press Release and Related Remarks: What the RFI Seeks to Accomplish
In the press release for the RFI, Under Secretary for Domestic Finance Nellie Liang stated that Treasury is seeking to understand
. . . . how AI is being used within the financial services sector and the opportunities and risks presented by developments and applications of AI within the sector, including potential obstacles for facilitating responsible use of AI within financial institutions, the extent of impact on consumers, investors, financial institutions, businesses, regulators, end-users, and any other entity impacted by financial institutions’ use of AI, and recommendations for enhancements to legislative, regulatory, and supervisory frameworks applicable to AI in financial services. Treasury is seeking a broad range of perspectives on this topic and is particularly interested in understanding how AI innovations can help promote a financial system that delivers inclusive and equitable access to financial services.
Relatedly, during recent remarks at the Financial Stability Oversight Council Conference on Artificial Intelligence and Financial Stability, Secretary of the Treasury Janet Yellen announced that the use of AI by financial companies was nearing the top of the agenda for Treasury due to both the tremendous opportunities and significant risks posed by AI. Opportunities presented by AI include enhancement of efforts to combat fraud and illicit finance through AI’s ability to detect anomalies, and its capacity to improve efficiency, accuracy, and access to financial products. Risks presented by AI include vulnerabilities arising “from the complexity and opacity of AI models; inadequate risk management frameworks to account for AI risks; and interconnections that emerge as many market participants rely on the same data and models.” Further, according to Secretary Yellen, “[c]oncentration among vendors developing models, providing data, and providing cloud services may also introduce risks, which could amplify existing third-party service provider risks.” Finally, building AI systems by using insufficient or faulty data can create or perpetuate bias.
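To make the anomaly-detection opportunity concrete, the short Python sketch below shows one generic approach a compliance team might prototype: training an unsupervised outlier detector on historical transaction features and flagging departures from the learned pattern. This is purely illustrative and not drawn from the RFI or the Secretary’s remarks; the features, synthetic data, and choice of scikit-learn’s isolation forest are our own assumptions.

```python
# Illustrative sketch only: unsupervised anomaly detection over synthetic
# transaction features, in the spirit of the "detect anomalies" opportunity
# described above. Real AML monitoring is far more involved.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical history: amount (USD) and count of transactions in prior 24h.
history = np.column_stack([
    rng.normal(loc=120.0, scale=30.0, size=500),  # typical amounts
    rng.poisson(lam=3, size=500),                 # typical daily velocity
])

# Fit an isolation forest; ~1% of history is assumed anomalous for calibration.
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_txns = np.array([[95.0, 2], [9800.0, 41]])  # second row is deliberately odd
for txn, flag in zip(new_txns, detector.predict(new_txns)):  # -1 = anomaly
    print(txn, "FLAG FOR REVIEW" if flag == -1 else "ok")
```

In practice, financial institutions layer rule-based scenarios, model governance, and human review on top of any such model; the point here is only to show the kind of pattern detection the Secretary described.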
The RFI Builds on Prior Work Relating to AI and the Financial Sector
The RFI begins by stating that it seeks to build on work that Treasury has done to date, and describes prior reports and outreach relating to AI. They include:
- A November 2022 report, Assessing the Impact of New Entrant Non-bank firms on Competition in Consumer Finance Markets, which found that many non-bank firms were using AI to expand their capabilities and service offerings, which created new data privacy and surveillance risks. “Additionally, the report identified concerns related to bias and discrimination in the use of AI in financial services, including challenges with explainability – that is, the ability to understand a model’s output and decisions, or how the model establishes relationships based on the model input – and ensuring compliance with fair lending requirements; the potential for models to perpetuate discrimination by using and learning from data that reflect and reinforce historical biases; and the potential for AI tools to expand capabilities for firms to inappropriately target specific individuals or communities (e.g., low- to moderate-income communities, communities of color, women, rural, tribal, or disadvantaged communities).” As noted below, the issue of “explainability” is also important with regard to using AI in AML compliance.
- A December 2023 RFI soliciting input on developing a national financial inclusion strategy, which included questions related to the use of AI in the provision of consumer financial services.
- A March 2024 report on AI and cybersecurity, which identified opportunities and challenges that AI presents to the security and resiliency of the financial services sector, and outlined next steps to address AI-related operational risk, cybersecurity, and fraud challenges.
- The 2024 National Strategy for Combating Terrorist and Other Illicit Financing, which found that “innovations in AI, including machine learning and large language models such as generative AI, have significant potential to strengthen anti-money laundering/countering the financing of terrorism (AML/CFT) compliance by helping financial institutions analyze large amounts of data and more effectively identify illicit finance patterns, risks, trends, and typologies.”
- The December 2018 Joint Statement on Innovative Efforts to Combat Money Laundering and Terrorist Financing issued by the Financial Crimes Enforcement Network (“FinCEN”) and the federal banking agencies, “which encouraged banks to use existing tools or adopt new technologies, including AI, to identify and report money laundering, terrorist financing, and other illicit financial activity.”
The RFI observes that Section 6209 of the Anti-Money Laundering Act of 2020 requires Treasury to issue a rule specifying standards for testing technology and related internal processes designed to facilitate effective compliance with the Bank Secrecy Act (“BSA”) by financial institutions. This rulemaking has yet to occur. The RFI further notes that FinCEN hosted a FinCEN Exchange in February 2023 to discuss how AI is used for monitoring and detecting illicit financial activity, and that FinCEN “regularly engages financial institutions on the topic through the BSA Advisory Group Subcommittee on Innovation and Technology, and BSAAG Subcommittee on Information Security and Confidentiality.”
These steps by FinCEN are positive. However, and as we have blogged (here, here, here and here), FinCEN and other regulators have been talking for years about encouraging “technological innovation” with regard to AML compliance programs. As we have observed, for these aspirational statements to have real-world meaning, it is incumbent on regulators, and perhaps most importantly on front-line examiners from the banking regulatory agencies, to allow financial institutions room for error in the implementation of new technologies such as AI.

Some financial institutions may be reluctant to pursue technological innovation such as AI because they are concerned that examiners will respond negatively, or will make adverse findings against the institution, if the new technology creates unforeseen problems. Similarly, some financial institutions may worry that new technologies will reveal unwitting historical compliance failures that otherwise would not have been uncovered, and which then will haunt the institution in the absence of some sort of regulatory safe harbor. Further, it may be difficult for examiners accustomed to traditional technologies to become comfortable with the “explainability” of the outputs and decisions of an AML compliance system using AI, because machine-learning models derive relationships from data in ways that do not map neatly onto human-readable rules. Finally, new technology can be costly, so the benefits of using AI for AML compliance need to be clear.
Thus, for innovation to succeed and be utilized to a meaningful degree, on-the-ground expectations and demands by regulators must be tempered. It is critical for financial institutions to engage with regulators, but regulators also must be responsive, agile, and knowledgeable. With these considerations in mind, we turn to the substance of the RFI and its specific requests.
Definitions and Areas of Focus
Treasury takes a broad view of what constitutes a “financial institution” (“FI”) for purposes of the RFI, including not just traditional FIs covered by the BSA but also financial technology companies, or fintechs, and “any other company that facilitates or provides financial products or services under the regulatory authority of the federal financial regulators and state financial or securities regulators.” The RFI also makes clear that it seeks information from many stakeholders, including consumer and small business advocates, academics, and nonprofits.
The RFI provides the following definition of AI, as set forth in 15 U.S.C. § 9401(3):
[A] machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
Treasury is focused on the latest developments in AI involving machine learning models that learn from data and automatically adapt and improve with minimal human interference, rather than relying on human programming. This includes emerging AI technologies involving deep learning neural networks such as generative AI.
Treasury hopes to learn through its request what types of AI models and tools FIs are actually using. Specifically, the RFI seeks insights into the uses of AI by FIs in the following areas:
- Provision of products and services.
- Risk management. This is a broad topic and includes “credit risk, market risk, operational risk, cyber risk, fraud and illicit finance risk, compliance risk (including fraud risk), reputation risk, interest rate risk, liquidity risk, model risk, counterparty risk, and legal risk, as well as the extent to which financial institutions may be exploring the use of AI for treasury management or asset-liability management[.]”
- Capital markets.
- Internal operations.
- Customer service.
- Regulatory compliance. This includes “capital and liquidity requirements, regulatory reporting or disclosure requirements, BSA/AML requirements, consumer and investor protection requirements, and license management[.]”
- Marketing.
The Requests for Information
The RFI sets forth 19 questions, many of which are detailed and contain multiple sub-questions. Before listing the questions, the RFI provides a few pages of general discussion of four topics that the questions address in various forms: potential opportunities and risks; explainability and bias; consumer protection and data privacy; and third-party risk (referencing the federal banking agencies’ June 2023 interagency guidance on third-party risk management). As to explainability and bias, the RFI expresses this concern:
Financial institutions may have an incomplete understanding of where the data used to train certain AI models and tools was acquired and what the data contains, as well as how the algorithms or structures are developed for those AI models and tools. For instance, machine-learning algorithms that internalize data based on relationships that are not easily mapped and understood by financial institution users create questions and concerns regarding explainability, which could lead to difficulty in assessing the conceptual soundness of such AI models and tools.
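The explainability concern can be illustrated with a brief, hypothetical Python sketch. Below, a gradient-boosted model learns a non-linear interaction among synthetic “credit” features; the best a reviewer can extract after the fact is a coarse ranking of feature influence, not a human-readable rule. The feature names, data, and scikit-learn tooling are invented for illustration and do not appear in the RFI.

```python
# Illustrative sketch only: a model that learns a non-linear interaction
# from synthetic data. Post-hoc permutation importance yields a coarse
# ranking of inputs, not the human-readable rule the RFI worries is missing.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(seed=1)
X = rng.normal(size=(1000, 3))  # hypothetical features: income, utilization, tenure

# The "true" decision depends on an interaction between the first two features.
y = ((X[:, 0] * X[:, 1] > 0.2) | (X[:, 2] > 1.5)).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each input is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "utilization", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # relative influence only, not an explanation
```

A ranking like this may satisfy a data scientist, but it does not tell an examiner, or a declined applicant, why a particular decision was reached, which is precisely the gap the RFI highlights.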
We will highlight only a few of the 19 questions in the RFI, and our descriptions will be general:
Treasury wants to understand how AI use can differ across FIs of different sizes and complexity, and expresses concern that small FIs may face barriers to the use of AI. Question 4 therefore seeks comment on those barriers and how small FIs intend to mitigate any associated risks.
Question 5 asks how AI has provided specific benefits to “low-to-moderate income consumers and/or underserved individuals and communities (e.g., communities of color, women, rural, tribal, or disadvantaged communities).” Question 5 likewise asks how AI has improved fair lending and consumer protection. Questions 9 and 10 in part ask the converse. Question 9 asks about the risks AI poses to low-to-moderate income consumers and underserved individuals and communities, such as falling prey to “predatory targeting” by AI or discrimination related to lending and other consumer-related activities, and what FIs are doing to mitigate those risks. Question 10 asks what FIs are doing to address any increase in fair lending and consumer protection risks, and how existing legal requirements can be strengthened or expanded.
Question 6 asks whether AI for FIs is being developed in-house, by third parties, or through open-source code. Through this question, Treasury appears concerned that the same AI models and tools will be deployed across multiple FIs, an outcome that could concentrate risk if a widely shared model or tool proves flawed.
Question 7 asks how FIs expect to apply risk management to the use of AI and emerging AI technologies. Here, Treasury appears concerned with gaps in human oversight of AI and with whether humans can understand AI outputs well enough to prevent bias.
Question 11 focuses on increases to data privacy risk, and how existing data privacy protections, such as those in the Gramm-Leach-Bliley Act, can be strengthened.
Question 12 addresses fraud risk. It asks “[h]ow are financial institutions, technology companies, or third-party service providers addressing and mitigating potential fraud risks caused by AI technologies? . . . . Given AI’s ability to mimic biometrics (such as photos/video of a customer or the customer’s voice) what methods do financial institutions plan to use to protect against this type of fraud (e.g., multifactor authentication)?” Relatedly, Question 13 asks how stakeholders “expect to use AI to address and mitigate illicit finance risks? What challenges do organizations face in adopting AI to counter illicit finance risks? How do financial institutions use AI to comply with applicable AML/CFT requirements? What risks may such uses create?”
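On the biometric-mimicry point raised in Question 12, one reason multifactor authentication helps is that a time-based one-time password (TOTP) depends on a shared secret and the clock, rather than on anything an AI can observe and imitate, such as a face or voice. A minimal, self-contained Python sketch of RFC 6238 TOTP generation appears below; it is illustrative only, with a hypothetical demo secret, and is not a production authentication design.

```python
# Illustrative sketch only: RFC 6238 time-based one-time passwords (TOTP).
# The code a customer reads from an authenticator app is derived from a
# shared secret and the current time, so an AI-generated deepfake of the
# customer's face or voice does not help an attacker produce it.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical demo secret; a server would verify the user-supplied code matches.
print(totp("JBSWY3DPEHPK3PXP"))
```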
Questions 15 through 17 pertain to third-party risks, including risks involving data confidentiality.
Question 19 asks how differences in approach between the United States and other jurisdictions pose concerns for managing AI-related risks on an enterprise-wide basis.
Throughout the RFI, Treasury seeks input on legislative or regulatory steps that might be taken in relation to AI. It is very likely that Treasury will issue regulations related to AI, although it remains unclear what form such regulations would take or exactly what topics they would address.
In advance of any action from Treasury, FIs should proactively implement compliance and oversight regimes for their AI projects from the outset. FIs should ensure adequate human oversight and testing of AI-related products; as AI continues to develop over time, any AI-based products should undergo regular testing. FIs also should take care to avoid creating or using AI that harms disadvantaged groups, with a particular focus on lending and other consumer-facing products. Finally, FIs should consider whether to develop AI in-house or to use third-party applications: Treasury is clearly concerned about data privacy risks, about the systemic risk of a shared AI tool causing problems across multiple FIs, and about ensuring that any AI is tailored to the specific circumstances and risks facing a particular FI.
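As a rough illustration of what “regular testing” might look like in code, the sketch below shows a recurring validation gate a model-risk team could run before each release of a hypothetical AML alert model. The function name, metric, and threshold are our own assumptions, not a regulatory standard.

```python
# Illustrative sketch only: a recurring validation gate for a hypothetical
# AML alert model. The 90% recall floor here is an invented example, not
# a supervisory expectation.
import numpy as np
from sklearn.metrics import recall_score

def validate_alert_model(model, X_review: np.ndarray, y_true: np.ndarray,
                         min_recall: float = 0.90) -> bool:
    """Block a release if the model misses too many known-suspicious cases."""
    y_pred = model.predict(X_review)
    recall = recall_score(y_true, y_pred)  # share of true positives caught
    print(f"recall on labeled review set: {recall:.2%}")
    return recall >= min_recall

# Usage (with any fitted classifier and a labeled review set):
#     if not validate_alert_model(model, X_review, y_true):
#         raise SystemExit("model failed periodic validation; hold the release")
```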
If you would like to remain updated on these issues, please click here to subscribe to Money Laundering Watch. Please click here to find out about Ballard Spahr’s Anti-Money Laundering Team.