
Treasury report examines gaps in banks’ artificial intelligence risk management



On Wednesday, the U.S. Treasury Department released a report on artificial intelligence and cybersecurity. It provides an overview of the cybersecurity risks that artificial intelligence poses to banks and the methods involved, and highlights the differences in large and small banks’ ability to detect fraud.

The report discusses how financial institutions fall short in managing AI risks by failing to address them specifically in their risk management frameworks, and how this gap is hindering the broader adoption of emerging AI technologies across the industry.

“Artificial intelligence is redefining cybersecurity and fraud in financial services,” said Under Secretary for Domestic Finance Nellie Liang, noting that the Treasury Department authored the report at the direction of President Joe Biden’s October executive order on artificial intelligence safety.

“Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can securely plan their lines of business and disrupt rapidly evolving AI-driven fraud,” she said in a press release.

The report is based on 42 in-depth interviews with representatives from banks of all sizes, financial industry trade associations, cybersecurity and anti-fraud service providers that incorporate artificial intelligence capabilities into their products and services, and others.

Among the report’s key findings, Treasury found that “many financial institution representatives” believe their existing practices are consistent with the National Institute of Standards and Technology’s AI Risk Management Framework, released in January 2023. However, these institutions also face challenges in establishing practical, enterprise-wide policies and controls for emerging technologies such as generative artificial intelligence (particularly large language models).

“Discussion participants noted that while their risk management programs should map and measure the unique risks posed by technologies such as large language models, these technologies are new and can be challenging to assess and benchmark for cybersecurity,” the report reads.

With this in mind, the report recommends expanding the NIST AI Risk Management Framework “to include more substantive information related to AI governance, particularly as it relates to the financial sector.” NIST is already moving in that direction: it released an updated version of its cybersecurity risk management framework last month.

The report states: “Treasury will assist NIST’s U.S. AI Safety Institute in establishing a financial sector-specific working group under the new AI consortium structure, with the goal of extending the AI Risk Management Framework to financial sector-specific contexts.”

On banks’ cautious approach to large language models, respondents told Treasury that these models are “still under development and are currently very expensive to implement and difficult to validate for high-assurance applications,” which is why most firms opt for “low-risk, high-reward use cases, such as code generation aids for upcoming deployments.”

The Treasury report noted that some smaller institutions do not currently use large language models at all, and that financial firms which do use them avoid consuming them through public APIs. Instead, banks access these models through enterprise solutions deployed in their own virtual private cloud networks, in single-tenant or multi-tenant deployments.

In other words, banks are trying to keep their data out of AI companies’ hands as much as possible.
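As a rough illustration of that deployment posture, here is a minimal Python sketch that sends a chat request to a hypothetical OpenAI-compatible gateway hosted inside a bank’s own virtual private cloud instead of a vendor’s public API. The hostname, model name, and environment variable are invented for the example; the report does not name any specific products or endpoints.

```python
import os
import requests  # pip install requests

# Hypothetical endpoint that resolves only inside the bank's VPC, so
# prompts and responses never transit a vendor's public API.
PRIVATE_BASE_URL = "https://llm.internal.examplebank.com/v1"

def ask_internal_llm(prompt: str) -> str:
    """POST a chat completion to the in-VPC, OpenAI-compatible gateway."""
    resp = requests.post(
        f"{PRIVATE_BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['INTERNAL_LLM_TOKEN']}"},
        json={
            "model": "in-house-llm",  # invented model name for the sketch
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

The design point is the base URL: routing every call through an internally hosted gateway keeps the same request shape a public API would use while ensuring sensitive data stays on the bank’s network.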

Banks are also investing in technology that builds greater confidence in the output of their AI products. For example, the report briefly discusses retrieval-augmented generation (RAG), an approach to deploying large language models that several institutions reported using.

RAG lets companies search their own documents and generate text grounded in them, which helps avoid hallucinations – i.e., completely fabricated and erroneous output – and minimizes the extent to which outdated training material taints the LLM’s responses.
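To make the technique concrete, below is a minimal sketch of the retrieval step behind RAG in plain Python. Everything in it (the toy policy snippets, the bag-of-words scoring, and the prompt template) is an illustrative assumption rather than anything the Treasury report describes; production systems use learned embeddings, a vector store, and a real LLM endpoint.

```python
from collections import Counter
import math

# Toy in-house "knowledge base": hypothetical stand-ins for a bank's
# internal policy documents (not taken from the Treasury report).
DOCUMENTS = [
    "Wire transfers over $10,000 require dual approval by compliance.",
    "Customer PII must never be sent to external model providers.",
    "Fraud alerts are triaged within four hours of detection.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use learned vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by similarity to the query and keep the top k."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model in retrieved text so answers reflect current
    internal documents rather than possibly stale training data."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using ONLY the context below. If the context is "
        f"insufficient, say so.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What approvals does a large wire transfer need?"))
```

The key anti-hallucination lever is the prompt: by instructing the model to answer only from retrieved, up-to-date documents, the bank constrains generation to material it controls.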

The report covers a number of other topics as well, including the need for financial sector firms to develop standardized strategies for managing AI-related risks, the need for adequate staffing and training to implement advanced AI technologies, the case for risk-based AI regulation, and how the financial sector and banks are contending with adversarial artificial intelligence.

“All stakeholders in the financial industry must become proficient in this area and fully understand the capabilities and inherent risks of artificial intelligence to effectively protect institutions, their systems, and their clients and customers,” the report concludes.




