AI, especially Generative AI, is reshaping financial services, enhancing products, client interactions, and productivity. However, challenges like hallucinations and model unpredictability make safe deployment complex. Rapid advancements require flexible governance.
Financial institutions are eager to adopt AI but face regulatory hurdles. Existing frameworks may not address AI’s unique risks, necessitating an adaptive governance model for safe and compliant integration.
The following framework has been developed by FINOS (Fintech Open Source Foundation) members, providing comprehensive catalogue or risks and associated mitigation. We suggest using our heuristic risk identification framework to determine which risks are most relevant for a given use case.
Risk Catalogue
Operational
Hallucination and Inaccurate Outputs
LLM hallucinations refer to instances when a large language model ...
Read moreInstability in foundation model behaviour
Instability in foundation model behaviour would manifest itself as deviations ...
Read moreNon-deterministic behaviour
A fundamental property of LLMs is the non-determinism of their ...
Read moreAvailability of foundational model
RAG systems are proliferating due to the low barrier of ...
Read moreLack of foundation model versioning
Inadequate or unpublished API versioning and/or model version control may ...
Read moreInadequate system alignment
AlignmentThere is a specific goal you want to achieve when ...
Read moreBias and Discrimination
AI trained on historical/internet data may embed biases. Can lead ...
Read moreLack of Explainabililty
Black Box Nature of Generative Models Difficult to interpret and ...
Read moreModel Overreach & Misuse
The impressive capabilities of GenAI can lead to overestimation of ...
Read moreData Quality & Drift
Generative AI’s outputs depend on the quality and recency of ...
Read moreReputational Risk
AI failures or misuse can quickly become public incidents, eroding ...
Read moreSecurity
Unauthorized Access and Data Leaks
TODO: Make this non-vector store specificVector stores are specialized databases ...
Read moreTampering with the foundational model
The SaaS-based LLM provider is a 3rd party supplier and ...
Read moreData Poisoning
Adversaries can tamper with AI training or fine-tuning data to ...
Read morePrompt injection
Users of the application or malitious internal agents can craft ...
Read moreRegulatory and Compliance
Information Leaked to Hosted Model
In the provided system architecture, sensitive data is transmitted to ...
Read moreRegulatory Compliance and Oversight
Financial services are heavily regulated, and AI use does not ...
Read moreIntellectual Property (IP) and Copyright
Generative AI models often train on datasets that may include ...
Read more