AI, especially Generative AI, is reshaping financial services, enhancing products, client interactions, and productivity. However, challenges like hallucinations and model unpredictability make safe deployment complex. Rapid advancements require flexible governance.
Financial institutions are eager to adopt AI but face regulatory hurdles. Existing frameworks may not address AI’s unique risks, necessitating an adaptive governance model for safe and compliant integration.
The following framework has been developed by FINOS (Fintech Open Source Foundation) members and provides a comprehensive catalogue of risks and their associated mitigations. We suggest using our heuristic risk identification framework to determine which risks are most relevant to a given use case.
Risk Catalogue
Operational
Hallucination and Inaccurate Outputs
LLM hallucinations occur when a model generates confident but incorrect ...

Foundation Model Versioning
Foundation model instability refers to unpredictable changes in model behavior ...

Non-Deterministic Behaviour
LLMs exhibit non-deterministic behaviour, meaning they can generate different outputs ...

Availability of Foundational Model
Foundation models often rely on GPU-heavy infrastructure hosted by third-party ...

Inadequate System Alignment
AI alignment risk arises when a system’s behaviour diverges from ...

Bias and Discrimination
AI systems can systematically disadvantage protected groups through biased training ...

Lack of Explainability
AI systems, particularly those using complex foundation models, often lack ...

Model Overreach / Expanded Use
Model overreach occurs when AI systems are used beyond their ...

Data Quality and Drift
Generative AI systems rely heavily on the quality and freshness ...

Reputational Risk
AI failures or misuse, especially in customer-facing systems, can quickly escalate into ...
Read moreSecurity
Information Leaked to Vector Store
LLM applications pose data leakage risks not only through vector ...

Tampering With the Foundational Model
Foundational models provided by third-party SaaS vendors are vulnerable to ...

Data Poisoning
Data poisoning occurs when adversaries tamper with training or fine-tuning ...

Prompt Injection
Prompt injection occurs when attackers craft inputs that manipulate a ...
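As an illustration of why this risk is hard to control, a naive pattern-based screen can catch only the crudest injection attempts. The sketch below is illustrative (the patterns and function name are hypothetical, and filters like this are easily evaded); real defences require layered controls such as those in the mitigation catalogue.

```python
import re

# Illustrative only: static pattern lists are trivial to evade and are not,
# by themselves, an adequate prompt-injection defence.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]

def flag_possible_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    return any(p.search(user_input) for p in SUSPICIOUS_PATTERNS)
```

A flagged input would typically be logged and routed for review rather than silently dropped, so evasion attempts remain visible to monitoring.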
Read moreRegulatory and Compliance
Information Leaked To Hosted Model
SummaryUsing third-party hosted LLMs creates a two-way trust boundary where ...
Read moreRegulatory Compliance and Oversight
SummaryAI systems in financial services must comply with the same ...
Read moreIntellectual Property (IP) and Copyright
SummaryGenerative AI models may be trained on copyrighted or proprietary ...
Read moreMitigation Catalogue
Preventative
Data Filtering From External Knowledge Bases
This control addresses the critical need to sanitize, filter, and ...
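A minimal sketch of pre-indexing sensitivity filtering, assuming a simple ordered label scheme; the labels and function names are illustrative, not part of the framework:

```python
# Assumed classification ladder, from least to most sensitive.
SENSITIVITY_ORDER = ["public", "internal", "confidential", "restricted"]

def admissible(doc_label: str, max_label: str) -> bool:
    """True if a document's sensitivity is at or below the ceiling
    permitted into the vector store."""
    return SENSITIVITY_ORDER.index(doc_label) <= SENSITIVITY_ORDER.index(max_label)

def filter_for_indexing(docs: list[dict], max_label: str = "internal") -> list[dict]:
    """Drop documents too sensitive to index before embedding them."""
    return [d for d in docs if admissible(d["label"], max_label)]
```

Filtering before embedding matters because once text is in a vector store it can resurface in any retrieval, outside the source system's access controls.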
User/App/Model Firewalling/Filtering
Effective security for AI systems involves monitoring and filtering interactions ...

System Acceptance Testing
System Acceptance Testing (SAT) for AI systems is a crucial ...

Data Quality & Classification/Sensitivity
The integrity, security, and effectiveness of any AI system deployed ...

Legal and Contractual Frameworks for AI Systems
Robust legal and contractual agreements are essential for governing the ...

Quality of Service (QoS) and DDoS Prevention for AI Systems
The increasing integration of Artificial Intelligence (AI) into financial applications, ...

AI Model Version Pinning
Model Version Pinning is the deliberate practice of selecting and ...
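A minimal sketch of version pinning in practice, assuming an OpenAI-style chat payload; the model identifiers shown are examples, not recommendations:

```python
# A floating alias may silently change behaviour when the provider updates
# the model behind it; a dated snapshot pins the exact version your
# acceptance testing validated. Identifiers here are illustrative.
FLOATING_ALIAS = "gpt-4o"
PINNED_MODEL = "gpt-4o-2024-08-06"

def build_request(prompt: str, model: str = PINNED_MODEL) -> dict:
    """Assemble a request payload that always names a pinned model version."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}
```

Centralising the model name in one constant also makes version upgrades an explicit, reviewable change rather than a provider-side surprise.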
Role-Based Access Control for AI Data
Role-Based Access Control (RBAC) is a fundamental security mechanism designed ...

Encryption of AI Data at Rest
Encryption of data at rest is a fundamental security control ...

AI Firewall Implementation and Management
An AI Firewall is conceptualized as a specialized security system ...

Detective
AI Data Leakage Prevention and Detection
Data Leakage Prevention and Detection (DLP&D) for Artificial Intelligence (AI) ...

AI System Observability
AI System Observability encompasses the comprehensive collection, analysis, and monitoring ...
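A minimal sketch of structured interaction logging, hashing raw text so calls can be correlated without retaining sensitive content verbatim; the field names are illustrative, not a standard schema:

```python
import hashlib
import json
import time

def log_interaction(model: str, prompt: str, response: str, latency_ms: float) -> str:
    """Emit one structured log record per model call. Raw text is hashed so
    duplicate traffic can be spotted without storing the content itself."""
    record = {
        "ts": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        "prompt_chars": len(prompt),
        "response_chars": len(response),
        "latency_ms": latency_ms,
    }
    return json.dumps(record)
```

Records like this feed dashboards and alerting: drift in latency, response length, or repeated identical prompts becomes visible without exposing client data to the logging pipeline.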
AI System Alerting and Denial of Wallet (DoW) / Spend Monitoring
The consumption-based pricing models common in AI services (especially cloud-hosted ...
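A minimal sketch of budget-based spend monitoring; the class name, thresholds, and in-memory counter are illustrative assumptions (a production system would persist counters and integrate with provider billing APIs):

```python
class SpendMonitor:
    """Track token spend against a daily budget and signal when an
    alert threshold or the hard cap is crossed."""

    def __init__(self, daily_token_budget: int, alert_fraction: float = 0.8):
        self.budget = daily_token_budget
        self.alert_fraction = alert_fraction
        self.used = 0

    def record(self, tokens: int) -> str:
        """Record usage and return 'ok', 'alert', or 'block'."""
        self.used += tokens
        if self.used >= self.budget:
            return "block"  # hard stop: budget exhausted
        if self.used >= self.alert_fraction * self.budget:
            return "alert"  # notify operators before the cap is hit
        return "ok"
```

The early "alert" tier is the point of the control: operators learn about runaway spend, whether from abuse or a retry loop, before the hard cap degrades service.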
Human Feedback Loop for AI Systems
A Human Feedback Loop is a critical detective and continuous ...

Providing Citations and Source Traceability for AI-Generated Information
This control outlines the practice of designing Artificial Intelligence (AI) ...

Using Large Language Models for Automated Evaluation (LLM-as-a-Judge)
“LLM-as-a-Judge” (also referred to as LLM-based evaluation) is an emerging ...
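A minimal sketch of the judge pattern, assuming the judge model is asked for a single integer score; the prompt wording and helper names are illustrative:

```python
def build_judge_prompt(question: str, answer: str, rubric: str) -> str:
    """Compose an evaluation prompt for a separate 'judge' model."""
    return (
        "You are an impartial evaluator.\n"
        f"Rubric: {rubric}\n"
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n"
        "Respond with only an integer score from 1 to 5."
    )

def parse_verdict(judge_reply: str) -> int:
    """Extract and validate the integer score from the judge's reply."""
    score = int(judge_reply.strip().split()[0])
    if not 1 <= score <= 5:
        raise ValueError(f"score out of range: {score}")
    return score
```

Constraining the judge to a narrow output format and validating it on parse keeps the evaluation pipeline robust to the judge model's own non-determinism.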
Preserving Source Data Access Controls in AI Systems
This control addresses the critical requirement that when an Artificial ...
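A minimal sketch of entitlement-aware retrieval filtering, assuming each indexed chunk carries the allowed groups of its source document; the field names are illustrative:

```python
def filter_by_entitlement(chunks: list[dict], user_groups: set[str]) -> list[dict]:
    """Keep only retrieved chunks whose source document the requesting
    user is entitled to read, before any chunk reaches the model's
    context window."""
    return [c for c in chunks if user_groups & set(c["allowed_groups"])]
```

Filtering at retrieval time, rather than relying on the model to withhold information, means the source system's access controls survive the trip through the vector store.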