A heuristic approach to identifying GenAI risks

Before deploying any generative AI application, it’s crucial to perform a structured risk assessment of the specific use case. We propose a heuristic-based approach – essentially a guided set of questions and considerations – to identify the risk profile of a GenAI implementation. This approach ensures that stakeholders systematically evaluate how an AI will be used, what could go wrong, and the controls that should be applied. Below is a step-by-step heuristic framework that can be applied to various GenAI use cases (from customer chatbots and advisory tools to internal process automation):

A. Define the Use Case and Context

Start by clearly articulating what the GenAI application will do, who/what it will affect, and the environment in which it operates. Key questions:

Understanding the context will frame the risk tolerance. For instance, an AI assisting an internal developer has more leeway for error than one directly advising clients on investments.

NOTE: the above should be scope to just those use cases that utilise GenAI.

B. Identify Data Involved

Evaluate the data inputs and outputs of the AI, as data is a major source of risk:

This step often yields specific risks like “customer addresses are supplied to the GenAI model – privacy risk” or “the AI is trained on month-old market data – stale output risk.”

C. Model and Technology

Assess the type of model and technical setup:

This technical review will flag issues like “Using a black-box API from vendor X – need to address vendor risk and lack of explainability” or “Model hasn’t been validated on our type of data – model risk present.”

D. Output and Decision Impact

Consider what the AI will produce and the potential consequences:

Evaluating output impact helps categorize the use case’s criticality (e.g., informative vs. decision-making, internal vs. customer-facing) which correlates with risk level.

E. Regulatory Mapping

Cross-check the use case against relevant regulations and laws:

By mapping regulations, you’ll identify specific compliance risks (like “the system might generate unapproved marketing language – violate ad rules” or “We can’t explain denials – potential fair lending issue”). This step ensures no regulatory angle is overlooked.

F. Security Considerations

Examine how the AI could be attacked or could fail from a security perspective:

This security analysis yields risks like “prompt injection could cause data leak” or “lack of monitoring could let misuse go undetected” which feed into needed controls.

G. Controls and Safeguards Identification

At this point, you should be able to identify a comprehensive set of risks - using the FINOS AI Governance Framework as a guidance. One risks have been identified, list potential controls and see if they are in place:

This step effectively produces a risk control matrix for the use case.

H. Decision and Documentation

Finally, use the findings to make an informed decision and document it: