AIR-OP-017

Lack of Explainability

  • Black Box Nature of Generative Models
    • Internal decision logic is difficult to interpret and understand.
    • Lack of transparency in how outputs are produced.
    • Hard for firms to explain AI-driven decisions to stakeholders.
    • Heightens regulatory scrutiny and consumer concerns.
    • Conceals errors and biases embedded in the model.
    • Makes it difficult to assess model soundness.
    • Transparency and accountability are critical to responsible deployment.
    • Firms risk deploying AI systems they do not fully understand.
    • Can lead to inappropriate use or undiagnosed failures.
    • Traditional testing methods may not scale to complex generative models.
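One common probe for black-box opacity is permutation feature importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below is a minimal, library-free illustration of that idea, assuming a `predict` callable and a toy model where the target depends only on the first feature; names like `permutation_importance` are illustrative, not part of any specific standard.

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Estimate feature importance for a black-box model by measuring
    how much prediction error grows when each feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean((predict(X) - y) ** 2)  # baseline mean squared error
    importances = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and the target
            losses.append(np.mean((predict(Xp) - y) ** 2))
        # Large increase over baseline => the model relies heavily on feature j.
        importances.append(np.mean(losses) - baseline)
    return np.array(importances)

# Toy "black box": the target depends only on feature 0 (a hypothetical example).
X = np.random.default_rng(1).normal(size=(200, 3))
y = 2.0 * X[:, 0]
model = lambda data: 2.0 * data[:, 0]
imp = permutation_importance(model, X, y)
```

Probes like this do not open the black box, but they give firms a repeatable, model-agnostic signal about which inputs drive decisions, supporting the transparency and accountability goals above.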

Key Mitigations

Related Standards