As financial institutions increase their use of AI, opaque decision-making systems risk undermining public trust, regulatory compliance, and risk management, according to a new report by CFA Institute, the global association of investment professionals.

The report, Explainable AI in Finance: Addressing the Needs of Diverse Stakeholders, examines the rising complexity of AI systems in areas such as credit scoring, investment management, insurance underwriting, and fraud detection.

It argues for the adoption of “explainable AI” (XAI), a set of techniques intended to make AI decision-making more transparent, auditable, and understandable to humans.


“AI systems are no longer working quietly in the background; they are influencing high-stakes financial decisions that affect consumers, markets, and institutions,” said Dr Cheryll-Ann Wilson, CFA, the report’s author and a senior affiliate researcher at CFA Institute.

“If we can’t explain how these systems work, or worse, if we misunderstand them, we risk creating a crisis of confidence in the very technologies meant to improve financial decision-making.”

The study highlights that different stakeholders, including regulators, risk managers, investment professionals, developers, and clients, require distinct forms of explanation.

It introduces a framework that maps explainability requirements to user roles, aiming to embed transparency across the financial value chain.

The report distinguishes between tools built into AI systems at the outset, known as “ante-hoc” methods, and those applied after decisions are made, referred to as “post-hoc” methods.


Ante-hoc techniques rely on simple, transparent rules that can be easily followed, while post-hoc tools explain what influenced a particular outcome.

These may highlight key data points or illustrate how a decision could have differed under alternative circumstances, such as if a borrower had reported a higher income.
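To make the post-hoc idea concrete, the sketch below shows a counterfactual-style explanation for a simple credit decision: it reports whether the outcome would change if one input, such as income, were different. It is not drawn from the report; the linear scoring model, feature weights, and approval threshold are hypothetical stand-ins for whatever model an institution actually uses.

```python
# Minimal sketch of a post-hoc, counterfactual-style explanation for a
# credit decision. The weights and threshold below are hypothetical.

WEIGHTS = {"income": 0.004, "credit_history_years": 1.5, "existing_debt": -0.002}
THRESHOLD = 50.0  # hypothetical approval cut-off


def score(applicant: dict) -> float:
    """Compute a simple weighted credit score."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)


def explain_counterfactual(applicant: dict, feature: str, new_value: float) -> str:
    """Post-hoc explanation: would the decision change under an alternative input?"""
    original = score(applicant)
    altered = dict(applicant, **{feature: new_value})
    revised = score(altered)
    flipped = (original < THRESHOLD) != (revised < THRESHOLD)
    return (
        f"Original score {original:.1f} "
        f"({'approved' if original >= THRESHOLD else 'declined'}); "
        f"with {feature} = {new_value}, score {revised:.1f} "
        f"({'decision flips' if flipped else 'decision unchanged'})."
    )


applicant = {"income": 9000, "credit_history_years": 4, "existing_debt": 6000}
print(explain_counterfactual(applicant, "income", 14000))
```

The value of such an explanation is that it tells a client or reviewer what would have to change for the decision to differ, rather than exposing the model's internal mathematics.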

The study outlines how such methods are being applied in risk assessment, investment decision-making, and regulatory compliance.

The report calls for the development of global standards and benchmarks to assess the quality of AI explanations, while emphasising the need to design XAI interfaces suited to both technical and non-technical users.


It also recommends advancing real-time explainability in systems used for rapid financial decisions, alongside investment in human-AI collaboration through user training and adapted workflows.

In addition, the study explores emerging approaches such as evaluative AI, which presents evidence for and against decisions to reduce automation bias, and neurosymbolic AI, which combines logical reasoning with deep learning to improve interpretability.
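As a loose illustration of the evaluative AI idea, and not code from the report, the sketch below presents a decision as evidence for and against an outcome rather than as a single verdict, leaving the weighing to a human reviewer; the factors and contribution values are hypothetical.

```python
# Illustrative evaluative-AI-style presentation: list evidence for and
# against an outcome instead of a single recommendation. The factors and
# contribution values are hypothetical.

contributions = {
    "stable income history": +12.0,
    "long credit history": +6.0,
    "high existing debt": -9.0,
    "recent missed payment": -4.0,
}

evidence_for = {k: v for k, v in contributions.items() if v > 0}
evidence_against = {k: v for k, v in contributions.items() if v < 0}

print("Evidence supporting approval:")
for factor, weight in sorted(evidence_for.items(), key=lambda kv: -kv[1]):
    print(f"  +{weight:.1f}  {factor}")

print("Evidence against approval:")
for factor, weight in sorted(evidence_against.items(), key=lambda kv: kv[1]):
    print(f"  {weight:.1f}  {factor}")
```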

As regulatory initiatives gather pace, including the EU AI Act and proposals in the UK, CFA Institute urges financial institutions to act in advance of formal requirements.


Featured image credit: Edited by Fintech News Hong Kong, based on image by joshimo via Freepik