
What we need to know about AI explainability

The explainability of AI models is a widely debated topic in both academia and industry, and several techniques have recently been proposed to address it.

According to McKinsey:

“Explainability is the capacity to express why an AI system reached a particular decision, recommendation, or prediction. Developing this capability requires understanding how the AI model operates and the data types used to train it.”

Interpretability-by-design, for example, is based on the idea that explainability should be built into AI algorithms from the start, rather than treated as a problem to be addressed ex post.
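As a minimal sketch of what interpretability-by-design can look like in practice, consider a shallow decision tree whose decision rules can be read directly by a human. The dataset and model choice below are illustrative assumptions, not examples taken from the article.

```python
# Interpretability-by-design (illustrative sketch): a shallow decision tree
# is its own explanation, since its fitted rules can be printed and read.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Keep the tree shallow so the resulting rule set stays human-readable.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Print the learned decision rules as plain text.
print(export_text(model, feature_names=list(X.columns)))
```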

On the other hand, highly effective post-hoc techniques have been developed to identify which features of an input contributed most to the decision made by an AI model.
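One such post-hoc technique is permutation importance: it estimates how much each input feature contributes to a trained model's predictions by measuring how much the score drops when that feature is shuffled. The sketch below is a hedged illustration under assumed data and model choices, not the specific method referenced in the article.

```python
# Post-hoc feature attribution (illustrative sketch) via permutation importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" model whose internals are hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```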

However, it is important to note that the difficulty of explaining the behaviour of AI models need not prevent their use in real applications, even when the stakes are high.

For decades, properly trained dogs have been used in security, defence and rescue activities without them being able to explain their behaviour precisely.

Similarly, AI algorithms can be used to build quantitative investment strategies even when their internal functioning is not perfectly interpretable to the investor, provided that they produce strong, reliable and consistent results that generalize across different market scenarios.

For more information on how to start using AI to build quantitative investment strategies, contact us.
