Apr 29, 2026 · 2 min read

The AI Explainability Debate

The Financial Times recently published a piece (paywalled) in which Martin Lueck, co-founder of Aspect Capital, warns against delegating investment decisions to AI. His argument centres on explainability: if a model cannot explain why it recommends a trade, he will not stake his firm's reputation on it.

Lueck's concern about opacity is consistent with his history. According to the article, his departure from Man Group in 1995 was partly driven by black-box models that gave investors no transparency. That experience shaped his philosophy. However, the financial technology landscape of 2026 is not that of 1995, and today’s investors should be wary of anchoring a forward-looking industry to a backward-looking argument.

Lueck's instinct to keep the human at the centre of the research process is sound. But deciding which hypotheses to pursue, and when, is precisely where AI can add the most value. In this case, the machines are not asking to replace the analysts; they are offering guidance on where analysts can most effectively spend their time.

At Axyon AI, explainability is built into the core of how our models work. We developed the Axyon Lens framework to address the concern Lueck raises: giving investors a clear, structured view of why the model says what it says at every level of the decision. The framework works like a series of lenses, each bringing a different layer of the investment process into focus.

How does it work?

At the broadest level, it examines strategy performance: how the overall approach has behaved over time and what drove its results. Zooming in further, it identifies which themes or sectors the model has rotated towards and, crucially, why. Focusing further still, you can trace that sector view down to the individual data signals that drove it.
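
To make the layered idea concrete, here is a purely illustrative sketch (not Axyon's actual data model, signals or numbers) of how signal-level contributions could roll up into a sector view and then into a strategy-level picture. Every name and figure below is hypothetical.

```python
# Illustrative only: hypothetical signal-level contributions, grouped by sector.
signal_contributions = {
    "Energy":     {"price_momentum": +0.042, "earnings_revisions": -0.011},
    "Technology": {"price_momentum": +0.018, "analyst_sentiment":  +0.027},
}

# Zooming out one level: each sector view is the sum of its signal contributions.
sector_view = {sector: sum(signals.values())
               for sector, signals in signal_contributions.items()}

# At the broadest level: the strategy view aggregates the sector views.
strategy_view = sum(sector_view.values())

print(sector_view)    # e.g. {'Energy': 0.031, 'Technology': 0.045}
print(strategy_view)  # e.g. 0.076
```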

The mechanism behind this is a well-established technique called SHAP* values — a way of attributing each prediction to the specific inputs that shaped it. Think of it like a receipt for every forecast: not just the total, but an itemised breakdown of what added to the view and what detracted from it. If the model turns constructive on a sector, a portfolio manager can see exactly which signals moved the needle — and by how much.
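
By way of illustration, the open-source shap library exposes this kind of per-prediction attribution. The sketch below is not Axyon's pipeline: it fits a toy gradient-boosting model on synthetic data with invented signal names, then prints the "receipt" for a single forecast, a baseline plus an itemised contribution from each input.

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Toy data: three hypothetical input signals per asset (names are invented).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
feature_names = ["momentum_signal", "earnings_revision", "volatility_regime"]
y = 0.6 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the individual input features.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # explain the first forecast

# The "receipt": baseline plus itemised contributions adds up to the forecast.
baseline = float(np.atleast_1d(explainer.expected_value)[0])
print(f"baseline (expected value): {baseline:+.4f}")
for name, value in zip(feature_names, contributions):
    print(f"  {name}: {value:+.4f}")
print(f"model forecast: {model.predict(X[:1])[0]:+.4f}")
```

Here the attributions are per forecast; in a layered view like the one described above, they can then be aggregated up to sector or strategy level.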

This is the practical answer to the black-box problem. With the Axyon Lens approach, the model surfaces a recommendation and also shows its working. Analysts retain full authority over what to do with that information, but they no longer have to take the signal on faith.

Transparency is central to our approach to AI and one of our company’s values. The Axyon Lens framework demonstrates this commitment in practice: no prediction without explanation, no signal taken on faith.

 

*SHAP (SHapley Additive exPlanations) values provide a systematic way to explain how a machine learning model makes predictions by quantifying the contribution of each input feature to the final output.