RESEARCH & DEVELOPMENT
Technology is in our DNA: since our foundation, we have been dedicated to enhancing our AI-based models through active engagement with academic and industry research.
Learn how our Research and Development journey has led us to our present achievements.
RESEARCH TIMELINE
2024
Machine Learning and HAR Models for Realized Volatility Forecasting. An Application in Brent Crude Front Month Futures Market
Volatility forecasting is critical for risk management and speculative trading. This thesis investigates the application of Machine Learning and Heterogeneous Autoregressive (HAR) models to forecast Realized Volatility, a volatility estimator based on high-frequency data, in the highly volatile Brent Crude Oil market. The study evaluates whether ML models outperform traditional HAR models by testing both approaches across multiple forecast horizons: one day, one week, and one month.
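For illustration, a minimal sketch of the standard HAR-RV regression underlying this kind of comparison is shown below; the column names, the 5- and 22-day rolling windows, and the use of scikit-learn are our illustrative assumptions, not the thesis setup.

import pandas as pd
from sklearn.linear_model import LinearRegression

def har_features(rv: pd.Series) -> pd.DataFrame:
    # Daily value plus weekly (5-day) and monthly (22-day) averages of realized volatility
    return pd.DataFrame({
        "rv_d": rv,
        "rv_w": rv.rolling(5).mean(),
        "rv_m": rv.rolling(22).mean(),
    })

def fit_har(rv: pd.Series, horizon: int = 1) -> LinearRegression:
    # horizon = 1, 5 or 22 for one-day, one-week and one-month forecasts
    X = har_features(rv)
    y = rv.shift(-horizon).rename("target")   # realized volatility 'horizon' days ahead
    data = pd.concat([X, y], axis=1).dropna()
    return LinearRegression().fit(data[["rv_d", "rv_w", "rv_m"]], data["target"])

An ML counterpart could, for example, reuse the same lagged features with a non-linear learner in place of the linear regression.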
2024
Optimise Heterogeneous Ensemble Search
Ensemble Learning has become increasingly prevalent as an efficient paradigm in Machine Learning, combining multiple weak models to produce robust predictions across various fields. A key challenge within Ensemble Learning is ensuring diversity among models to explore different data patterns and maintain heterogeneity. This thesis presents a project focused on optimising the parameter search process for heterogeneous ensemble models that incorporate diverse architectures and tasks.
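As an illustration of what such a search can look like, here is a minimal randomised-search sketch over a small heterogeneous pool of scikit-learn models; the model pool, parameter grids and ensemble size are illustrative assumptions, not Axyon's actual search space.

import random
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Each entry builds a differently-configured base model, keeping the pool heterogeneous
SEARCH_SPACE = [
    lambda: GradientBoostingClassifier(n_estimators=random.choice([50, 100, 200])),
    lambda: RandomForestClassifier(n_estimators=random.choice([100, 300])),
    lambda: MLPClassifier(hidden_layer_sizes=random.choice([(32,), (64, 32)]), max_iter=500),
]

def search_ensemble(X, y, n_trials=10, ensemble_size=3):
    best, best_score = None, -1.0
    for _ in range(n_trials):
        members = [(f"m{i}", random.choice(SEARCH_SPACE)()) for i in range(ensemble_size)]
        candidate = VotingClassifier(members, voting="soft")
        score = cross_val_score(candidate, X, y, cv=3).mean()
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score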
2024
Enhancing Financial Time Series Analysis
The development of the Talos web application represents a significant step forward for Axyon AI, providing an intuitive and efficient user interface that streamlines the dataset generation process for training ML models.
2023
Study and Implementation of Quantum-inspired Boosting Algorithms for AI-powered Financial Asset Management
This thesis develops and benchmarks a Qboost-based algorithm to enhance Axyon AI’s ensemble learning (EL) pipeline for multi-label classification. EL combines multiple weak learners to produce more robust predictions; boosting, an iterative EL method, focuses on training examples where prior models performed poorly, improving stability and accuracy. Selecting the best combination of weak learners is, however, a combinatorially hard optimisation problem. The project explores Qboost, which recasts this selection as a quadratic unconstrained binary optimisation (QUBO) task solved with adiabatic quantum annealing (AQA) on neutral-atom processors, aiming to overcome these limitations of classical boosting.
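A minimal sketch of the Qboost-style formulation is shown below: selecting weak learners becomes a QUBO whose couplings come from the weak learners' outputs, with a brute-force solver standing in for the quantum annealer; variable names and the regularisation strength are illustrative.

import itertools
import numpy as np

def qboost_qubo(H: np.ndarray, y: np.ndarray, lam: float = 0.1) -> np.ndarray:
    # H: (n_samples, n_weak) weak-learner outputs in {-1, +1}; y: (n_samples,) labels in {-1, +1}
    K = H.shape[1]
    Q = (H.T @ H) / K**2                                 # quadratic couplings between weak learners
    Q[np.diag_indices(K)] += lam - 2 * (H.T @ y) / K     # linear terms (w_k^2 = w_k for binary w)
    return Q

def brute_force_solve(Q: np.ndarray) -> np.ndarray:
    # Stand-in for the annealer: enumerate binary selections and keep the lowest energy
    K = Q.shape[0]
    best, best_energy = None, np.inf
    for bits in itertools.product([0, 1], repeat=K):
        w = np.array(bits)
        energy = w @ Q @ w
        if energy < best_energy:
            best, best_energy = w, energy
    return best                                          # binary mask of selected weak learners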
2023
Unsupervised anomaly detection on time series data
In this project, we aimed to improve a classifier’s performance using the estimated likelihood distribution of a training dataset. The study involved injecting noise into datasets, training classifiers with varying noise levels, estimating data density using unsupervised methods, and exploring relationships between classifier scores, losses, and density estimates. The approach was tested on mixed datasets and financial data, adjusting the classifier's behavior based on density estimates.
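A hedged sketch of the idea, with IsolationForest standing in for the unsupervised density estimator and a neutral fallback where the input looks out-of-distribution (the classifier, estimator and threshold are illustrative choices, not the project's actual setup):

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

def fit_models(X_train, y_train):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    density = IsolationForest(random_state=0).fit(X_train)
    return clf, density

def predict_with_density_adjustment(clf, density, X):
    proba = clf.predict_proba(X)[:, 1]
    in_distribution = density.decision_function(X) >= 0   # positive = looks like the training data
    # Fall back to a neutral score where the estimated density is low
    return np.where(in_distribution, proba, 0.5)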
2023
Comparison of Learning-To-Rank (LTR) models: Computational Aspects and Application to a Document Ranking Problem
In today's digital landscape, the sheer volume of accessible information has prompted the development of efficient Information Retrieval systems based on machine learning, in particular through the discipline of Learning to Rank. This approach aims to order candidate results by their relevance to a query and can be applied beyond recommendation systems and search engines, e.g. to financial investments. This work presents a study of LTR, covering its theory, neural network applications, and a practical implementation for document ranking using Python and the MSLR-WEB10K dataset.
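A minimal sketch of the pairwise flavour of LTR on MSLR-WEB10K-style features (136 per query-document pair); the scorer architecture and training step are illustrative, not the thesis implementation.

import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(136, 64), nn.ReLU(), nn.Linear(64, 1))  # 136 MSLR-WEB10K features
loss_fn = nn.MarginRankingLoss(margin=1.0)
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

def pairwise_step(x_more_relevant, x_less_relevant):
    # For a given query, the more relevant document should receive the higher score
    s_pos = scorer(x_more_relevant).squeeze(-1)
    s_neg = scorer(x_less_relevant).squeeze(-1)
    loss = loss_fn(s_pos, s_neg, torch.ones_like(s_pos))
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()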
2022
Out-of-distribution detection methods on deep neural network encodings of images and tabular data
The performance of supervised learning models trained on time-series data can be affected by changing phenomena and drifts. Our prior work in this area focused on detecting outliers in tabular datasets derived from financial time series, yielding promising outcomes but revealing limitations. This project aimed to improve on this by operating in the latent space of deep neural networks, detecting anomalies in the model's internal data representation compared to the learned representations from training data. This shift emphasizes assessing whether the model's data representation is anomalous rather than the input data itself.
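One common way to score anomalies in a network's latent space is a Mahalanobis distance to the training embeddings, sketched below; the project's actual detector may differ.

import numpy as np

def fit_latent_statistics(Z_train: np.ndarray):
    # Z_train: (n_samples, d) penultimate-layer embeddings of the training data
    mu = Z_train.mean(axis=0)
    cov = np.cov(Z_train, rowvar=False) + 1e-6 * np.eye(Z_train.shape[1])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(Z: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray) -> np.ndarray:
    diff = Z - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # larger = more anomalous embedding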
2022
Does Catastrophic Forgetting Negatively Affect Financial Predictions?
Nowadays, financial markets produce large amounts of data in the form of historical time series, which quantitative researchers have recently attempted to predict with deep learning models. These models are constantly updated with new incoming data in an online fashion. However, artificial neural networks tend to exhibit poor adaptability, fitting the most recent trends without retaining information from previous ones. Continual learning studies this problem, known as catastrophic forgetting, aiming to preserve knowledge acquired in the past and exploit it when learning new trends. This paper evaluates continual learning techniques applied to financial historical time series in a binary classification setting (upward or downward trend). The main state-of-the-art algorithms have been evaluated on data derived from a practical scenario, showing that continual learning techniques outperform conventional online approaches in the financial field.
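As one example of the kind of technique evaluated, the sketch below shows an experience-replay baseline for online binary trend classification; the architecture, feature size and buffer policy are illustrative, not the paper's exact setup.

import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))  # 32 lagged features (assumed)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
replay_buffer, BUFFER_SIZE = [], 2048

def online_update(x_new, y_new):
    # Mix the newest examples with a sample of stored past ones to mitigate catastrophic forgetting
    batch = list(zip(x_new, y_new)) + random.sample(replay_buffer, min(len(replay_buffer), len(x_new)))
    xs = torch.stack([b[0] for b in batch])
    ys = torch.stack([b[1] for b in batch])
    loss = loss_fn(model(xs).squeeze(-1), ys)
    opt.zero_grad(); loss.backward(); opt.step()
    # Keep a bounded memory of the most recent examples
    replay_buffer.extend(zip(x_new, y_new))
    del replay_buffer[:-BUFFER_SIZE]
    return loss.item()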
2021
Multivariate Autoregressive Denoising Diffusion Model for Value-at-Risk Evaluation
The Value-at-Risk (VaR) is a common risk measure, often required by financial regulators and typically estimated from simple closed-form distributions. In this work, we built on our existing GAN-based model for VaR estimation, comparing it to newer deep learning approaches, namely an Autoregressive Denoising Diffusion Model based on the TimeGrad architecture and a model based on Low-Rank Gaussian Copula Processes.
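Whatever the generative model, the final VaR step is the same: sample future portfolio returns and take an empirical quantile, as in this minimal sketch.

import numpy as np

def value_at_risk(simulated_returns: np.ndarray, alpha: float = 0.05) -> float:
    # simulated_returns: (n_scenarios,) portfolio returns sampled from the generative model
    return -np.quantile(simulated_returns, alpha)   # e.g. 5% VaR reported as a positive loss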
2021
FF4 EuroHPC Project Axyon AI - Leveraging HPC for AI and DL-powered Solutions for Asset Management
This is a 15-month research project under the FF4 EuroHPC framework, where Axyon AI leads a consortium of partners including CINECA and AImageLab. The project has the overall goal of improving the service offered by Axyon AI to its clients through several technological advancements. In particular, three main areas of improvement have been identified: computational scalability, risk management and adaptiveness of AI models.
2021
Continual Learning Techniques for Financial Time Series
The problem of Continual Learning has drawn much interest in recent years, as training AI models able to learn new tasks or move to new domains poses the risk of forgetting earlier knowledge. In this study, we have applied several CL methods to train time series forecasting models in the financial domain, using Bayesian changepoint detection methods to segment series into different regimes and thus framing the problem as one of Domain-Incremental Learning.
Work presented at Ital-IA 2022.
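The segmentation step can be sketched as follows, using the offline changepoint detector from the ruptures package as a stand-in for the Bayesian method used in the study; each detected regime is then treated as a separate domain in the Domain-Incremental Learning loop.

import numpy as np
import ruptures as rpt

def segment_into_regimes(returns: np.ndarray, penalty: float = 10.0):
    algo = rpt.Pelt(model="rbf").fit(returns.reshape(-1, 1))
    breakpoints = algo.predict(pen=penalty)        # end index of each detected regime
    segments, start = [], 0
    for end in breakpoints:
        segments.append(returns[start:end])
        start = end
    return segments                                # one entry per regime / CL domain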
2021
VaR Estimation with conditional GANs and GCNs
The Value-at-Risk (VaR) is a common risk measure, often required by financial regulators and typically estimated from simple closed-form distributions. In this work, we aimed to overcome the need for parametric assumptions through the use of deep generative models, namely a conditional generative adversarial network (CGAN). We further extended the model to the multivariate case by enabling the interaction of multiple stocks through graph convolutions in the generator.
Work presented at SIMAI 2021.
2020
ESAX: Enhancing the Scalability of the Axyon Platform
In this work, carried out jointly with HPC consultants from CINECA, we brought the computational scalability of the Axyon Platform to a new level, almost quadrupling the previous peak of concurrently executed jobs. Moreover, we added support for distributed training on multi-GPU/multi-node HPC clusters, and stress-tested our Platform on Marconi100, the 11th largest supercomputer in the world at the time of the project.
2020
Alternative Data for ML-based Asset Performance Forecasting
Alternative data is structured or unstructured data that is not typically used by traditional investment companies and that can provide insights into the future performance of a financial asset. This study examined the possibility of including alternative data sources in Axyon IRIS ML-based predictive models, by comparing the performance before and after the addition of data series extracted from Google Trends.
2019
Reinforcement Learning for Asset Allocation
Reinforcement Learning (RL) has drawn a lot of attention thanks to its successful applications in many fields, most notably to playing games. In this work, we have designed and implemented an RL framework for the task of tactical asset allocation, given a portfolio of equity and fixed income assets. Our approach based on Policy Gradient made use of a particular reward function accounting not only for P&L but also for diversification and stability.
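A hedged sketch of a reward of this kind, combining period P&L with a diversification bonus and a turnover penalty; the coefficients and the diversification proxy (one minus the Herfindahl index of the weights) are illustrative choices, not the original design.

import numpy as np

def allocation_reward(weights, prev_weights, asset_returns, div_coef=0.1, stab_coef=0.05):
    pnl = float(np.dot(weights, asset_returns))            # period P&L of the allocation
    diversification = 1.0 - float(np.sum(weights ** 2))    # higher when weights are spread out
    turnover = float(np.abs(weights - prev_weights).sum()) # penalise unstable allocations
    return pnl + div_coef * diversification - stab_coef * turnover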
2019
SHAPE Project Axyon AI: a scalable HPC Platform for AI Algorithms in Finance
The goal of this work was to maximize the efficiency of accessing different types of remote computational resources potentially available to our proprietary Machine Learning platform, without losing the flexibility provided by in-house compute power. This is a mandatory requirement for a FinTech company that often works with proprietary data that cannot be uploaded to cloud systems. We achieved this by designing and implementing a scalable and flexible DB-centric Master-Slave system architecture able to exploit any connected internal or external computational resource (including an HPC cluster) in a seamless and secure way. This was the first project in a fruitful and ongoing collaboration with CINECA, the largest Italian computing centre.
2018
Extension and Industrialization of Generative Neural Networks for Financial Time Series Modelling and Forecasting
In this work, we built on our previous work in generative modelling, extending our GAN model designed for the conditional generation of financial time series. In particular, the contribution of this research activity was twofold: (i) we modified the generator to obtain a recurrent sequence-to-sequence architecture, and (ii) we added a self-attention mechanism, bringing improved performance and interpretability.
2018
Deep Generative Neural Networks for Financial Time Series Modelling and Forecasting
In this work, we applied the Generative Adversarial Network (GAN) framework to the challenging task of financial time series generation. We showed how this model can be used to simulate future market scenarios by introducing conditioning in the generator via a recurrent neural network. To the best of our knowledge, this was the first application of GANs to financial time series at the time of this work. We presented our results at Nvidia GTC Europe 2018 in Munich.
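A minimal sketch of a conditional recurrent generator of this kind: a GRU summarises the observed window and, together with a noise vector, conditions the generation of the future window (dimensions and names are illustrative, not the original model).

import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=16, hidden_dim=64, horizon=20):
        super().__init__()
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + noise_dim, horizon)
        self.noise_dim = noise_dim

    def forward(self, past_window):
        # past_window: (batch, seq_len, 1) of historical returns
        _, h = self.encoder(past_window)                  # h: (1, batch, hidden_dim)
        z = torch.randn(past_window.size(0), self.noise_dim)
        return self.head(torch.cat([h[-1], z], dim=-1))   # (batch, horizon) simulated future returns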
2017
Deep Learning for Portfolio Allocation
In this work, we combined AI with an existing quantitative portfolio allocation model. In particular, we used the prediction of a Deep Neural Network as “investor views” in the Black-Litterman allocation model.
This MSc thesis won the SIAT Technical Analyst Award 2019.
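The Black-Litterman update itself can be sketched in a few lines, with the network forecasts entering as the views q; tau, Omega and the view-picking matrix P are illustrative choices rather than the thesis settings.

import numpy as np

def black_litterman_posterior(Sigma, pi, P, q, Omega, tau=0.05):
    # Sigma: (n, n) asset covariance; pi: (n,) equilibrium returns
    # P: (k, n) view-picking matrix; q: (k,) view returns, e.g. DNN predictions; Omega: (k, k) view uncertainty
    tau_sigma_inv = np.linalg.inv(tau * Sigma)
    omega_inv = np.linalg.inv(Omega)
    post_cov = np.linalg.inv(tau_sigma_inv + P.T @ omega_inv @ P)
    post_mean = post_cov @ (tau_sigma_inv @ pi + P.T @ omega_inv @ q)
    return post_mean, post_cov                    # posterior expected returns and covariance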
2017
Deep Q-Learning Techniques for Forex Trading
In this work, we applied Reinforcement Learning techniques (and in particular Deep Q-Learning) to the challenging problem of finding profitable trading strategies in the Forex market by trial-and-error in a simulated market. This was Axyon AI’s first of many MSc thesis projects in collaboration with the AImageLab research group at the University of Modena and Reggio Emilia.
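A hedged sketch of the Deep Q-Learning update for a small discrete action space (e.g. long / flat / short); the network, state size and hyper-parameters are illustrative, not the thesis configuration.

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 64, 3, 0.99
q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(), nn.Linear(128, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-4)

def dqn_update(states, actions, rewards, next_states, dones):
    # actions: LongTensor of chosen actions; dones: float tensor (1.0 where the episode ended)
    with torch.no_grad():
        target = rewards + GAMMA * (1 - dones) * target_net(next_states).max(dim=1).values
    q_values = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q_values, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()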
SCHEDULE A DEMO WITH OUR TEAM
Talk directly with our AI experts and understand how we can help you boost your investment strategies with real predictive AI-powered solutions