Understand how our AI engine works under the hood
AI solutions for enhanced investment management
Axyon AI is, first and foremost, a technology company. Our team of engineers, scientists and quantitative researchers is on a mission to build a cutting-edge technology stack for AI-based time series modelling.
Explore the methodologies driving our AI model development and platform, and learn more about our commitment to building innovative and robust processes.
What is Axyon IRIS®?
Axyon AI Platform: our proprietary auto-ML platform for financial time series. A highly automated process explores and optimizes (i) a wide array of ML algorithms, including deep neural network models; (ii) their free parameters; (iii) hyperparameters; and (iv) the selection of input variables, and produces an optimized ensemble of models tailored for financial time series data of any kind and frequency.
Axyon IRIS®: Data processing, inference and distribution engine of AI-powered investment strategies to institutional investors such as asset managers, hedge funds and corporate trading desks.
What kind of AI models are used in Axyon IRIS®?
Our focus is not on individual models, just as a car manufacturer's focus is not on fixing individual cars but on designing and implementing a high-performing, efficient and automated assembly line for producing them. This process evolves over time through incremental improvements. At the moment, Axyon IRIS® strategies are powered by ensembles of supervised learning models, including neural network-based (both tabular and sequential) and tree-based models. First, a large number of candidate models is explored with a hyperparameter search algorithm. Second, these models are optimally combined into an AI model ensemble that is used to generate historical (out-of-sample) predictions and is ultimately deployed to production.
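The final combination step can be pictured as a weighted average of candidate model outputs. This is a minimal, hypothetical sketch: the model names, weights and prediction values below are illustrative placeholders, not Axyon's actual production pipeline.

```python
import numpy as np

def ensemble_predict(predictions: dict, weights: dict) -> np.ndarray:
    """Weighted average of per-model predictions over one asset universe."""
    total = sum(weights.values())
    return sum(w / total * predictions[name] for name, w in weights.items())

# Three illustrative candidate models scoring five assets (made-up numbers).
preds = {
    "mlp":  np.array([0.2, -0.1, 0.4,  0.0, 0.3]),
    "lstm": np.array([0.1,  0.0, 0.5, -0.2, 0.2]),
    "gbdt": np.array([0.3, -0.2, 0.3,  0.1, 0.4]),
}
weights = {"mlp": 0.5, "lstm": 0.3, "gbdt": 0.2}

scores = ensemble_predict(preds, weights)
```

In practice the weights themselves would be chosen by the optimization process described above, rather than fixed by hand.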
What kind of data do we use?
We ingest data of different kinds and from multiple sources. The main data types used to create Axyon IRIS® datasets are end-of-day (EOD) and intraday market data, fundamental indicators, macroeconomic indicators, related indexes or securities (e.g. commodities, VIX, FX, etc.), sentiment indicators extracted from news and social media, options data, and analyst forecasts. In very specific cases and with selected partners, we support the inclusion of proprietary indicators that a client may have developed; we currently do this with our two largest clients.
How is feature engineering performed?
Over time, we have created a feature store for financial time series problems that we maintain and continuously improve. Most of the data modelling work is carried out by our Quant Analysis team, whose work is integrated with our ML process, ensuring that every single feature can be computed with point-in-time data, is stationary, is expressed in a format that can be fed into supervised AI models, and can be computed in live settings under time constraints.
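A simple example of a feature satisfying these constraints is a trailing log return: it is stationary (unlike raw prices) and uses only data available at each timestamp, so there is no lookahead. The function name and window choice below are hypothetical, not Axyon's feature store API.

```python
import numpy as np

def trailing_log_return(close: np.ndarray, window: int) -> np.ndarray:
    """log(P_t / P_{t-window}); the first `window` entries are NaN,
    since no earlier price exists (point-in-time, no lookahead)."""
    out = np.full_like(close, np.nan, dtype=float)
    out[window:] = np.log(close[window:] / close[:-window])
    return out

# Illustrative daily closing prices.
prices = np.array([100.0, 102.0, 101.0, 105.0, 104.0])
feat = trailing_log_return(prices, window=1)
# feat[t] depends only on prices up to time t.
```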
How are feature selection & hyperparameter search performed in Axyon IRIS®?
The feature selection and hyperparameter optimization process employed to develop Axyon IRIS® models is part of our proprietary expertise and cannot be fully disclosed. The process involves an extensive exploration of the search space, whose dimension mainly depends on the dataset complexity (number of input variables) and modelling assumptions, which determine the set of AI models available to the search algorithm. In most real-world cases, a thorough exploration of this space is computationally infeasible (there may be more hyperparameter configurations than atoms in the universe), even with thousands of parallel compute hours on a cluster of GPU-powered computational nodes. Thus, we resort to a proprietary stochastic optimization method for optimizing hyperparameters and feature (sub)sets.
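Since the actual optimizer is proprietary, the following is only a minimal sketch of the general idea of stochastic search over hyperparameters and feature subsets: sample configurations at random, score each one, and keep the best. The search space, scoring function and trial budget are all illustrative assumptions.

```python
import random

def random_search(space, features, score, n_trials=50, seed=0):
    """Stochastic search: sample (config, feature subset) pairs, keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in space.items()}       # hyperparameters
        subset = [f for f in features if rng.random() < 0.5]        # feature subset
        s = score(config, subset)
        if best is None or s > best[0]:
            best = (s, config, subset)
    return best

# Toy search space and features (illustrative names only).
space = {"learning_rate": [1e-3, 1e-2, 1e-1], "depth": [3, 5, 7]}
features = ["momentum", "value", "sentiment", "vol"]

# Toy score standing in for a cross-validated out-of-sample metric.
best = random_search(space, features, lambda cfg, sub: cfg["depth"] + len(sub))
```

Real systems replace blind sampling with smarter proposals (e.g. evolutionary or Bayesian strategies), but the evaluate-and-keep-the-best loop is the common skeleton.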
Are predictions in Axyon IRIS® explainable? How?
We believe that the explainability of a machine learning model can be as crucial as its performance in many business applications, for many reasons (trust, regulatory, technical, ethical, etc.). Our AI platform natively supports a state-of-the-art technique called SHAP (SHapley Additive exPlanations) to decompose each prediction into the contributions of the individual input variables. In other words, for each input variable we can compute a (positive or negative) contribution, and if we sum all these contributions we obtain the model’s prediction. As the typical number of input variables per model is in the range of 200 to 500 features, to simplify the interpretation of these contributions we can group together all features belonging to the same semantic “category” (e.g. all features related to the fundamentals of a certain stock) and compute their aggregated contribution. Because SHAP values are additive, we can then explain a prediction in terms of a small number (5-6) of “feature categories”, e.g. corresponding to classical financial factors.
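The additivity property is what makes category grouping valid. Here is a small sketch: the per-feature contributions and category labels below are made-up numbers; in practice they would come from a SHAP explainer run on the trained ensemble.

```python
# Per-feature SHAP-style contributions to one prediction (illustrative).
contributions = {
    "pe_ratio": 0.04, "debt_to_equity": -0.01,   # fundamentals
    "rsi_14": 0.07, "macd": 0.02,                # technicals
    "news_tone": -0.03,                          # sentiment
}
categories = {
    "pe_ratio": "fundamentals", "debt_to_equity": "fundamentals",
    "rsi_14": "technicals", "macd": "technicals",
    "news_tone": "sentiment",
}

# Aggregate contributions by semantic category.
grouped = {}
for feat, contrib in contributions.items():
    cat = categories[feat]
    grouped[cat] = grouped.get(cat, 0.0) + contrib

# Additivity: base value plus the sum of (grouped) contributions
# reconstructs the model's prediction exactly.
base_value = 0.10
prediction = base_value + sum(grouped.values())
```

Because the sum is preserved under any grouping, the category-level explanation is exactly as faithful as the feature-level one, just easier to read.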
HIGH-PERFORMANCE COMPUTING (HPC) INFRASTRUCTURE
Engineered with precision, our scalable HPC infrastructure supports our complex AI model development, ensuring our solutions are not only innovative but also delivered with exceptional speed, robustness and accuracy.
Scalable and flexible DB-centric master-slave system architecture
A paragon of scalability and flexibility, this architecture adapts to multifaceted AI modelling tasks and navigates diverse computational demands, ensuring that your AI models are developed, tested, and deployed with superior efficiency, even as your data ecosystems expand and evolve.
Versatile Computational Resources
Our infrastructure leverages a spectrum of computational resources, both internal (on-premise) and external (cloud), including robust HPC clusters. This blend of diverse resources ensures that your AI modelling capabilities are not bound by computational constraints, offering an unrivalled platform where your data can be processed, analysed, and modelled with unparalleled depth and breadth.
Reliability and Data Security
Our models are entrusted to a system where reliability and data security are critical. Our focus on these elements ensures that models and data are shielded by fortified security protocols and supported by a stable, dependable computational backbone, providing a consistently secure and reliable environment throughout our AI development journey.
Scalable HPC infrastructure
Our scalable HPC infrastructure is a testament to our relentless pursuit of innovation and improvement. Designed, developed, and scrupulously maintained, its capabilities have been further enhanced through intensive research projects undertaken in partnership with HPC specialists from CINECA. Explore our detailed whitepaper to learn more about our technological advancements and methodologies.