wroXAI

introduction

wroXAI by CIRG is an experimental prototype platform for generating intelligent explanations of advanced AI/ML models for complex data structures and of their decisions

A PRELIMINARY RELEASE VERSION IS EXPECTED IN SPRING 2025

accurate

aims at delivering accurate and convincing explanations corresponding to the model's decisions and structure

interpretable

explanations are based on interpretable variables derived directly from the input data, without black boxes

latent representations

our explanation engine uses latent data representations precomputed by modern deep learning approaches

explanation rules

explanations are generated by explanation rules and schemas optimized by heuristic algorithms

explanation engine based on latent data representations and explanation rules

the wroXAI explanation engine constructs explanation rules in an optimization process using an evolutionary algorithm enhanced with latent data representations from modern deep learning approaches.

By evolving millions of candidate explanation rules and evaluating them on precomputed AI/ML models and thousands of their decisions, wroXAI aims to deliver explanation rules able to generate accurate and convincing explanations based on interpretable variables.
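The optimization loop described above can be sketched in a few lines. The following is a minimal, self-contained illustration of evolving threshold rules over interpretable variables against a set of model decisions; the rule representation, fitness function, and (mu + lambda) scheme are simplified assumptions for illustration, not wroXAI's actual engine.

```python
import random

random.seed(0)

# Toy dataset of interpretable variables and "model decisions" to mimic.
# The decision boundary here happens to be x[0] > 0.5 (unknown to the search).
data = [[random.random() for _ in range(3)] for _ in range(200)]
decisions = [x[0] > 0.5 for x in data]

def predict(rule, x):
    """A rule is a list of (feature_index, threshold) conditions,
    interpreted as a conjunction: all conditions must hold."""
    return all(x[i] > t for i, t in rule)

def fitness(rule):
    """Fraction of the model's decisions the rule reproduces."""
    return sum(predict(rule, x) == d for x, d in zip(data, decisions)) / len(data)

def mutate(rule):
    """Perturb one threshold slightly, or re-sample one feature index."""
    rule = [list(c) for c in rule]
    c = random.choice(rule)
    if random.random() < 0.5:
        c[1] = min(1.0, max(0.0, c[1] + random.uniform(-0.1, 0.1)))
    else:
        c[0] = random.randrange(3)
    return [tuple(c) for c in rule]

# (mu + lambda)-style loop: keep the best rules, refill with mutants.
population = [[(random.randrange(3), random.random())] for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

best = max(population, key=fitness)
print(best, fitness(best))
```

After a hundred generations, the best surviving rule closely reproduces the decision boundary while staying readable as a simple threshold condition on an input variable.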

Recommender Systems

explanation rules for recommender systems are developed with latent product representations from the LightGCN, SRGNN or TAGNN models

Time Series

time series explanations are generated by rules created using time series embeddings precomputed by the TS2Vec or T-Rep models

Graph Structures

explanations of graph structures come from GNNExplainer and from custom approaches based on latent node representations, such as those produced by the TGN model

General Architecture

wroXAI is a Python package based on PyTorch for GPU computation. See the wroXAI documentation (available soon) for more details.

Recommender Systems

Recommender Systems aim to suggest to a given user the most appropriate products from a given set of available products. To construct the list of relevant products, recommendation models process input data containing earlier user experiences and interactions with products. For instance, in an e-commerce application, such interactions may include browsing a product, adding it to a cart, buying it, or reviewing it.


Explaining recommendations makes them more trustworthy for the user, increases the user's confidence in decision making, and consequently improves the effectiveness of the recommender system in real-world scenarios. However, explaining recommendations is a challenging task, because recent Recommender Systems are often deep learning black boxes.


Many types of Recommender Systems exist, related to various real-world applications, so many approaches to explaining recommendations may be applied, ranging from simple SHAP values to GNNExplainer subgraphs.

Figure: Schema of Session-based Recommender Systems with Graph Neural Networks (such as SRGNN or TAGNN) with the Amazon Review dataset

Session-based Recommender Systems are Recommender Systems that aim to suggest, for a given sequence of products browsed by a user, the most accurate next product to present to the user, from a given set of available products.


In our recent research, published in [1], we focus on the Target Attentive Graph Neural Network (TAGNN) Session-based Recommender System, but our studies may easily be applied to other types of Session-based Recommender Systems as well. Our research concerns explaining session-based recommendations using Grammatical Evolution. We propose a grammatical expression that provides explanations of recommendations generated by Session-based Recommender Systems, together with an evolutionary algorithm, GE-XAI-SBRS, based on Grammatical Evolution and equipped with its own initialization and crossover operators, to construct such an expression. Our approach uses latent product representations, so-called vector embeddings, generated by the Recommender Systems, which provide additional knowledge about dependencies between products.

Figure: Explaining recommendations of Session-based Recommender Systems
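The core mechanism of Grammatical Evolution, mapping an integer genome to an expression through a grammar, can be sketched as follows. The grammar and the variable names (`sim`, `pos`, `count`) are toy placeholders for illustration, not the actual GE-XAI-SBRS grammar from [1].

```python
# Toy BNF-style grammar: each nonterminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "<op>", "<expr>"], ["<var>"]],
    "<op>": [["+"], ["*"], ["max"]],
    "<var>": [["sim(p, rec)"], ["pos(p)"], ["count(p)"]],
}

def derive(genome, start="<expr>", max_steps=100):
    """Standard GE genotype-to-phenotype mapping: expand the leftmost
    nonterminal, choosing the production indexed by codon % n_choices;
    codons wrap around when the genome is exhausted."""
    out, stack, i = [], [start], 0
    for _ in range(max_steps):
        if not stack:
            break
        sym = stack.pop(0)
        if sym in GRAMMAR:
            prods = GRAMMAR[sym]
            choice = prods[genome[i % len(genome)] % len(prods)]
            i += 1
            stack = list(choice) + stack
        else:
            out.append(sym)
    if stack:  # expansion did not terminate: invalid individual
        return None
    return " ".join(out)

print(derive([0, 1, 2, 0, 1]))  # -> "count(p) + sim(p, rec)"
```

The evolutionary algorithm then searches over genomes (with initialization, crossover, and mutation), while fitness is measured on how well the derived expressions explain the recommender's decisions.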

Time Series

Time series classification focuses on assigning sequences of observations to predefined classes. Observations are data points registered over time, such as weather data for a specific place recorded each day over a time period, interactions of a specific user with a computer system recorded after each activity during a work session, or measurements of parameters of a specific technical device recorded at each moment during a technical inspection. Sequences of observations are temporal patterns describing the phenomenon under study over a given time period.


Time series classification includes feature-based approaches, where relevant features, such as trends, seasonality, or statistical estimates, are extracted and used for classification; model-based approaches, based on decision trees, support vector machines, or neural networks applied to sequential data; and deep learning approaches, which construct latent data representations for sequences of observations and use them in the classification downstream task.


Time series prediction focuses on forecasting values of future observations based on sequences of historical observations. It aims at discovering the behavior of the phenomenon under study and using it to predict the future.


Time series prediction includes statistical approaches based on regression, such as ARIMA or GARCH; machine learning approaches, such as random forests or gradient boosting; and deep learning approaches based on recurrent neural networks.
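To make the regression-based family concrete, the sketch below fits a plain AR(1) model, x[t] = c + phi * x[t-1], by least squares and rolls it forward; it is a deliberately stripped-down illustration of the autoregressive idea behind ARIMA, not ARIMA itself.

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1]."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    phi = cov / var
    return my - phi * mx, phi  # (c, phi)

def forecast(series, steps, c, phi):
    """Iterate the fitted recurrence forward from the last observation."""
    out, x = [], series[-1]
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# Noiseless data generated by x[t] = 2 + 0.5 * x[t-1], starting at 0.
series = [0.0]
for _ in range(30):
    series.append(2 + 0.5 * series[-1])

c, phi = fit_ar1(series)
print(c, phi, forecast(series, 3, c, phi))
```

On this noiseless example, the fit recovers the generating coefficients almost exactly; with real data, the fitted recurrence itself serves as a compact, inspectable summary of the series' dynamics.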


Explaining time series classification or prediction supports the decision-making process by delivering proper justifications for decisions. Although basic approaches, such as regression models, are usually self-explainable, advanced deep learning approaches are often black boxes, so their decisions require explanations before being acted upon in real-world applications.

Figure: Time series in their original representation

Our approach to explaining time series models focuses on constructing understandable explanation rules with interpretable variables by heuristic algorithms based on latent data representations derived from deep learning models. Currently, our research uses latent time series representations coming from TS2Vec or T-Rep. TS2Vec creates time series representations with contrastive learning and augmented context views in order to provide more efficient contextual representations for each timestamp. T-Rep is a self-supervised method for learning time series representations that also takes temporal feature extraction into account. Constructing explanation rules takes into account knowledge derived from latent time series representations, such as proper distance or similarity measures in the latent data space, in order to select the most important and the most diverse features influencing the decision.

Figure: Time series in their latent representation
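One simple way to combine importance with diversity in a latent space, as described above, is a greedy maximal-marginal-relevance selection: each step trades off a feature's importance score against its cosine similarity to features already chosen. The scoring scheme, the trade-off weight, and the toy embeddings below are illustrative assumptions, not wroXAI's actual procedure.

```python
import math

def cosine(u, v):
    """Cosine similarity between two latent vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def select_diverse(embeddings, importance, k, lam=0.5):
    """Greedy MMR-style selection: balance a candidate's importance
    against its maximum similarity to already-selected features."""
    chosen = []
    candidates = list(range(len(embeddings)))
    while candidates and len(chosen) < k:
        def score(i):
            redundancy = max(
                (cosine(embeddings[i], embeddings[j]) for j in chosen),
                default=0.0,
            )
            return lam * importance[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# Toy latent vectors: features 0 and 1 nearly parallel, feature 2 orthogonal.
emb = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
imp = [1.0, 0.9, 0.5]
print(select_diverse(emb, imp, 2))  # -> [0, 2]
```

Feature 1 is skipped despite its high importance because it is nearly redundant with feature 0 in the latent space, which is exactly the diversity effect the rule construction aims for.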

Graph Structures

Complex real-world problems usually involve many dependencies between real-world objects, and consequently in the registered data. Such dependencies are often described by various types of graphs. Simple approaches often ignore such relations for the sake of model simplicity. However, more and more modern AI/ML approaches use graph structures to model these complex dependencies. One such approach is Graph Neural Networks (GNNs).


GNNs are popular approaches for modeling social behaviors, classifying objects on the basis of their features and their relations to other objects, and predicting relations between objects in complex systems.

Figure: A graph structure in Session-based Recommender System with the Amazon Review dataset

Our experience with explaining graph structures covers deep learning approaches such as GNNExplainer and its extensions, as well as custom models with understandable explanation rules constructed by heuristic algorithms based on latent data representations derived from deep learning models. In our research, latent representations of graph structures come from Temporal Graph Networks (TGNs) and similar models.
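The intuition behind edge-level graph explanations can be shown with a much simpler stand-in: score each edge by how much removing it changes the prediction for a target node. GNNExplainer itself optimizes a soft edge mask by gradient descent rather than enumerating removals; the leave-one-edge-out scheme and the tiny mean-aggregation "GNN" below are simplifications for illustration only.

```python
def predict(node, features, edges):
    """Toy one-hop 'GNN': a node's output is the mean of its own feature
    and its neighbors' features (no learned weights, for brevity)."""
    neigh = [v for u, v in edges if u == node] + [u for u, v in edges if v == node]
    vals = [features[node]] + [features[n] for n in neigh]
    return sum(vals) / len(vals)

def edge_importance(node, features, edges):
    """Score each edge by the change it causes in the target node's
    prediction when removed (leave-one-edge-out perturbation)."""
    base = predict(node, features, edges)
    scores = {}
    for e in edges:
        rest = [f for f in edges if f != e]
        scores[e] = abs(predict(node, features, rest) - base)
    return scores

# Node 1 carries an outlier feature, so the edge (0, 1) should dominate.
features = {0: 1.0, 1: 5.0, 2: 1.2, 3: 1.1}
edges = [(0, 1), (0, 2), (0, 3), (2, 3)]
scores = edge_importance(0, features, edges)
print(scores)
```

The edge to the outlier node gets the highest score, while the edge (2, 3), outside node 0's one-hop neighborhood, scores zero; an explanation subgraph then simply keeps the top-scoring edges.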

additional features & perspectives

References:

[1] P. Lipinski, K. Balcer, Explaining Session-based Recommendations using Grammatical Evolution, https://doi.org/10.1145/3638530.3664156

[2] S. Wu et al., Session-based Recommendation with Graph Neural Networks, https://arxiv.org/abs/1811.00855

[3] F. Yu et al., TAGNN: Target Attentive Graph Neural Networks for Session-based Recommendation, https://arxiv.org/abs/2005.02844

[4] Z. Yue et al., TS2Vec: Towards Universal Representation of Time Series, https://arxiv.org/abs/2106.10466

[5] A. Fraikin, A. Bennetot, S. Allassonnière, T-Rep: Representation Learning for Time Series using Time-Embeddings, https://arxiv.org/abs/2310.04486

[6] R. Ying et al., GNNExplainer: Generating Explanations for Graph Neural Networks, https://arxiv.org/abs/1903.03894

[7] E. Rossi et al., Temporal Graph Networks for Deep Learning on Dynamic Graphs, https://arxiv.org/abs/2006.10637