An Explainer for Temporal Graph Neural Networks


Abstract

Temporal graph neural networks (TGNNs) have been widely used for modeling tasks on time-evolving graphs due to their ability to capture both graph topology dependencies and non-linear temporal dynamics. Explaining TGNNs is of vital importance for building transparent and trustworthy models. However, the complex topological structure and temporal dependencies make explaining TGNN models very challenging. In this paper, we propose a novel explainer framework for TGNN models. Given a time series on a graph to be explained, the framework can identify dominant explanations in the form of a probabilistic graphical model over a time period. Case studies in the transportation domain demonstrate that the proposed approach can discover dynamic dependency structures in a road network over a time period.
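To make the abstract's setting concrete, the sketch below illustrates the general shape of such an explainer interface: given a trained TGNN's prediction function, a node-feature time series, and a road-network adjacency matrix, it returns a small dependency subgraph as the explanation for a chosen time window. This is a hypothetical, simplified illustration using a perturbation-based edge-importance heuristic, not the probabilistic-graphical-model method proposed in the paper; all function and parameter names are assumptions.

```python
# Hypothetical sketch (not the paper's implementation): an explanation over a
# time window is a small subgraph of dependencies, here scored by how much
# masking each edge changes the TGNN's prediction on that window.
import numpy as np

def explain_window(predict, x, adj, window, top_k=5):
    """Return a top-k dependency subgraph explaining predictions on `window`.

    predict : callable (x_window, adj) -> np.ndarray, the trained TGNN forecast
    x       : (T, N, F) node-feature time series (e.g., traffic speeds)
    adj     : (N, N) binary adjacency matrix of the road network
    window  : (start, end) time indices the explanation should cover
    """
    start, end = window
    base = predict(x[start:end], adj)
    scores = {}
    for i, j in zip(*np.nonzero(adj)):
        masked = adj.copy()
        masked[i, j] = 0                                      # drop one candidate dependency
        pert = predict(x[start:end], masked)
        scores[(i, j)] = float(np.abs(base - pert).mean())    # effect of removing the edge
    top = sorted(scores, key=scores.get, reverse=True)[:top_k]
    expl = np.zeros_like(adj)
    for i, j in top:
        expl[i, j] = 1                                        # keep only dominant dependencies
    return expl, {edge: scores[edge] for edge in top}
```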

Publication
In 2022 IEEE Global Communications Conference
Wenchong He
Ph.D. Candidate in Computer Science

I am a Ph.D. candidate in the Department of Computer & Information Science & Engineering at the University of Florida. My broad research areas are data science, machine learning, and artificial intelligence. Specifically, my research focuses on spatiotemporal data mining, knowledge-informed machine learning, and trustworthy AI, as well as interdisciplinary scientific applications in climate science, environmental monitoring, and physics simulation. I am on the academic and industry job market for tenure-track faculty or research scientist positions.