Liang Zhao's Project Page 

CRII: III: Interpretable Models for Spatio-Temporal Social Event Forecasting
using Social Sensors

PI: Liang Zhao
Department of Information Science and Technology
George Mason University
Email: lzhao9 AT gmu DOT edu
[Project Summary] [Software] [Selected Publications] [Relevant Courses] [People]

Project Summary

In recent years, model interpretability has attracted rapidly increasing attention as machine learning is applied to ever more practical applications. For example, the General Data Protection Regulation (GDPR), effective May 25, 2018, mandates explanations for models that make important decisions. As a domain with significant impact on society, spatio-temporal social event forecasting particularly needs interpretable models in order to earn the trust of practitioners and become widely adopted in their everyday workflow. However, like conventional machine learning models, models for social event forecasting still focus primarily on prediction accuracy and are rapidly becoming too sophisticated and opaque to be easily understood by human operators. There is thus an urgent need for interpretable models in spatio-temporal social event forecasting to bridge the widening gap between data scientists and practitioners. To address this, the project will develop a novel spatio-temporal social event forecasting framework that jointly optimizes model accuracy and interpretability, and automatically illustrates the explanatory process of prediction generation.

The special characteristics of spatio-temporal social event forecasting, such as spatial dependency (i.e., non-i.i.d. data) and high-dimensional, large-scale data, pose unique challenges for constructing interpretable models. To address these issues, the PI will utilize conditional independence and spatial topology to promote the sparsity of spatial dependence patterns. The PI will then exploit the hierarchical conjunction lattice of primitive data features to enforce the conciseness and sparsity of expository high-level representations of the data. Finally, strategies for evaluating model interpretability in social event forecasting will be extensively investigated.
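The idea of using spatial topology to induce sparse, interpretable dependence patterns can be illustrated with a minimal sketch. This is not the project's actual method: the grid adjacency, the lasso-style soft-thresholding solver, and the function names (`grid_adjacency`, `fit_sparse_spatial`) are all illustrative assumptions. The sketch fits a linear next-step forecaster in which each location's prediction may depend only on itself and its spatial neighbors, so the learned coefficient matrix is both sparse and consistent with the spatial topology.

```python
import numpy as np

def grid_adjacency(rows, cols):
    """4-neighbor adjacency matrix for a rows x cols grid of locations."""
    n = rows * cols
    A = np.zeros((n, n), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (0, 1)):  # right and down neighbors
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    A[i, j] = A[j, i] = True
    return A

def soft_threshold(x, t):
    """Proximal operator of the L1 penalty (drives small entries to zero)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fit_sparse_spatial(X, y, A, lam=0.1, lr=0.01, iters=500):
    """Lasso-style regression y ~ X @ W, where W[i, j] is kept at zero
    unless locations i and j are identical or spatially adjacent (A).
    X, y: (time_steps, n_locations) arrays of per-location signals."""
    n_loc = A.shape[0]
    mask = (A | np.eye(n_loc, dtype=bool)).astype(float)
    W = np.zeros((n_loc, n_loc))
    for _ in range(iters):
        grad = X.T @ (X @ W - y) / len(X)          # squared-loss gradient
        W = soft_threshold(W - lr * grad, lr * lam)  # L1 proximal step
        W *= mask                                   # topology constraint
    return W
```

The topology mask encodes a conditional-independence assumption (distant locations do not interact directly), while the L1 step further sparsifies the surviving neighborhood coefficients, so the nonzero pattern of `W` is directly readable as a map of which neighbors drive each location's forecast.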


Selected Publications


Relevant Courses


Contact Information:

    Room 5343, Engineering Building
    George Mason University
    4400 Univ. Dr., Fairfax, VA 22030
    Telephone: +1 703-993-5910