Liang Zhao's Project Page 

CRII: III: Interpretable Models for Spatio-Temporal Social Event Forecasting
using Social Sensors

PI: Liang Zhao
Department of Information Science and Technology
George Mason University
Email: lzhao9 AT gmu DOT edu
[ Project Summary] [ Software ] [ Selected Publications ] [ Relevant Courses] [ People]

Project Summary

In recent years, model interpretability has attracted rapidly increasing attention as machine learning is applied to ever more practical domains. For example, the General Data Protection Regulation (GDPR) has mandated the interpretability of models that make important decisions since May 25, 2018. As a domain with significant societal impact, spatio-temporal social event forecasting particularly needs interpretable models in order to earn the trust of practitioners and become widely adopted in their everyday workflow. However, like conventional machine learning models, models for social event forecasting still focus primarily on prediction accuracy and are rapidly becoming too sophisticated and opaque for human operators to understand. There is thus an urgent need for interpretable models in spatio-temporal social event forecasting to bridge the widening gap between data scientists and practitioners. To address this need, this project will develop a novel spatio-temporal social event forecasting framework that jointly optimizes model accuracy and interpretability and automatically explains how its predictions are generated.

The special characteristics of spatio-temporal social event forecasting, such as spatial dependency (i.e., non-i.i.d. data) and high-dimensional, large-scale data, pose unique challenges for constructing interpretable models. To address these challenges, the PI will utilize conditional independence and spatial topology to boost the sparsity of spatial dependence patterns. The PI will then exploit the hierarchical conjunction lattice of primitive data features to enforce the conciseness and sparsity of expository high-level representations of the data. Finally, strategies for evaluating model interpretability in social event forecasting will be extensively investigated.
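To give a flavor of the first idea, the following is a minimal sketch, not the project's actual method, of how a known spatial topology can be combined with L1 regularization to learn a sparse, interpretable forecasting model: coefficients for regions outside a given adjacency mask are forced to zero (structural sparsity from the spatial topology), while the remaining coefficients are shrunk by soft-thresholding (L1 sparsity). All function names and the synthetic setup are assumptions for illustration.

```python
import numpy as np

def fit_spatially_sparse(X, y, adjacency_mask, lam=0.1, lr=0.01, n_iter=1000):
    """Proximal-gradient lasso with an extra structural constraint.

    X              : (n, d) matrix of region-level features
    y              : (n,)   event counts/indicators to forecast
    adjacency_mask : (d,)   1 for spatially adjacent (allowed) regions, 0 otherwise
    lam            : L1 penalty weight; lr: step size; n_iter: iterations
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n                 # gradient of 0.5 * MSE
        w = w - lr * grad                            # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold (L1)
        w = w * adjacency_mask                       # zero out non-adjacent regions
    return w

# Illustrative synthetic example: only regions 0 and 2 truly drive events,
# and the spatial topology rules out regions 3 and 4 a priori.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
y = X @ true_w + 0.01 * rng.standard_normal(200)
mask = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
w = fit_spatially_sparse(X, y, mask, lam=0.01, lr=0.05, n_iter=2000)
```

The recovered coefficient vector is sparse by construction, so a practitioner can read off which neighboring regions a forecast depends on, which is the kind of interpretability the project targets.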


Selected Publications

    Liang Zhao, Feng Chen, and Yanfang Ye. Efficient Learning with Exponentially-Many Conjunctive Precursors for Interpretable Spatial Event Forecasting. IEEE Transactions on Knowledge and Data Engineering (TKDE), (impact factor: 2.775), to appear, 2019.

    Liang Zhao, Amir Alipour-Fanid, Martin Slawski and Kai Zeng. Prediction-time Efficient Classification Using Feature Computational Dependencies. in Proceedings of the 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2018), research track (acceptance rate: 18.4%), London, United Kingdom, Aug 2018, to appear.

    Junxiang Wang, Liang Zhao, and Yanfang Ye. Semi-supervised Multi-instance Interpretable Models for Flu Shot Adverse Event Detection. 2018 IEEE International Conference on Big Data (BigData 2018) (acceptance rate: 18.9%), Seattle, USA, Dec 2018, to appear.

    Lei Zhang, Liang Zhao, Xuchao Zhang, Wenmo Kong, Zitong Sheng, and Chang-Tien Lu. Situation-Based Interpretable Learning for Personality Prediction in Social Media. 2018 IEEE International Conference on Big Data (BigData 2018) (acceptance rate: 18.9%), Seattle, USA, Dec 2018, to appear.


Relevant Courses


Contact information:

    Room 5343, Engineering Building
    George Mason University
    4400 Univ. Dr., Fairfax, VA 22030
    Telephone: +1 703-993-5910