Humanitarian & Social Informatics Lab, GMU
NSF RAPID Award # 2029719 RAPID/Collaborative Research: Human-AI Teaming for Big Data Analytics to Enhance Response to the COVID-19 Pandemic
Sponsor: National Science Foundation
Principal Investigator, GMU: Dr. Hemant Purohit
Collaborators:
Principal Investigator, UT Austin and Project Lead: Dr. Keri Stephens (Award # 2029692)
Principal Investigator, Brigham Young Univ.: Dr. Amanda Hughes (Award # 2029698)
Partner, Montgomery County Community Emergency Response Team, MD: Mr. Steve Peterson and Mr. Greg St. James

About


Social media data can provide important clues and local knowledge that help emergency managers and responders comprehend and capture the evolving nature of many disasters. Yet humans alone cannot grasp the vast amounts of data generated on social media, so computers are used to assist. Very little is currently known about how to leverage the skills of humans and machines when they work together (human-machine teaming) to identify meaningful patterns in social media data. The fundamental issues this Rapid Response Research (RAPID) project seeks to address are therefore: 1) understanding the real-time decisions that human digital volunteers make as they rapidly convert social media data into structured codes that the machine (Artificial Intelligence algorithms) can understand, and 2) using this knowledge to improve human-machine teaming.

This project advances the field by revealing the unique abilities that both humans and machines bring when working together to comprehend social media patterns during an evolving disaster. It supports education and diversity by providing research experiences to diverse students and by generating data useful for interdisciplinary courses on teamwork, social media analysis, and human-machine teaming. Finally, the findings can help emergency managers better train the volunteers who comb through social media, using their understanding of local knowledge and the built environment to help machines see new patterns in data. Hence, this project supports NSF's mission to promote the progress of science and to advance the nation's health, prosperity, and welfare by articulating the unique value that both humans and computers bring to decision-making during disasters.

The goal of this research is to better understand the real-time decisions that human annotators make under different environmental constraints, and how those decisions contribute to the learning of Artificial Intelligence (AI) models.
Under time constraints and information overload, human decision-making capabilities are limited; yet humans retain a unique ability to understand contextual references to structures in the built environment that machines cannot recognize. For example, the meaning of the tweet "Memorial is overloaded" -- that the hospital called Memorial is out of beds for patients -- can be lost on AI systems that lack knowledge of the built environment. This example illustrates the value that humans in the loop offer in a human-AI teaming context.
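To make the idea of "structured codes" concrete, the sketch below shows one hypothetical way a volunteer's annotation of the "Memorial" tweet could be recorded for a machine. The label set and field names are illustrative assumptions, not the project's actual coding scheme.

```python
# Hypothetical annotation record a digital volunteer might produce;
# the label vocabulary below is an illustrative assumption only.
from dataclasses import dataclass

LABELS = {"request_help", "offer_help", "infrastructure_status", "irrelevant"}

@dataclass
class Annotation:
    tweet_text: str
    label: str                 # one category from LABELS
    uses_local_context: bool   # did the decision rely on built-environment knowledge?

    def __post_init__(self):
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label}")

# The "Memorial is overloaded" example: a human recognizes the local hospital
# and encodes that contextual judgment in a form an AI model can consume.
a = Annotation("Memorial is overloaded", "infrastructure_status",
               uses_local_context=True)
```

A record like this is what lets the AI model learn from human contextual judgment that it could not make on its own.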
This research focuses on capturing ephemeral data from a variety of social media sources, and it has two research thrusts: 1) online observations of Community Emergency Response Team (CERT) volunteers and a manager (a collaborator on this project), using think-aloud and cognitive-interviewing strategies to reveal the real-time mental models behind coding decisions for annotation tasks; and 2) an empirical analysis of different sampling algorithms for active (machine) learning paradigms, to develop a typology of machine errors under the diverse contexts that affect the quality of human annotation decisions. This research will generate design guidelines that bridge the gap between the mechanisms used for real-time data processing with AI models and the contextual understanding contributed by a human user teaming with those models.

Using theories of human decision-making combined with knowledge of how AI functions, this project provides a real-time, mid-disaster examination of 1) how humans understand, process, and interpret social media messages, and 2) how to refine AI algorithms to optimize the active learning paradigm. This understanding will provide a theoretical framework that enables future research to develop protocols for optimizing human-AI teaming, using concepts such as motivation and information theory. This work can help emergency managers better train their CERT volunteers and other annotators, and it provides clearer guidelines for communicating the unique value that humans bring to the annotation process for AI systems. Both our protocols and the resulting understanding of how humans interact with AI systems will be helpful for global health organizations and local and state-level disaster decision-makers, and will provide direction for the vast CERT network in the United States.
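As a minimal illustration of the kind of sampling algorithm the second thrust examines, the sketch below implements least-confidence uncertainty sampling, one common strategy in active learning: the items the model is least sure about are routed to human annotators. The classifier outputs and data are toy stand-ins; the project evaluates multiple sampling algorithms, not this one in particular.

```python
# Least-confidence uncertainty sampling: a common active-learning strategy.
# Toy data; this is a sketch, not the project's actual pipeline.
import numpy as np

def least_confidence_sample(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k unlabeled items whose top predicted class is least confident."""
    confidence = probs.max(axis=1)     # model's confidence for each item
    return np.argsort(confidence)[:k]  # indices, least confident first

# Toy predicted class probabilities for 4 unlabeled tweets (2 classes).
probs = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain -> good candidate for human annotation
    [0.80, 0.20],
    [0.51, 0.49],   # most uncertain
])
queried = least_confidence_sample(probs, k=2)
# queried -> indices of the two most uncertain tweets: [3, 1]
```

In an active learning loop, the human labels for the queried items are fed back to retrain the model, which is where the quality of real-time human coding decisions directly shapes what the AI learns.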

Project Updates


Selected Publications


  • K. Stephens, K. Nader, A. Hughes, A. Harris, C. E. Montagnolo, A. Stevens, Y. Senarath, and H. Purohit. (2021). Conducting Online-Computer-Mediated Interviews and Observations: Challenges, Ethics, and Best Practices. Proceedings of the Hawaii International Conference on System Sciences (HICSS).

People


Faculty:
- Prof. Hemant Purohit

Students:
- Yasas Senarath (PhD research assistant)
- Rahul Pandey (PhD research assistant)

Contact


If you are interested in this project and would like to pursue a PhD or an MS thesis, please email h p u r o h i t _a_t_ g m u _d_o_t_ e d u with your resume, transcripts, and any prior research papers.