HUMAN-AUTOMATION INTERACTION

With increasing automation in all aspects of society, humans are being tasked with interacting and collaborating with autonomous systems in a variety of contexts. At the core of successful cooperation between human and machine is trust. To that end, we are interested in mathematically modeling how human trust, and other human behaviors, evolve during interactions with automated systems, and in how feedback control principles can be applied to achieve the promise of automation: improving human quality of life. Through collaborations with the REID Lab here at Purdue, we have published several articles on this topic that consider novel ways of estimating human trust via psychophysiological sensing, such as galvanic skin response (GSR) and electroencephalography (EEG), as well as via behavioral data. More recently, we extended our modeling to include human workload and experimentally demonstrated the use of adaptive user interface (UI) transparency to improve human performance when assisted by an automated decision aid.

We have several ongoing projects in our group that build on our foundational work in this area. These include human cognitive state-based control for automation designed to assist humans with skill building, and modeling of human trust across multiple timescales. We are also studying how human operators make decisions in process manufacturing and using machine learning techniques to assess how automation, combined with operator expertise, can achieve better and more consistent performance.

Graduate Research Assistant(s): Matthew Konishi, Jianqi Ruan, Katie Williams, and Madeleine Yuh

Recent Research Poster(s):
►Reimagining Human Machine Interactions Through Trust Based Feedback
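
As a rough illustration of the kind of trust estimation and feedback loop described above, the sketch below treats latent trust as a simple scalar linear dynamical state, estimates it from a noisy behavioral or physiological proxy with a one-dimensional Kalman filter, and chooses a UI transparency level from the estimate. All parameters, variable names, and the threshold policy here are hypothetical placeholders for illustration only; they are not the models, sensing pipelines, or interfaces from our publications.

```python
import numpy as np

# Illustrative, hypothetical parameters -- not taken from any published model.
A, B, C = 0.9, 0.1, 1.0   # trust dynamics, automation-performance gain, observation gain
Q, R = 0.01, 0.25         # process and measurement noise variances


def kalman_step(x_hat, P, u, y):
    """One predict/update cycle estimating latent trust from a noisy proxy.

    x_hat : current trust estimate;  P : estimate variance
    u     : automation performance cue (+1 correct, -1 faulty recommendation)
    y     : noisy behavioral/physiological proxy of trust (e.g., reliance rate)
    """
    # Predict: trust relaxes and is nudged by the automation's recent performance.
    x_pred = A * x_hat + B * u
    P_pred = A * P * A + Q
    # Update: correct the prediction using the observed proxy.
    K = P_pred * C / (C * P_pred * C + R)
    x_hat_new = x_pred + K * (y - C * x_pred)
    P_new = (1.0 - K * C) * P_pred
    return x_hat_new, P_new


def transparency_level(x_hat, low=0.3, high=0.7):
    """Toy feedback rule: show more rationale when estimated trust is low."""
    if x_hat < low:
        return "high transparency (show rationale and confidence)"
    if x_hat > high:
        return "low transparency (minimal detail)"
    return "medium transparency"


# Simulate a short interaction in which the automation errs once.
rng = np.random.default_rng(seed=0)
x_true, x_hat, P = 0.5, 0.5, 1.0
for k in range(5):
    u = -1.0 if k == 2 else 1.0
    x_true = A * x_true + B * u + rng.normal(0.0, np.sqrt(Q))
    y = C * x_true + rng.normal(0.0, np.sqrt(R))
    x_hat, P = kalman_step(x_hat, P, u, y)
    print(f"step {k}: trust estimate {x_hat:.2f} -> {transparency_level(x_hat)}")
```

In practice, the observation model would fuse features from psychophysiological sensing (e.g., GSR and EEG) with behavioral data, and the feedback policy governing transparency would be designed and validated experimentally rather than set by fixed thresholds.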