Divya Srivastava Presents at 5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021)

FEB 10, 2021 — 3rd-year CEC graduate student, Divya Srivastava, presents her work, “Effect of Interaction Design of Reinforcement Learning Agents on Human Satisfaction in Partially Observable Domains,” virtually at the 5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021). The work is coauthored by Spencer Frazier (GT’s Human-Centered AI Lab), Dr. Mark Riedl (GT’s Human-Centered AI Lab), and Dr. Karen Feigh.

ONR – Overall Decision Making Process Simulation


Decision makers are consistently asked to choose a course of action to achieve mission success regardless of the time pressure and the quantity and quality of information available. To be successful, they adapt their decision strategies to the environment and even use heuristics: simple rules that use little information and can be processed quickly. To support these decision makers, we are designing proactive decision support systems that support adaptive decision making along a range of analytic and heuristic strategies.

NASA Authority & Autonomy 2010-2013


NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Such analyses must consider the conditions that could impact system safety, including the environment; human behavior and operational procedures; methods of collaboration and organizational structures; and policies and regulations.

Agent-based simulation has shown promise toward modeling such complexity, but it requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that the modeled system meets safety properties, but they require component models of sufficiently limited scope to run to completion. By analyzing simulation traces, model checking can also help to ensure that the simulation’s design meets the intended analysis goals. Thus, leveraging both types of analysis can help to verify operational concepts addressing the allocation of authority and autonomy. To make analyses using both techniques more efficient, common representations for model components, methods for identifying the appropriate safety properties, and techniques for determining the set of analyses to run are required.
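To illustrate the complementary nature of the two techniques, the sketch below checks a safety property over exhaustively enumerated state traces, in the model-checking style described above. The state representation, the trace length, and the property itself ("exactly one agent holds authority at every step") are illustrative assumptions for this sketch, not the project's actual models or properties.

```python
# Hypothetical sketch: exhaustive trace enumeration with a safety check,
# contrasting model-checking-style coverage with sampled simulation runs.
from itertools import product

def safety_holds(trace):
    """Safety property (assumed for illustration): at every step,
    exactly one agent holds authority."""
    return all(sum(state.values()) == 1 for state in trace)

def enumerate_traces(steps):
    """Enumerate every possible authority-allocation trace of the given
    length, rather than sampling a handful of simulation runs."""
    states = [
        {"human": h, "automation": a}
        for h, a in product([0, 1], repeat=2)
    ]
    return product(states, repeat=steps)

# Collect counterexamples over all 3-step traces, as a model checker
# would report violations of the property.
violations = [t for t in enumerate_traces(3) if not safety_holds(t)]
print(f"{len(violations)} of {4**3} traces violate the property")
```

In a real analysis the state space would be far too large to enumerate naively, which is why the project pairs this kind of verification with agent-based simulation and with techniques for limiting model scope.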

This project is performed in association with Dr. Ellen Bass of Drexel University, Dr. Elsa Gunter of the University of Illinois, and John Rushby of SRI.