Research

Human-AI Interaction in Autonomous Aerial Vehicles

Looking 10-20 years into the future, in the rapidly advancing world of autonomous special ops, cargo, and medevac aircraft, all of the basic aviation functions will likely be handled completely and competently by AI agents embedded within air vehicles. Under nominal conditions (and many basic off-nominal situations) the AI-controlled vehicle will operate autonomously and independently without input from onboard personnel. However, no mission is ever completely nominal, and many open questions remain about how onboard personnel and the AI controlling the vehicle should collaborate effectively. Fluency is the quality of interaction between a human and a robot and has been used to evaluate many aspects of human-robot teaming. In this project, we ask: What is the impact of human-AI teaming fluency on mission effectiveness, and how can it be fostered and maintained?

The goal of this research is to enable the appropriate human-AI collaboration needed to deal with off-nominal events by (1) characterizing the challenges to fluency created by human biases and cognitive limitations as they impact human-AI interaction, (2) quantifying the impact of fluency on mission effectiveness, and (3) exploring and validating mitigation strategies. Specifically, we seek to understand the elements of fluency needed for an AI agent to seek and receive assistance from onboard personnel who have no direct training in piloting or AI programming. When devising mitigation strategies, our focus will be on mitigations that can be employed dynamically in response to the operator’s behaviors or cognitive state, or in response to drops in fluency. By deploying changes to the AI system and how it engages with the human team member, the AI system can mimic positive attributes of human teaming, whereby members change their behavior toward one another based on context and an assessment of what is needed to achieve the mission goals.

Investigation of Critical Attributes for Transparency and Operator Performance in Human Autonomy Teaming (TOPHAT) for Intelligent Mission Planning

Teams tend to be high-performing when they have an accurate shared mental model. A shared mental model (SMM) is an understanding of the exterior world, as well as of who within a team has both the ability to perform certain tasks and the responsibility to see that they are performed correctly. It incorporates understanding about who has access to what information and what communication mechanisms are in place. It also incorporates the prior experiences of the team, which team members can reference and leverage to reduce communication burdens.

While significant research has been conducted on SMMs developed within human-centric teams, less is understood about the importance of, and the mechanisms necessary to create and maintain, a shared mental model between humans and more sophisticated automation, i.e. autonomy, particularly the autonomy found in learning agents such as those powered by AI or machine learning. We wish to leverage the creative and adaptable capabilities of humans and the horsepower of machines to provide maximal task and team performance through human-autonomy teaming. In such cases, the SMM must exist in both the human mind and in the agent’s memory structures. It must be updatable, and changes must be communicated in both directions. And it must be used by the autonomous agent to reason and make decisions.

This research focuses primarily on understanding the kinds of mechanisms by which a shared mental model could be created, and changes passed to a human from an autonomous agent. We investigate the critical attributes that impact the formation and maintenance of a SMM between a human and an AI teammate to better understand how shared mental models can improve human-autonomy teaming by facilitating collaborative judgment and shared situational awareness.  

IEEE International Conference on Systems, Man, and Cybernetics: Impact of Missing Information and Strategy on Decision Making (Best Paper Award)

Decision makers frequently encounter environments without perfect information, in which factors such as the distribution of missing information and estimates of missing information significantly impact decision accuracy and speed. This work presents an experiment that modifies an environment with missing information (total information, option imbalance, cue balance) and examines user estimates of the missing information to understand how accuracy and decision speed respond under time pressure. Results indicate that, regardless of how missing information is estimated, certain distributions of missing information reduce decision accuracy. Results from this work also indicate that, beyond information distribution and estimation strategy, differences in the decision strategy adopted may explain significant differences in decision performance. High performers tend to ignore a greater percentage of information instead of attempting to estimate it, thereby adopting a strategy more heuristic in nature.

IEEE International Conference on Systems, Man, Cybernetics Presentation: Differentiating ‘Human in the Loop’ Decision Strategies

Recently, research by groups in academia, industry, and government has shifted toward the development of AI and machine learning tools to advise human decision-making in complex, dynamic problems. Within this collaborative environment, humans alone are burdened with the task of managing team strategy because the AI agent uses an unrealistic model of the human's decision-making process. This work investigates the use of an unsupervised machine learning method to enable AI systems to differentiate between human decision-making strategies, enabling improved team collaboration and decision support. An interactive experiment is designed in which human agents work in a complex decision-making environment (a storm-tracking interface) where the provided visual data sources change over time. Behavioral data from the human agents is collected, and a k-means clustering algorithm is used to identify individual decision strategies. This approach provides evidence of three distinct decision strategies that demonstrated similar degrees of success as measured by task performance. One cluster utilized a more analytic approach to decision-making, spending more time observing and interacting with each data source, while the other two clusters utilized more heuristic decision-making strategies. These findings indicate that if AI-based decision support systems use this approach to distinguish between human decision strategies in real time, the AI could develop an improved “awareness” of team strategy, enabling better collaboration with human teammates.
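The clustering step can be sketched in a few lines. This is a minimal illustration rather than the study's actual pipeline: the behavioral features, group structure, and parameter choices (k = 3, Euclidean distance on standardized features) are assumptions for demonstration only.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each participant to the nearest strategy centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        centroids = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Synthetic stand-in for behavioral data: 30 participants x 3 features
# (e.g. dwell time per source, interaction count, decision latency --
# hypothetical feature names, not the study's measures).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(10, 3)) for m in (0.0, 2.0, 4.0)])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each feature

labels, centroids = kmeans(X, k=3)
```

Standardizing features first matters because k-means uses Euclidean distance, so a feature measured on a larger scale (e.g. milliseconds of dwell time) would otherwise dominate the cluster assignments.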

Real-time guidance algorithms for helicopter shipboard landing

Helicopter shipboard landing is one of the most challenging operations for pilots to execute owing to random ship deck motion, turbulence due to airwake interactions, and poor visibility caused by sea spray, weather conditions, and night operations. Active research in this field has focused on developing schemes either to autonomously pilot the vehicle to land on the ship deck or to assist the pilot with elements such as guidance and visual cueing schemes, ship deck motion prediction, etc. The first portion of our research focused on developing a real-time guidance algorithm, using a Model Predictive Path Integral (MPPI) approach, to predict the helicopter’s future position and orientation, which is fed to the pilot as a visual cue.
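As a rough illustration of the MPPI idea (sample perturbed control sequences, roll each out through a model, exponentially weight the low-cost rollouts, and update the nominal sequence), here is a minimal sketch on a 1-D double integrator standing in for the vehicle's translational dynamics. The model, cost terms, and tuning constants are illustrative assumptions, not the guidance law developed in this work.

```python
import numpy as np

# 1-D double integrator: state = [position, velocity], u = acceleration.
DT, HORIZON, SAMPLES, LAMBDA, SIGMA = 0.1, 20, 200, 1.0, 0.5

def dynamics(state, u):
    x, v = state
    return np.array([x + v * DT, v + u * DT])

def cost(state, target):
    x, v = state
    return (x - target) ** 2 + 0.1 * v ** 2   # track target, damp speed

def mppi_step(state, u_nom, target, rng):
    """One MPPI update: sample perturbed controls, roll out, reweight."""
    noise = rng.normal(0.0, SIGMA, size=(SAMPLES, HORIZON))
    total = np.zeros(SAMPLES)
    for k in range(SAMPLES):
        s = state.copy()
        for t in range(HORIZON):
            s = dynamics(s, u_nom[t] + noise[k, t])
            total[k] += cost(s, target)
    # Exponentially weight low-cost rollouts (the path-integral update).
    w = np.exp(-(total - total.min()) / LAMBDA)
    w /= w.sum()
    return u_nom + w @ noise     # weighted noise refines the nominal plan

rng = np.random.default_rng(0)
state, u_nom, target = np.array([5.0, 0.0]), np.zeros(HORIZON), 0.0
for _ in range(50):                       # receding-horizon execution
    u_nom = mppi_step(state, u_nom, target, rng)
    state = dynamics(state, u_nom[0])     # apply first control only
    u_nom = np.roll(u_nom, -1)            # shift plan one step forward
    u_nom[-1] = 0.0
```

In the actual guidance scheme, the predicted future position and orientation from such rollouts would be rendered to the pilot as a visual cue rather than flown automatically.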

Since pilot workload issues are a limiting factor in defining allowable operating conditions for a given helicopter-ship combination, it is crucial to determine the impact of any new pilot-assist guidance-cueing scheme on pilot workload. The second portion of our research focuses on understanding the term pilot workload and on determining whether an objective metric can be developed by analyzing pilot control activity in the presence and absence of a guidance-cueing scheme. This research direction attempts to answer whether mental workload is captured in pilot control activity and to determine whether the introduction of a new guidance-cueing scheme alleviates or merely transfers pilot workload.
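By way of illustration, control-activity measures commonly used as workload proxies in the handling-qualities literature (RMS deflection, RMS control rate, and control reversals) can be computed directly from a recorded inceptor time history. The sketch below uses synthetic stick signals; it is not the project's metric, which remains the open research question described above.

```python
import numpy as np

def control_activity_metrics(stick, dt):
    """Simple workload proxies computed from an inceptor time history."""
    rate = np.diff(stick) / dt                    # control rate of change
    return {
        "rms_deflection": float(np.sqrt(np.mean(stick ** 2))),
        "rms_rate": float(np.sqrt(np.mean(rate ** 2))),        # "aggressiveness"
        "reversals": int(np.sum(np.diff(np.sign(rate)) != 0)), # direction flips
    }

# Synthetic stick signals: same amplitude, different frequency content.
t = np.arange(0.0, 10.0, 0.01)
calm = 0.2 * np.sin(2 * np.pi * 0.5 * t)   # low-frequency corrections
busy = 0.2 * np.sin(2 * np.pi * 3.0 * t)   # high-frequency compensation

m_calm = control_activity_metrics(calm, 0.01)
m_busy = control_activity_metrics(busy, 0.01)
```

Note that the two signals have nearly identical RMS deflection, so rate-based measures are what separate the "busy" pilot from the "calm" one; whether such separation actually tracks mental workload is exactly what the research seeks to determine.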

Impact of Shared Mental Models on Human-AI Interaction and Mission Effectiveness

Human teams are most effective when the members of the team utilize a shared mental model (SMM), meaning a shared perception of goals and actions achieved through effective communication and an understanding of fellow team members’ goals and likely methods. Currently, human-AI teams share no such model. At best, humans working closely with AI begin to anticipate what the AI can do and when it can be trusted, as is the case in medical decision making. But more commonly, as in the case of the Tesla Autopilot mistaking a truck for a cloud, the human often does not have sufficient insight or experience to understand when to distrust the AI.

These real-life examples highlight the fact that while AI is becoming more accurate, users often do not understand when it can be trusted and, more importantly, when it cannot be relied upon. By utilizing the concept of a shared mental model, I assert that human-AI teams can become more effective, reducing the dissonance between humans and AI systems.

The objective of this research is to develop a shared mental model that is accessible and updatable by both humans and AI, and to demonstrate that joint human-AI systems which include a shared mental model (SMM) perform better at dynamic decision-making tasks. Central to this research is the belief that the human must be supported in an intelligible way (meaning the human must have some understanding of the AI system) and that the AI must have an understanding of its human teammate. This concept of mutual understanding of the problems, goals, information cues, strategies, and roles of each teammate is referred to as the SMM.

We use a combination of theories from robotics, computer science, and psychology to develop a proactive AI agent that can advise a human decision-maker, acting as a teammate rather than a tool. This AI agent will improve human decision making by utilizing not only the problem parameters but also a developing cognizance of both the heuristic and analytic strategies that decision makers rely on during high-pressure decision tasks.

NSF NRI-Small: Understanding Neuromuscular Adaptations in Human-Robot Physical Interaction for Adaptive Robot Co-Workers

Force feedback mechanism

The goal of this award is to develop theories, methods, and tools to understand the mechanisms of neuromotor adaptation in human-robot physical interaction. Human power-assisting systems, e.g., powered lifting devices that aid human operators in manipulating heavy or bulky loads, require physical contact between the operator and machine, creating a coupled dynamic system. This coupling has been shown to introduce inherent instabilities and performance degradation due to changes in human stiffness: when instability is encountered, a human operator often attempts to control the oscillation by stiffening their arm, which leads to a stiffer, more unstable system. The project will establish control algorithms for robot co-workers that proactively adjust the contact impedance between the operator and robotic manipulator to achieve higher performance and stability. This research will 1) establish the association between neuromuscular adaptations and system performance limits, 2) develop probabilistic methods to classify and predict the transition of the operator’s cognitive and physical states from physiological measures, and 3) integrate this knowledge into a structure of shared human-robot control and demonstrate its efficacy in a powered lifting device with real-world constraints at vehicle assembly facilities.
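One simple way to picture proactive impedance adjustment is a variable-admittance scheme: the robot maps the measured interaction force to a commanded velocity and raises its damping when the recent force signal oscillates, a proxy for the operator stiffening their arm. The thresholds and gains below are illustrative assumptions, not the project's controller.

```python
import numpy as np

B_NOMINAL, B_MAX = 10.0, 200.0   # illustrative damping limits (N*s/m)

def admittance_velocity(force, damping):
    """Admittance law: commanded end-effector velocity v = f / b."""
    return force / damping

def schedule_damping(damping, force_window, var_threshold=4.0):
    """Raise damping 5% when recent force variance suggests oscillation
    (operator stiffening); otherwise relax back toward the nominal value."""
    if np.var(force_window) > var_threshold:
        return min(damping * 1.05, B_MAX)
    return max(damping * 0.99, B_NOMINAL)

steady = np.full(20, 5.0)                          # calm, constant push
oscillating = 5.0 + 4.0 * np.sin(np.arange(20))    # stiffened-arm chatter

b_calm = schedule_damping(B_NOMINAL, steady)        # stays at nominal
b_stiff = schedule_damping(B_NOMINAL, oscillating)  # damping increased
```

Increasing damping makes the rendered admittance less responsive, which quenches the human-robot oscillation at the cost of slower motion; the project's actual controllers additionally draw on physiological measures rather than force variance alone.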

If successful, the research will benefit communities interested in adaptive shared control approaches for advanced manufacturing and process design, including the automotive, aerospace, and military sectors. Such next-generation manufacturing is expected to improve productivity and reduce assembly time as well as the physical burden of assembly-line workers. Research outcomes will be integrated into current courses at both the graduate and undergraduate levels.

This work is in collaboration with Dr. Jun Ueda (PI), Dr. Minoru Shinohara, and Dr. Wayne Book.


Divya Srivastava Presents at 5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021)

FEB 10, 2021 — 3rd-year CEC graduate student, Divya Srivastava, presents her work, “Effect of Interaction Design of Reinforcement Learning Agents on Human Satisfaction in Partially Observable Domains,” virtually at the 5th International Conference on Human Computer Interaction Theory and Applications (HUCAPP 2021). The work is coauthored by Spencer Frazier (GT’s Human-Centered AI Lab), Dr. Mark Riedl (GT’s Human-Centered AI Lab), and Dr. Karen Feigh.

NSTRF – Decision Support System Development for Human Extravehicular Activity


Human spaceflight is arguably one of mankind’s most challenging engineering feats, requiring carefully crafted synergy between human and technological capabilities. One critical component of human spaceflight pertains to the activity conducted outside the safe confines of the spacecraft, known as Extravehicular Activity (EVA). Successful execution of EVAs requires significant effort and real-time communication between the astronauts who perform the EVA and the ground personnel who provide real-time support. As NASA extends human presence into deep space, the time delay associated with communication relays between the flight crew and support crew will cause a shift from a real-time to an asynchronous communication environment. Asynchronous communication has been identified in the literature as an operational issue that must be addressed to ensure future mission success. There is a need to infuse advanced technologies into onboard systems to support crew decision-making in the absence of ground support. A decision support system (DSS) is one possible solution to enhance astronauts’ capability to identify, diagnose, and recover from time-critical irregularities during EVAs without relying on real-time ground support.

The intent of this work is to (1) identify the system constraints on EVA operations, (2) develop the requirements for a DSS for operation within an asynchronous communication environment, (3) identify the characteristics of the DSS design that are likely to fulfill the DSS requirements, and (4) assess how well the prototyped DSS performs in an asynchronous EVA environment. The proposed research aims to examine how the EVA work domain is currently established, using a constraint-based cognitive engineering framework to inform the design of a DSS. The prototype will then undergo an iterative design and evaluation process within a simulated asynchronous EVA environment. This thesis will contribute the underlying science needed to design a DSS within the EVA work domain to enable future mission operations.