Human-AI Interaction in Autonomous Aerial Vehicles

Looking 10 to 20 years into the future, in the rapidly advancing world of autonomous special-ops, cargo, and medevac aircraft, all of the basic aviation functions will likely be handled completely and competently by AI agents embedded within air vehicles. Under nominal conditions (and in many basic off-nominal situations), the AI-controlled vehicle will operate autonomously and independently, without input from onboard personnel. However, no mission is ever completely nominal, and many open questions remain about how onboard personnel and the AI controlling the vehicle should collaborate effectively. Fluency, the quality of interaction between a human and a robot, has been used to evaluate many aspects of human-robot teaming. In this project, we ask: What is the impact of human-AI teaming fluency on mission effectiveness, and how can fluency be fostered and maintained?

The goal of this research is to enable the appropriate human-AI collaboration needed to deal with off-nominal events by (1) characterizing the challenges to fluency created by human biases and cognitive limitations as they impact human-AI interaction, (2) quantifying the impact of fluency on mission effectiveness, and (3) exploring and validating mitigation strategies. Specifically, we seek to understand the elements of fluency needed for an AI agent to seek and receive assistance from onboard personnel who have no direct training in piloting or AI programming. When devising mitigation strategies, our focus will be on mitigations that can be employed dynamically in response to the operator's behaviors or cognitive state, or in response to drops in fluency. By changing how the AI system engages with its human team member, it can mimic a positive attribute of human teaming: members adapting their behavior toward one another based on context and an assessment of what is needed to achieve the mission goals.
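
To make "drops in fluency" measurable, one simple starting point is the kind of objective fluency metrics used in the human-robot interaction literature (e.g., concurrent activity and idle time). Below is a minimal Python sketch assuming a hypothetical timeline of activity intervals; the interval format and the trigger threshold are illustrative, not values from this project.

```python
# Minimal sketch: computing objective teaming-fluency metrics from a
# timeline of activity intervals, then flagging when a mitigation might
# be warranted. The interval format and the 0.5 threshold are
# illustrative assumptions, not values from the project.

def fluency_metrics(human, ai, mission_time):
    """human, ai: lists of (start, end) activity intervals in seconds."""
    def busy_time(intervals):
        return sum(end - start for start, end in intervals)

    def overlap(a, b):
        total = 0.0
        for s1, e1 in a:
            for s2, e2 in b:
                total += max(0.0, min(e1, e2) - max(s1, s2))
        return total

    return {
        "concurrent_activity": overlap(human, ai) / mission_time,
        "human_idle": 1.0 - busy_time(human) / mission_time,
        "ai_idle": 1.0 - busy_time(ai) / mission_time,
    }

if __name__ == "__main__":
    metrics = fluency_metrics(
        human=[(0, 40), (70, 100)],
        ai=[(10, 60), (65, 95)],
        mission_time=100.0)
    print(metrics)
    # Example mitigation trigger: low concurrent activity may indicate
    # poor fluency and prompt the AI to change how it engages the human.
    if metrics["concurrent_activity"] < 0.5:
        print("fluency drop detected -> adapt interaction style")
```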

Investigation of Critical Attributes for Transparency and Operator Performance in Human Autonomy Teaming (TOPHAT) for Intelligent Mission Planning

Teams tend to be high-performing when they have an accurate shared mental model. A shared mental model (SMM) is the understanding of the exterior world, as well as of who within a team has both the ability to perform certain tasks and the responsibility to see that they are performed correctly. It incorporates an understanding of who has access to what information and what communication mechanisms are in place. It also incorporates the prior experiences of the team, which team members can reference and leverage to reduce communication burdens.

While significant research has been conducted on the SMMs developed within all-human teams, less is understood about the importance of, and the mechanisms necessary to create and maintain, a shared mental model between humans and more sophisticated automation, i.e., autonomy, particularly the autonomy found in learning agents such as those powered by AI or machine learning. We wish to leverage the creativity and adaptability of humans and the horsepower of machines to maximize task and team performance through human-autonomy teaming. In such cases, the SMM must exist both in the human mind and in the agent's memory structures. It must be updatable, and changes must be communicated in both directions. And it must be used by the autonomous agent to reason and make decisions.
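
As a concrete illustration of those three requirements, here is a minimal sketch of what an agent-side SMM store might look like; every class, field, and listener mechanism here is a hypothetical stand-in, not the project's actual design.

```python
# Minimal sketch of an agent-side shared mental model (SMM) store that
# reflects the three requirements above: it lives in the agent's memory,
# it is updatable from either teammate, and updates are pushed to the
# other teammate. All class and field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class SharedMentalModel:
    world_state: Dict[str, object] = field(default_factory=dict)     # exterior world
    task_owner: Dict[str, str] = field(default_factory=dict)         # who does what
    info_access: Dict[str, List[str]] = field(default_factory=dict)  # who can see what
    listeners: List[Callable] = field(default_factory=list)          # communication channels

    def update(self, source: str, key: str, value: object) -> None:
        """Apply a change from either teammate and notify the other."""
        self.world_state[key] = value
        for notify in self.listeners:
            notify(source, key, value)

    def responsible_for(self, task: str) -> str:
        """Let the agent reason over the model when allocating work."""
        return self.task_owner.get(task, "unassigned")

# Usage: the human interface and the autonomy each register a listener,
# so a change made by one side is communicated to the other.
smm = SharedMentalModel()
smm.listeners.append(lambda src, k, v: print(f"notify teammate: {src} set {k}={v}"))
smm.task_owner["route_planning"] = "agent"
smm.update(source="human", key="weather", value="deteriorating")
```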

This research focuses primarily on understanding the mechanisms by which a shared mental model can be created and by which changes can be passed from an autonomous agent to a human. We investigate the critical attributes that impact the formation and maintenance of an SMM between a human and an AI teammate, to better understand how shared mental models can improve human-autonomy teaming by facilitating collaborative judgment and shared situational awareness.

Real-time guidance algorithms for helicopter shipboard landing

Helicopter shipboard landing is one of the most challenging operations for pilots to execute, owing to random ship-deck motion, turbulence from airwake interactions, and poor visibility caused by sea spray, weather conditions, and night operations. Active research in this field has focused on developing schemes to either autonomously pilot the vehicle onto the ship deck or assist the pilot, for example through guidance and visual cueing schemes and ship-deck motion prediction. The first portion of our research focused on developing a real-time guidance algorithm, using a Model Predictive Path Integral (MPPI) approach, to predict the helicopter's future position and orientation, which is fed to the pilot as a visual cue.
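
For readers unfamiliar with MPPI, the sketch below illustrates its core sampling-and-reweighting update on a toy double-integrator standing in for helicopter dynamics; the cost function, dynamics, and all parameter values are illustrative assumptions, not those of our guidance algorithm.

```python
# Minimal MPPI sketch on a toy double-integrator: sample perturbed
# control sequences, roll out the dynamics, and reweight the nominal
# controls by exponentiated cost. Dynamics, cost, and all parameters
# here are illustrative assumptions.
import numpy as np

def mppi_step(x0, u_nom, target, K=256, H=30, dt=0.05, lam=1.0, sigma=0.5):
    """x0: state [px, py, vx, vy]; u_nom: (H, 2) nominal accelerations."""
    noise = np.random.randn(K, H, 2) * sigma          # control perturbations
    costs = np.zeros(K)
    for k in range(K):
        x = x0.copy()
        for t in range(H):
            u = u_nom[t] + noise[k, t]
            x[2:] += u * dt                            # velocity update
            x[:2] += x[2:] * dt                        # position update
            costs[k] += np.sum((x[:2] - target) ** 2) + 0.01 * np.sum(u ** 2)
    w = np.exp(-(costs - costs.min()) / lam)           # path-integral weights
    w /= w.sum()
    u_new = u_nom + np.einsum("k,kth->th", w, noise)   # weighted control update
    # Rolling out u_new from x0 yields the predicted future trajectory
    # that would be rendered to the pilot as the visual cue.
    return u_new

x0 = np.array([0.0, 0.0, 0.0, 0.0])
u = mppi_step(x0, np.zeros((30, 2)), target=np.array([5.0, -2.0]))
```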

Since pilot workload is a limiting factor in defining the allowable operating conditions for a given helicopter-ship combination, it is crucial to determine the impact of any new pilot-assist guidance-cueing scheme on pilot workload. The second portion of our research focuses on understanding what the term pilot workload means, and on determining whether an objective metric can be developed by analyzing pilot control activity in the presence and absence of a guidance-cueing scheme. This research direction attempts to answer whether mental workload is captured in pilot control activity and whether the introduction of a new guidance-cueing scheme alleviates pilot workload or merely transfers it.
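
One way such an objective metric might be built is from simple statistics of stick activity. The sketch below computes two candidate measures, RMS stick rate and a spectral cutoff frequency; both metrics and the 70% power threshold are assumptions for illustration, not validated workload measures from this research.

```python
# Minimal sketch of candidate objective workload metrics computed from
# pilot control (stick) activity: aggressiveness as RMS of stick rate,
# and a cutoff frequency below which most of the control power lies.
# The metrics and the 70% power threshold are illustrative assumptions.
import numpy as np

def control_activity_metrics(stick, dt, power_frac=0.70):
    """stick: 1-D array of stick deflection samples; dt: sample period [s]."""
    rate = np.diff(stick) / dt
    rms_rate = float(np.sqrt(np.mean(rate ** 2)))      # control aggressiveness

    spectrum = np.abs(np.fft.rfft(stick - stick.mean())) ** 2
    freqs = np.fft.rfftfreq(len(stick), dt)
    cumulative = np.cumsum(spectrum) / spectrum.sum()
    cutoff_hz = float(freqs[np.searchsorted(cumulative, power_frac)])
    return {"rms_stick_rate": rms_rate, "cutoff_hz": cutoff_hz}

# Comparing these metrics with and without the guidance cue, against
# subjective ratings, is one way to test whether workload is reflected
# in control activity.
t = np.arange(0, 30, 0.02)
stick = 0.2 * np.sin(2 * np.pi * 0.4 * t) + 0.02 * np.random.randn(len(t))
print(control_activity_metrics(stick, dt=0.02))
```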

Impact of Shared Mental Models on Human-AI Interaction and Mission Effectiveness

Human teams are most effective when their members utilize a shared mental model (SMM): a shared perception of goals and actions, built through effective communication and an understanding of fellow team members' goals and likely methods. Currently, human-AI teams share no such model. At best, humans working closely with AI begin to anticipate what the AI can do and when it can be trusted, as is the case in medical decision making. More commonly, as in the case of the Tesla Autopilot mistaking a truck for a cloud, the human does not have sufficient insight or experience to understand when to distrust the AI.

These real-life examples highlight the fact that while AI is becoming more accurate, users often do not understand when it can be trusted and, more importantly, when it cannot be relied upon. By utilizing the concept of a shared mental model, I assert that human-AI teams can become more effective and that the dissonance between humans and AI systems can be reduced.

The objective of this research is to develop a shared mental model that is accessible and updatable by both humans and AI, and to demonstrate that joint human-AI systems that include a shared mental model (SMM) perform better at dynamic decision-making tasks. Central to this research is the belief that the human must be supported in an intelligible way (meaning the human must have some understanding of the AI system) and that the AI must have an understanding of its human teammate. This mutual understanding of the problems, goals, information cues, strategies, and roles of each teammate is what we refer to as the SMM.

We use a combination of theories from robotics, computer science, and psychology to develop a proactive AI agent that can advise a human decision-maker, acting as a teammate rather than a tool. This AI agent will improve human decision making by utilizing not only the problem parameters but also a developing cognizance of the heuristic and analytic strategies that decision makers rely on during high-pressure decision tasks.
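
The sketch below illustrates one hypothetical form such cognizance could take: inferring the decision-maker's current strategy from coarse behavioral features and formatting advice to match. The features, thresholds, and advice formats are all illustrative assumptions, not the agent's actual design.

```python
# Minimal sketch of a proactive advisor that infers whether the human is
# using a heuristic or analytic strategy (from how many cues they inspect
# and how fast they respond) and tailors its advice accordingly. The
# features, thresholds, and advice formats are illustrative assumptions.

def infer_strategy(cues_inspected: int, total_cues: int, response_time_s: float) -> str:
    # Few cues and fast responses suggest a heuristic strategy;
    # broad cue usage and longer deliberation suggest an analytic one.
    if cues_inspected <= total_cues // 3 and response_time_s < 5.0:
        return "heuristic"
    return "analytic"

def advise(strategy: str, recommendation: str, top_cue: str, all_cues: dict) -> str:
    if strategy == "heuristic":
        # Match the heuristic: lead with the single most diagnostic cue.
        return f"Recommend {recommendation}: {top_cue} is decisive."
    # Match the analytic strategy: expose the full evidence table.
    details = ", ".join(f"{k}={v}" for k, v in all_cues.items())
    return f"Recommend {recommendation}. Evidence: {details}."

cues = {"radar_contact": "strong", "IFF": "none", "speed": "high"}
style = infer_strategy(cues_inspected=1, total_cues=3, response_time_s=3.2)
print(advise(style, "intercept", "radar_contact", cues))
```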

NSTRF – Decision Support System Development for Human Extravehicular Activity

Human spaceflight is arguably one of mankind's most challenging engineering feats, requiring carefully crafted synergy between human and technological capabilities. One critical component of human spaceflight is the activity conducted outside the safe confines of the spacecraft, known as Extravehicular Activity (EVA). Successful execution of EVAs requires significant effort and real-time communication between the astronauts performing the EVA and the ground personnel supporting them. As NASA extends human presence into deep space, the time delay associated with communication relays between the flight crew and support crew will force a shift from a real-time to an asynchronous communication environment. Asynchronous communication has been identified in the literature as an operational issue that must be addressed to ensure future mission success. There is a need to infuse advanced technologies into onboard systems to support crew decision-making in the absence of ground support. A decision support system (DSS) is one possible solution for enhancing astronauts' capability to identify, diagnose, and recover from time-critical irregularities during EVAs without relying on real-time ground support.

The intent of this work is to (1) identify the system constraints on EVA operations, (2) develop the requirements for a DSS that operates within an asynchronous communication environment, (3) identify the characteristics of a DSS design that are likely to fulfill those requirements, and (4) assess how well the prototyped DSS performs in asynchronous EVA operations. The proposed research examines how the EVA work domain is currently structured, using a constraint-based cognitive engineering framework to inform the design of a DSS. The prototype will then undergo an iterative design and evaluation process within a simulated asynchronous EVA environment. This thesis will contribute the underlying science needed to design a DSS for the EVA work domain and enable future mission operations.

ONR – Interactive Machine Learning

We are interested in machines that can learn new things from people who are not Machine Learning (ML) experts. We propose a research agenda framed around the human factors (HF) and ML research questions of teaching an agent via demonstration and critique. Ultimately, we will develop a training simulation game with several non-player characters, all of which can be easily taught new behaviors by an end user.

With respect to the Science of Autonomy, this proposal is focused on Interactive Intelligence. We seek to understand how an automated system can partner with a human to learn how to act and reason about a new domain. Interactive learning machines that adapt to the needs of a user have long been a goal of AI research. Machine Learning (ML) promises a way to build adaptive systems while avoiding tedious pre-programming (which in sufficiently complex domains is almost impossible); however, we have yet to see many successful applications where machines learn from everyday users. ML techniques are not designed for input from naïve users, remaining by and large a tool built by experts for experts.

Many prior efforts to design machine learning systems with human input pose the problem as: "what can I get the human to do to help my machine learn better?" Because our goal is for systems to learn from everyday people, we instead reframe the problem as: "how can machines take better advantage of the input that an everyday person is able to provide?" This approach, Interactive Machine Learning (IML), brings human factors to the problem of machine learning. IML has two major complementary research goals: (1) to develop interaction protocols that let people teach an ML agent in a way they find natural and intuitive, and (2) to design ML algorithms that take better advantage of a human teacher's guidance; that is, to understand formally how to optimize the information source that is humans, even when those humans have imperfect models of the learning algorithms or suboptimal policies themselves. Our research agenda addresses both of these IML research questions in two complementary types of learning interactions (a minimal sketch follows the list):

  • Learning from Demonstrations (LfD)—A human teacher provides demonstrations of the desired behavior in a given task domain, from which the agent infers a policy of action.
  • Learning from Critique—A human teacher watches the agent act and critiques its behavior with high-level feedback.
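
Here is the promised sketch of both interaction types on a toy tabular task; the states, actions, and preference-update rule are illustrative assumptions rather than the project's actual algorithms.

```python
# Minimal sketch of both interaction types on a tabular task: Learning
# from Demonstration estimates a policy from state-action pairs, and
# Learning from Critique nudges action preferences with +/- feedback.
# The toy states, actions, and update rule are illustrative assumptions.
from collections import defaultdict

prefs = defaultdict(lambda: defaultdict(float))  # state -> action -> preference

def learn_from_demonstration(demos):
    """demos: list of (state, action) pairs provided by the teacher."""
    for state, action in demos:
        prefs[state][action] += 1.0

def learn_from_critique(state, action, feedback):
    """feedback: +1 ('good') or -1 ('bad') on observed behavior."""
    prefs[state][action] += feedback

def act(state):
    """Pick the currently most-preferred action in a state."""
    return max(prefs[state], key=prefs[state].get)

# Teacher demonstrates fleeing a nearby ghost, then critiques a mistake.
learn_from_demonstration([("ghost_near", "flee"), ("ghost_near", "flee"),
                          ("pellet_near", "eat")])
learn_from_critique("ghost_near", "chase", feedback=-1)
print(act("ghost_near"))  # -> "flee"
```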

This project is a joint effort with Dr. Andrea Thomaz (PI), Dr. Charles Isbell, and Dr. Mark Riedl.

ONR STTR – Designing Contextual Decision Support Systems

This project supports improved decision making under high-stress, uncertain operational conditions through the development of proactive, context-based decision support aids. The objective is to create a scientifically principled design specification and prototype concepts for a set of decision aids capable of supporting decision making and judgment across multi-faceted missions with dynamic tasking requirements. The result will be a consistent approach to proactive decision support that will facilitate rapid, affordable development for different functions in the combat center, minimize training, ease insertion into combat systems, and increase end-user adoption and utilization.

ONR – Overall Decision Making Process Simulation

Decision makers are consistently asked to make decisions about the course of action required to achieve mission success, regardless of time pressure and the quantity and quality of information available. To be successful, they adapt their decision strategies to the environment and even use heuristics: simple rules that use little information and can be processed quickly. To support these decision makers, we are designing proactive decision support systems that support adaptive decision making across a range of analytic and heuristic strategies.
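
To make the contrast concrete, the sketch below compares an analytic weighted-additive rule with take-the-best, a fast-and-frugal heuristic from the decision-making literature; the cues, validities, and option values are illustrative assumptions.

```python
# Minimal sketch contrasting an analytic strategy (weighted-additive
# scoring over all cues) with a fast-and-frugal heuristic (take-the-best:
# decide on the single most valid discriminating cue). Cue validities
# and option values here are illustrative assumptions.

def weighted_additive(option_a, option_b, validities):
    """Analytic: integrate every cue, weighted by its validity."""
    score = lambda opt: sum(validities[c] * opt[c] for c in validities)
    return "A" if score(option_a) >= score(option_b) else "B"

def take_the_best(option_a, option_b, validities):
    """Heuristic: check cues in validity order; stop at the first that discriminates."""
    for cue in sorted(validities, key=validities.get, reverse=True):
        if option_a[cue] != option_b[cue]:
            return "A" if option_a[cue] > option_b[cue] else "B"
    return "A"  # tie-break when no cue discriminates

validities = {"sensor_quality": 0.9, "recency": 0.7, "source_count": 0.6}
a = {"sensor_quality": 1, "recency": 0, "source_count": 0}
b = {"sensor_quality": 0, "recency": 1, "source_count": 1}
# The two strategies can disagree: analytic integration favors B,
# while the single best cue favors A.
print(weighted_additive(a, b, validities), take_the_best(a, b, validities))
```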

NASA Authority & Autonomy 2010-2013

NextGen systems are envisioned to be composed of human and automated agents interacting with dynamic flexibility in the allocation of authority and autonomy. The analysis of such concepts of operation requires methods for verifying and validating that the range of roles and responsibilities potentially assignable to the human and automated agents does not lead to unsafe situations. Such analyses must consider the conditions that could impact system safety, including the environment, human behavior and operational procedures, methods of collaboration and organizational structures, and policies and regulations.

Agent-based simulation has shown promise for modeling such complexity, but it requires a tradeoff between fidelity and the number of simulation runs that can be explored in a reasonable amount of time. Model checking techniques can verify that a modeled system meets safety properties, but they require component models of sufficiently limited scope to run to completion. By analyzing simulation traces, model checking can also help ensure that the simulation's design meets the intended analysis goals. Leveraging both types of analysis methods can thus help verify operational concepts addressing the allocation of authority and autonomy. To make the analyses using both techniques more efficient, common representations for model components, methods for identifying the appropriate safety properties, and techniques for determining the set of analyses to run are required.
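
As a toy illustration of applying a safety property to simulation output, the sketch below monitors a trace of simulated states for violations; the trace format and the example authority-assignment property are hypothetical, not drawn from the project's actual models.

```python
# Minimal sketch of checking agent-based simulation traces against a
# safety property, in the spirit of combining simulation with model
# checking ideas. The trace format and the example property ("during a
# critical flight phase, some agent holds authority") are illustrative
# assumptions.

def holds_on_trace(trace, prop):
    """Return the first (step, state) violating prop, or None if the trace is safe."""
    for step, state in enumerate(trace):
        if not prop(state):
            return step, state
    return None

# Safety property: authority must never be unassigned in a critical phase.
safe_authority = lambda s: s["phase"] != "critical" or s["authority"] is not None

trace = [
    {"phase": "cruise",   "authority": "automation"},
    {"phase": "critical", "authority": "pilot"},
    {"phase": "critical", "authority": None},   # injected violation
]
violation = holds_on_trace(trace, safe_authority)
print("violation at step", violation[0] if violation else "none")
```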

This project is performed in association with Dr. Ellen Bass of Drexel University, Dr. Elsa Gunter of the University of Illinois, and John Rushby of SRI.