Many aviation missions today are accomplished by a heterogeneous crew of pilots and mission specialists. As fully Automated Pilots (AP) are integrated into aviation crews, effective teaming will be necessary for safety assurance and mission effectiveness. This flight simulator study explored teaming between a non-pilot human operator and an AP collaborating on a maritime Intelligence, […]
Richard Agbeyibor, Vedant Ruia, and Jack Kolb have their paper accepted to the 2024 Human Factors and Ergonomics Society (HFES) Conference
This study explores human-autonomy collaboration between a future autonomous pilot and a human crew member pursuing a joint Intelligence, Surveillance, and Reconnaissance (ISR) mission. We introduce a novel open-source autonomous ISR interaction domain simulating real-world scenarios. As aviation increasingly integrates autonomy, our focus lies in understanding how various autonomous capabilities and interface features affect trust, […]
Richard Agbeyibor, Vedant Ruia, and Jack Kolb have their paper accepted to the AIAA Aviation Forum and ASCEND
The maturation of autonomy for electric vertical take-off and landing aircraft will soon make it possible to execute military intelligence, surveillance, and reconnaissance (ISR) missions aboard crewed autonomous aerial vehicles. This research experimentally investigates factors that may influence the quality of interaction (i.e., team fluency) between a non-pilot human operator and the AI pilot responsible for […]
Yosef Razin’s article on HRI trust accepted to ACM Transactions on Human-Robot Interaction
Abstract: Trust is crucial for technological acceptance, continued usage, and teamwork. However, human-robot trust, and human-machine trust more generally, suffer from terminological disagreement and construct proliferation. By comparing, mapping, and analyzing well-constructed trust survey instruments, this work uncovers a consensus structure of trust in human-machine interaction. To do so, we identify the most frequently cited […]
Simulated Mental Models and Active Replanning for Human-Robot Interaction
This project introduces a communication framework to facilitate efficient information synchronization between an autonomous system and a human operator in scenarios where instant data transfer is not available. We utilize mental models to represent the system’s high-level state and employ a replanning algorithm to adapt to the dynamically changing environment in real time. Our work […]
Human-AI Interaction in Autonomous Aerial Vehicles: A MedEvac Scenario
This project explores the interaction between human operators (novice flight medics) and AI pilots in autonomous aerial vehicles during medical evacuation situations. The primary objective is to evaluate how changes in workload and cognitive biases influence the fluency of human-AI interaction and overall mission effectiveness. Through simulated medical evacuation scenarios, this research seeks to assess […]
STARLIT – xGEO Wargames
This project aims to design wargames for space scenarios spanning the next 20 years, particularly in the xGEO domain. The xGEO domain presents new challenges because many operators lack experience in it, resulting in training gaps. We are focused on designing scenarios specific to cislunar space. Starting now, this project is […]
Exploring Shared Mental Models in Household Human-Robot Teams
In human-human teams, we often infer the situation awareness of our teammates to inform our planning and decision-making. What if we applied this to human-AI teams? This project explores how autonomous systems can estimate the belief states of human teammates. We deploy a robot agent to a collaborative household cooking domain, where the agent constructs […]
Project SURI: Multi-Phenomenological, Autonomous, and Understandable SDA and XDA Decision Support
As a multi-institutional effort between CU Boulder, Georgia Tech, and Texas A&M, this research leverages developments in cognitive engineering, autonomy, and decision-making in the context of modern astrodynamics to improve multi-target tracking and dim object detection. While CU Boulder and Texas A&M focus on sensor exploitation, placement, and processing, the CEC […]
Multimodal Cueing Systems for Rotorcraft Performance in High-Workload Environments
The Vertical Lift Research Center of Excellence (VLRCOE) Rotorcraft Flight Simulation Laboratory conducts research into cueing modalities that improve situational awareness and reduce pilot workload during critical phases of flight, across a broad spectrum of maritime and land-based flight operations. Our research is multidisciplinary, drawing from the fields of aviation […]