Understanding Human Decision Processes: Inferring Decision Strategies From Behavioral Data
Sarah E. Walsh and Karen M. Feigh
Full content can be found at: https://journals.sagepub.com/doi/pdf/10.1177/15553434221122899
The role of shared mental models in human-AI teams: a theoretical review
Robert W. Andrews, J. Mason Lilly, Divya Srivastava & Karen M. Feigh
Full content can be found at: https://www.tandfonline.com/doi/pdf/10.1080/1463922X.2022.2061080
11/11/2022:
Divya Srivastava, 5th year Mechanical Engineering student, successfully proposed her PhD thesis entitled ‘Transparency and Operator Performance in Human-Autonomy Teams.’
Summary:
Human-autonomy teams aim to leverage the respective strengths of humans and autonomous systems to exceed the individual capabilities of each through collaboration. Highly effective human teams develop and utilize a shared mental model (SMM): a synchronized understanding of the external world and of the tasks, responsibilities, capabilities, and limits of each team member. Recent works assert that the same should apply to human-autonomy teams; however, contemporary AI commonly consists of “black box” systems, whose internal processes cannot easily be viewed or interpreted. Users can easily develop inaccurate mental models of such systems, impeding SMM development and thus team performance.
This thesis seeks to support the human’s side of Human-AI SMMs in the context of AI-advised Decision Making, a form of teaming in which an AI suggests a solution to a human operator, who is responsible for the final decision. This work focuses on improving shared situation awareness by providing more context for the AI’s internal processing, which should lead the human to a more accurate mental model of the task and the AI, and to improved team performance. It will provide a validated approach for researchers and system designers to elicit and measure human mental models of AI, a quantitative link between the factors that influence human mental models and human-autonomy team performance in the context of explainable AI, and, finally, empirically grounded design guidance for increasing non-algorithmic transparency in human-autonomy teams that can be applied to other domains.
The 2022 IEEE International Conference on Human-Machine Systems (ICHMS), held in Orlando, Florida, conducted a Doctoral Research Award Competition (DAC) for doctoral research contributions. Contributions were ranked by a conference review team on both the paper submission and the conference presentation. Two CEC Lab members won awards: Sarah Walsh (5th year Robotics PhD Candidate) and Divya Srivastava (5th year Mechanical Engineering PhD Candidate).
1st Place – Sarah Walsh with co-author Karen Feigh
“Consideration of Strategy-specific Adaptive Decision Support”
2nd Place – Divya Srivastava with co-authors J. Mason Lilly and Karen M. Feigh
“The Impact of Improving Shared Situation Awareness on AI-Advised Decision Making”
3rd Place – Jiancheng Nie with co-authors Yusuke Sugahara and Yukio Takeda
“Design of Wearable Robotic Support Limbs for Walking Assistance Based on Configurable Support Polygon”
Each awardee received a commemorative plaque, identifying the conference and contribution. Awardees also received an honorarium from the conference.
Recently, research by groups in academia, industry, and government has shifted toward the development of AI and machine learning tools to advise human decision-making in complex, dynamic problems. Within this collaborative environment, humans alone are burdened with the task of managing team strategy because the AI-agent relies on an unrealistic model of the human-agent’s decision-making process. This work investigates the use of an unsupervised machine learning method to enable AI systems to differentiate between human decision-making strategies, enabling improved team collaboration and decision support. An interactive experiment is designed in which human-agents face a complex decision-making environment (a storm tracking interface) whose visual data sources change over time. Behavioral data from the human-agent is collected, and a k-means clustering algorithm is used to identify individual decision strategies. This approach provides evidence of three distinct decision strategies that demonstrated similar degrees of success as measured by task performance. One cluster utilized a more analytic approach to decision-making, spending more time observing and interacting with each data source, while the other two clusters utilized more heuristic decision-making strategies. These findings indicate that if AI-based decision support systems use this approach to distinguish between human-agents’ decision strategies in real time, the AI could develop an improved “awareness” of team strategy, enabling better collaboration with human teammates.
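To illustrate the clustering step described above, the following is a minimal sketch of how behavioral features could be grouped into decision strategies with k-means. It assumes Python with scikit-learn, and the feature names (dwell time per data source, interaction count, decision time) and values are hypothetical placeholders rather than the study’s actual pipeline or data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical behavioral features, one row per participant-trial:
# [dwell_radar_s, dwell_satellite_s, dwell_surface_s, n_interactions, decision_time_s]
# Values are illustrative placeholders, not the study's data.
features = np.array([
    [42.0, 31.5, 18.2, 14, 95.0],
    [39.7, 28.9, 20.4, 12, 88.3],
    [12.3,  8.1,  5.4,  4, 30.2],
    [15.8,  9.7,  6.1,  5, 34.8],
    [ 6.2, 22.4,  3.0,  3, 28.5],
    [ 7.9, 25.1,  2.6,  4, 31.7],
])

# Standardize so no single feature dominates the Euclidean distance.
X = StandardScaler().fit_transform(features)

# k=3 mirrors the three strategies reported above; in practice k would be
# selected with a criterion such as the silhouette score or the elbow method.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
strategy_labels = kmeans.fit_predict(X)

print(strategy_labels)          # cluster assignment for each trial
print(kmeans.cluster_centers_)  # standardized "strategy profiles"
```

Standardizing the features keeps any single behavioral measure from dominating the distance metric, and the resulting cluster centers can be read as interpretable strategy profiles (e.g., an analytic cluster with long dwell times and many interactions versus heuristic clusters with shorter, more selective use of the data sources).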
Human teams are most effective when the members of the team utilize a shared mental model (SMM): a shared perception of goals and actions built through effective communication and an understanding of fellow team members’ goals and likely methods. Currently, human-AI teams share no such model. At best, humans working closely with AI begin to anticipate what the AI can do and when it can be trusted, as is the case in medical decision making. More commonly, as in the case of the Tesla Autopilot mistaking a truck for a cloud, the human does not have sufficient insight or experience to understand when to distrust the AI.
These real-life examples bring to light the fact that while AI is becoming more accurate, users often do not understand when it can be trusted and, more importantly, when it cannot be relied upon. I assert that, by utilizing the concept of a shared mental model, human-AI teams can become more effective and reduce the dissonance between humans and AI systems.
The objective of this research is to develop a shared mental model that is accessible and updatable by both humans and AI, and to demonstrate that joint human-AI systems which include a shared mental model (SMM) perform better at dynamic decision-making tasks. Central to this research is the belief that the human must be supported in an intelligible way (meaning the human must have some understanding of the AI system) and that the AI must have an understanding of its human teammate. This concept of mutual understanding of the problems, goals, information cues, strategies, and roles of each teammate is referred to as the SMM.
We use a combination of theories from robotics, computer science, and psychology to develop a proactive AI-agent that can advise a human decision-maker, acting as a teammate rather than a tool. This AI-agent will improve human decision making by utilizing not only the problem parameters but also a cognizance of both the heuristic and analytic strategies that decision makers rely on during high-pressure decision tasks.