Abstract: With increasing opportunities to integrate AI-based decision support tools, human interactions with AI systems must be studied across different contexts. The quality of such interactions is believed to improve when humans hold a mental model (MM) of their AI teammates. This study tests this hypothesis for a hierarchical human-AI triad in a decision-making setting. We examine the impact of humans’ MM of the AI on various performance metrics in the presence and absence of mutually compatible MMs between the two agents; mutually incompatible models lead to dissonance between the agents, causing an overall degradation in their joint activity. Results showed that operators exercised increased vigilance when they lacked a MM of their AI teammates, whereas having an accurate MM improved decision utility over time without reducing speed or increasing participants’ task load. An accurate MM also led to greater acceptance of the AI’s suggestions without inducing bias toward the AI. Additionally, during dissonance, operators reported less effort and mental demand and judged the relative competence of the two agents more accurately. These findings motivate further research into how different levels of MMs between humans and their AI teammates lead to different outcomes in complex collaborative settings.