SUMMARY
Human-autonomy teams aim to leverage the complementary strengths of humans and autonomous systems so that, through collaboration, the team exceeds the individual capabilities of either member. Highly effective human teams develop and utilize a shared mental model (SMM): a synchronized understanding of the external world and of the tasks, responsibilities, capabilities, and limits of each team member. Recent work asserts that the same should apply to human-autonomy teams; however, contemporary AI commonly consists of "black box" systems whose internal processes cannot easily be viewed or interpreted. Users can easily develop inaccurate mental models of such systems, impeding SMM development and thus team performance.

This thesis seeks to support the human's side of human-AI SMMs in the context of AI-advised decision making, a form of teaming in which an AI suggests a solution to a human operator, who is responsible for the final decision. This work focuses on improving shared situation awareness by providing more context for the AI's internal processing, which should lead the human to a more accurate mental model of the task and the AI, and thereby to improved team performance. The thesis will provide a validated method by which researchers and system designers can elicit and measure human mental models of AI, a quantitative link between factors that influence human mental models and human-autonomy team performance, and, finally, empirically grounded design guidance for increasing non-algorithmic transparency in human-autonomy teams, so that the guidance can be applied to other domains.