SUMMARY
AI-advised Decision Making is a form of human-autonomy teaming in which an AI recommender system suggests a solution to a human operator, who is responsible for the final decision. This dissertation seeks to empower these human decision makers, and to make the most effective use of them, by supporting their cognitive process of judgement. We propose doing so by providing the decision maker with the relevant information that the AI uses to generate possible courses of action, as an alternative to explaining or interpreting complex AI models. This dissertation contributes a technique for supporting the human's judgement process that is effective in (1) boosting the human decision maker's situation awareness and task performance, (2) calibrating their trust in AI teammates, and (3) reducing overreliance on an AI partner. Additionally, participants were able to determine the AI's error boundary, which enabled them to recognize when to rely on the AI's advice and when not to. These and associated findings are then summarized as design guidance for increasing non-algorithmic transparency in human-autonomy teams, so that the guidance can be applied to other domains.