SUBJECT: M.S. Thesis Presentation
   
BY: Hayden Nichols
   
TIME: Wednesday, April 13, 2022, 12:30 p.m.
   
PLACE: GTMI, 101
   
TITLE: Leveraging Adversarial Machine Learning Techniques for Deceptive Sampling-Based Motion Planning
   
COMMITTEE: Dr. Anirban Mazumdar, Chair (ME)
Dr. Jun Ueda (ME)
Dr. Jonathan Rogers (AE)
 

SUMMARY

There are many applications in which a mobile agent wants to keep its intent hidden from an observer. Beyond concealment, the agent may also wish to act deceptively, conveying an intent other than its true objective. Examples include preserving privacy in a high-surveillance environment or confusing an opponent in an adversarial setting.

However, this desire for deception can conflict with the need for a low path cost. Optimal planners such as RRT* produce low-cost paths, but optimality is correlated with predictability: an observer can often infer the intent of an agent that moves optimally. Conversely, a deceptive path that maneuvers to confuse the observer may take too long to reach the goal.

The work presented in this thesis balances these conflicting objectives by drawing inspiration from adversarial machine learning. It introduces a novel planning algorithm, dubbed Adversarial RRT*, which attempts to deceive machine learning classifiers by incorporating a predicted measure of deception into the planner cost function. By weighing path cost against predicted deceptiveness, Adversarial RRT* produces trajectories with low path cost that retain deceptive properties. Performance of Adversarial RRT* is demonstrated with two measures of deception, using a simulated Dubins vehicle.
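To make the idea of blending path cost with predicted deception concrete, the Python sketch below shows one possible way such a combined node cost could look. It is a minimal illustration, not the thesis implementation: the weighting scheme, the deception_classifier interface, and all names are assumptions for illustration only.

    # Minimal sketch (assumed, not the author's implementation): blending an
    # RRT*-style path cost with a classifier-predicted deception penalty.
    import math

    def path_cost(parent_cost, parent_state, new_state):
        """Standard RRT* cost-to-come: parent cost plus Euclidean edge length."""
        dx = new_state[0] - parent_state[0]
        dy = new_state[1] - parent_state[1]
        return parent_cost + math.hypot(dx, dy)

    def adversarial_cost(parent_cost, parent_state, new_state,
                         deception_classifier, weight=1.0):
        """Combined cost: path length plus a penalty proportional to the
        probability that an observer's classifier infers the true goal."""
        c_path = path_cost(parent_cost, parent_state, new_state)
        # Hypothetical classifier: returns a score in [0, 1] estimating how
        # likely the observer is to identify the true goal from this extension.
        p_detect = deception_classifier(parent_state, new_state)
        return c_path + weight * p_detect

    if __name__ == "__main__":
        # Toy stand-in classifier: penalize heading straight toward the true goal.
        true_goal = (10.0, 0.0)
        def toy_classifier(parent, new):
            to_goal = math.atan2(true_goal[1] - new[1], true_goal[0] - new[0])
            heading = math.atan2(new[1] - parent[1], new[0] - parent[0])
            return max(0.0, math.cos(heading - to_goal))

        print(adversarial_cost(0.0, (0.0, 0.0), (1.0, 0.0), toy_classifier, weight=2.0))

In this sketch, increasing the (assumed) weight parameter trades additional path length for lower predicted detectability; the actual formulation used in the thesis may differ.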