SUBJECT: Ph.D. Dissertation Defense
   
BY: Muhammad Zubair Irshad
   
TIME: Monday, December 4, 2023, 12:30 p.m.
   
PLACE: CODA 114 or Zoom (gatech.zoom.us/j/97893083376?)
   
TITLE: Learning 3D Robotics Perception using Inductive Priors
   
COMMITTEE: Dr. Zsolt Kira, Chair (IC)
Dr. Aaron Young (ME)
Dr. Nader Sadegh (ME)
Dr. Shreyas Kousik (ME)
Dr. Adrien Gaidon (TRI)
 

SUMMARY

Recent advances in deep learning have led to 'data-centric intelligence' over the last decade: artificially intelligent models that can ingest large amounts of data and excel at digital tasks such as text-to-image generation, machine-human conversation, and image recognition. This thesis covers learning with structured inductive biases and priors to design approaches and algorithms that unlock the potential of 'principle-centric intelligence' for the real world. Prior knowledge (priors for short), often available as past experience or as assumptions about how the world works, helps autonomous agents generalize better and adapt their behavior based on that experience.

In this thesis, I demonstrate the use of prior knowledge in three robotics perception problems: (1) object-centric 3D reconstruction, (2) vision and language for decision-making, and (3) 3D scene understanding. To solve these challenging problems, I propose various sources of prior knowledge, including (1) geometry and appearance priors from synthetic data, (2) modularity and semantic map priors, and (3) semantic, structural, and contextual priors. I study these priors for solving robotics 3D perception tasks and propose ways to encode them efficiently in deep learning models. Some priors are used to warm-start the network for transfer learning; others serve as hard constraints that restrict the action space of robotic agents.

Whereas classical techniques are brittle and fail to generalize to unseen scenarios, and data-centric approaches require large amounts of labeled data, this thesis aims to build intelligent agents that require very little real-world data, or data acquired only from simulation, to generalize to highly dynamic and cluttered environments, whether in novel simulated environments (sim2sim) or in unseen real-world environments (sim2real), toward a holistic scene understanding of the 3D world.