SUBJECT: Ph.D. Proposal Presentation
   
BY: Matthew Dunbrack
   
TIME: Friday, March 15, 2024, 1:00 p.m.
   
PLACE: Virtual, Microsoft Teams (https://bit.ly/3SWrcAp)
   
TITLE: Robust Adversarial Reinforcement Learning for Antineutrino-based Nuclear Reactor Safeguards
   
COMMITTEE: Dr. Anna Erickson, Chair (Mechanical Engineering)
Dr. Steven Biegalski (Mechanical Engineering)
Dr. Fan Zhang (Mechanical Engineering)
Dr. Nathaniel Bowden (Rare Event Detection, LLNL)
Dr. Rachel Carr (Physics, USNA)
 

SUMMARY

Antineutrino-based nuclear safeguards have been proposed to address many nuclear reactor verification challenges. In theory, these systems can detect reactor on-off status, monitor thermal power levels, and verify the special nuclear material (SNM) within a core. The situational details of these proposed capabilities, however, dictate the plausibility of applying antineutrino detectors for nuclear safeguards. For the most complex proposed capability, verifying SNM inventory, system performance depends strongly both on general reactor-detector parameters, such as the reactor design of interest and detector efficiency, and on scenario unknowns, such as diverted assembly targets and replacement fuels. In this work, I develop an object-oriented modeling and simulation tool that allows researchers and decision makers to explore various system-scenario parameters for antineutrino-based safeguards development and assessment. The tool comprises five modules: adversarial agent, diversion simulation, spectra simulation, system sensitivity, and protagonist agent. By iterating over these modules, the adversarial agent learns to select the most threatening diversion scenario while the protagonist agent trains the best-prepared diversion classifier. This iterative process, referred to as robust adversarial reinforcement learning, yields a robust nuclear safeguard, equally prepared for any diversion scenario of interest.
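The adversarial-protagonist iteration described above might be sketched as follows. This is a minimal illustrative toy, not the actual tool: the scenario names, the one-dimensional "spectral feature" signal model, and the threshold classifier are all simplifying assumptions standing in for the diversion simulation, spectra simulation, and classifier-training modules.

```python
import random

random.seed(0)

# Hypothetical diversion scenarios the adversarial agent may choose from.
SCENARIOS = ["no_diversion", "divert_outer_assembly", "divert_inner_assembly"]

def simulate_spectrum(scenario):
    # Stand-in for the diversion + spectra simulation modules: each scenario
    # maps to a toy scalar "spectral feature" with measurement noise.
    mean = {"no_diversion": 0.0,
            "divert_outer_assembly": 0.3,
            "divert_inner_assembly": 0.6}[scenario]
    return mean + random.gauss(0, 0.05)

class Classifier:
    """Protagonist's diversion classifier: a simple decision threshold."""
    def __init__(self):
        self.threshold = 0.5  # deliberately poor starting point

    def predict(self, feature):
        return feature > self.threshold  # True = "diversion detected"

    def loss(self, scenario):
        # Misclassification rate over simulated measurements (the toy
        # analogue of the system-sensitivity module).
        truth = scenario != "no_diversion"
        samples = [simulate_spectrum(scenario) for _ in range(200)]
        return sum(self.predict(f) != truth for f in samples) / len(samples)

    def train(self, scenario):
        # Retrain against the hardest scenario: place the threshold midway
        # between the clean-reactor signal (~0) and that scenario's mean.
        center = sum(simulate_spectrum(scenario) for _ in range(50)) / 50
        self.threshold = center / 2

clf = Classifier()
for _ in range(10):
    # Adversarial agent: select the diversion scenario the current
    # classifier handles worst.
    worst = max(SCENARIOS[1:], key=clf.loss)
    # Protagonist agent: retrain the classifier against that scenario.
    clf.train(worst)

worst_case_loss = max(clf.loss(s) for s in SCENARIOS[1:])
print(f"worst-case misclassification rate: {worst_case_loss:.2f}")
```

After a few iterations the threshold settles near the weakest diversion signal, so the worst-case error, not just the average error, is driven down; that minimax behavior is the essence of the robust adversarial reinforcement learning loop.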