SUBJECT: Ph.D. Proposal Presentation
   
BY: Haoxiang Huang
   
TIME: Wednesday, March 30, 2022, 1:00 p.m.
   
PLACE: http://bluejeans.com/104033745/6844, virtual
   
TITLE: Neural Networks with Inputs Based on Domain of Dependence and a Converging Sequence for Solving Conservation Laws
   
COMMITTEE: Prof. Vigor Yang, Co-advisor, Co-Chair (AE)
Prof. Timothy Lieuwen, Co-advisor, Co-Chair (ME)
Prof. Yingjie Liu, Co-advisor (MATH)
Prof. Ellen Mazumdar (ME)
Prof. Joseph Oefelein (AE)
 

SUMMARY

Recent research on solving partial differential equations with deep neural networks (DNNs) has demonstrated that spatiotemporal function approximators trained with automatic differentiation are effective for approximating nonlinear problems. However, it remains a challenge to resolve discontinuities in nonlinear conservation laws with forward DNN methods that do not begin with part of the solution. In this study, we incorporate first-order numerical schemes into DNNs to construct the loss function, instead of relying on the automatic differentiation provided by traditional deep learning frameworks such as TensorFlow, thereby improving the ability to capture discontinuities in Riemann problems. We also introduce a novel neural network method in which a local low-cost solution serves as the input to a neural network that predicts the high-fidelity solution at a space-time location. The challenge is that a smeared discontinuity cannot be distinguished from a steep smooth solution in such input, which leads to ambiguous (“multiple”) predictions by the neural network. To overcome this difficulty, two solutions of the conservation laws drawn from a converging sequence, computed with low-cost numerical schemes on a local domain of dependence of the space-time location, serve as the input. In this work, we apply the methods to one-dimensional and two-dimensional Euler systems and also introduce some new variations. Numerical results demonstrate that the methods perform well in both one and two dimensions: despite smeared local input data, the neural network methods predict shocks and contact discontinuities sharply and resolve smooth parts of the solution accurately. Because they are local solvers, the neural network methods are efficient and relatively easy to train.
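
To make the input-output structure of such a local solver concrete, the following is a minimal sketch in Python with TensorFlow/Keras (the framework named in the summary). It is an illustrative assumption only: the stencil size, network width, variable names, and training setup are placeholders, not the architecture or data used in the proposal. The sketch maps two coarse solutions from a converging sequence, sampled on a small stencil covering the local domain of dependence of a space-time point, to the high-fidelity state at that point.

    # Minimal illustrative sketch (assumed setup, not the proposal's actual architecture).
    # Inputs: two low-cost solutions from a converging sequence (e.g., mesh h and mesh h/2),
    # sampled on a stencil covering the local domain of dependence of one space-time point.
    # Output: the predicted high-fidelity state at that point.
    import numpy as np
    import tensorflow as tf

    STENCIL = 5   # coarse cells covering the local domain of dependence (assumed)
    N_VARS = 3    # 1-D Euler system: density, momentum, energy

    def build_local_solver():
        """Small fully connected network: (two coarse stencils) -> state at one point."""
        inputs = tf.keras.Input(shape=(2 * STENCIL * N_VARS,))   # both coarse solutions, flattened
        x = tf.keras.layers.Dense(64, activation="tanh")(inputs)
        x = tf.keras.layers.Dense(64, activation="tanh")(x)
        outputs = tf.keras.layers.Dense(N_VARS)(x)                # predicted high-fidelity state
        return tf.keras.Model(inputs, outputs)

    model = build_local_solver()
    model.compile(optimizer="adam", loss="mse")

    # Training pairs would come from cheap first-order solutions (inputs) and a trusted
    # high-fidelity solver (targets); random arrays stand in here for illustration.
    x_train = np.random.rand(1024, 2 * STENCIL * N_VARS).astype("float32")
    y_train = np.random.rand(1024, N_VARS).astype("float32")
    model.fit(x_train, y_train, epochs=2, batch_size=64, verbose=0)

As described in the summary, the intent of supplying a pair of coarse solutions from a converging sequence is to give the network enough information to distinguish a smeared discontinuity from a genuinely steep smooth profile, avoiding the ambiguous predictions that arise from a single smeared input.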