SUBJECT: M.S. Thesis Presentation
   
BY: Joel Blumer
   
TIME: Friday, March 13, 2015, 2:00 p.m.
   
PLACE: MARC Building, 201
   
TITLE: Cross-Scale Model Validation with Aleatory and Epistemic Uncertainty
   
COMMITTEE: Dr. Yan Wang, Co-Chair (ME)
Dr. David McDowell, Co-Chair (ME)
Dr. Laura Swiler (Sandia National Laboratories)
 

SUMMARY

Nearly every decision must be made with some degree of uncertainty about the outcome. Decision making based on modeling and simulation predictions needs to incorporate and aggregate uncertain evidence. To validate multiscale simulation models, it may be necessary to consider evidence collected at a length scale different from the one at which a model predicts. In addition, traditional methods of uncertainty analysis do not distinguish between two types of uncertainty: aleatory uncertainty, due to inherently random inputs, and epistemic uncertainty, due to lack of information about the inputs.

This thesis examines and applies a Bayesian approach to model parameter validation that uses generalized interval probability to separate these two types of uncertainty. A generalized interval Bayes' rule (GIBR) is used to combine the evidence and update belief in the validity of parameters. The method is first applied to validate the parameter set for a molecular dynamics simulation of radiation-induced defect formation, with evidence supplied by comparison with physical experiments. Because the simulation includes variables whose effects are not directly observable, a generalized hidden Markov model (GHMM) is implemented to incorporate measurement uncertainty into the belief update. Several methods of using intervals to represent complete ignorance of a probability's value are attempted. The result of the GIBR/GHMM method is verified using Monte Carlo simulations.

In a second example, the proposed method is applied to combine the evidence from two models of crystal plasticity at different length scales. Each model is first updated by comparison with physical experiments, and the models are then updated by comparison with each other. Each of these updates is validated using Monte Carlo simulation. Finally, the interval-valued updates are compared with real-valued probability updates.
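The flavor of an interval-valued Bayesian update can be sketched as follows. This is a minimal illustration only, not the thesis's GIBR formulation (which is built on generalized, directed intervals): it assumes ordinary interval bounds on priors and likelihoods, propagated conservatively through Bayes' rule, and all numerical values are hypothetical.

```python
def interval_bayes(priors, likelihoods):
    """Interval-valued Bayes update (illustrative sketch, not GIBR).

    priors, likelihoods: lists of (lo, hi) interval probabilities,
    one interval per hypothesis. Returns a list of (lo, hi) interval
    posteriors computed with conservative bounds.
    """
    n = len(priors)
    posteriors = []
    for i in range(n):
        num_lo = likelihoods[i][0] * priors[i][0]
        num_hi = likelihoods[i][1] * priors[i][1]
        # Lower bound: smallest numerator against largest competing mass.
        denom_hi = num_lo + sum(likelihoods[j][1] * priors[j][1]
                                for j in range(n) if j != i)
        # Upper bound: largest numerator against smallest competing mass.
        denom_lo = num_hi + sum(likelihoods[j][0] * priors[j][0]
                                for j in range(n) if j != i)
        lo = num_lo / denom_hi if denom_hi > 0 else 0.0
        hi = num_hi / denom_lo if denom_lo > 0 else 1.0
        posteriors.append((lo, hi))
    return posteriors

# Hypothetical example: two hypotheses with imprecise priors and likelihoods.
# Wide input intervals (epistemic ignorance) yield a wide posterior interval;
# degenerate intervals (lo == hi) reduce to the usual point-valued Bayes' rule.
post = interval_bayes(priors=[(0.4, 0.6), (0.4, 0.6)],
                      likelihoods=[(0.7, 0.9), (0.1, 0.3)])
```

With degenerate inputs the sketch collapses to the familiar real-valued update, which mirrors the comparison between interval-valued and real-valued updates made at the end of the thesis.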