SUBJECT: Ph.D. Proposal Presentation
   
BY: Jiten Patel
   
TIME: Monday, March 28, 2011, 11:00 a.m.
   
PLACE: MARC Building, 301
   
TITLE: Design of Reliable Mesostructures using Probabilistic Neural Networks
   
COMMITTEE: Dr. Seung-Kyum Choi, Chair (ME)
Dr. David Rosen (ME)
Dr. Richard W. Neu (ME)
Dr. Rafi Muhanna (CEE)
Dr. Bruce R. Ellingwood (CEE)
 

SUMMARY

Mesostructure materials are materials that have a characteristic cell length in the range of 0.1 to 10 mm; small truss structures, honeycombs, and foams are examples of mesostructures. Topology optimization can be used to design efficient mesostructures, recasting the truss structure design problem as one of optimal material distribution over the design domain. Since structures face uncertain material properties, loads, and boundary conditions, the stochastic optimization procedure underlying Reliability-Based Topology Optimization (RBTO) ensures higher structural reliability.

Because the reliability assessment step of the RBTO procedure can be computationally expensive due to the Finite Element Analysis (FEA) it requires, a classification-based surrogate modeling approach is proposed. The accuracy of a classification-based surrogate model for reliability assessment is improved by augmenting the training data with a large number of unlabeled data points. The selected approach, Semi-Supervised Learning (SSL), can drastically reduce the computational cost of the reliability assessment process because the FEA needs to be invoked only a fraction of the times required when using labeled data alone. A combination of the Probabilistic Neural Network (PNN) and the Expectation-Maximization (EM) method is used to exploit the labeled and unlabeled data simultaneously and thereby improve the accuracy of the PNN classifier. The proposed algorithm first trains a PNN classifier using the labeled data. This classifier is then used to probabilistically label the unlabeled data. In the third step, a new PNN classifier is trained using both the labeled and the unlabeled data, and this procedure is iterated until convergence. Representative examples demonstrate the efficacy of this procedure in estimating the probability of failure (Pf), showing that it leads to a considerable improvement in classifier performance when both labeled and unlabeled data are used.
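The iterative procedure described above (train a PNN on labeled data, probabilistically label the unlabeled data, retrain on both, repeat until convergence, then estimate Pf by Monte Carlo on the cheap surrogate) can be sketched as follows. This is only an illustrative toy, not the proposal's actual implementation: the limit state (failure when x0 + x1 >= 1 on the unit square), the kernel width `sigma`, the sample sizes, and the function name `pnn_posteriors` are all assumptions made for the example, and the expensive FEA is replaced by a closed-form labeling rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def pnn_posteriors(X_train, R, X_query, sigma=0.3):
    """Parzen-window (PNN-style) class posteriors with Gaussian kernels.

    R is an (n, 2) matrix of class responsibilities: one-hot rows for
    labeled points, soft rows for probabilistically labeled points."""
    # Squared Euclidean distances via the expansion |a-b|^2 = |a|^2+|b|^2-2ab
    d2 = ((X_query**2).sum(1)[:, None] + (X_train**2).sum(1)[None, :]
          - 2.0 * X_query @ X_train.T)
    K = np.exp(-d2 / (2.0 * sigma**2))      # Gaussian kernel activations
    scores = K @ R + 1e-12                  # responsibility-weighted evidence
    return scores / scores.sum(1, keepdims=True)

# Toy "limit state" standing in for FEA: failed (class 1) if x0 + x1 >= 1.
X_lab = rng.uniform(0, 1, (20, 2))          # few expensive labeled samples
y_lab = (X_lab.sum(1) >= 1).astype(int)
X_unl = rng.uniform(0, 1, (500, 2))         # many cheap unlabeled samples

R_lab = np.eye(2)[y_lab]                    # hard one-hot responsibilities
R_unl = np.full((len(X_unl), 2), 0.5)       # uninformative initial labels

# EM-style self-training: E-step soft-labels the unlabeled pool with the
# current PNN; M-step retrains the PNN on labeled + soft-labeled data.
for _ in range(10):
    X_all = np.vstack([X_lab, X_unl])
    R_all = np.vstack([R_lab, R_unl])
    R_new = pnn_posteriors(X_all, R_all, X_unl)
    if np.abs(R_new - R_unl).max() < 1e-4:  # converged: labels stabilized
        R_unl = R_new
        break
    R_unl = R_new

# Monte Carlo estimate of Pf using the surrogate instead of repeated FEA.
X_mc = rng.uniform(0, 1, (5000, 2))
post = pnn_posteriors(np.vstack([X_lab, X_unl]),
                      np.vstack([R_lab, R_unl]), X_mc)
pf_hat = (post[:, 1] > 0.5).mean()          # exact Pf for this toy case is 0.5
print(f"estimated Pf = {pf_hat:.3f}")
```

Note that only the 20 labeled points ever require the expensive evaluation; the 500 unlabeled points and the 5000 Monte Carlo points are classified by the surrogate alone, which is the source of the computational savings the summary describes.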