Summer School CEA-EDF-INRIA 2011

June 27–July 8, 2011, CEA Cadarache, Saint-Paul-lez-Durance, France

Uncertainty Quantification for Numerical Model Validation

Organized by: Bertrand Iooss (EDF R&D, France) and Marc Sancandi (CEA-CESTA, France)

Contribution by François Hemez (Los Alamos National Laboratory, U.S.A.)

Synopsis:

The module “Uncertainty Treatment in Computer Experiments” introduces techniques used to quantify prediction uncertainty. Quantification includes propagating uncertainty and assessing how much prediction uncertainty originates from a numerical simulation. It also includes understanding which phenomenology controls it (“mesh discretization, parameter variability, interaction effects?”). The module is divided into five lectures of 1.5 hours each: 1) Verification, 2) Sensitivity Analysis, 3) Sampling, 4) Test-analysis Correlation, and 5) End-to-end Example. The objective is to familiarize students with techniques used to verify the performance of computer codes; assess the numerical uncertainty of discrete solutions; design computer experiments; perform analysis-of-variance and effect-screening studies; develop fast-running statistical emulators; propagate model parameter uncertainty through numerical simulations; compare measurements to predictions; and calibrate parameters. For simplicity, the application examples discussed emphasize engineering mechanics, even though the techniques presented are general-purpose and can therefore be applied to any numerical simulation. The material is extracted from a graduate-level course on the Verification and Validation (V&V) of computational models taught at the University of California San Diego. The concepts learned during the first four lectures are practiced during hands-on training sessions.

Short Bio:

François Hemez has been a Technical Staff Member at Los Alamos National Laboratory since 1997. Prior to joining Los Alamos, he was a research associate at the French National Center for Scientific Research (CNRS) from 1994 to 1997. François earned a Ph.D. from the University of Colorado in 1993 and graduated from Ecole Centrale Paris, France, in 1989. At Los Alamos, François spent seven years in the Engineering Division, one of them as leader of the Validation Methods team. In 2005, he joined X-Division for nuclear weapons design. François managed the verification project of the Advanced Scientific Computing (ASC) program for two years. He currently manages a $4M-per-year project to assess the predictive capability of ASC codes. His research interests revolve around the development of methods for Verification and Validation (V&V), uncertainty quantification, and decision-making, and their application to engineering, wind energy, and weapon physics projects. He developed and taught the first-ever graduate-level course offered at a U.S. university in the discipline of V&V (University of California San Diego, 2006). François received the Junior Research Award of the European Association for Structural Dynamics in 2005; two U.S. Department of Energy Defense Programs Awards of Excellence for applying V&V to programmatic work at Los Alamos in 2006; and the D.J. DeMichele Award of the Society for Experimental Mechanics in 2010. Since 1994, François has authored 300+ technical publications or reports (including 23 peer-reviewed papers) and given 120+ invited lectures or short courses.

Tentative Outline of Lectures:

1) Code and Solution Verification (1½ hours)

  • Code verification
  • How to define benchmark problems
  • Method of Manufactured Solutions (MMS)
  • The concepts of Modified Equation Analysis (MEA), consistency, and convergence
  • Truncation error and the asymptotic convergence of numerical solutions
  • Richardson extrapolation and solution verification
  • The Grid Convergence Index
  • Quantification of solution uncertainty (a code sketch of Richardson extrapolation and the Grid Convergence Index follows this list)
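
To make the last three bullets concrete, here is a minimal Python sketch of a three-grid Richardson-extrapolation study; the three solution values and the refinement ratio are invented for illustration:

    # Solution verification from three mesh-refinement results.
    import math

    f_coarse, f_medium, f_fine = 0.9500, 0.9850, 0.9960  # hypothetical outputs
    r = 2.0                                              # mesh refinement ratio

    # Observed order of convergence of the numerical method.
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

    # Richardson extrapolation: estimate of the mesh-converged solution.
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)

    # Grid Convergence Index on the fine mesh; the safety factor of 1.25
    # is the value commonly recommended for three-grid studies.
    gci_fine = 1.25 * abs((f_fine - f_medium) / f_fine) / (r**p - 1.0)

    print(f"observed order p      = {p:.3f}")
    print(f"extrapolated solution = {f_exact:.5f}")
    print(f"GCI (fine mesh)       = {100.0 * gci_fine:.2f}%")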

2) Design-of-Experiments, Sensitivity Analysis, and Meta-modeling (1½ hours)

  • Description of the modeling uncertainty and lack-of-knowledge
  • Principles of the design of (physical or computer) experiments
  • Full-factorial and fractional-factorial designs
  • Orthogonal arrays, central composite designs
  • 2^N and 2^(N-k) designs, statistical aliasing
  • Rationale for effect screening (“where is an observed variability coming from?”)
  • The concept of effect screening using a design of computer experiments (a code sketch follows this list)
  • Local sensitivity analysis as a “primitive” screening technique
  • The analysis-of-variance (ANOVA)
  • Main effect and total effect sensitivity indices
  • The concept of meta-modeling using a design of computer experiments
  • Polynomial emulators
  • Kriging emulators
  • Estimation of quality of statistical emulators
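
As a flavor of the effect-screening material, here is a minimal Python sketch that pushes a 2^3 full-factorial design of computer experiments through a toy model (invented for illustration) and computes the main effect of each factor:

    # Effect screening with a two-level, full-factorial design (2^3 runs).
    import itertools
    import numpy as np

    def simulation(x1, x2, x3):
        # Toy stand-in for a computer experiment: x1 and x2 matter,
        # x3 barely does, and x1 interacts with x2.
        return 4.0 * x1 + 2.0 * x2 + 0.1 * x3 + 1.5 * x1 * x2

    # Coded factor levels: -1 (low) and +1 (high) for each of 3 factors.
    design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))
    response = np.array([simulation(*run) for run in design])

    # Main effect of a factor: mean response at its high level minus
    # mean response at its low level.
    for j, name in enumerate(["x1", "x2", "x3"]):
        high = response[design[:, j] > 0].mean()
        low = response[design[:, j] < 0].mean()
        print(f"main effect of {name}: {high - low:+.2f}")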

3) Propagation of Probabilistic Uncertainty (1½ hours)

  • Sampling methods for the forward propagation of (probabilistic) uncertainty
  • Monte Carlo and quasi-Monte Carlo sampling
  • Stratified sampling (Latin Hypercube Sampling)
  • Convergence of statistical estimates
  • Sampling methods for the inverse propagation of uncertainty
  • The Metropolis-Hastings algorithm and Markov Chain Monte Carlo (MCMC); a minimal sampler is sketched after this list
  • Fast probability integrators for reliability analysis
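
For the inverse-propagation part, here is a minimal random-walk Metropolis-Hastings sketch in Python; the linear model, synthetic data, and prior are invented for illustration:

    # Infer a model parameter "theta" from noisy observations with MCMC.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "measurements" of y = theta * x with Gaussian noise.
    x = np.linspace(0.0, 1.0, 20)
    theta_true, sigma = 2.5, 0.1
    y_obs = theta_true * x + rng.normal(0.0, sigma, x.size)

    def log_posterior(theta):
        # Gaussian likelihood plus a wide Gaussian prior on theta.
        log_like = -0.5 * np.sum(((y_obs - theta * x) / sigma) ** 2)
        log_prior = -0.5 * (theta / 10.0) ** 2
        return log_like + log_prior

    # Random-walk Metropolis-Hastings chain.
    theta, chain = 0.0, []
    for _ in range(20000):
        proposal = theta + rng.normal(0.0, 0.05)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
            theta = proposal  # accept the proposed move
        chain.append(theta)

    samples = np.array(chain[5000:])  # discard burn-in
    print(f"posterior mean = {samples.mean():.3f}, std = {samples.std():.3f}")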

4) Test-analysis Correlation and Model Calibration (1½ hours)

  • The concepts of response features and correlation metrics
  • Advantages and limitations of the “view-graph norm”
  • Statistical tests used to account for probabilistic uncertainty
  • Principal component decomposition-based metrics
  • Parameter calibration (“what is it? what are the dangers?”)
  • Formulation and solution of a calibration problem (a least-squares sketch follows this list)
  • Model calibration under uncertainty
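
Here is a minimal sketch of the formulation of a calibration problem as a least-squares fit, assuming scipy is available; the frequency model and the "measurements" are invented for illustration:

    # Deterministic model calibration: adjust a stiffness-like parameter so
    # that predicted natural frequencies best match the measured ones.
    import numpy as np
    from scipy.optimize import least_squares

    def predicted_frequencies(k):
        # Toy model: natural frequencies scale with sqrt(stiffness).
        return np.sqrt(k) * np.array([1.0, 2.8, 4.6])

    measured = np.array([3.1, 8.9, 14.2])  # invented "test" data

    result = least_squares(lambda k: predicted_frequencies(k[0]) - measured,
                           x0=[5.0], bounds=([0.1], [100.0]))
    print(f"calibrated stiffness k = {result.x[0]:.3f}")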

5) An End-to-end Example of Verification and Validation (1½ hours)

  • Introduction of an engineering example of transient dynamics simulation
  • Verification of the finite element software
  • Design and execution of computer experiments (predictions)
  • Design of physical experiments (measurements)
  • Down-selection of the statistically significant effects
  • Small-scale, validation experiments and the reduction of parameter uncertainty
  • Uncertainty propagation and final test-analysis correlation

Tentative Outline of Training Sessions:

1) Code Verification (1½ hours)

  • Verify the performance of a simple finite element model. Analyze a 1D beam-bending problem and compare the discrete solutions, obtained by varying the number of finite elements, to the exact (analytical) solution (Homework-03).
  • Derive analytically the modified equation of a 1D advection-diffusion equation. Use the results to assess the behavior of the truncation error (Homework-04). A symbolic sketch of this type of Taylor-series analysis follows this list.
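
As a taste of this exercise, here is a symbolic Python sketch (assuming sympy is available) that Taylor-expands the first-order upwind scheme for the simpler pure-advection equation u_t + a*u_x = 0 and exposes its truncation error; it illustrates the method but is not Homework-04 itself:

    # Truncation-error analysis, the first step of Modified Equation Analysis.
    import sympy as sp

    x, t, dx, dt, a = sp.symbols("x t dx dt a", positive=True)
    u = sp.Function("u")(x, t)

    def taylor(shift_x, shift_t, order=4):
        # Taylor polynomial of u(x + shift_x, t + shift_t) about (x, t).
        return sum(sp.diff(u, x, i, t, j) * shift_x**i * shift_t**j
                   / (sp.factorial(i) * sp.factorial(j))
                   for i in range(order) for j in range(order - i))

    # First-order upwind scheme for u_t + a*u_x = 0, expanded about (x, t).
    scheme = (taylor(0, dt) - u) / dt + a * (u - taylor(-dx, 0)) / dx
    truncation = sp.expand(scheme - sp.diff(u, t) - a * sp.diff(u, x))
    print(truncation)
    # Leading terms: (dt/2)*u_tt - (a*dx/2)*u_xx, so the scheme is
    # first-order accurate in both dt and dx.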

2) Solution Verification (1½ hours)

  • Analyze the asymptotic convergence of discrete solutions obtained by refining a computational mesh. Estimate the order-of-convergence of the numerical method, Grid Convergence Index, and level of prediction uncertainty. Application to the Fourier approximation of a discontinuous function (Homework-05) or the finite element approximation of a frame structure (Homework-06).

3) Design-of-Experiments, Sensitivity Analysis, and Meta-modeling (1½ hours)

  • Propagate a design-of-experiments through a finite element analysis. Use the simulation results to perform an analysis-of-variance and screen the statistically significant effects. Develop a fast-running, polynomial emulator of the finite element model (see the sketch below). Application to a vibrating mass-spring system (Homework-10) or the scaled model of a three-story frame building (Homework-TBD).
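
Here is a minimal Python sketch of the meta-modeling step, with a toy one-parameter "simulation" standing in for the finite element model:

    # Fit a fast-running quadratic emulator to simulation runs and check
    # its quality on runs that were not used to build it.
    import numpy as np

    def simulation(k):
        # Toy stand-in for an expensive finite element run.
        return np.sqrt(k) + 0.05 * k

    k_train = np.linspace(1.0, 10.0, 8)  # design of computer experiments
    coeffs = np.polyfit(k_train, simulation(k_train), deg=2)  # quadratic emulator

    # Quality estimate on held-out runs.
    k_test = np.linspace(1.5, 9.5, 7)
    y_true, y_emul = simulation(k_test), np.polyval(coeffs, k_test)
    r2 = 1.0 - np.sum((y_true - y_emul) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    print(f"emulator R^2 on held-out runs: {r2:.4f}")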

4) Propagation of Probabilistic Uncertainty and Model Calibration (1½ hours)

  • Use sampling techniques to propagate uncertainty forward through a finite element model. Quantify the prediction uncertainty and compare the statistics of the predictions to those of physical measurements (test-analysis correlation) (Homework-TBD). A sampling-based sketch of this forward-propagation step follows this list.
  • Calibrate parameters to improve the overall goodness-of-fit of the model. Application to a vibrating mass-spring system (Homework-11) or the scaled model of a three-story frame building (Homework-TBD).
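
Here is a minimal Python sketch of the forward-propagation step of this session, using a hand-rolled Latin Hypercube Sample; the model, parameter ranges, and "measurements" are invented for illustration:

    # Propagate parameter uncertainty forward with Latin Hypercube Sampling
    # and compare prediction statistics to measurements.
    import numpy as np

    rng = np.random.default_rng(1)

    def simulation(k, m):
        # Toy stand-in for a finite element model: a natural frequency (Hz).
        return np.sqrt(k / m) / (2.0 * np.pi)

    # Latin Hypercube Sample on [0, 1]^2: one stratified, shuffled point per bin.
    n = 200
    strata = np.tile(np.arange(n), (2, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, 2))) / n

    k = 900.0 + 200.0 * u[:, 0]  # hypothetical stiffness range [900, 1100]
    m = 0.9 + 0.2 * u[:, 1]      # hypothetical mass range [0.9, 1.1]

    predictions = simulation(k, m)
    measured = np.array([5.05, 5.10, 4.98, 5.12])  # invented test data

    print(f"prediction mean/std : {predictions.mean():.3f} / {predictions.std():.3f}")
    print(f"measurement mean/std: {measured.mean():.3f} / {measured.std():.3f}")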

Version 1.0 — December 15, 2010