Workshop on
Model Management and Reduced Order Model Approaches for Simulation Driven Optimization

Rice University, October 11 & 12, 2010


Titles and Abstracts

Harbir Antil

Department of Computational and Applied Mathematics
Rice University
Houston, Texas 77005

Reduced Order Modeling for Parametric Nonlinear PDE Constrained Problems Using POD-DEIM

When proper orthogonal decomposition (POD) or another projection based technique is used to generate reduced order models, the number of equations and unknowns is typically reduced dramatically. However, for nonlinear or parametrically varying problems, the cost of evaluating the reduced order models still depends on the size of the full order model and therefore remains expensive. To overcome this bottleneck, Chaturantabut and Sorensen developed the Discrete Empirical Interpolation Method (DEIM), which generates reduced order models that typically can be evaluated at a cost that depends only on the size of the reduced order model, yielding a truly useful reduced order model.

We demonstrate why model reduction by POD alone is insufficient and outline the DEIM. We then extend the POD-DEIM method to finite element solutions of nonlinear partial differential equations (PDEs) and to the solution of shape optimization problems governed by PDEs.
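As a generic illustration of the idea (not the authors' implementation), the POD basis computation and the greedy DEIM index selection can be sketched in a few lines of NumPy; the toy Gaussian snapshots are purely hypothetical:

```python
import numpy as np

def pod_basis(snapshots, m):
    """Leading m left singular vectors (POD modes) of a snapshot matrix."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :m]

def deim_indices(U):
    """Greedy selection of DEIM interpolation indices from a basis U."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        # interpolate the next basis vector at the indices chosen so far
        c = np.linalg.solve(U[np.ix_(p, list(range(l)))], U[p, l])
        r = U[:, l] - U[:, :l] @ c          # residual of that interpolant
        p.append(int(np.argmax(np.abs(r))))  # next index: largest residual
    return np.array(p)

# Toy snapshots of a parametrically varying nonlinearity
x = np.linspace(-1.0, 1.0, 200)
snaps = np.column_stack(
    [np.exp(-(x - mu) ** 2 / 0.1) for mu in np.linspace(-0.5, 0.5, 20)])
U = pod_basis(snaps, 6)
p = deim_indices(U)

# DEIM approximation of a new nonlinear snapshot from only 6 of its entries
f = np.exp(-(x - 0.3) ** 2 / 0.1)
f_deim = U @ np.linalg.solve(U[p, :], f[p])
```

The point of the construction is that only the m selected components of the nonlinearity ever need to be evaluated, which is the source of the cost reduction.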

Joint work with M. Heinkenschloss and D. C. Sorensen.

Lorenz T. Biegler

Chemical Engineering Department
Carnegie Mellon University
Pittsburgh, PA 15213 USA

A Filter Trust Region Method for Optimization with Reduced Order Process Models

Process models of chemical and energy plants are frequently multi-scale and span the range from lumped algebraic models to spatially distributed PDE models. As more challenging models are incorporated, they can become prohibitively expensive for optimization. Instead, reduced order models (ROMs) are frequently constructed and applied for an optimization study. Nevertheless, such models may not capture all of the features of the more detailed model and may not provide sufficient accuracy to guarantee convergence to an optimum.

Over the past two decades trust-region methods have been developed that provide an adaptive framework for ROM-based optimization. They not only restrict the step around the current iterate, but also synchronize ROM updates with the information obtained over the course of the optimization, thus providing a robust and globally convergent framework. In this talk, we develop a filter-based approach, in which infeasibility and objective function improvement are traded off in a multicriterion strategy. Such an approach has favorable convergence properties and overcomes sensitivity to penalty parameters. The approach is demonstrated on a process optimization case study.
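The core acceptance test of a filter method can be sketched generically; this is a textbook-style simplification with an illustrative margin parameter gamma, not the specific algorithm of the talk:

```python
def acceptable(theta, f, filter_pairs, gamma=1e-5):
    """A trial point with constraint violation `theta` and objective `f`
    is acceptable if, against every stored pair, it sufficiently improves
    either feasibility or the objective (a standard filter envelope)."""
    return all(theta <= (1.0 - gamma) * theta_k or f <= f_k - gamma * theta
               for theta_k, f_k in filter_pairs)

def add_pair(theta, f, filter_pairs):
    """Insert the new pair and discard entries it dominates."""
    kept = [(t, v) for t, v in filter_pairs if not (theta <= t and f <= v)]
    kept.append((theta, f))
    return kept

# A point is accepted if it trades infeasibility for objective improvement
flt = [(1.0, 5.0)]
assert acceptable(0.5, 6.0, flt)      # more feasible, worse objective: ok
assert not acceptable(2.0, 6.0, flt)  # worse in both criteria: rejected
```

Because acceptance needs improvement in only one of the two criteria, no penalty parameter weighing infeasibility against the objective is required.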

George Biros

Computational Science and Engineering
College of Computing
Georgia Institute of Technology
266 Ferst Drive
Atlanta GA 30332-0765

Full Waveform Acoustic and Elastic Reconstructions with Multiple Sources

We consider the inverse medium problem for the time-harmonic wave equation with broadband and multi-point illumination in the low frequency regime. Such a problem finds many applications in geosciences (e.g. ground penetrating radar), non-destructive evaluation (acoustics), and medicine (optical tomography). We use an integral-equation (Lippmann-Schwinger) formulation, which we discretize using a quadrature method. We consider only small perturbations (Born approximation). To solve this inverse problem, we use a least-squares formulation. We present a new fast algorithm for the efficient solution of this particular least-squares problem.

If N_fr is the number of excitation frequencies, N_s the number of different source locations for the point illuminations, N_d the number of detectors, and N the size of the parametrization of the scatterer, a dense singular value decomposition of the overall input-output map costs [min(N_s N_fr N_d, N)]^2 x max(N_s N_fr N_d, N). We have developed a fast SVD-based preconditioner that brings the cost down to O(N_s N_fr N_d N), thus providing orders of magnitude improvements over a black-box dense SVD and an unpreconditioned linear iterative solver.

Joint work with Stephanie Chaillat, Computational Science and Engineering, Georgia Institute of Technology.

Jane Elsemüller

Technische Universität Darmstadt
Fachbereich Mathematik
Dolivostr. 15
64293 Darmstadt, Germany

Optimal Flow Control Based on POD for the Cancellation of Tollmien-Schlichting Waves by Plasma Actuators

We consider the cancellation of Tollmien-Schlichting waves by plasma actuators and introduce the idea of Model Predictive Control (MPC) in connection with model reduction for the optimal control of this process. We use Proper Orthogonal Decomposition (POD) for the low-order description of the flow model, and the optimization is performed with the reduced system. We present several methods for improving the reduced model. Its quality is verified by comparison with the results of a Finite Volume based Large Eddy Simulation for the considered problem.

C. Tim Kelley

Department of Mathematics, Box 8205
Center for Research in Scientific Computation
North Carolina State University
Raleigh, NC 27695-8205

Interpolatory Approximations of Molecular Potential Energy Surfaces

Simulating the relaxation of a molecule's conformation after excitation by light has applications in sensors and solar energy, for example. Doing this in an efficient manner requires that one replace an expensive molecular dynamics simulation with a reduced order model that can be used in a numerical dynamics code. In this talk we show how the Smolyak sparse interpolation algorithm can be used to design a look-ahead integrator. We will discuss the motivating application, the algorithmic issues, and our solutions.
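Smolyak sparse grids are what make such interpolation tractable in many dimensions; the underlying surrogate-in-integrator idea can be illustrated in one dimension, with an ordinary Chebyshev interpolant standing in for the sparse-grid surrogate and a toy quartic potential standing in for an expensive electronic-structure evaluation (none of this is the talk's actual application):

```python
import numpy as np

# Hypothetical expensive potential; in practice each evaluation would be
# a costly quantum-chemistry calculation.
def V(q):
    return 0.5 * q ** 2 + 0.1 * q ** 4

# Build a polynomial surrogate of V on [-2, 2] from Chebyshev nodes
# (a Smolyak sparse grid plays this role in higher dimensions).
k = np.arange(16)
nodes = 2.0 * np.cos((2 * k + 1) * np.pi / 32)
coeffs = np.polynomial.chebyshev.chebfit(nodes, V(nodes), 15)
dcoeffs = np.polynomial.chebyshev.chebder(coeffs)
dV_surr = lambda q: np.polynomial.chebyshev.chebval(q, dcoeffs)

# Velocity Verlet integration using only the cheap surrogate forces
q, p, dt = 1.0, 0.0, 0.01
for _ in range(1000):
    p -= 0.5 * dt * dV_surr(q)
    q += dt * p
    p -= 0.5 * dt * dV_surr(q)
```

The integrator never touches the expensive model after the surrogate has been built from a modest number of samples.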

Robert Michael Lewis

Department of Mathematics
College of William & Mary
P.O. Box 8795
Williamsburg, VA 23187-8795

Approximation Correction Techniques for Nonlinear Programming

We discuss a variety of correction techniques for managing the use of approximation models of differing fidelity in optimization. The idea of approximation models in optimization is to use lower-fidelity, but less computationally expensive, versions of the optimization problem to aid in the calculation of steps for a higher-fidelity, but more computationally expensive, version of the optimization problem.

The correction techniques we discuss are intended to improve the utility of information computed from lower-fidelity models (e.g., values of the objective and constraints) for making predictions about the higher-fidelity models. Some of the techniques use additive and multiplicative corrections to ensure that the corrected low-fidelity responses agree with the high-fidelity responses to first or second order at the current design point. This condition underlies convergence results for nonlinear programming algorithms. Other correction methods assume that the models of differing fidelity are associated with different levels of discretization and correct coarse-grid calculations. Of particular interest are optimization problems involving linear and quadratic functionals of the solutions of time-dependent partial differential equations.
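A first-order additive correction, one of the standard choices, can be sketched generically with toy models (this is an illustration of the matching condition, not the talk's specific implementation):

```python
import numpy as np

def additive_correction(f_lo, grad_lo, f_hi, grad_hi, x0):
    """Return a corrected low-fidelity model that matches the
    high-fidelity value and gradient at the current design point x0."""
    x0 = np.asarray(x0, dtype=float)
    a = f_hi(x0) - f_lo(x0)          # value mismatch at x0
    g = grad_hi(x0) - grad_lo(x0)    # gradient mismatch at x0
    return lambda x: (f_lo(np.asarray(x, float)) + a
                      + g @ (np.asarray(x, float) - x0))

# Toy model pair: the corrected model agrees with f_hi to first order at x0
f_lo = lambda x: float(x @ x)
grad_lo = lambda x: 2.0 * x
f_hi = lambda x: float(x @ x + 0.1 * np.sum(x ** 3))
grad_hi = lambda x: 2.0 * x + 0.3 * x ** 2

x0 = np.array([1.0, -1.0])
f_c = additive_correction(f_lo, grad_lo, f_hi, grad_hi, x0)
```

First-order agreement at the current point is exactly the condition cited above as underlying the convergence theory.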

Stephen G. Nash

Systems Engineering & Operations Research Department
George Mason University
Mailstop 4A6
Fairfax, VA 22030

Convergence and Descent Properties for a Class of Multilevel Optimization Algorithms

I present a multilevel optimization approach (called MG/Opt) for the solution of constrained optimization problems. The approach assumes that one has a hierarchy of models, ordered from fine to coarse, of an underlying optimization problem, and that one is interested in finding solutions at the finest level of detail. In this hierarchy of models, calculations on coarser levels are less expensive, but also are of less fidelity, than calculations on finer levels. The intent of MG/Opt is to use calculations on coarser levels to accelerate the progress of the optimization on the finest level.

Global convergence (i.e., convergence to a Karush-Kuhn-Tucker point from an arbitrary starting point) is ensured by requiring a single step of a convergent method on the finest level, plus a line-search (or other globalization technique) for incorporating the coarse level corrections. The convergence results apply to a broad class of algorithms with minimal assumptions about the properties of the coarse models.
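In simplified form, the safeguard described above (one fine-level update, not the full MG/Opt recursion) might look as follows; the quadratic test objective and the prolonged correction are illustrative stand-ins:

```python
import numpy as np

def safeguarded_coarse_step(x, f, grad, d_coarse):
    """Apply a prolonged coarse-level correction d_coarse on the fine
    level, safeguarded by an Armijo backtracking line search; fall back
    to steepest descent if the correction is not a descent direction."""
    g = grad(x)
    d = d_coarse if g @ d_coarse < 0 else -g
    t, fx = 1.0, f(x)
    while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # Armijo condition
        t *= 0.5
    return x + t * d

# Toy fine-level objective
f = lambda x: float(x @ x)
grad = lambda x: 2.0 * x
x = np.array([2.0, -1.0])
x_new = safeguarded_coarse_step(x, f, grad, np.array([-1.0, 0.5]))
```

Because the line search only assumes a descent direction, the coarse model can be of low fidelity without endangering global convergence, matching the minimal assumptions mentioned above.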

I also analyze the descent properties of the algorithm, i.e., whether the coarse level correction is guaranteed to result in improvement of the fine level solution. Although additional assumptions are required in order to guarantee improvement, the assumptions required are likely to be satisfied by a broad range of optimization problems.

Ekkehard Sachs

FB 4 - Department of Mathematics
University of Trier
54286 Trier, Germany

Error Estimates for Reduced Order Models in Partial Integro-Differential Equations

In this talk we consider as an example problem a partial integro-differential equation (PIDE). For calibrating the model parameters we use a reduced order model of the PIDE based on POD. We phrase the PIDE in a weak formulation and derive error estimates using a time-dependent bilinear form for the integral term. These estimates are essential for a convergence analysis in the model management of a nonquadratic trust-region model function. We present numerical results and give an outlook on more efficient algorithmic frameworks using an array of models. Furthermore, we address the question of how to deal with nonsmooth initial data. Such data result in fairly long error propagation unless one uses special smoothing techniques. We show numerically that it is necessary to use the same techniques in the POD model framework, i.e., the POD model should not be based on the original physical model but rather on the original numerical model.

Stefan Ulbrich

Technische Universität Darmstadt
Fachbereich Mathematik
Dolivostr. 15
64293 Darmstadt, Germany

Adaptive Multilevel Methods for PDE-Constrained Optimization Based on Adaptive Finite Element or Reduced Order Approximations

We present an adaptive multilevel generalized SQP-method for optimal control problems governed by nonlinear PDEs with control constraints. During the optimization iteration the algorithm generates a hierarchy of adaptively refined discretizations, which can be based on adaptive finite element approximations or on reduced order methods such as Reduced Basis Methods or POD. The adaptive refinement strategy is based on error estimators for the PDE, the adjoint PDE, and a criticality measure. We first consider the case of an adaptive finite element discretization and then discuss the extension of the algorithm to adaptive approximations by reduced order models. We conclude the talk by showing numerical results.

Joint work with J. Carsten Ziems, Department of Mathematics, TU Darmstadt.

Qiqi Wang

Aerospace Computational Design Laboratory
Massachusetts Institute of Technology
Bldg 33 Rm 408
77 Massachusetts Avenue
Cambridge, MA 02139

Variance Reduction in Computational Risk Assessment using Reduced Order Models

Computational risk assessment uses mathematical models and computational methods to estimate the probability of failure in an engineering system, given uncertainty in the inputs, operating conditions, and other factors. When the probability of such events is small, the traditional Monte Carlo method requires evaluating the mathematical model on a very large number of random samples. When the model is computationally expensive to evaluate, accurate risk assessment is often computationally infeasible without methods for reducing the number of required samples.

We present several methods for reducing the variance of the Monte Carlo method, and thus the number of required samples, by using reduced order models of computationally intensive models. These methods combine reduced order models with methods widely used in statistics, including control variates, importance sampling, and stratified sampling. We demonstrate these techniques in engineering risk assessment problems involving unsteady hydrodynamics and hypersonic air-breathing engines. We show that our methods reduce the computational cost by an order of magnitude, yet compute unbiased estimates of the probabilities of failure.
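The control-variate idea, with a ROM as the control, can be sketched generically; the two toy functions below are hypothetical stand-ins for the expensive simulation and its reduced order surrogate:

```python
import numpy as np

rng = np.random.default_rng(0)

def f_full(x):   # stand-in for an expensive full-order model
    return np.sin(x) + 0.05 * x ** 2

def f_rom(x):    # stand-in for a cheap ROM correlated with f_full
    return np.sin(x)

n = 10_000
x = rng.normal(size=n)
y = f_full(x)
plain = y.mean()                 # plain Monte Carlo estimate of E[f_full(X)]

# The ROM's mean can be estimated from many cheap samples
mu_rom = f_rom(rng.normal(size=1_000_000)).mean()

g = f_rom(x)
beta = np.cov(y, g)[0, 1] / g.var()       # near-optimal coefficient
cv = (y - beta * (g - mu_rom)).mean()     # control-variate estimator

# Estimator variances: the control variate removes the variance that
# the ROM explains, leaving only the (small) model discrepancy
var_plain = y.var() / n
var_cv = (y - beta * g).var() / n
```

The higher the correlation between the full model and the ROM, the larger the variance reduction, while the estimator remains unbiased because the ROM only enters through its (known or cheaply estimated) mean.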

This web page is maintained by Matthias Heinkenschloss