Seminar Mathematical Optimization (4 ECTS)

for the Master's program in Business Mathematics (Wirtschaftsmathematik)
Lecturer: Prof. Dr. Claudia Schillings

Dates

Thursdays, 12:00 – 13:30

The seminar is intended for Master's students writing their thesis at the chair and for PhD students.

News

23.9.2021: Andreas Van Barel (KU Leuven)

Multilevel Monte Carlo methods for robust optimization of PDEs

Abstract: We consider PDE constrained optimization problems where the partial differential equation has uncertain coefficients modeled by means of random variables or random fields. The goal of the optimization is to determine a robust optimum, i.e., an optimum that is satisfactory in a broad parameter range. Since the stochastic space is often high dimensional, a multilevel Monte Carlo (MLMC) method is presented to efficiently calculate the gradient and the Hessian. Under some mild assumptions, the resulting estimated quantities are the exact gradient and Hessian of the estimated cost functional, which is important in practice for some optimization algorithms. Furthermore, to speed up the optimization process, we consider a multigrid optimization technique based on the so-called MG/OPT framework. Each of the levels in the MG/OPT hierarchy then contains its own underlying MLMC hierarchy. The MG/OPT levels allow the algorithm to exploit the structure inherent in the PDE, speeding up the convergence to the optimum (regardless of whether the problem is deterministic or stochastic). In contrast, the MLMC levels exist to exploit structure present in the stochastic dimensions of the problem. We discuss some details regarding the construction of these nested hierarchies and relate this to previous work. Depending on the specific problem, large reductions can be observed in the number of samples on the expensive levels and/or in the number of optimization iterations.
B6, A101
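
For illustration, a minimal sketch of the telescoping MLMC idea behind such a gradient estimator, assuming a toy quadratic objective with an artificial level-dependent bias in place of the PDE solve; the function names, the bias model, and the sample sizes are illustrative choices, not taken from the talk:

# Minimal MLMC gradient estimator sketch (toy problem, illustrative names).
import numpy as np

rng = np.random.default_rng(0)

def grad_level(u, xi, level):
    """Gradient of a toy objective J(u, xi) = 0.5*(u - xi)^2, evaluated at a
    given 'resolution': coarser levels carry a discretization-like bias that
    shrinks as the level increases (a stand-in for a PDE solve)."""
    bias = 2.0 ** (-level)           # assumed bias decay with refinement
    return (u - xi) + bias * np.sin(u)

def mlmc_gradient(u, n_samples):
    """Telescoping MLMC estimate of E[grad J(u, xi)]:
    E[G_L] = E[G_0] + sum_l E[G_l - G_{l-1}],
    with level-dependent sample sizes n_samples[l]."""
    est = 0.0
    for level, n in enumerate(n_samples):
        xi = rng.normal(size=n)      # same random inputs on both levels
        fine = grad_level(u, xi, level)
        if level == 0:
            corr = fine
        else:
            corr = fine - grad_level(u, xi, level - 1)
        est += corr.mean()
    return est

# Many cheap coarse samples, few expensive fine ones.
print(mlmc_gradient(u=1.0, n_samples=[4000, 1000, 250]))

The point of the coupling is that the level corrections G_l - G_{l-1} have small variance, so few samples are needed on the expensive fine levels.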
30.9.2021: Claudia Schillings

Neural Network based One-Shot Inversion and Optimization

Abstract: We study the use of novel techniques arising in machine learning for inverse problems. Our approach replaces the complex forward model by a neural network, which is trained simultaneously in a one-shot sense when estimating the unknown parameters from data, i.e., the neural network is trained only for the unknown parameter. By establishing a link to the Bayesian approach to inverse problems, an algorithmic framework is developed which ensures the feasibility of the parameter estimate with respect to the forward model. We propose an efficient, derivative-free optimization method based on variants of the ensemble Kalman inversion. Numerical experiments show that the ensemble Kalman filter for neural network based one-shot inversion is a promising direction combining optimization and machine learning techniques for inverse problems.

B6, C301
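
A minimal sketch of the ensemble Kalman inversion update used as a derivative-free optimizer, assuming a linear toy forward map in place of the neural network surrogate from the talk; all names and parameter values are illustrative:

# Ensemble Kalman inversion (EKI) sketch on a toy linear inverse problem.
import numpy as np

rng = np.random.default_rng(1)

def eki(G, y, gamma, ensemble, n_iter=50):
    """Derivative-free EKI: shift each particle using empirical
    cross-covariances between parameters u and model outputs G(u)."""
    u = ensemble.copy()                       # shape (J, d)
    for _ in range(n_iter):
        g = np.array([G(ui) for ui in u])     # shape (J, k)
        du = u - u.mean(axis=0)
        dg = g - g.mean(axis=0)
        J = u.shape[0]
        C_ug = du.T @ dg / J                  # cross-covariance (d, k)
        C_gg = dg.T @ dg / J                  # output covariance (k, k)
        K = C_ug @ np.linalg.inv(C_gg + gamma * np.eye(len(y)))
        # perturbed observations keep the ensemble spread
        y_pert = y + rng.normal(scale=np.sqrt(gamma), size=(J, len(y)))
        u = u + (y_pert - g) @ K.T
    return u.mean(axis=0)

# Recover u from y = A u + noise.
A = np.array([[1.0, 0.5], [0.0, 2.0]])
u_true = np.array([1.0, -1.0])
gamma = 0.01
y = A @ u_true + rng.normal(scale=np.sqrt(gamma), size=2)
ens0 = rng.normal(size=(100, 2))
print(eki(lambda u: A @ u, y, gamma, ens0))

Note that only forward evaluations G(u) are required, which is what makes the method attractive when gradients of the forward model (or of a network surrogate) are unavailable or unreliable.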
7.10.2021: Jonas Latz (Heriot-Watt University)

Stochastic gradient descent in continuous time: discrete and continuous data

Abstract: Optimisation problems with discrete and continuous data appear in statistical estimation, machine learning, functional data science, robust optimal control, and variational inference. The 'full' target function in such optimisation problems is given by the integral over a family of parameterised target functions with respect to a discrete or continuous probability measure. Such problems can often be solved by stochastic optimisation methods: performing optimisation steps with respect to the parameterised target function with randomly switched parameter values. In this talk, we discuss a continuous-time variant of the stochastic gradient descent algorithm. This so-called stochastic gradient process couples a gradient flow minimising a parameterised target function and a continuous-time 'index' process which determines the parameter. We first briefly introduce the stochastic gradient process for finite, discrete data, which uses pure jump index processes. Then, we move on to continuous data. Here, we allow for very general index processes: reflected diffusions, pure jump processes, as well as other Lévy processes on compact spaces. Thus, we study multiple sampling patterns for the continuous data space. We show that the stochastic gradient process can approximate the gradient flow minimising the full target function at any accuracy. Moreover, we give convexity assumptions under which the stochastic gradient process with constant learning rate is geometrically ergodic. In the same setting, we also obtain ergodicity and convergence to the minimiser of the full target function when the learning rate decreases over time sufficiently slowly. We illustrate the applicability of the stochastic gradient process in a simple polynomial regression problem with noisy functional data, as well as in physics-informed neural networks approximating the solution to certain partial differential equations.

B6, A101
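
A minimal sketch of the stochastic gradient process for finite, discrete data, assuming an explicit Euler discretisation of the gradient flow and a pure jump index process with exponential waiting times; the data set, switching rate, and step size are illustrative:

# Stochastic gradient process sketch: gradient flow with a jump index process.
import numpy as np

rng = np.random.default_rng(2)

# Finite data: full target f(x) = mean_i 0.5*(x - a_i)^2, minimiser mean(a).
a = np.array([-1.0, 0.0, 2.0, 3.0])

def grad_f(x, i):
    return x - a[i]

def stochastic_gradient_process(x0, eps=0.05, T=20.0, dt=1e-3):
    """Integrate dx/dt = -grad_x f(x, i(t)), where i(t) jumps to a
    uniformly chosen index after Exp(eps)-distributed waiting times;
    small eps plays the role of a small learning rate."""
    x, t = x0, 0.0
    i = rng.integers(len(a))
    next_jump = rng.exponential(eps)
    while t < T:
        if t >= next_jump:                 # index process jumps
            i = rng.integers(len(a))
            next_jump = t + rng.exponential(eps)
        x -= dt * grad_f(x, i)             # explicit Euler step of the flow
        t += dt
    return x

print(stochastic_gradient_process(x0=5.0))   # approx. mean(a) = 1.0

Faster switching (smaller eps) averages over the data indices more aggressively, so the path tracks the gradient flow of the full target more closely.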
14.10.2021: Vesa Kaarnioja (LUT University)

Modeling random domains using periodic random variables

Abstract: Computational measurement models may involve several uncertain simulation parameters: not only can the material properties of a heterogeneous medium be unknown, but the shape of the structure itself can be uncertain as well. In this talk, we discuss a parameterization for an uncertain domain using a random perturbation field in which a countable number of independent random variables enter the random field as periodic functions. The random field can be constructed to have a prescribed mean and covariance function. As an application, we design simple quasi-Monte Carlo cubature rules in order to study how uncertainty in the domain shape impacts the stochastic response of an elliptic PDE.

This is joint work with Harri Hakula (Aalto University), Helmut Harbrecht (University of Basel), Frances Y. Kuo (UNSW Sydney), and Ian H. Sloan (UNSW Sydney).

B6, A101
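
A minimal sketch of a perturbation field in which independent uniform variables enter through periodic functions, averaged with a rank-1 lattice rule; the series, its decay rate, and the generating vector are illustrative stand-ins, not the construction from the talk:

# Periodic parameterisation of a random perturbation field + lattice rule.
import numpy as np

s = 8                                   # truncation dimension
x = np.linspace(0.0, 1.0, 101)          # spatial grid on [0, 1]

def perturbation(x, y):
    """Perturbation field with mean zero: each stochastic coordinate y_j
    enters through the periodic map sin(2*pi*y_j)."""
    field = np.zeros_like(x)
    for j in range(s):
        decay = (j + 1) ** -2.0         # assumed eigenvalue-like decay
        field += decay * np.sin(2 * np.pi * y[j]) * np.sin((j + 1) * np.pi * x)
    return field

# Rank-1 lattice QMC points: y_k = frac(k * z / n); z is a toy generator.
n = 64
z = np.array([1, 19, 27, 7, 45, 21, 13, 3])
qmc_mean = np.zeros_like(x)
for k in range(n):
    y = np.mod(k * z / n, 1.0)
    qmc_mean += perturbation(x, y)
qmc_mean /= n                           # QMC estimate of E[perturbation] = 0
print(np.max(np.abs(qmc_mean)))

The periodic dependence on the uniform variables is what makes lattice rules a natural fit, since the integrand inherits periodicity in each stochastic coordinate.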
 
21.10.2021: Vicky Hartmann (ITWM / U Mannheim)
B6, C301
28.10.2021: Fabienne Kraus
Machine learning-based conditional mean filter: a generalization of the ensemble Kalman filter for nonlinear data assimilation
B6, C301
4.11.2021: Rhys Scherer
B6, C301
11.11.2021: Eneas Hensel
B6, C301
18.11.2021: Philipp Guth
B6, C301
25.11.2021: Matei Hanu
B6, C301
9.12.2021: Simon Weissmann (U Heidelberg)
A multilevel subset simulation for estimating rare events via shaking transformations
Abstract: In this talk, we analyse a multilevel version of subset simulation to estimate the probability of rare events for complex physical systems. Given a sequence of nested failure domains of increasing size, the rare event probability is expressed as a product of conditional probabilities. The proposed estimator uses different model resolutions and varying numbers of samples across the hierarchy of nested failure sets. The key idea in our proposed estimator is the use of a selective refinement strategy that guarantees the critical subset property, which may be violated when changing model resolution from one failure set to the next. In order to estimate the probabilities of the underlying subsets we formulate and analyse a parallel one-path algorithm based on shaking transformations. Considering a physical model based on Gaussian transformation we can verify the ergodicity of the resulting Markov chain. Additionally, we present a detailed complexity analysis of the considered subset simulation.
Zoom
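
A minimal sketch of standard (single-level) subset simulation with a Gaussian shaking transformation, a strong simplification of the multilevel estimator discussed in the talk; the limit state function, level probability p0, and sample sizes are toy choices:

# Subset simulation sketch: P(F) estimated as a product of
# conditional probabilities over adaptively chosen nested levels.
import numpy as np

rng = np.random.default_rng(3)

def limit_state(u):
    """Rare event {g(u) > c}: here g is the sum of two standard normals."""
    return u.sum()

def shake(u, rho=0.8):
    """Shaking transformation: preserves the standard Gaussian law, so
    accepted moves stay correctly distributed within each subset."""
    return rho * u + np.sqrt(1 - rho**2) * rng.normal(size=u.shape)

def subset_simulation(c=6.0, n=2000, p0=0.1):
    u = rng.normal(size=(n, 2))
    g = np.array([limit_state(ui) for ui in u])
    prob = 1.0
    while True:
        thresh = np.quantile(g, 1 - p0)        # adaptive intermediate level
        if thresh >= c:
            prob *= np.mean(g > c)
            return prob
        prob *= p0
        seeds = u[g > thresh]
        # repopulate the conditional level by shaking the seeds
        u_new, g_new = [], []
        while len(u_new) < n:
            for s in seeds:
                cand = shake(s)
                gc = limit_state(cand)
                u_new.append(cand if gc > thresh else s)
                g_new.append(gc if gc > thresh else limit_state(s))
                if len(u_new) >= n:
                    break
        u, g = np.array(u_new), np.array(g_new)

# P(N(0,2) > 6) is roughly 1.1e-5; crude Monte Carlo would need ~1e7 samples.
print(subset_simulation())

The multilevel method of the talk additionally varies the model resolution across the nested failure sets, which is where the selective refinement strategy and the critical subset property come in.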