Faster Uncertainty Quantification for Inverse Problems with Conditional Normalizing Flows

07/15/2020
by Ali Siahkoohi, et al.

In inverse problems, we often have access to data consisting of paired samples (x,y)∼ p_X,Y(x,y), where y denotes partial observations of a physical system and x represents the unknowns of the problem. Under these circumstances, we can employ supervised training to learn a solution x and its uncertainty from the observations y. We refer to this as the "supervised" case. However, the data y∼ p_Y(y) collected at one point could be distributed differently from the observations y'∼ p_Y'(y') that are relevant for the current set of problems. In the context of Bayesian inference, we propose a two-step scheme, which makes use of normalizing flows and joint data to train a conditional generator q_θ(x|y) to approximate the target posterior density p_X|Y(x|y). Additionally, this preliminary phase provides a density function q_θ(x|y), which can be recast as a prior for the "unsupervised" problem, e.g., when only the observations y'∼ p_Y'(y'), a likelihood model y'|x, and a prior on x are known. We then train another invertible generator with output density q'_ϕ(x|y') specifically for y', allowing us to sample from the posterior p_X|Y'(x|y'). We present synthetic results that demonstrate a considerable training speedup when reusing the pretrained network q_θ(x|y') as a warm start or preconditioning for approximating p_X|Y'(x|y'), instead of learning from scratch. This training modality can be interpreted as an instance of transfer learning. The result is particularly relevant for large-scale inverse problems that employ expensive numerical simulations.
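The two-step scheme can be illustrated with a compact sketch. The following PyTorch code is a minimal, hypothetical example and not the authors' implementation: a small conditional coupling flow q_theta(x|y) is pretrained by maximum likelihood on paired samples, and a second flow q_phi is then initialized from the pretrained weights as a warm start before being fine-tuned for new observations y'. The network sizes, toy data, and training loop are illustrative assumptions only; the unsupervised fine-tuning objective (built from the likelihood model and prior) is problem-specific and is only indicated in a comment.

import math
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """Affine coupling layer whose scale and shift depend on (x1, y)."""
    def __init__(self, dim_x, dim_y, hidden=64):
        super().__init__()
        self.d1 = dim_x // 2
        self.d2 = dim_x - self.d1
        self.net = nn.Sequential(
            nn.Linear(self.d1 + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * self.d2),
        )

    def forward(self, x, y):
        x1, x2 = x[:, :self.d1], x[:, self.d1:]
        s, t = self.net(torch.cat([x1, y], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                       # keep the scales bounded
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], dim=1), s.sum(dim=1)

class ConditionalFlow(nn.Module):
    """Stack of conditional coupling layers with a standard-normal base density."""
    def __init__(self, dim_x, dim_y, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [ConditionalCoupling(dim_x, dim_y) for _ in range(n_layers)]
        )

    def log_prob(self, x, y):
        # log q(x|y) = log N(z; 0, I) + sum of coupling log-determinants
        z, logdet = x, torch.zeros(x.shape[0])
        for layer in self.layers:
            z, ld = layer(z, y)
            logdet = logdet + ld
            z = z.flip(dims=[1])                # permute so both halves get updated
        base = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi)
        return base + logdet

# Step 1 ("supervised"): pretrain q_theta(x|y) by maximum likelihood on joint samples.
dim_x, dim_y = 4, 2
x_train = torch.randn(512, dim_x)               # toy stand-ins for real (x, y) pairs
y_train = x_train[:, :dim_y] + 0.1 * torch.randn(512, dim_y)

q_theta = ConditionalFlow(dim_x, dim_y)
opt = torch.optim.Adam(q_theta.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = -q_theta.log_prob(x_train, y_train).mean()
    loss.backward()
    opt.step()

# Step 2 ("unsupervised"): warm-start q_phi(x|y') from the pretrained weights.
q_phi = ConditionalFlow(dim_x, dim_y)
q_phi.load_state_dict(q_theta.state_dict())     # transfer-learning initialization
# Fine-tune q_phi on the new observations y' with a problem-specific objective
# built from the likelihood model y'|x and the prior on x; starting from the
# pretrained weights typically requires far fewer iterations than training from scratch.

The warm start is the crux of the reported speedup: only the initialization changes between the two phases, so any conditional flow architecture with the same parameterization can be swapped in.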
