Our paper, "Neural-network learning of SPOD latent
dynamics", was just accepted for publication in the
Journal of Computational Physics.
Thanks to all authors:
Andrea Lario (SISSA, Italy), Romit Maulik
(Argonne National Laboratory, USA), Oliver Schmidt
(University of California San Diego, USA), and Gianluigi
Rozza (SISSA, Italy).
Plain-language description
Deep learning strategies are gaining traction as a tool
for surrogate modeling (also referred to as emulation
or non-intrusive reduced order modeling) of dynamical
systems. Data associated with dynamical systems usually
have both a space and a time component. For instance,
one can think of the evolution in time of the temperature
at various locations worldwide. The various locations
where the temperature is provided constitute the space
component of the data, while the time snapshots constitute
the time component. A common case in computational
physics is when we have access to values of a given
quantity at S spatial locations (or grid points),
for N time snapshots. In this case, the dimension
of the problem is S x N, where the spatial
component S might be extremely large. The key in
surrogate modeling or emulation is to identify a suitable
representation of the data that reduces the high spatial
dimension S to a more computationally manageable number,
which we denote Sr << S. The new reduced
space of dimension Sr, often referred to as the latent
space, is where we seek to learn the time evolution of the
system. The dynamics learnt in this reduced space can then
be used to reconstruct the original high-dimensional data
by inverting the compression. Emulation techniques are particularly useful
when fast predictions are required, e.g., in the context
of digital twins.
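To make the compression idea above concrete, here is a minimal numpy sketch (illustrative only, not the paper's code): an S x N data matrix is compressed to an Sr-dimensional latent space with a truncated SVD, the mechanism underlying POD-type reductions, and then reconstructed by inverting the compression. All variable names and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, N, Sr = 1000, 200, 10           # spatial points, snapshots, latent dim

# Synthetic low-rank data: Sr spatial patterns evolving over N snapshots.
data = rng.standard_normal((S, Sr)) @ rng.standard_normal((Sr, N))

# Truncated SVD: the leading left singular vectors form the reduced basis.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
basis = U[:, :Sr]                  # S x Sr

# Compress to the latent space (Sr x N), then invert the compression.
latent = basis.T @ data
reconstruction = basis @ latent

# Relative reconstruction error; tiny here because the data are rank Sr.
error = np.linalg.norm(data - reconstruction) / np.linalg.norm(data)
```

Real flow data are not exactly low rank, so in practice Sr trades reconstruction accuracy against the cost of learning the latent dynamics.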
Abstract
We aim to reconstruct the latent space dynamics of high-dimensional,
quasi-stationary systems using model order reduction via the spectral
proper orthogonal decomposition (SPOD). The proposed method is based
on three fundamental steps: in the first, once the mean flow
field has been subtracted from the realizations (also referred to
as snapshots), we compress the data from a high-dimensional
representation to a lower-dimensional one by constructing the SPOD
latent space; in the second, we build the time-dependent coefficients
by projecting the snapshots containing the fluctuations onto the SPOD
basis and we learn their evolution in time with the aid of recurrent
neural networks; in the third, we reconstruct the high-dimensional
data from the learnt lower-dimensional representation. The proposed
method is demonstrated on two different test cases: a
compressible jet flow and a geophysical problem known as the
Madden-Julian Oscillation. An extensive comparison between SPOD
and the equivalent POD-based counterpart is provided and differences
between the two approaches are highlighted. The numerical results
suggest that the proposed model is able to provide low-rank predictions
of complex, statistically stationary data and to offer insights
into the evolution of phenomena characterized by specific
frequency ranges. The comparison between POD and SPOD surrogate
strategies highlights the need for further work on the characterization
of the interplay of error between data reduction techniques
and neural network forecasts.
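The three steps of the abstract can be sketched end to end as follows. This is a hedged toy illustration: a plain POD basis (via SVD) stands in for the SPOD basis, and a least-squares one-step affine predictor on the latent coefficients stands in for the recurrent neural network; all data, names, and sizes are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
S, N, Sr = 500, 300, 4             # spatial points, snapshots, latent dim

# Synthetic statistically stationary data: a mean flow plus oscillations
# at two frequencies (sin/cos pairs, so the shift in time is affine).
t = np.linspace(0.0, 20.0 * np.pi, N)
modes = rng.standard_normal((S, Sr))
coeffs = np.stack([np.sin(0.1 * t), np.cos(0.1 * t),
                   np.sin(0.2 * t), np.cos(0.2 * t)])
snapshots = 1.0 + modes @ coeffs   # S x N

# Step 1: subtract the mean flow and build a reduced basis from the
# fluctuations (POD here; the paper constructs an SPOD basis).
mean = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean
U, _, _ = np.linalg.svd(fluct, full_matrices=False)
basis = U[:, :Sr]                  # S x Sr

# Step 2: project the fluctuations onto the basis and learn the time
# evolution of the coefficients. A one-step affine predictor
# a(t+1) ~ A [a(t); 1], fitted by least squares, is the RNN stand-in.
a = basis.T @ fluct                # Sr x N
a_aug = np.vstack([a, np.ones((1, N))])
A = a[:, 1:] @ np.linalg.pinv(a_aug[:, :-1])

# Step 3: forecast the coefficients one step ahead and lift the result
# back to the high-dimensional space.
a_pred = A @ a_aug[:, :-1]         # Sr x (N-1)
forecast = mean + basis @ a_pred
err = (np.linalg.norm(forecast - snapshots[:, 1:])
       / np.linalg.norm(snapshots[:, 1:]))
```

The toy dynamics are exactly affine in the latent space, so the one-step forecast is essentially perfect; for real turbulent or geophysical data the latent dynamics are nonlinear, which is why the paper resorts to recurrent networks.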