The development of simplified computational models of complex fundamental phenomena in physics, chemistry, astronomy and biology is an ongoing challenge. The purpose of such simplified models is typically to reduce computational cost at minimal loss of accuracy; more importantly, they can also provide fundamental understanding of the underlying phenomena. Two concepts have recently gained importance in computational science: (i) machine learning, in particular neural networks, and (ii) structure-preserving (mimetic or invariant-conserving) computing for mathematical models in physics, chemistry, astronomy, biology and beyond. Neural networks are powerful high-dimensional universal function approximators, but they require enormous datasets for training and tend to perform poorly outside the range of the training data. Structure-preserving methods, on the other hand, excel at providing accurate solutions to complex mathematical models from science. The goal of the UNRAVEL project is to better understand neural networks so as to enable the design of highly efficient, tailor-made neural networks that are built on top of, and interwoven with, the structure-preserving properties of the underlying science problem, and that can serve as the simplified models mentioned above. This is largely unexplored terrain and is expected to lead to novel types of machine learning that are far more effective and require far less training data.
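As a toy illustration of what "structure-preserving" means here (this sketch is not from the abstract, and all names in it are hypothetical): the linear system dx/dt = A x with a skew-symmetric matrix A conserves the norm of x exactly, and a structure-preserving discretization such as the implicit midpoint rule inherits that invariant at the discrete level, whereas a generic method would let it drift. A layer built from such a map would conserve the invariant by construction rather than learning it from data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n))
A = W - W.T  # skew-symmetric: the exact flow of dx/dt = A x conserves ||x||

h = 0.1
I = np.eye(n)
# Implicit midpoint step: (I - h/2 A) x_{k+1} = (I + h/2 A) x_k.
# For skew-symmetric A this is a Cayley transform, an exactly orthogonal map,
# so the discrete trajectory conserves the norm up to round-off.
M = np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

x = rng.standard_normal(n)
norm0 = np.linalg.norm(x)
for _ in range(1000):
    x = M @ x
drift = abs(np.linalg.norm(x) - norm0)
print(drift)  # stays at round-off level even after 1000 steps
```

The same principle, embedding a known invariant directly into the map instead of hoping a network learns it, is what tailor-made structure-preserving network architectures aim to exploit.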