Transferability of Neural Network Ensemble Models between different Open-Source Codes

  • Karpowski, Tim Jeremy Patrick (STFS, TU Darmstadt)
  • Ferraro, Federica (STFS, TU Darmstadt)
  • Fortes, Emilano Manuel (Barcelona Supercomputing Center)
  • Both, Ambrus (Barcelona Supercomputing Center)
  • Mira, Daniel (Barcelona Supercomputing Center)
  • Hasse, Christian (STFS, TU Darmstadt)


Artificial neural networks (ANNs) are regarded as general function approximators. With the recent increase in computational power, fitting such networks using gradient-based optimization algorithms has become possible. In many areas, neural networks now outperform previous analytical or empirical models, e.g., [1]. These successes suggest that similar performance is possible when applied to combustion closure. Recent studies were able to predict the reactive source terms using ANNs in simulations [2–5]. While these methods achieved simulation times faster than direct evaluation of detailed chemistry, they are still slower than, or at best comparable to, traditional tabulated chemistry approaches.

Many of those works used multiple networks, each responsible for a subset of the complete thermochemical manifold of the training data [2–4]. This decomposition was employed because ANNs are currently not trainable to a sufficient relative tolerance when the predicted quantity spans multiple orders of magnitude. Additionally, as the number of subdomains increases, simpler networks may suffice for their respective subdomains, improving performance. A problem arises here in the context of open-source software: contrary to previous, readily re-implementable methods, ensemble models are not directly transferable between codes. This work therefore aims to showcase how such ensemble models can be trained and deployed in an open-source CFD framework, here OpenFOAM. Furthermore, the possible distribution of such models is discussed and illustrated by the transfer and deployment of these models into a second CFD code, Alya. Finally, the speed and performance of these systems are evaluated for canonical flame configurations.

References

[1] J. Jumper, R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, et al., Highly accurate protein structure prediction with AlphaFold, Nature 596, 583 (2021).
[2] M. Haghshenas, P. Mitra, N. D. Santo, and D. P. Schmidt, Acceleration of chemical kinetics computation with the learned intelligent tabulation (LIT) method, Energies 14 (2021).
[3] T. Ding, S. Rigopoulos, and W. Jones, Machine learning tabulation of thermochemistry of fuel blends, Applications in Energy and Combustion Science 12 (2022).
[4] K. Wan, C. Barnaud, L. Vervisch, and P. Domingo, Chemistry reduction using machine learning trained from non-premixed micro-mixing modeling: Application to DNS of a sy
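The subdomain decomposition described in the abstract — several small networks, each owning a slice of the thermochemical manifold, with queries routed to the responsible sub-network — can be sketched as follows. This is an illustrative toy, not the authors' implementation: the class names, the equal-width partition on the first input (taken here as a progress-variable-like coordinate), and the untrained random weights are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

class SubNet:
    """A small one-hidden-layer MLP responsible for one subdomain.
    Weights are random here; a real model would be trained on its slice,
    typically on log-scaled source terms to tame the dynamic range."""
    def __init__(self, n_in, n_hidden, n_out):
        self.W1 = rng.normal(size=(n_in, n_hidden)) * 0.1
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(size=(n_hidden, n_out)) * 0.1
        self.b2 = np.zeros(n_out)

    def __call__(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return h @ self.W2 + self.b2

class Ensemble:
    """Partitions [0, 1) on the first input coordinate into equal-width
    bins and routes each query to the sub-network owning that bin."""
    def __init__(self, n_subdomains, n_in=2, n_hidden=8, n_out=1):
        self.edges = np.linspace(0.0, 1.0, n_subdomains + 1)
        self.nets = [SubNet(n_in, n_hidden, n_out)
                     for _ in range(n_subdomains)]

    def __call__(self, x):
        # searchsorted finds the bin; clip guards the right boundary.
        i = int(np.clip(np.searchsorted(self.edges, x[0], side="right") - 1,
                        0, len(self.nets) - 1))
        return self.nets[i](x)

ens = Ensemble(n_subdomains=4)
y = ens(np.array([0.3, 0.5]))  # x[0] = 0.3 falls in the second bin
```

The point of the sketch is the transferability problem the abstract raises: deploying this model in another code means shipping not just the weight matrices of every sub-network but also the partition (`edges`) and the routing rule, which is exactly the metadata that a plain per-network export would lose.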