Train Small, Model Big: Data-Driven Scaleup for Large-Scale Physical Simulations

Presenter
January 9, 2025
Abstract
Traditional numerical methods for solving partial differential equations are widely employed in scientific discovery. However, as models are refined (e.g., through h- or p-refinement) to better capture intricate system details, problem sizes can grow substantially because of the finer spatial and temporal discretizations required. This growth can easily reach exascale, significantly increasing computational costs. Furthermore, these refinements often introduce numerical instabilities, such as those stemming from CFL constraints and mesh distortions, which limit time-step sizes and make long-duration simulations extremely challenging. Together, these factors create a major bottleneck in computational efficiency, impeding advances in science and technology, particularly in scenarios that rely on critical decision-making. To address these challenges, we propose integrating data-driven bases within a component-based reduced order model framework, enabling larger element sizes and scalable solution of much larger problems. This approach embodies the ideal machine learning paradigm: “train small, model big.” By leveraging physics-driven identification of equations within the reduced space of each component, our method enables more robust extrapolation than purely data-driven approaches. In this talk, we will outline the general framework for component-based reduced order models and highlight substantial performance gains, including a 1000x speed-up in lattice-type structural design, a 1000x scale-up in nonlinear Navier–Stokes flow simulations over porous media, and extensions to nonlinear manifold reduced order models for the time-dependent Burgers’ equation.
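
To make the “train small, model big” idea concrete, below is a minimal sketch of the standard proper orthogonal decomposition (POD) Galerkin workflow that reduced order models build on, applied to the 1D viscous Burgers’ equation mentioned in the abstract: snapshots from a small full-order solve supply a data-driven basis, and the simulation is then advanced in the reduced coordinates. The discretization, parameters, and variable names here are illustrative assumptions, not the presenters’ implementation (which additionally couples bases across components and identifies physics-driven equations in the reduced space).

```python
# Minimal POD-Galerkin sketch for 1D viscous Burgers' equation.
# "Train small": snapshots from a coarse full-order solve give a data-driven
# basis; the system is then integrated with far fewer unknowns.
import numpy as np

# --- Full-order model (FOM): u_t + u u_x = nu u_xx on a periodic domain ----
n, nu, dt, steps = 256, 0.01, 1e-3, 2000
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x) + 0.5                      # illustrative initial condition

def rhs(u):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)       # central d/dx
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # d^2/dx^2
    return -u * ux + nu * uxx

snapshots = [u.copy()]
for _ in range(steps):                   # forward Euler; dt kept CFL-stable
    u = u + dt * rhs(u)
    snapshots.append(u.copy())
S = np.array(snapshots).T                # columns are states (n x n_snapshots)

# --- Data-driven basis via truncated SVD (POD) ------------------------------
r = 10                                   # reduced dimension << n
Phi, sigma, _ = np.linalg.svd(S, full_matrices=False)
Phi = Phi[:, :r]                         # keep the r dominant modes

# --- Galerkin-projected ROM: evolve r coefficients instead of n unknowns ----
a = Phi.T @ S[:, 0]                      # project the initial condition
for _ in range(steps):
    a = a + dt * (Phi.T @ rhs(Phi @ a))  # project the full RHS onto the basis

err = np.linalg.norm(Phi @ a - S[:, -1]) / np.linalg.norm(S[:, -1])
print(f"relative error at final time with r={r} modes: {err:.2e}")
```

Note that this sketch evaluates the nonlinear term in the full space for clarity; a practical ROM would add hyper-reduction (e.g., DEIM) so the reduced solve never touches full-dimensional vectors, which is where the large speed-ups come from.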