Transfer learning in deep operator networks

Presenter
June 7, 2023
Abstract
Transfer learning allows knowledge gained while learning to perform one task (source) to be transferred to a related but distinct task (target), thereby mitigating the cost of data collection and labeling, limited computational resources, and mismatches between dataset distributions. Building on the deep operator network (DeepONet), we propose a new transfer learning framework for task-specific learning (functional regression in partial differential equations) under conditional shift. Task-specific operator learning is achieved by fine-tuning task-specific layers of the target DeepONet with a hybrid loss function that matches individual target samples while preserving the global features of the conditional distribution of the target data. Inspired by conditional embedding operator theory, we embed conditional distributions into a reproducing kernel Hilbert space and minimize the statistical distance between labeled target data and the surrogate's predictions on unlabeled target data. We demonstrate the benefits of our approach in a variety of transfer learning scenarios involving nonlinear partial differential equations under conditional shift caused by geometric domain changes and changes in model dynamics. Despite significant discrepancies between the source and target domains, our transfer learning framework enables fast and efficient learning of heterogeneous tasks.
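
To make the hybrid loss concrete, the following is a minimal sketch (in PyTorch, assuming a Gaussian kernel and a squared maximum mean discrepancy as the RKHS statistical distance; the names gaussian_kernel, mmd2, hybrid_loss, and the weight lam are illustrative, not the authors' actual implementation). The pointwise term matches individual labeled target samples, while the MMD term keeps the surrogate's predictions on unlabeled target inputs statistically close to the labeled target data:

```python
import torch

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of x and y."""
    d2 = torch.cdist(x, y) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between samples x and y,
    i.e., the RKHS distance between their kernel mean embeddings."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

def hybrid_loss(pred_labeled, target_labeled, pred_unlabeled,
                lam=0.1, sigma=1.0):
    """Pointwise MSE on labeled target samples plus a weighted RKHS
    distance between predictions on unlabeled inputs and labeled data."""
    mse = torch.mean((pred_labeled - target_labeled) ** 2)
    return mse + lam * mmd2(pred_unlabeled, target_labeled, sigma)
```

During fine-tuning, only the task-specific (e.g., final) layers of the target DeepONet would be optimized against this loss, with the remaining layers frozen at their source-trained values; the weight lam trades off sample-wise accuracy against distributional matching.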