Co-organizing a minisymposium on "Transfer Learning and Multi-Fidelity Approaches to Alleviate Data Sparsity in Machine Learning" at SIAM-MDS22

Minisymposium Abstract: Machine learning (ML) models of sufficient predictive accuracy often rely on a significant volume of data for the inverse problem of model calibration. This can be limiting due to, for example, the prohibitive expense of computer simulations or a lack of field/experimental data. ML models have mainly been applied to tasks and domains that, while impactful, offer a sufficient volume of data. However, when deployed for scientific or engineering tasks, ML models are sometimes invoked in conditions that do not overlap with the set of scenarios for which they were trained, or in scenarios with insufficient high-fidelity data for training purposes. State-of-the-art ML models, despite exhibiting superior performance on the domains they were trained on, suffer a detrimental loss of performance in such extrapolatory or data-sparse settings. This loss of performance is also unpredictable and sensitive to both the amount and nature of the data sparsity.

Transfer learning is the process in which knowledge gained from similar training tasks is used to improve the training process on a new task that may suffer from limited data. Alternatively, lower-fidelity data can supplement sparse high-fidelity data for the training task of interest through multi-fidelity training frameworks. This minisymposium will focus on novel methodologies and applications of transfer learning and multi-fidelity data fusion that enhance the performance of ML models in sparse-data settings.
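To make the multi-fidelity idea concrete, here is a minimal sketch, not part of the minisymposium abstract, of one common fusion pattern: a surrogate is trained on abundant low-fidelity data, and its prediction is then used as an extra input feature for a second model fit to the sparse high-fidelity data (a Kennedy-O'Hagan-style correction). The toy functions f_low/f_high, sample sizes, and kernels are all illustrative assumptions.

```python
# Minimal multi-fidelity fusion sketch (illustrative assumptions throughout).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def f_high(x):  # expensive "truth": only a few samples available
    return np.sin(8 * x) * x

def f_low(x):   # cheap, biased approximation: plentiful samples
    return 0.8 * np.sin(8 * x) * x + 0.3 * x

x_lo = rng.uniform(0, 1, 50)[:, None]  # abundant low-fidelity inputs
x_hi = rng.uniform(0, 1, 6)[:, None]   # sparse high-fidelity inputs

# Step 1: fit a surrogate to the abundant low-fidelity data.
gp_lo = GaussianProcessRegressor(kernel=RBF(0.1)).fit(x_lo, f_low(x_lo.ravel()))

# Step 2: learn a low-to-high correction at the sparse points, using the
# low-fidelity prediction as an additional input feature.
feats_hi = np.hstack([x_hi, gp_lo.predict(x_hi)[:, None]])
gp_corr = GaussianProcessRegressor(kernel=RBF([0.1, 1.0])).fit(
    feats_hi, f_high(x_hi.ravel()))

# Predict the high-fidelity response by chaining the two models.
x_test = np.linspace(0, 1, 200)[:, None]
feats_test = np.hstack([x_test, gp_lo.predict(x_test)[:, None]])
y_pred = gp_corr.predict(feats_test)
```

A transfer-learning analogue of this sketch would instead warm-start the high-fidelity model from parameters learned on a related task, rather than feeding the low-fidelity prediction in as a feature.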

Cosmin Safta
Distinguished Member of Technical Staff

My research interests include uncertainty quantification, machine learning, and statistics.