The second Workshop on “Machine Learning Meets Differential Equations: From Theory to Applications” (ML-DE), co-located with ECAI 2025, explores the synergies between Machine Learning (ML) and Differential Equations (DE), highlighting how the two fields can be combined to improve predictive accuracy and advance explainable AI. The event focuses on integrating DEs into ML architectures and on applying ML to solve complex DE problems, with the aim of fostering innovation in fields ranging from physics to biology. Committed to sustainable AI practices, the workshop is dedicated to developing energy-efficient algorithms and reducing the computational footprint of ML, and it encourages a community-driven approach to pushing the boundaries of research and application at the intersection of ML and DE.
With objectives and themes that touch a wide range of scientific and technological domains, the workshop is positioned as a pivotal meeting point for researchers and practitioners interested in the future of intelligent technologies.
A detailed description of the workshop can be found here. Last year’s workshop gives a good impression of the format.
Invited Talks:
- Ricardo Baptista, Department of Statistical Sciences, University of Toronto, and Faculty Affiliate at the Vector Institute: “Memorization and Regularization in Generative Diffusion Models” Abstract: Diffusion models have emerged as a powerful framework for generative modeling in the information sciences and many scientific domains. To generate samples from the target distribution, these models rely on learning the gradient of the data distribution’s log-density using a score matching procedure. A key element for the success of diffusion models is that the optimal score function is not identified when solving the denoising score matching problem. In fact, the optimal score in both unconditioned and conditioned settings leads to a diffusion model that returns the training samples and effectively memorizes the data distribution. In this presentation, we study the dynamical system associated with the optimal score and describe its long-term behavior relative to the training samples. Lastly, we show the effect of two forms of score function regularization on avoiding memorization: restricting the score’s approximation space and early stopping of the training process. These results are numerically validated using distributions with and without densities, including image-based inverse problems for scientific machine learning applications. (A toy sketch of the memorizing optimal score follows after this list.)
- Andrei A. Klishin, Department of Mechanical Engineering, University of Hawaiʻi at Mānoa: “Statistical Mechanics of Dynamical System Identification” Abstract: Recovering dynamical equations from observed noisy data is the central challenge of system identification. We develop a statistical mechanics approach to analyze sparse equation discovery algorithms, which typically balance data fit and parsimony via hyperparameter tuning. In this framework, statistical mechanics offers tools to analyze the interplay between complexity and fitness, analogous to that between entropy and energy in physical systems. To establish this analogy, we define the hyperparameter optimization procedure as a two-level Bayesian inference problem that separates model form selection from parameter inference and enables the computation of the posterior parameter distribution in closed form. Our approach provides uncertainty quantification, crucial in the low-data limit that is frequently encountered in real-world applications. A key advantage of employing statistical mechanical concepts, such as free energy and the partition function, is to connect the large-data limit to the thermodynamic limit and characterize the sparsity- and noise-induced phase transitions that delineate correct from incorrect identification. We thus provide a method for closed-loop inference, estimating the noise in a given model and checking whether the model is tolerant to that amount of noise. This perspective on sparse equation discovery is versatile and can be adapted to various other equation discovery algorithms. (A toy sketch of the two-level inference follows after this list.)
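To make the memorization phenomenon from the first talk concrete, here is a minimal sketch (our own illustration, not the speaker’s code): for an empirical training distribution smoothed with Gaussian noise, the optimal score is available in closed form as a softmax-weighted pull toward the training points, and following it while annealing the noise level drives any sample onto a training point. All names and parameter values below are illustrative.

```python
# Toy illustration: the closed-form optimal score of a Gaussian-smoothed
# empirical distribution, p_sigma(x) = (1/n) sum_i N(x; x_i, sigma^2 I),
# and how following it recovers the training samples (memorization).
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(5, 2))             # n = 5 training points in 2-D

def optimal_score(x, sigma):
    """grad_x log p_sigma(x): a softmax-weighted pull toward training points."""
    diffs = train - x                       # (n, 2): x_i - x
    logw = -np.sum(diffs**2, axis=1) / (2 * sigma**2)
    w = np.exp(logw - logw.max())
    w /= w.sum()                            # posterior weight of each x_i given x
    return (w[:, None] * diffs).sum(axis=0) / sigma**2

# Anneal sigma and follow the score (a crude deterministic sampler);
# each step with size sigma^2 jumps to the posterior mean over training points.
x = rng.normal(size=2) * 3.0
for sigma in np.geomspace(3.0, 1e-3, 400):
    x = x + sigma**2 * optimal_score(x, sigma)

nearest = np.argmin(np.sum((train - x)**2, axis=1))
print("final sample:", x)
print("nearest training point:", train[nearest])  # essentially identical
```

Running this, the final sample coincides with one of the training points, which is exactly the memorization behavior the talk analyzes; the regularizations discussed (restricting the approximation space, early stopping) prevent a learned score from reaching this degenerate optimum.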
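Similarly, for the second talk, a minimal sketch (again our own, with illustrative data and priors) of the two-level Bayesian view of sparse equation discovery: the lower level computes the closed-form Gaussian parameter posterior for a fixed model form, and the upper level scores each candidate form by its log evidence (a negative free energy) and selects the best one.

```python
# Toy sketch of two-level Bayesian sparse equation discovery: for each
# candidate model form (subset of library terms), the Gaussian parameter
# posterior and the log evidence are available in closed form; the upper
# level picks the form with the highest evidence. Data come from the
# assumed dynamics dx/dt = x - x^3 plus noise.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = x - x**3 + rng.normal(scale=0.1, size=x.size)   # noisy dx/dt observations

library = {"1": np.ones_like(x), "x": x, "x^2": x**2, "x^3": x**3}
alpha, beta = 1.0, 100.0        # illustrative prior / noise precisions (1/0.1^2)

def log_evidence(Phi, y):
    """Closed-form log marginal likelihood of Bayesian linear regression."""
    n, d = Phi.shape
    A = alpha * np.eye(d) + beta * Phi.T @ Phi      # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ y)        # posterior mean
    resid = y - Phi @ m
    return (0.5 * d * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * resid @ resid - 0.5 * alpha * m @ m
            - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * n * np.log(2 * np.pi)), m

# Upper level: enumerate model forms and keep the one with maximal evidence.
best = max(
    (log_evidence(np.column_stack([library[t] for t in terms]), y) + (terms,)
     for k in range(1, 5) for terms in combinations(library, k)),
    key=lambda r: r[0],
)
print("selected terms:", best[2], "coefficients:", np.round(best[1], 3))
```

This typically selects the terms x and x^3 with coefficients near 1 and -1, recovering the assumed equation; scanning the same evidence over the noise precision beta is one way to probe the noise-induced transitions between correct and incorrect identification that the talk characterizes.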
Important Dates
- Submission Deadline: 14th of July 2025, 23:59 CEST (extended from 15th of June 2025)
- Notification of Acceptance: latest by 20th of August 2025
- Workshop Date: 26th of October 2025, full day
- Review Transfer Option Deadline: 20th of July 2025 23:59 CEST
Highlights of Call for Papers
Below are the key points of the Call for Papers for the ML-DE Workshop, dedicated to the convergence of Machine Learning and Differential Equations. For detailed submission guidelines and topics of interest, please visit our Call for Papers page.
Brief Submission Guidelines
- Paper Format: Submissions must be formatted according to the PMLR LaTeX template. Either use our prefilled version on Overleaf, or take the official PMLR Template and adapt it according to our Example.
- Length: Up to 8 pages, excluding references and appendices.
- Blind Review: Submissions should be anonymized to adhere to our double-blind review process.
- Submission Link: https://easychair.org/conferences/?conf=mlde2025
- Review Transfer Option: Papers not accepted in the ECAI main track can be submitted to our workshop. Please follow the main conference’s rules on “RESUBMISSION FROM OTHER CONFERENCES”, which can be found here, and use this format when submitting your paper. For questions, contact us at MLDEWorkshopECAI25@hsu-hh.de.
Snapshot of Topics of Interest
We invite submissions that bridge Machine Learning with Differential Equations, covering a spectrum from theoretical innovations to practical applications. For a full exploration of the themes and how to submit, visit the full Call for Papers.
Engage with us in advancing this interdisciplinary nexus!