Student Speakers and Posters

SAMBa abounds in good work and interesting topics. To showcase as much of our work as possible during the two days of this conference, there will be both student talks and a poster session. Scroll towards the end of the page to view the titles and abstracts of the talks.

Poster Session

Most poster presenters will be standing by their posters during this session to answer questions and explain nuances.

  • Sinyoung Park: New directions in constrained spectral clustering for networks
  • Sadegh Salehi: An adaptively inexact first-order method for bilevel learning

Student Speakers

Below is the list of titles and abstracts for each presentation.


Tuesday Morning

Dáire O'Kane

Title

Solving underdamped Langevin dynamics

Abstract

Underdamped Langevin dynamics (ULD) is a stochastic differential equation popular in both the molecular dynamics and machine learning communities: the former because it describes the positions and velocities of interacting particles, the latter because its ergodic properties make it useful in Bayesian learning. We propose efficient higher-order solvers for ULD and prove their convergence as well as contraction properties. This allows us to establish bounds in the 2-Wasserstein metric on the rate of convergence to the invariant measure.
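For context (the notation here is generic, not necessarily the speaker's), ULD couples a position \(X_t\) and a velocity \(V_t\) via

```latex
\begin{aligned}
\mathrm{d}X_t &= V_t \,\mathrm{d}t, \\
\mathrm{d}V_t &= -\gamma V_t \,\mathrm{d}t - \nabla f(X_t)\,\mathrm{d}t + \sqrt{2\gamma}\,\mathrm{d}W_t,
\end{aligned}
```

where \(\gamma > 0\) is the friction coefficient, \(f\) the potential, and \(W_t\) a standard Brownian motion; the invariant measure has density proportional to \(e^{-f(x) - |v|^2/2}\).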

Sangeetha Sampath

Title

Estimating River Bathymetry for Non-Uniform Flow

Abstract

Flood prediction is crucial for disaster management and urban planning, with flooding costing the UK approximately £1.5 billion annually. Accurate river bathymetry, essential for reliable flood models, is traditionally resource-intensive to obtain. This research introduces a novel method for estimating river bathymetry using available datasets, bypassing extensive field measurements. Utilizing hydraulic geometry and shallow water equations, we develop a model for non-uniform flow conditions. We compare two optimization techniques: nonlinear least squares and the Nelder-Mead method. While nonlinear least squares can overfit noisy data, the Nelder-Mead method proves more robust and efficient. Our trapezoidal river model highlights the importance of precise parameter calibration. Findings show the Nelder-Mead method significantly outperforms nonlinear least squares in noisy data environments, enhancing flood prediction accuracy. This study offers a practical methodology for estimating river bathymetry, improving flood models in data-sparse regions. The talk will cover methodologies, key findings, and implications for future research and flood management.
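The optimisation comparison can be illustrated on a toy calibration problem. The sketch below is a minimal pure-Python Nelder-Mead simplex search fitting a two-parameter model to noisy data; it illustrates the general technique only, not the authors' river model or their exact implementation.

```python
import random

def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimiser (reflection/expansion/contraction/shrink)."""
    n = len(x0)
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        refl = [centroid[j] + (centroid[j] - worst[j]) for j in range(n)]
        if f(refl) < f(best):
            # Reflection is the new best point: try doubling the step (expansion).
            exp = [centroid[j] + 2.0 * (centroid[j] - worst[j]) for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            # Contract towards the centroid; if even that fails, shrink the simplex.
            contr = [centroid[j] + 0.5 * (worst[j] - centroid[j]) for j in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:
                simplex = [best] + [
                    [best[j] + 0.5 * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
    return min(simplex, key=f)

# Toy calibration: recover (a, b) from noisy observations of a*x + b.
random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.05) for x in xs]
sse = lambda p: sum((p[0] * x + p[1] - y) ** 2 for x, y in zip(xs, ys))
a, b = nelder_mead(sse, [0.0, 0.0])
```

Because Nelder-Mead uses only function values, no gradients, it is insensitive to the noise-induced roughness that can mislead derivative-based least-squares solvers.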

João Luiz De Oliveira Madeira

Title

Can deleterious mutations surf deterministic population waves?

Abstract

In this work, we study the deterministic scaling limit of a model introduced by Foutel-Rodier and Etheridge in 2020 to study the impact of cooperation and competition on the fitness of an expanding asexual population whose individual birth and death rates depend on the local population density. The interacting particle system can be mathematically described as particles performing symmetric random walks that undergo a birth-death process with rates that depend on the local number of particles. Phenomenologically, each particle represents a chromosome, and we track two features of each particle throughout the process: its location and its number of deleterious mutations. After each birth event, with some positive probability, the daughter particle can acquire an additional mutation which will decrease its reproduction rate when compared to its parent. We show that under an appropriate scaling, the process converges weakly to an infinite system of partial differential equations, proving a conjecture of Foutel-Rodier and Etheridge. For the case where the reaction term satisfies a Fisher-KPP condition, we prove a conjecture of Foutel-Rodier and Etheridge regarding the spreading speed of the population into an empty habitat. We also prove some further results regarding the asymptotic behaviour of the system of PDEs in the monostable case. This is joint work with Sarah Penington and Marcel Ortgiese.

Seb Scott

Title

What does it mean for a regulariser to be good?

Abstract

Given an image corrupted by noise, a classical approach to retrieve a denoised image is via variational regularisation, wherein you minimise a function consisting of a data-fidelity term, which encourages your reconstruction to not look totally different from the noisy image, and a regulariser term, which should penalise undesirable properties. The balance between these two terms is controlled by a non-negative scalar called the regularisation parameter.
While the regulariser can be hand-crafted, it is not mathematically obvious what it means for a regulariser to be a "good" choice. One characterisation is whether selecting a strictly positive regularisation parameter value is in some sense optimal. In this talk we will motivate a framework for determining optimal regularisation parameters and provide a new condition that characterises when zero is not an optimal parameter.
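In generic notation (an illustrative sketch, not necessarily the speaker's exact formulation), the variational problem reads

```latex
u_\alpha \in \operatorname*{arg\,min}_{u} \; \tfrac{1}{2}\,\lVert u - f \rVert^2 \;+\; \alpha\, R(u), \qquad \alpha \ge 0,
```

where \(f\) is the noisy image, \(R\) the regulariser, and \(\alpha\) the regularisation parameter; the question above is for which pairs \((f, R)\) the optimal \(\alpha\) is strictly positive.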


Tuesday Afternoon

Xinle Tian

Title

Multi-response linear regression estimation based on low-rank pre-smoothing

Abstract

Pre-smoothing is a technique aimed at increasing the signal-to-noise ratio in data to improve subsequent estimation and model selection in regression problems. However, pre-smoothing has thus far been limited to the univariate response regression setting. Motivated by the widespread interest in multi-response regression analysis in many scientific applications, this article proposes a technique for data pre-smoothing in this setting based on low rank approximation. We establish theoretical results on the performance of the proposed methodology, and quantify its benefit empirically in a number of simulated experiments. We also demonstrate our proposed low rank pre-smoothing technique on real data arising from the environmental sciences.
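The general idea of low-rank pre-smoothing can be sketched on synthetic data (a toy illustration, not the article's exact estimator): replace the noisy multi-response matrix with a low-rank approximation before any downstream fitting, so that noise orthogonal to the shared low-rank signal is removed.

```python
import random

def rank1_approx(Y, iters=100):
    """Best rank-1 approximation of a matrix Y (list of rows) via power iteration."""
    n, p = len(Y), len(Y[0])
    v = [1.0] * p
    for _ in range(iters):
        # Alternate u <- Y v and v <- Y^T u, renormalising each time.
        u = [sum(Y[i][j] * v[j] for j in range(p)) for i in range(n)]
        nu = sum(x * x for x in u) ** 0.5
        u = [x / nu for x in u]
        v = [sum(Y[i][j] * u[i] for i in range(n)) for j in range(p)]
        nv = sum(x * x for x in v) ** 0.5
        v = [x / nv for x in v]
    sigma = sum(u[i] * sum(Y[i][j] * v[j] for j in range(p)) for i in range(n))
    return [[sigma * u[i] * v[j] for j in range(p)] for i in range(n)]

# Rank-1 signal observed under additive noise across p responses.
random.seed(1)
n, p = 40, 5
signal = [[(i / n) * (j + 1) for j in range(p)] for i in range(n)]
Y = [[signal[i][j] + random.gauss(0, 0.3) for j in range(p)] for i in range(n)]
Y_smooth = rank1_approx(Y)
err = lambda A: sum((A[i][j] - signal[i][j]) ** 2 for i in range(n) for j in range(p))
```

Since the truncation keeps only the dominant shared direction, `Y_smooth` is closer to the underlying signal than the raw observations, which is the sense in which pre-smoothing raises the signal-to-noise ratio.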

Beth Stokes

Title

Speed and shape of population fronts with density dependent diffusion

Abstract

Understanding how and why animal populations disperse is a key question in movement ecology. There are many reasons for dispersal, such as overcrowding and searching for food, territory or potential mates. These behaviours are often dependent on the local density of the population. Motivated by this, we investigate travelling wave solutions in reaction-diffusion models of animal range expansion in the case that population diffusion is density dependent. We find that the speed of the selected wave depends critically on the strength of diffusion at low density. For sufficiently large low-density diffusion, the wave propagates at a speed predicted by a simple linear analysis. For small or zero low-density diffusion, the linear analysis is not sufficient, but a variational approach yields exact or approximate expressions for the speed and shape of population fronts.
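A generic model of this type (illustrative notation, not necessarily the speaker's) is the reaction-diffusion equation

```latex
\frac{\partial u}{\partial t} \;=\; \frac{\partial}{\partial x}\!\left( D(u)\,\frac{\partial u}{\partial x} \right) \;+\; f(u),
```

where \(D(u)\) is the density-dependent diffusivity and \(f(u)\) the population growth term; the low-density diffusion referred to above is the value of \(D\) at \(u = 0\).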

Pablo Arratia Lopez

Title

Solving Dynamic Inverse Problems with Neural Fields

Abstract

Image reconstruction for dynamic inverse problems with highly undersampled data poses a major challenge: not accounting for the dynamics of the process leads to non-realistic motion with no time regularity. Variational approaches that penalize time derivatives or introduce motion model regularizers have been proposed to relate subsequent frames and improve image quality using grid-based discretization. Neural fields offer an alternative parametrization of the desired spatiotemporal quantity with a deep neural network, a representation that is lightweight, continuous, and biased towards smoothness. This inductive bias has been exploited to enforce time regularity for dynamic inverse problems, resulting in neural fields optimized by minimizing a data-fidelity term only. In this work we investigate and show the benefits of introducing explicit PDE-based motion regularizers, namely the optical flow equation, in 2D+time computed tomography for the optimization of neural fields. We also compare neural fields against a grid-based solver and show that the former outperforms the latter.
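The optical flow (brightness constancy) equation used as a motion regulariser is, in standard form,

```latex
\frac{\partial u}{\partial t} + \mathbf{v} \cdot \nabla u = 0,
```

where \(u(x, t)\) is the image intensity and \(\mathbf{v}(x, t)\) the motion field; penalising the residual of this PDE couples the reconstructed frames through an explicit motion model.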


Wednesday Morning

Henry Writer

Title

Free-surface flow over generalised topographies using an arclength formulation

Abstract

We are interested in understanding the mathematical behaviours of free-surface waves generated by flow passing over a varying bottom topography in two-dimensional flow problems. Such wave-structure interaction problems are relevant in the design of flood control infrastructure and, at very different length scales, the behaviour of air currents generated by winds passing over mountains.
Historically, in modelling the fluid as a potential flow, there is a large amount of prior work investigating waves using a conformal mapping and/or boundary-integral approach. For simple geometries, such as flow over a step, a substantial amount of analytical insight can be drawn. In this talk, we present an alternative formulation, which uses an arc-length characterisation of the problem; this method allows us to consider more general bottom topographies. The challenge of this approach is that flow equations now include multiple coupled integral equations. In this talk, we discuss the limitations of prior methods, introduce the arc-length formulation, and discuss numerical solutions and asymptotic results.

Matt Evans

Title

Modern Approaches for Sweeping During Neutron Transport: Cycle-Free Polygonal Mesh Design

Abstract

The Boltzmann transport equation (BTE) is a fundamental integro-differential equation in physics and mathematics, originally developed to model the distribution of particles in thermodynamic systems not in equilibrium. Today, it has extensive applications in radiation transport, neutronics, and modern nuclear reactor development. Despite the nonlinear and high-dimensional nature of the BTE, which complicates the proof of existence and uniqueness of solutions, numerical solutions are essential and have been sought after for over a century.
One approach for approximating solutions to the BTE is the discrete ordinates method (DOM). This method discretises the velocity domain into energy groups and directions, generating a largely coupled system. Applying some numerical schemes to a further spatial decomposition results in a vast linear system. For a specific direction and energy group, and under additional constraints, the system can be evaluated element-wise based solely on its upwind spatial neighbours. This is a “sweep,” and a correct element ordering effectively generates a lower triangular linear system to solve for each direction/energy group.
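The element ordering described above is a topological sort of the upwind-dependency graph. The sketch below (a toy graph, not an actual mesh) uses Kahn's algorithm; failure to order every element is precisely a sweep deadlock.

```python
from collections import deque

def sweep_order(upwind):
    """Kahn's algorithm: return an element ordering, or None if a cycle exists."""
    indegree = {e: 0 for e in upwind}
    for e in upwind:
        for d in upwind[e]:
            indegree[d] += 1
    ready = deque(e for e, k in indegree.items() if k == 0)
    order = []
    while ready:
        e = ready.popleft()
        order.append(e)
        for d in upwind[e]:
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    # Every element placed => a valid sweep order (lower triangular system).
    return order if len(order) == len(upwind) else None

# upwind[e] lists the elements that receive flux from e for a given direction.
acyclic = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
cyclic = {"A": ["B"], "B": ["C"], "C": ["A"]}
```

On the acyclic mesh each element is solved only after its upwind neighbours; on the cyclic one no such ordering exists and the sweep deadlocks.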
However, sweeping has known drawbacks. It reduces the computational complexity of a matrix inverse only for specific spatial decompositions, and must do so without interfering with the subsequent numerical method. It is also prone to deadlocks, in which the boundary conditions of some elements are mutually coupled and form a cycle; such a cycle does not admit a lower triangular system, and the benefits of sweeping are lost.
In this talk, we explore a new property of Voronoi tessellations that admits cycle-free sweeping in any dimension. We demonstrate this in the context of a finite volume scheme.

Abby Barlow

Title

Analysis of a household-scale model for the invasion of Wolbachia into a resident mosquito population

Abstract

Dengue is a common vector-borne disease. It is transmitted between humans by Aedes mosquitoes and is widespread throughout tropical and subtropical regions, particularly urban areas. Wolbachia is an intracellular bacterium that can infect Aedes mosquitoes. Infection inhibits vector competence, so the release of Wolbachia-positive mosquitoes can help to control dengue.
Wolbachia release at the scale of a town or city may be reasonably approximated by deterministic models; however, the outcome and impact of the release on individual households is stochastic. Releases might also be made at household scale, where households are able to acquire their own population of Wolbachia-infected mosquitoes. In this talk, we introduce a mathematical model for the release of Wolbachia-infected mosquitoes at the household scale. We use a continuous-time Markov chain framework to investigate the dynamics of the introduction, quasi-stationary distributions and the probability of households reaching a state in which all resident mosquitoes are Wolbachia-infected. We compare with an equivalent deterministic model to interpret how stochasticity affects the bistability between the wildtype-only and Wolbachia-only steady states and the invasion threshold of the Wolbachia-infected population.

Matthew Pawley

Title

Testing for time-varying extremal dependence

Abstract

Climate change is known to cause changes in extreme weather patterns over time. This violates a common statistical assumption that observations are identically distributed and has ramifications for risk quantification. We devise a procedure to test for changes in the joint tail behaviour of a (potentially large) collection of random variables. The test performs well across a series of simulated experiments and a real-world case study regarding extreme Red Sea surface temperatures.


Wednesday Afternoon

Christian Jones

Title

The Sharp Corner Singularity of the White-Metzner Model

Abstract

Many materials we encounter in life are really quite complicated! One such class of material exhibits viscoelastic properties, that is, they display behaviours of viscous liquids (such as oil and water) and elastic solids (such as rubber). These viscoelastic materials arise almost everywhere, from the tendons in your body, right through to the Earth's tectonic plates!
But how do we model them? In this talk, we will attempt to answer part of this question by analysing the flow of a White-Metzner fluid around a re-entrant corner. Along the way, we'll encounter continuum mechanics, asymptotics and even a weird looking time derivative. Naturally, being a fluids talk, there will be some complicated equations, but hopefully I can convince you that they are nowhere near as scary as they look!

Sadegh Salehi

Title

An Adaptively Inexact Algorithm for Bilevel Learning

Abstract

In various imaging and data science domains, tasks are modeled using variational regularization, which poses challenges in manually selecting regularization parameters, especially when employing regularizers involving a large number of parameters. To tackle this, gradient-based bilevel learning, as a large-scale approach, can be used to learn parameters from data. However, the unattainability of exact function values and gradients with respect to parameters (hypergradients) necessitates reliance on inexact evaluations. State-of-the-art inexact gradient-based methods face difficulties in selecting accuracy sequences and determining appropriate step sizes due to unknown Lipschitz constants of hypergradients.
In this talk, we present our algorithm, the "Method of Adaptive Inexact Descent (MAID)," featuring a provably convergent backtracking line search that incorporates inexact function evaluations and hypergradients. This ensures convergence to a stationary point and adaptively determines the required accuracy. Our numerical experiments demonstrate MAID's practical superiority over state-of-the-art methods on an image denoising problem. Importantly, we showcase MAID's robustness across different initial accuracy and step size choices.
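For orientation, the classical Armijo backtracking line search that MAID builds on can be sketched as follows. Note this uses exact gradients on a hypothetical toy problem; MAID's contribution, making the line search provably convergent under inexact function values and hypergradients, is not shown here.

```python
def armijo_step(f, grad, x, step=1.0, beta=0.5, c=1e-4, max_tries=50):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    fx, g = f(x), grad(x)
    d = [-gi for gi in g]                      # descent direction: negative gradient
    slope = sum(gi * di for gi, di in zip(g, d))
    for _ in range(max_tries):
        x_new = [xi + step * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + c * step * slope:  # sufficient decrease achieved
            return x_new, step
        step *= beta                           # backtrack: try a smaller step
    return x, 0.0

# Minimise a toy quadratic f(x) = x1^2 + 10*x2^2 by gradient descent
# with backtracking, so no Lipschitz constant needs to be known in advance.
f = lambda x: x[0] ** 2 + 10 * x[1] ** 2
grad = lambda x: [2 * x[0], 20 * x[1]]
x = [1.0, 1.0]
for _ in range(100):
    x, _ = armijo_step(f, grad, x)
```

The point of backtracking is exactly the difficulty raised in the abstract: step sizes are found adaptively rather than from an unknown Lipschitz constant.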

Henry Lockyer

Title

Learning Splitting and Composition Methods

Abstract

Splitting and composition methods are widely used for solving initial value problems (IVPs) due to their ability to simplify complicated evolutions into more manageable subproblems. These subproblems can be solved efficiently and accurately, leveraging properties like linearity, sparsity and reduced stiffness. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie algebraic analogue, the Baker-Campbell-Hausdorff formula. These tools enable the development of high-order numerical methods that provide exceptional accuracy for small timesteps. Moreover, these methods often (nearly) conserve important physical invariants, such as mass, unitarity, and energy.
However, in many practical applications the computational resources are limited. Thus, it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which might require taking relatively large time steps. In this regime, high-order methods derived with traditional methods often exhibit large errors since they are designed to be asymptotically optimal. Machine Learning techniques offer a potential solution since they can be trained to efficiently solve a given IVP for large timesteps, but they are often purely data-driven, come with limited convergence guarantees in the small-timestep regime and do not necessarily conserve physical invariants.
In this work, we propose machine-learned splitting and composition methods that are computationally efficient for large time steps and have provable convergence and conservation guarantees in the small-timestep limit.
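Strang splitting, the archetypal second-order composition, can be illustrated on a toy ODE (a hypothetical example, not from the talk): split dy/dt = -y - y^3 into a linear part dy/dt = -y and a nonlinear part dy/dt = -y^3, each of which has an exact flow, and compose them as a half step, a full step, and another half step.

```python
import math

# Exact flows of the two subproblems.
flow_linear = lambda y, t: y * math.exp(-t)                    # dy/dt = -y
flow_cubic = lambda y, t: y / math.sqrt(1.0 + 2.0 * t * y * y) # dy/dt = -y^3

def strang(y, h):
    """One Strang step: half linear, full cubic, half linear (second order)."""
    y = flow_linear(y, h / 2)
    y = flow_cubic(y, h)
    return flow_linear(y, h / 2)

def solve(y0, T, n):
    y, h = y0, T / n
    for _ in range(n):
        y = strang(y, h)
    return y

# Exact solution via the Bernoulli substitution w = y^-2, giving w' = 2w + 2.
y0, T = 1.0, 1.0
exact = 1.0 / math.sqrt((1.0 / y0 ** 2 + 1.0) * math.exp(2.0 * T) - 1.0)
err_h = abs(solve(y0, T, 10) - exact)
err_h2 = abs(solve(y0, T, 20) - exact)
```

Halving the timestep should shrink the error by roughly a factor of four, the second-order behaviour that traditional derivations guarantee only asymptotically as the timestep tends to zero.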