Student Speakers and Posters
SAMBa abounds in good work and interesting topics. To showcase as much of our work as possible during the two days of this conference, there will be both student talks and a poster session. Scroll towards the end of the page to view the titles and abstracts of the talks.
Poster Session
Most of the creators of the posters will be standing near their posters during this session to answer questions and explain nuances.
 Sinyoung Park: New directions in constrained spectral clustering for networks
 Sadegh Salehi: An adaptively inexact first-order method for bilevel learning
Student Speakers
Below is the list of titles and abstracts for each presentation.
Tuesday Morning
Dáire O'Kane
 Title

Solving underdamped Langevin dynamics
 Abstract

Underdamped Langevin dynamics (ULD) is a stochastic differential equation popular in both the molecular dynamics and machine learning communities: the former because it describes the positions and velocities of interacting particles, the latter because its ergodic properties make it useful in Bayesian learning. We propose efficient higher-order solvers for ULD and prove their convergence as well as contraction properties. This allows us to establish bounds in the 2-Wasserstein metric for the rate of convergence to the invariant measure.
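The dynamics themselves can be sketched with a basic Euler-Maruyama discretisation (not the higher-order solvers discussed in the talk); the quadratic potential and all parameter values below are illustrative assumptions.

```python
import numpy as np

def uld_euler_maruyama(x0, v0, grad_U, gamma=1.0, steps=1000, h=0.01, seed=0):
    """Euler-Maruyama discretisation of underdamped Langevin dynamics
    (unit mass and temperature):
        dX_t = V_t dt
        dV_t = (-grad_U(X_t) - gamma * V_t) dt + sqrt(2 * gamma) dW_t
    """
    rng = np.random.default_rng(seed)
    x, v = float(x0), float(v0)
    for _ in range(steps):
        x, v = (x + h * v,
                v + h * (-grad_U(x) - gamma * v)
                  + np.sqrt(2.0 * gamma * h) * rng.standard_normal())
    return x, v

# Quadratic potential U(x) = x^2 / 2: the invariant measure of X is N(0, 1).
samples = [uld_euler_maruyama(2.0, 0.0, lambda x: x, steps=2000, seed=s)[0]
           for s in range(200)]
```

Running many independent chains long enough to mix, the empirical distribution of the positions should approach the invariant Gaussian.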
Sangeetha Sampath
 Title

Estimating River Bathymetry for Non-Uniform Flow
 Abstract

Flood prediction is crucial for disaster management and urban planning, with flooding costing the UK approximately £1.5 billion annually. Accurate river bathymetry, essential for reliable flood models, is traditionally resource-intensive to obtain. This research introduces a novel method for estimating river bathymetry using available datasets, bypassing extensive field measurements. Utilizing hydraulic geometry and shallow water equations, we develop a model for non-uniform flow conditions. We compare two optimization techniques: nonlinear least squares and the Nelder-Mead method. While nonlinear least squares can overfit noisy data, the Nelder-Mead method proves more robust and efficient. Our trapezoidal river model highlights the importance of precise parameter calibration. Findings show the Nelder-Mead method significantly outperforms nonlinear least squares in noisy data environments, enhancing flood prediction accuracy. This study offers a practical methodology for estimating river bathymetry, improving flood models in data-sparse regions. The talk will cover methodologies, key findings, and implications for future research and flood management.
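As a toy illustration of the two optimisation strategies being compared, one can fit a hypothetical power-law width-discharge relation to noisy synthetic data with both solvers; the model, data, and parameter values below are illustrative assumptions, not the study's.

```python
import numpy as np
from scipy.optimize import least_squares, minimize

rng = np.random.default_rng(1)
Q = np.linspace(1.0, 50.0, 40)                        # synthetic discharges
a_true, b_true = 3.0, 0.5
w = a_true * Q**b_true + rng.normal(0, 0.5, Q.size)   # noisy channel widths

def residuals(p):
    a, b = p
    return a * Q**b - w

# Nonlinear least squares on the residual vector
p_ls = least_squares(residuals, x0=[1.0, 1.0]).x

# Nelder-Mead (derivative-free) on the sum of squared residuals
p_nm = minimize(lambda p: np.sum(residuals(p)**2), x0=[1.0, 1.0],
                method="Nelder-Mead").x
```

On well-behaved synthetic data both solvers recover the true parameters; the abstract's point is how they differ as noise grows.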
João Luiz De Oliveira Madeira
 Title

Can deleterious mutations surf deterministic population waves?
 Abstract

In this work, we study the deterministic scaling limit of a model introduced by Foutel-Rodier and Etheridge in 2020 to study the impact of cooperation and competition on the fitness of an expanding asexual population whose individual birth and death rates depend on the local population density. The interacting particle system can be mathematically described as particles performing symmetric random walks that undergo a birth-death process with rates that depend on the local number of particles. Phenomenologically, each particle represents a chromosome, and we keep track during the process of two features of each particle: its location and its number of deleterious mutations. After each birth event, with some positive probability, the daughter particle can acquire an additional mutation which will decrease its reproduction rate when compared to its parent. We show that under an appropriate scaling, the process converges weakly to an infinite system of partial differential equations, proving a conjecture of Foutel-Rodier and Etheridge. For the case where the reaction term satisfies a Fisher-KPP condition, we prove a conjecture of Foutel-Rodier and Etheridge regarding the spreading speed of the population into an empty habitat. We also prove some further results regarding the asymptotic behaviour of the system of PDEs in the monostable case. This is joint work with Sarah Penington and Marcel Ortgiese.
Seb Scott
 Title

What does it mean for a regulariser to be good?
 Abstract

Given an image corrupted by noise, a classical approach to retrieving a denoised image is variational regularisation, wherein you minimise a function consisting of a data-fidelity term, which encourages your reconstruction to not look totally different from the noisy image, and a regulariser term, which should penalise undesirable properties. The balance between these two terms is controlled by a non-negative scalar called the regularisation parameter.
While the regulariser can be hand-crafted, mathematically it is not obvious what it means for a regulariser to be a “good” choice. One characterisation is whether selecting a strictly positive regularisation parameter value is in some sense optimal. In this talk we will motivate a framework for determining optimal regularisation parameters and provide a new condition that characterises when zero is not an optimal parameter.
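A minimal sketch of this setup, assuming a simple squared-gradient (Tikhonov) regulariser rather than anything discussed in the talk: for a 1-D signal, the minimiser of the data-fidelity-plus-regulariser objective has a closed form via the normal equations, and the regularisation parameter alpha controls the trade-off.

```python
import numpy as np

def tikhonov_denoise(f, alpha):
    """Solve min_u ||u - f||^2 + alpha * ||D u||^2 with D = forward differences.
    The normal equations give (I + alpha * D^T D) u = f."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)      # (n-1) x n forward-difference matrix
    return np.linalg.solve(np.eye(n) + alpha * D.T @ D, f)

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))
noisy = clean + rng.normal(0, 0.3, 100)
denoised = tikhonov_denoise(noisy, alpha=10.0)
```

Note that alpha = 0 simply returns the noisy image, which is why the question of whether zero can be an optimal parameter is meaningful.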
Tuesday Afternoon
Xinle Tian
 Title

Multi-response linear regression estimation based on low-rank presmoothing
 Abstract

Presmoothing is a technique aimed at increasing the signal-to-noise ratio in data to improve subsequent estimation and model selection in regression problems. However, presmoothing has thus far been limited to the univariate response regression setting. Motivated by the widespread interest in multi-response regression analysis in many scientific applications, this article proposes a technique for data presmoothing in this setting based on low-rank approximation. We establish theoretical results on the performance of the proposed methodology, and quantify its benefit empirically in a number of simulated experiments. We also demonstrate our proposed low-rank presmoothing technique on real data arising from the environmental sciences.
Beth Stokes
 Title

Speed and shape of population fronts with density-dependent diffusion
 Abstract

Understanding how and why animal populations disperse is a key question in movement ecology. There are many reasons for dispersal, such as overcrowding and searching for food, territory or potential mates. These behaviours are often dependent on the local density of the population. Motivated by this, we investigate travelling wave solutions in reaction-diffusion models of animal range expansion in the case that population diffusion is density-dependent. We find that the speed of the selected wave depends critically on the strength of diffusion at low density. For sufficiently large low-density diffusion, the wave propagates at a speed predicted by a simple linear analysis. For small or zero low-density diffusion, the linear analysis is not sufficient, but a variational approach yields exact or approximate expressions for the speed and shape of population fronts.
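For the constant-diffusion Fisher-KPP benchmark (the simplest case, where the linear analysis does apply), the linearly predicted spreading speed c* = 2√(Dr) can be checked against a direct simulation; the grid and parameter choices below are illustrative assumptions.

```python
import numpy as np

def kpp_front_speed(D=1.0, r=1.0, L=300.0, n=1200, T=40.0, dt=0.02):
    """Simulate u_t = D u_xx + r u (1 - u) from a step initial condition and
    estimate the front speed from the motion of the u = 0.5 level set."""
    x = np.linspace(0.0, L, n)
    dx = x[1] - x[0]                      # dt must satisfy dt < dx^2 / (2 D)
    u = (x < 10.0).astype(float)          # population occupies the left edge
    front = lambda u: x[np.argmax(u < 0.5)]
    steps = int(T / dt)
    pos_mid = None
    for k in range(steps):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        lap[0] = lap[-1] = 0.0            # crude no-flux boundaries
        u = u + dt * (D * lap + r * u * (1.0 - u))
        if k == steps // 2:
            pos_mid = front(u)            # front position at t = T/2
    return (front(u) - pos_mid) / (T / 2.0)

speed = kpp_front_speed()
linear_speed = 2.0 * np.sqrt(1.0 * 1.0)   # c* = 2 sqrt(D r)
```

The measured speed sits slightly below c* at finite times, consistent with the well-known slow convergence of pulled fronts to the linear prediction.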
Pablo Arratia Lopez
 Title

Solving Dynamic Inverse Problems with Neural Fields
 Abstract

Image reconstruction for dynamic inverse problems with highly undersampled data poses a major challenge: not accounting for the dynamics of the process leads to a non-realistic motion with no time regularity. Variational approaches that penalize time derivatives or introduce motion model regularizers have been proposed to relate subsequent frames and improve image quality using grid-based discretization. Neural fields offer an alternative parametrization of the desired spatiotemporal quantity with a deep neural network: a representation that is lightweight, continuous, and biased towards smoothness. This inductive bias has been exploited to enforce time regularity for dynamic inverse problems, resulting in neural fields optimized by minimizing a data-fidelity term only. In this paper we investigate and show the benefits of introducing explicit PDE-based motion regularizers, namely the optical flow equation, in 2D+time computed tomography for the optimization of neural fields. We also compare neural fields against a grid-based solver and show that the former outperforms the latter.
Wednesday Morning
Henry Writer
 Title

Free-surface flow over generalised topographies using an arc-length formulation
 Abstract

We are interested in understanding the mathematical behaviours of free-surface waves generated by flow passing over a varying bottom topography in two-dimensional flow problems. Such wave-structure interaction problems are relevant in the design of flood control infrastructure and, at very different length scales, the behaviour of air currents generated by winds passing over mountains.
Historically, in modelling the fluid as a potential flow, there is a large body of prior work investigating waves using conformal-mapping and/or boundary-integral approaches. For simple geometries, such as flow over a step, a substantial amount of analytical insight can be drawn. In this talk, we present an alternative formulation, which uses an arc-length characterisation of the problem; this method allows us to consider more general bottom topographies. The challenge of this approach is that the flow equations now include multiple coupled integral equations. We discuss the limitations of prior methods, introduce the arc-length formulation, and present numerical solutions and asymptotic results.
Matt Evans
 Title

Modern Approaches for Sweeping During Neutron Transport: Cycle-Free Polygonal Mesh Design
 Abstract

The Boltzmann transport equation (BTE) is a fundamental integro-differential equation in physics and mathematics, originally developed to model the distribution of particles in thermodynamic systems not in equilibrium. Today, it has extensive applications in radiation transport, neutronics, and modern nuclear reactor development. Despite the nonlinear and high-dimensional nature of the BTE, which complicates the proof of existence and uniqueness of solutions, numerical solutions are essential and have been sought after for over a century.
One approach for approximating solutions to the BTE is the discrete ordinates method (DOM). This method discretises the velocity domain into energy groups and directions, generating a large, coupled system. Applying a numerical scheme to a further spatial decomposition results in a vast linear system. For a specific direction and energy group, and under additional constraints, the system can be evaluated element-wise based solely on each element's upwind spatial neighbours. This is a “sweep,” and a correct element ordering effectively generates a lower triangular linear system to solve for each direction/energy group.
However, sweeping is known to have drawbacks. It can reduce the computational complexity of a matrix inverse, but only for specific spatial decompositions that do not interfere with the subsequent numerical method. It is also prone to deadlocks, in which the boundary conditions of some elements are coupled and form a cycle. Such a cycle does not admit a lower triangular system, and the benefits of sweeping are lost.
In this talk, we explore a new property of Voronoi tessellations that admits cycle-free sweeping in any dimension. We demonstrate this in the context of a finite volume scheme.
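The ordering step described above is, in essence, a topological sort of the upwind-dependency graph for one direction; the sketch below (on a made-up toy mesh) shows both a valid sweep order and a deadlocked cycle.

```python
from collections import deque

def sweep_order(n_elems, upwind_deps):
    """Kahn's algorithm: order elements so every element appears after all of
    its upwind neighbours. Returns None if the dependency graph has a cycle
    (a sweep deadlock)."""
    indeg = [0] * n_elems
    downstream = [[] for _ in range(n_elems)]
    for elem, deps in upwind_deps.items():
        for d in deps:
            downstream[d].append(elem)
            indeg[elem] += 1
    ready = deque(e for e in range(n_elems) if indeg[e] == 0)
    order = []
    while ready:
        e = ready.popleft()
        order.append(e)
        for nb in downstream[e]:
            indeg[nb] -= 1
            if indeg[nb] == 0:
                ready.append(nb)
    return order if len(order) == n_elems else None

# Acyclic toy mesh: element 1 depends on 0; element 2 depends on 0 and 1.
acyclic = sweep_order(3, {1: [0], 2: [0, 1]})
# Deadlocked toy mesh: elements 0 and 1 depend on each other.
deadlock = sweep_order(2, {0: [1], 1: [0]})
```

A successful ordering is exactly what makes the per-direction system lower triangular; a returned `None` corresponds to the deadlock case discussed above.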
Abby Barlow
 Title

Analysis of a household-scale model for the invasion of Wolbachia into a resident mosquito population
 Abstract

Dengue is a common vector-borne disease. It is transmitted between humans by Aedes mosquitoes and is widespread throughout tropical and subtropical regions, particularly urban areas. Wolbachia is an intracellular bacterium that can infect Aedes mosquitoes. Infection inhibits vector competence, so the release of Wolbachia-positive mosquitoes can help to control dengue.
Wolbachia release at the scale of a town or city may be reasonably approximated by deterministic models; however, the outcome and impact of the release on individual households is stochastic. Releases might also be made at the household scale, where households are able to acquire their own population of Wolbachia-infected mosquitoes. In this talk, we introduce a mathematical model for the release of Wolbachia-infected mosquitoes at the household scale. We use a continuous-time Markov chain framework to investigate the dynamics of the introduction, quasi-stationary distributions and the probability of households reaching a state in which all resident mosquitoes are Wolbachia-infected. We compare with an equivalent deterministic model to interpret how stochasticity impacts the bistability between the wild-type-only and Wolbachia-only steady states and the invasion threshold of the Wolbachia-infected population.
Matthew Pawley
 Title

Testing for time-varying extremal dependence
 Abstract

Climate change is known to cause changes in extreme weather patterns over time. This violates a common statistical assumption that observations are identically distributed and has ramifications for risk quantification. We devise a procedure to test for changes in the joint tail behaviour of a (potentially large) collection of random variables. The test performs well across a series of simulated experiments and a real-world case study regarding extreme Red Sea surface temperatures.
Wednesday Afternoon
Christian Jones
 Title

The Sharp Corner Singularity of the White-Metzner Model
 Abstract

Many materials we encounter in life are really quite complicated! One such class of material exhibits viscoelastic properties, that is, they display behaviours of viscous liquids (such as oil and water) and elastic solids (such as rubber). These viscoelastic materials arise almost everywhere, from the tendons in your body, right through to the Earth's tectonic plates!
But how do we model them? In this talk, we will attempt to answer part of this question by analysing the flow of a White-Metzner fluid around a re-entrant corner. Along the way, we'll encounter continuum mechanics, asymptotics and even a weird-looking time derivative. Naturally, being a fluids talk, there will be some complicated equations, but hopefully I can convince you that they are nowhere near as scary as they look!
Sadegh Salehi
 Title

An Adaptively Inexact Algorithm for Bilevel Learning
 Abstract

In various imaging and data science domains, tasks are modeled using variational regularization, which poses challenges in manually selecting regularization parameters, especially when employing regularizers involving a large number of parameters. To tackle this, gradient-based bilevel learning, as a large-scale approach, can be used to learn parameters from data. However, the unattainability of exact function values and gradients with respect to parameters (hypergradients) necessitates reliance on inexact evaluations. State-of-the-art inexact gradient-based methods face difficulties in selecting accuracy sequences and determining appropriate step sizes due to unknown Lipschitz constants of hypergradients.
In this talk, we present our algorithm, the "Method of Adaptive Inexact Descent (MAID)," featuring a provably convergent backtracking line search that incorporates inexact function evaluations and hypergradients. This ensures convergence to a stationary point and adaptively determines the required accuracy. Our numerical experiments demonstrate MAID's practical superiority over state-of-the-art methods on an image denoising problem. Importantly, we showcase MAID's robustness across different initial accuracy and step size choices.
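MAID's line search builds on the classical Armijo backtracking rule; for reference, here is a sketch of the exact-evaluation version on a toy quadratic (the inexactness handling that is MAID's actual contribution is not shown).

```python
import numpy as np

def armijo_backtracking(f, grad_f, x, t0=1.0, rho=0.5, c=1e-4, max_iter=50):
    """Classical Armijo backtracking line search with exact evaluations:
    shrink the step t until the sufficient-decrease condition holds."""
    g = grad_f(x)
    fx = f(x)
    t = t0
    for _ in range(max_iter):
        if f(x - t * g) <= fx - c * t * np.dot(g, g):
            break
        t *= rho
    return t

# Gradient descent with line search on f(x) = 0.5 ||x||^2
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x
x = np.array([3.0, -2.0])
for _ in range(20):
    t = armijo_backtracking(f, grad, x)
    x = x - t * grad(x)
```

The difficulty MAID addresses is that in bilevel learning both `f` and `grad` can only be evaluated inexactly, so the acceptance test itself must account for evaluation error.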
Henry Lockyer
 Title

Learning Splitting and Composition Methods
 Abstract

Splitting and composition methods are widely used for solving initial value problems (IVPs) due to their ability to simplify complicated evolutions into more manageable subproblems. These subproblems can be solved efficiently and accurately, leveraging properties like linearity, sparsity and reduced stiffness. Traditionally, these methods are derived using analytic and algebraic techniques from numerical analysis, including truncated Taylor series and their Lie algebraic analogue, the Baker-Campbell-Hausdorff formula. These tools enable the development of high-order numerical methods that provide exceptional accuracy for small time steps. Moreover, these methods often (nearly) conserve important physical invariants, such as mass, unitarity, and energy.
However, in many practical applications the computational resources are limited. Thus, it is crucial to identify methods that achieve the best accuracy within a fixed computational budget, which might require taking relatively large time steps. In this regime, high-order methods derived with traditional techniques often exhibit large errors since they are designed to be asymptotically optimal. Machine learning techniques offer a potential solution since they can be trained to efficiently solve a given IVP for large time steps, but they are often purely data-driven, come with limited convergence guarantees in the small-time-step regime and do not necessarily conserve physical invariants.
In this work, we propose machine-learned splitting and composition methods that are computationally efficient for large time steps and have provable convergence and conservation guarantees in the small-time-step limit.
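As a concrete baseline for the traditional methods being improved upon, a second-order Strang splitting for the linear ODE u' = (A + B)u (with made-up random matrices) can be sketched as follows.

```python
import numpy as np
from scipy.linalg import expm

def strang_integrate(A, B, u0, h, n):
    """Integrate u' = (A + B) u with Strang splitting: each step applies
    exp(hA/2) exp(hB) exp(hA/2), which is second-order accurate in h."""
    EA = expm(0.5 * h * A)     # half-step in the A flow
    EB = expm(h * B)           # full step in the B flow
    u = u0.copy()
    for _ in range(n):
        u = EA @ (EB @ (EA @ u))
    return u

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
u0 = rng.normal(size=4)
approx = strang_integrate(A, B, u0, h=0.01, n=100)   # integrate to t = 1
exact = expm(A + B) @ u0
```

Each sub-flow is cheap (here a precomputed matrix exponential), and the error vanishes as O(h^2); the talk's point is that for large h this asymptotic design is no longer the right objective.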