Date  Speaker  Title/Abstract 

27 January 2017  Oliver Penrose Heriot-Watt 
The Paradox of Irreversibility The behaviour of material objects can be modelled mathematically in two distinct ways. One type of description uses macroscopic concepts such as temperature, density, fluid flow velocity and so on. The time evolution is then described by partial differential equations such as the heat conduction equation. The alternative description uses microscopic concepts: the positions and velocities of the individual particles (atoms or molecules) constituting the material object. The microscopic time evolution can then be described in terms of the mechanics of particle systems. The paradox of irreversibility consists in an apparent contradiction between these two descriptions: on the one hand, the macroscopic behaviour of material objects is irreversible, and the relevant partial differential equations are not symmetrical under time reversal. On the other hand, the equations of microscopic mechanics are symmetrical under time reversal. How can we reconcile the irreversible macroscopic behaviour with the reversibility (time-reversal symmetry) of the laws of mechanics from which it is supposed to be derivable? 
17 February 2017  Adrian Bowman Glasgow 
Surfaces, shapes and anatomy Three-dimensional surface imaging, through laser scanning or stereophotogrammetry, provides high-resolution data defining the surface shape of objects. In an anatomical setting this can provide invaluable quantitative information, for example on the success of surgery. Two particular applications are in the success of facial surgery and in developmental issues with associated facial shapes. An initial challenge is to extract suitable information from these images, to characterise the surface shape in an informative manner. Landmarks are traditionally used to good effect, but these clearly do not adequately represent the very much richer information present in each digitised image. Curves with clear anatomical meaning provide a good compromise between informative representations of shape and simplicity of structure, as well as providing guiding information for full surface representations. Some of the issues involved in analysing data of this type will be discussed and illustrated. Modelling issues include the measurement of asymmetry and longitudinal patterns of growth. 
24 February 2017  Daniela Kühn Birmingham 
Designs and decompositions: randomness to the rescue Probabilistic methods have proved to be an invaluable tool in Combinatorics, and have led to major progress in recent years. As an illustration of this, I will discuss several examples in the area of design theory and graph decompositions. Here the aim is to split a large object into many suitable small ones. Classical results in this area have often been limited to symmetric structures, as these allow for the exploitation of those symmetries or the use of algebraic techniques. Probabilistic approaches have had a huge impact on the area recently. They allow for the construction of designs/decompositions in more complex or general settings, and have led to the solution of a number of longstanding problems. 
31 March 2017  Epifanio Virga Pavia 
Onsager's roots of density functional theory Onsager's celebrated theory for liquid crystals, put forward in 1949, showed that purely steric, repulsive interactions between molecules can explain the ordering transition that underpins the formation of the nematic phase. Often Onsager's theory is considered as the first successful instance of modern density functional theory. It was however a theory rooted in its time, in the theory that Mayer had proposed in the late 1930s with the aim of explaining condensation of real gases. Despite its undeniable success, Onsager's theory lacks rigour at its onset. This lecture will review, from a historical perspective, the conceptual basis of Onsager's theory, and it will show how this theory can be made rigorous by use of Penrose's tree identity, a powerful technical tool already exploited to ensure convergence of Mayer's cluster expansion. Against all appearances, this is not a technical lecture; it is concerned more with the ideas underlying a successful mathematical theory of ordering in soft matter systems than with the equally important details of analysis. 
12 May 2017  Angel Manuel Ramos del Olmo Madrid 
Mathematical modelling of epidemics and application to real cases: animal diseases and the 2014-16 Ebola epidemic We will present two new deterministic spatio-temporal epidemiological models, called Be-FAST (Between Farm Animal Spatial Transmission) and Be-CoDiS (Between-Countries Disease Spread). Be-FAST focuses on the spread of animal diseases between and within farms. The major original ideas introduced by Be-FAST are the study of both within-farm and between-farm spread, the use of real databases, and dynamic coefficients. Be-CoDiS is a mathematical model able to simulate the spread of a human disease. It is based on the combination of an individual-based model (where countries are the individuals), simulating the between-country interactions (here, movement of people) and disease spread, with a compartmental model (ODEs) simulating the within-country disease spread. The principal characteristic of our approach is the consideration of the following effects at the same time: movement of people between countries, control measure effects, and dynamic coefficients fitted to each country. At the end of a simulation, both models return outputs describing outbreak characteristics (e.g., epidemic magnitude, risk areas, etc.). We will show some results obtained with these models when applying them to real cases: classical swine fever and foot-and-mouth disease epidemics will be simulated with Be-FAST, and the 2014-16 Ebola epidemic will be simulated with Be-CoDiS. 
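The compartmental (ODE) ingredient of such models can be sketched in a few lines. The following is a generic SIR model with explicit Euler stepping and invented parameters; it is only an illustration of the within-country building block, not the actual model from the talk, which couples many such compartments with between-country movement:

```python
# Minimal SIR compartmental model with explicit Euler steps.
# beta (transmission) and gamma (recovery) are arbitrary illustrative
# values, not fitted to any real epidemic.

def simulate_sir(s0, i0, r0, beta, gamma, dt=0.1, steps=1000):
    """Return the (S, I, R) trajectory as a list of tuples."""
    s, i, r = s0, i0, r0
    traj = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        new_rec = gamma * i * dt      # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        traj.append((s, i, r))
    return traj

traj = simulate_sir(s0=0.99, i0=0.01, r0=0.0, beta=0.5, gamma=0.1)
s_end, i_end, r_end = traj[-1]
```

Because every individual removed from S is added to I, and every one removed from I is added to R, the total population is conserved exactly at each step.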
19 May 2017  Vladas Sidoravicius Courant Institute, New York 
Self-interacting random walks: old questions, new surprises Simple random walks, random walks in random environments and random walks in random sceneries have fascinated and challenged probabilists and physicists for nearly a century. I will focus on a very particular class of walks, namely self-interacting walks: walks which remember their past trajectory and interact with it in the future. The interaction could be self-attracting, when the walker wants to stay close to where it has been before, or self-repelling, when the walker tries to move away from its own trajectory. What is the long-term behaviour of such processes? It naturally depends on the type and strength of the interaction. During the last three decades several deep and beautiful mathematical studies have been carried out on this topic. However, many questions are still open and continue to challenge mathematicians. During my talk I will explain some of the old and new results, involving phase transitions, shape theorems, etc. 
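A self-interacting walk is easy to simulate. The toy rule below, a geometric penalty on visit counts invented purely for illustration (not one of the models analysed in the talk), makes each step favour the less-visited neighbour, i.e. a self-repelling walk on the integers:

```python
import random

# Toy self-repelling walk on Z: at each step the walker moves to the
# neighbour it has visited less often, with weight (1 + beta)^(-visits).
# This only illustrates "interaction with the past trajectory"; the
# rigorous models discussed in the talk are far subtler.

def self_repelling_walk(steps, beta=1.0, seed=0):
    rng = random.Random(seed)
    visits = {0: 1}               # site -> number of visits so far
    x = 0
    for _ in range(steps):
        wl = (1.0 + beta) ** -visits.get(x - 1, 0)  # weight of left move
        wr = (1.0 + beta) ** -visits.get(x + 1, 0)  # weight of right move
        x += -1 if rng.random() < wl / (wl + wr) else 1
        visits[x] = visits.get(x, 0) + 1
    return x, visits

final_x, visits = self_repelling_walk(2000)
```

Comparing the spread of this walk with that of a simple random walk (beta = 0) is a quick way to see the interaction at work.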
1 June 2017  Patrick Gérard Paris 
Integrability versus wave turbulence in Hamiltonian PDEs In the world of Hamiltonian partial differential equations, integrability and wave turbulence are usually considered as opposite paradigms. The main reason is that conservation laws of the most famous integrable systems prohibit any transition to high frequencies, which is one of the important features of wave turbulence. In this talk I will discuss a new integrable system obtained by time averaging from a simple nonlinear wave model. I will explain how the special structure of this system, involving classical operators from harmonic analysis, allows one to prove long-time transition to high frequencies for generic solutions. 
9 June 2017  Ian Leary Southampton 
Subgroups and Computability In the 1940s it was already known that every countable group occurs as a subgroup of a finitely generated group. It was also known that this cannot possibly be true with "finitely presented" in place of "finitely generated": there aren't enough finitely presented groups to fit them all in! Around 1960 Graham Higman worked out which groups do arise as subgroups of finitely presented groups, establishing a (then) surprising link with computability. I will survey these 60- and 80-year-old results, and (time and audience patience permitting) mention my own recent contribution. 
Date  Speaker  Title/Abstract 

30 September 2016  Colin Fox University of Otago, Dunedin  Bath IMI Global Chair 2016-17 
A Subjective History of Subjective Probability Bath IMI Global Chair Public Lecture Bayesian inference is the hot new thing in the hot new field of Uncertainty Quantification, in which one quantifies the errors inherent in interpreting (imprecise) observations in terms of (incorrect) physical models. In statistics, and especially physics, inverse probability, as Bayesian inference was originally known, is actually the hot old thing. For example, it was the concept of probability used by Maxwell to formulate his kinetic theory of gases, which became statistical physics. Information theory, as formulated by electronic engineers, developed those ideas further to make possible the electronic world we now know. How then did this "subjective" notion of probability fall out of favour with statisticians, despite being the basis of experimentally verified theories, and why are statisticians now at the forefront of its modern development? This talk presents a history of Bayesian inference and subjective probability, as viewed by a Bayesian physicist. 
14 October 2016  Michael Field Rice/Imperial 
Chaos and the Art of Visualising Complexity Bath IMI Public Lecture Chaos is closely identified in popular culture with the butterfly effect, as seen, for example, in the movie The Butterfly Effect and on a brand of bottled water which has this line prominently displayed on the label: “According to Chaos Theory, the tiny flutter of a butterfly's wing can cause a cyclone on the other side of the world”. This talk will address the question of what chaos is (and is not) and how one can visualise and describe the general mathematics of chaos and complex dynamics. It will also include some striking images of chaos and numerical demonstrations. As the talk is intended for a general audience, rather than using explicit mathematics, Michael will show relations with physics and the natural world as well as address the question of the impact that butterflies may or may not have on causing exceptional weather. 
4 November 2016  Agata Smoktunowicz Edinburgh 
On recent results and applications of noncommutative rings In the first part of this talk, we look at some surprising connections between finite and infinite matrices and noncommutative rings. We also consider how finding solutions of differential equations may help to solve some open problems in noncommutative algebra. In the second half, we mention some recent applications of noncommutative ring theory to geometry (Acons), group theory (Engel groups), braces and set-theoretic solutions of the Yang-Baxter equation. 
11 November 2016  Mike Tipping Bath 
The Rise and Rise of Machine Learning Bath IMI Public Lecture Machine learning has gained significant prominence in the past few years, particularly as a headline technology underpinning the recent renaissance of artificial intelligence (AI). Almost all the latest high-profile progress in AI, from driverless cars to AlphaGo, is powered by machine learning, leading many to identify it as the next big breakthrough technology. At the same time, the technology is becoming ever more ubiquitous behind the scenes, powering systems for speech recognition, web search and electronic payment security. Featuring numerous examples and illustrations, this talk will endeavour to explain what machine learning actually is, how it relates to other established disciplines (such as statistics), and, in the context of presenting a brief history of the topic, will consider the factors behind its seemingly irresistible rise. 
25 November 2016  Angela Stevens Münster 
Mathematical Models for Self-Organization, Attraction, and Repulsion in Biology Based on the phenomenon of chemotaxis, the motion towards higher concentrations of an attractive chemical signal, the mathematical challenges and results for chemotaxis/aggregation equations are discussed and set into context with biological findings. 
Date  Speaker  Title/Abstract 

12 February 2016  Roger Moser Bath 
Some Minimisation Problems in Differential Geometry Given two points on a Riemannian manifold, is there a shortest path connecting them? If the manifold is connected and complete, then the answer is yes. This result is well-known, but other, similar questions, involving higher dimensional objects or higher derivatives, are more difficult to answer. Using harmonic maps and polyharmonic maps as examples, I will discuss some ideas from the calculus of variations and how they can be applied in the context of differential geometry. Many of the underlying tools have their roots in linear functional analysis, but for these problems, we have to deal with nonlinear differential operators. Sometimes, however, we do still have a rich geometric structure, and the challenge is to make use of this instead of linearity. 
19 February 2016  Ania Zalewska Bath 
Risky Business of Measuring Risk Bath IMI Public Lecture Taking account of risk is fundamental to good decision making and assessment of performance. As such, it is important to recognise its presence in any business, economic and public policy situation. But integrating risk into our judgements, and taking account of it in an informative manner, is severely hampered if we are unable to measure it well, or do not have an understanding of its sources and potential consequences. When measuring levels of risk, we often fall short of quantifying changes in risk. The recent financial crisis and the subsequent attempts of regulators and governing bodies to monitor and restrict risk-taking behaviour provide a good example of how complex it can be to understand risk. Another example is the post-crisis rebalancing of pension funds' portfolios towards low-risk assets, which not only means that the likelihood of earning high enough returns to recover losses and correct the underfunding of funds is pretty slim, but, more importantly, puts risk measurement at the centre of the question of whether pension funds' portfolios are really providing an appropriate rate of return on pension savings. In her talk Professor Zalewska will discuss the difficulties of measuring risk in practice and the challenges faced in interpreting the evidence, using the dot-com bubble of 1997-2001, the current financial crisis, and the developments of the pension industry in the last decade as examples of the risky business of trying to measure risk. 
11 March 2016  Laure Saint-Raymond Paris/Harvard 
Linear fluid models as scaling limits of interacting systems of particles In his sixth problem, Hilbert asked for an axiomatization of gas dynamics, and he suggested using the Boltzmann equation as an intermediate description between the (microscopic) atomic dynamics and (macroscopic) fluid models. The main difficulty in achieving this program is to prove the asymptotic decorrelation between the local microscopic interactions, referred to as propagation of chaos, on a time scale much larger than the mean free time. This is indeed the key property needed to observe some relaxation towards local thermodynamic equilibrium. This control of the collision process can be obtained in fluctuation regimes. In joint work with Thierry Bodineau and Isabelle Gallagher, we have established a long-time convergence result to the linearized Boltzmann equation, and eventually derived the acoustic and incompressible Stokes equations in dimension 2. The proof relies crucially on symmetry arguments, combined with a suitable pruning procedure to discard super-exponential collision trees. 
18 March 2016  Esteban Tabak New York 
Discovering and Filtering Sources of Variability in Data David Parkin Public Lecture Attributing variability to factors, and filtering external sources of variability from data sets, impacts virtually every data-intensive activity. Examples include the attribution of climate variability to independent factors such as solar radiation or human activity, and the assessment of the influence of factors such as diet, smoking and lifestyle on health. Still another example is the compilation of an integrated dataset collected at different labs or hospitals, different censuses or different experiments. Taken separately, the individual datasets may lack enough samples to yield statistically significant results. Taken together, a large fraction of their variability may be attributable to their origin. Hence the need, during amalgamation, to filter from the data those characteristics that are idiosyncratic to each study. A methodology based on the mathematical theory of optimal transport, initially developed to solve the problem of moving piles of material, is particularly well-suited to this task: one seeks a set of maps that transforms the probability distributions underlying the samples from each origin into a distribution common to all, while distorting the data as little as possible. The amalgamation of datasets can be viewed more generally as an explanation of variability: once the component of the variability in the data attributable to a factor is filtered, what is left behind in the transformed variables is the remaining, and yet to be explained, variability. This talk will describe a general methodology and show applications to areas as diverse as personalized medicine, poll interpretation and the discovery of asynchronous principal components associated with climate variability. 
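In one dimension, the optimal transport map between two empirical distributions of equal size (for the usual convex costs) is simply quantile matching: sort both samples and pair them by rank. A toy sketch of mapping one batch onto a reference batch, with synthetic data; the methodology of the talk is of course multivariate and far more sophisticated:

```python
# Quantile matching: the 1D optimal transport map between two equally
# sized samples.  The "lab" data below are invented; real amalgamation
# would use multivariate maps and regularisation.

def quantile_match(samples, target):
    """Map the k-th smallest of `samples` to the k-th smallest of `target`."""
    order = sorted(range(len(samples)), key=lambda k: samples[k])
    tgt = sorted(target)
    out = [0.0] * len(samples)
    for rank, idx in enumerate(order):
        out[idx] = tgt[rank]
    return out

lab_a = [1.0, 1.2, 0.8, 1.1, 0.9]    # batch with an offset of about 1
lab_b = [0.1, 0.3, -0.1, 0.2, 0.0]   # reference batch
aligned = quantile_match(lab_a, lab_b)
```

After the map, the transformed `lab_a` has exactly the empirical distribution of `lab_b`, while the ordering of the original samples (their "identity") is preserved.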
8 April 2016  Paul Sutcliffe Durham 
Knots and Vortices Vortices appear in a variety of physical fields, from fluids to ultracold atomic condensates. Circular vortex strings are familiar as smoke rings, but recently there has been interest in trying to tie vortex strings in knots. Unfortunately, the generic behaviour is that such vortex strings simply untie themselves via reconnection events. I shall discuss this issue and describe some non-generic field theories with knot solutions that remain knotted. 
22 April 2016  Ruben Rosales MIT 
Phantom traffic jams, jamitons, detonation waves, and subcharacteristics Joint Mathematical Sciences and IMI Landscapes Colloquium Phantom traffic jams are both annoying to drivers and very interesting mathematically. These are traffic jams that occur spontaneously, without apparent cause. I will begin with a review of traffic flow theory, and introduce examples of continuum models. The simplest is a kinematic equation, the Lighthill-Whitham-Richards [LWR] model, which expresses the conservation of vehicles under the assumption that the vehicle flow is a function of the density. "Second order" models of traffic flow include a second equation: an evolution equation for the flow velocity, characterized by a relaxation time τ. An important question is whether the LWR model is recovered in the limit τ = 0. The answer lies in whether a "subcharacteristic condition" is satisfied. When it is not, phantom jams and jamitons (spontaneously generated nonlinear traveling waves with an "event horizon") occur. I will discuss these issues, the ensuing dynamics, and the mathematical analogy between jamitons and detonations. 
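The LWR model is the scalar conservation law ρ_t + q(ρ)_x = 0, with the flux q a given function of the density. With the classical Greenshields choice q(ρ) = ρ(1 - ρ) it can be solved numerically by the standard Godunov scheme. A minimal sketch (grid size, initial bump and flux choice are illustrative assumptions, not taken from the talk):

```python
# Godunov scheme for the LWR model rho_t + (rho*(1-rho))_x = 0 on a
# periodic road, densities normalised to [0, 1].  The "second order"
# models of the talk would add a relaxation equation for the velocity.

def f(r):                      # Greenshields flux q(rho) = rho*(1 - rho)
    return r * (1.0 - r)

def godunov_flux(a, b):
    # Exact Riemann flux for a concave f with maximum at rho = 1/2.
    if a <= b:
        return min(f(a), f(b))
    if b <= 0.5 <= a:
        return f(0.5)
    return max(f(a), f(b))

def step(rho, dt_over_dx):
    n = len(rho)
    flux = [godunov_flux(rho[i], rho[(i + 1) % n]) for i in range(n)]
    return [rho[i] - dt_over_dx * (flux[i] - flux[i - 1]) for i in range(n)]

# Small density bump on a periodic road of 100 cells.
rho = [0.3 + (0.2 if 40 <= i < 60 else 0.0) for i in range(100)]
mass0 = sum(rho)
for _ in range(200):
    rho = step(rho, 0.5)       # CFL: |f'(rho)| <= 1, so dt/dx = 0.5 is safe
```

Because the update is in conservation form, the total number of vehicles is preserved exactly (up to floating-point rounding), and the monotone Godunov flux keeps the density within its physical bounds.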
Date  Speaker  Title/Abstract 

2 October 2015  Jim Zidek UBC, Vancouver  Bath IMI Global Chair 2015-16 
The long road to 0.075: a statistician's perspective on the process for setting ozone standards Bath IMI Global Chair Public Lecture The presentation will take us along the road to the ozone standard for the United States, announced in March 2008 by the US Environmental Protection Agency, leading to the new proposal in 2014. That agency is responsible for monitoring that nation's air quality standards under the Clean Air Act of 1970. The talk will describe how Professor Zidek, a Canadian statistician, came to serve on the US Clean Air Scientific Advisory Committee (CASAC) for Ozone that recommended the standard, as well as his perspectives on the process of developing it. The talk will be a narrative about the interaction between science and public policy in an environment that harbors a large number of stakeholders with varying but legitimate perspectives, a great deal of uncertainty in spite of the great body of knowledge about ozone, and, above all, a substantial potential risk to human health and welfare. 
16 October 2015  Mark Gross Cambridge 
Mirror symmetry and tropical geometry Mirror symmetry is a phenomenon discovered by string theorists around 1990. Initially, it involved predictions for the numbers of certain kinds of curves contained in certain three-dimensional complex manifolds. For example, it gave predictions for the number of lines, conics, etc. contained in a three-dimensional hypersurface defined by a quintic equation. There has been a huge amount of work devoted to understanding this phenomenon over the last 25 years. I will explain how mirror symmetry connects naturally with a much more recent concept, tropical geometry, a kind of combinatorial piecewise linear geometry, and how this helps explain why mirror symmetry works. 
23 October 2015  Amit Acharya Carnegie Mellon 
Why energy-minimizing dynamics of some energies good for defect equilibria are not good for dynamics, and an improvement Line defects appear in the microscopic structure of crystalline materials (e.g. metals) as well as liquid crystals, the latter an intermediate phase of matter between liquids and solids. The study of defects is relevant for applications: in structural materials they are essential for large-scale permanent deformation, stress, and eventual failure; in semiconductors they affect energy bandgaps and electronic properties; in MEMS devices they affect the reliability of metallic interconnects; in liquid crystals, they affect optical properties (e.g. in LCDs) as well as flow properties in liquid crystalline polymers. Mathematically, their study is challenging since they correspond to topological singularities that result in blow-up of the total energies of finite bodies when utilizing the most commonly used classical models of energy density. The study of both the equilibria and the dynamics of defects is interesting. In statics, this talk will introduce the concept of an "eigendeformation" field and the design of an energy density function coupling the deformation to this new field, which together help alleviate the nasty singularities mentioned above. Incidentally, the eigendeformation field bears much similarity to gauge fields in high-energy physics, but arises from an entirely different standpoint, not involving the notion of gauge invariance in our considerations. In dynamics, it will be shown that incorporating a conservation law for the topological charge of line defects allows for the correct prediction of some important features of defect dynamics that would not be possible with knowledge of an energy function alone. This is joint work with Chiqun Zhang, a graduate student at CMU. I am an engineer who works in the field of continuum mechanics, and my talk should be accessible to anyone with interest, the tools employed being common sense, physical reasoning, and applied mathematics. 
4 December 2015  George Weiss Tel Aviv 
The output regulation of nonlinear systems This presentation is about control theory, which may be regarded as a branch of mathematics or as a branch of engineering. In control theory, a real-world dynamical system (such as a washing machine, a wind turbine, an electrical machine, a boiler, an aircraft, a car, or more often just a small part of any of those mentioned) is represented by its mathematical model. In most cases this model is a collection of differential and algebraic equations that link input variables (or signals), state variables and output variables. A control expert tries to interconnect such systems in clever ways, so that they behave in a desired way. The meaning of "desired way" can have many interpretations, and one possible interpretation is the so-called regulator problem. This problem assumes that we are given a system to be controlled (called a plant) with input and output signals, and an exosystem, which is another system that is unstable and has only outputs. The exosystem being unstable, it generates signals that persist for a long time, such as oscillations or constants, and these affect the plant. One output of the plant is called the error signal, and satisfactory behaviour means that this error signal should converge to zero. (There is another technical assumption called stability, which we shall explain in due course.) A simple example: the plant is your kettle, the exosystem generates a desired temperature level that is suitable for brewing tea, and the error signal is the difference between the temperature of the water and the desired temperature. How can the regulator problem be solved? As always in control theory, we are looking to design another system, called the controller, which will be interconnected with the plant, and will achieve the desired effect. (For the kettle, the controller is its thermostat. 
In more complex situations, the controller will be an algorithm running on a processor, suitably interconnected with the plant via so-called sensors and actuators.) In the case of the regulator problem, the controller will often contain a subsystem called an internal model. This is interesting even for a non-specialist, because it has intuitive appeal: the internal model should be some sort of copy of the exosystem. We may think of the controller as fighting against the effects of the exosystem, and the internal model is a model of the opponent, which helps the controller work out the best strategy. We shall give a very simple explanation of the so-called "internal model principle". In the more technical part of the presentation we review the concepts of controller, regulator problem and internal model. We focus on the nonlinear local error feedback regulator problem. The plant is a nonlinear finite-dimensional system with a single control input and a single output, and it is locally exponentially stable around the origin. We shall also discuss ways to relax this stability assumption. The plant is driven, via a separate disturbance input, by a Lyapunov stable exosystem whose states are non-wandering. The reference signal that the plant output must track is a nonlinear function of the exosystem state. The local error feedback regulator problem is to design a dynamic feedback controller, with the tracking error as its input, such that (i) the closed-loop system of the plant and the controller is locally exponentially stable, and (ii) the tracking error tends to zero for all sufficiently small initial conditions of the plant, the controller and the exosystem. Under the assumption that the above regulator problem is solvable, we propose a nonlinear controller whose order is relatively small (typically equal to the order of the exosystem) and that solves the regulator problem. 
The emphasis is on the low order of the controller; in contrast, previous results on the regulator problem have typically proposed controllers of a much larger order. The stability assumption on the plant (which can be relaxed to some extent) is crucial for making it possible to design a low order controller. We will show, under certain assumptions, that our proposed controller is of minimal order. Three examples are presented: the first is a very simple illustration of our controller design procedure, while the second is more involved and shows that sometimes a nontrivial immersion of the exosystem is needed in the design. The third example, based on output voltage regulation for a boost power converter, shows how the regulator equations may reduce to a first-order PDE with no given boundary conditions, but which nevertheless has a locally unique solution. (Joint work with Vivek Natarajan.) 
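The internal model principle in its very simplest incarnation: to track a constant reference (the kettle example), the controller contains an integrator, i.e. a copy of the exosystem w' = 0. Below is a toy simulation with an invented first-order plant and invented gains; it illustrates the principle only, not the nonlinear controller construction of the talk:

```python
# Kettle-style set-point tracking via an internal model.  The plant is
# a stable first-order system x' = -(x - 20) + u (ambient 20 degrees);
# the controller is an integrator z' = ki * e driven by the tracking
# error e = ref - x, with u = z.  All values are illustrative.

def simulate_kettle(steps=5000, dt=0.01, ref=60.0, ki=2.0):
    x = 20.0   # water temperature, starting at ambient
    z = 0.0    # integrator state: the internal model of w' = 0
    for _ in range(steps):
        e = ref - x                      # tracking error
        z += ki * e * dt                 # internal model driven by the error
        x += (-(x - 20.0) + z) * dt      # Euler step of the plant
    return x

temp = simulate_kettle()
```

At equilibrium the integrator must stop moving, which forces e = 0: the error converges to zero exactly because the controller contains a copy of the signal generator it is fighting.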
27 November 2015  Carole Mundell Bath 
Big Bangs and Black Holes Bath IMI Public Lecture Although the idea of 'black holes' dates back over 200 years, they remained a speculation until the late 20th century. Their existence has now been confirmed by observation, but questions regarding their creation and influence are at the forefront of modern astronomy. Astronomers can never hope to travel to black holes and so instead rely on the coded information contained within the light detected from these distant objects. The visible light to which our human eyes are most sensitive has enriched culture for thousands of years. However, this light represents only a small fraction of the total light available for collection; technological advances in the 20th and 21st centuries have ensured that we can collect light ranging from the highest energy gamma rays, through X-rays, to long wavelength radio waves: the whole range of the electromagnetic spectrum. In this talk, Professor Mundell will introduce the most distant and powerful explosions in the Universe, Gamma-Ray Bursts, and describe recent advances in autonomous robotic observation of their light with space- and ground-based telescopes which catch the light that signals the birth of a new black hole. In particular, she will present new insights gleaned with novel cameras, the RINGO polarimeters on the Liverpool Telescope, that have provided the first direct, real-time measurements of the magnetic fields that are thought to power these prodigious explosions. In doing so, she will try to give a flavour of the hectic life of an astronomer in the modern era of robotic telescopes and real-time discoveries. 
Date  Speaker  Title/Abstract 

13 February 2015  Claire Mathieu CNRS and ENS Paris, France 
Homophily and the Glass Ceiling Effect in Social Networks The glass ceiling may be defined as “the unseen, yet unbreakable barrier that keeps minorities and women from rising to the upper rungs of the corporate ladder, regardless of their qualifications or achievements”. Although undesirable, it is well documented that many societies and organizations exhibit a glass ceiling. In this talk we formally define and study the glass ceiling effect in social networks and provide a natural mathematical model that (partially) explains it. We propose a biased preferential attachment model that has two types of nodes and is based on three well-known social phenomena: i) a minority of females in the network, ii) rich get richer (preferential attachment), and iii) homophily (liking those who are the same). We prove that our model exhibits a strong glass ceiling effect and that all three conditions are necessary, i.e., removing any one of them will cause the model not to exhibit a glass ceiling. Additionally we present empirical evidence from a student-mentor network of researchers (based on DBLP data) that exhibits all the above properties: female minority, preferential attachment, homophily and a glass ceiling. 
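A biased preferential attachment process of this general flavour is easy to simulate: each arriving node is in the minority with probability r, picks an attachment target with probability proportional to degree, and, if the colours differ, the edge is rejected (and redrawn) with probability 1 - rho. The exact mechanism and parameter values below are illustrative assumptions and do not reproduce the model analysed in the talk:

```python
import random

# Sketch of a biased preferential-attachment tree with homophily.
# r = minority arrival rate, rho = acceptance probability for a
# cross-colour edge.  Invented for illustration only.

def biased_pa(n, r=0.3, rho=0.5, seed=1):
    rng = random.Random(seed)
    colours = ['b', 'r' if rng.random() < r else 'b']  # two seed nodes
    degree = [1, 1]                                    # joined by one edge
    for _ in range(n - 2):
        c = 'r' if rng.random() < r else 'b'
        while True:
            # degree-proportional choice of an attachment target
            t = rng.choices(range(len(degree)), weights=degree)[0]
            if colours[t] == c or rng.random() < rho:
                break                                  # edge accepted
        colours.append(c)
        degree.append(1)
        degree[t] += 1
    return colours, degree

colours, degree = biased_pa(500)
```

With such a simulation one can tabulate the fraction of minority nodes among the highest-degree vertices and compare it with their overall fraction, which is the empirical signature of a glass ceiling.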
27 February 2015  Graeme Milton University of Utah, USA 
Elastic metamaterials Composite materials can have properties unlike any found in nature, in which case they are known as metamaterials. Materials with negative Poisson's ratio or negative refractive index are now classic examples. The effective mass density, which governs the propagation of elastic waves in a metamaterial, can be anisotropic, negative, or even complex. Even the eigenvectors of the effective mass density tensor can vary with frequency. We show that metamaterials can exhibit a "Willis type behavior" which generalizes continuum elastodynamics. Nonlinear metamaterials are also interesting, and a basic question is what nonlinear behaviors one can get in periodic materials constructed from rigid bars and pivots. It turns out that the range is enormous. Materials for which the only easy mode of macroscopic deformation is an affine deformation can be classed as unimode, bimode, trimode, ..., hexamode, according to the number of easy modes of deformation. We give a complete characterization of the possible behaviors of nonlinear unimode materials. 
13 March 2015  Marta Sanz-Solé University of Barcelona, Spain 
Some aspects of potential theory for solutions to SPDEs For a stochastic process, hitting probabilities measure the chances that the sample paths visit a given set A. A typical problem in probabilistic potential theory consists in estimating hitting probabilities in terms of the capacity or the Hausdorff measure of A, and in deriving consequences such as a characterization of polar sets and the Hausdorff dimension of the range of the process on a given set. In this talk, we will discuss these questions in the framework of systems of stochastic partial differential equations (SPDEs) driven by Gaussian noises that are white in time and have a spatially homogeneous covariance. Results for parabolic, hyperbolic and elliptic SPDEs will be presented. The results in the case of additive noise provide a hint of optimality. For multiplicative noise, hitting probabilities are estimated using tools of Malliavin calculus. 
27 March 2015  Keith Ball, FRS FRSE Warwick 
Where is a convex set? Our intuition about convex domains comes from low-dimensional examples. It leads us far astray when we consider high-dimensional objects. In this talk I will explain that the mass of a convex domain in a high-dimensional space is concentrated in regions that appear, on the face of it, to be very small, and that we need a new intuition coming from classical probability theory, in particular from the Central Limit Theorem. 
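A quick numerical sketch of the concentration phenomenon mentioned in this abstract (an editorial illustration, not part of the talk): because volume scales like r^d, the fraction of a d-dimensional ball's volume lying within radius 1 − ε of the centre is (1 − ε)^d, so in high dimension almost all of the mass sits in a thin shell near the boundary.

```python
# Volume concentration in high dimensions: the ball of radius (1 - eps)
# inside the unit d-ball carries a fraction (1 - eps)**d of the volume,
# since volume scales like r**d.
def inner_fraction(d, eps):
    return (1.0 - eps) ** d

for d in (2, 10, 100, 1000):
    print(d, inner_fraction(d, 0.01))
```

Already at d = 1000, shaving off the outer 1% of the radius removes all but about 0.004% of the volume.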
17 April 2015  Frank den Hollander Leiden University, The Netherlands 
Scaling properties of a Brownian porous medium The path of a Brownian motion on a d-dimensional torus run up to time t is a random compact subset of the torus. In this talk we look at the geometric and spectral properties of the complement C(t) of this set as t tends to infinity. Questions we address are the following: 1. What is the linear size of the largest region in C(t)? 2. What does C(t) look like around this region? 3. Does C(t) have some sort of "component structure"? 4. What are the largest capacity, largest volume and smallest principal Dirichlet eigenvalue of the "components" of C(t)? We discuss both d ≥ 3 and d = 2, which turn out to be very different. Joint work with Michiel van den Berg (Bristol), Erwin Bolthausen (Zurich) and Jesse Goodman (Auckland). 
21 May 2015  Valeria Simoncini University of Bologna, Italy 
Model-assisted effective linear system numerical solution Advanced mathematical models very often require the solution of (sequences of) large algebraic linear systems, whose numerical treatment should incorporate problem information in order to be computationally effective. For instance, matrices and vectors usually inherit crucial (e.g., spectral) properties of the underlying continuous operators. In this talk we will discuss a few examples where state-of-the-art iterative linear system solvers can conveniently exploit this information to devise effective stopping criteria. Our presentation will focus on linear systems stemming from the numerical discretization of (systems of) partial differential equations. 
19 June 2015  Nalini Joshi University of Sydney, Australia  Special LMS Hardy Fellow for 2015 
When applied mathematics collided with algebra (LMS Hardy Lectureship Tour 2015) Nonlinear integrable systems arose in applied mathematics and physics, when they were discovered as a possible resolution of the Fermi–Pasta–Ulam problem. More recently, it has been discovered that integrable systems (such as the Painlevé equations) arise from translations on affine Weyl groups, which are more familiar to algebraists. I will explain, through examples, how this amazing development gives rise to properties that all applied mathematicians should be able to master. The talk was followed by a wine reception to celebrate the 150th anniversary of the LMS. 
Date  Speaker  Title/Abstract 

10 October 2014  Manfred Lehn Johannes Gutenberg Universität Mainz, Germany 
Holomorphic symplectic manifolds The linear algebra of bilinear forms splits into the separate domains of symmetric and skew-symmetric forms. These behave quite differently even from an algebraic point of view. This difference becomes more pointed when the coefficients in the forms are allowed to vary with parameters. This leads to the notions of Riemannian versus symplectic manifolds. The importance of the latter arises from their role as phase spaces in classical mechanics. Whereas there are plenty of real symplectic manifolds, compact complex (or holomorphic) symplectic manifolds appear to be difficult to construct. I will discuss two constructions of such manifolds that are based on configuration spaces of points on surfaces and of curves on four-dimensional manifolds. 
MONDAY 27 Oct 2014  Nigel Goldenfeld University of Illinois at Urbana-Champaign, USA 
Phase Transitions in Early Life: Clues from the Genetic Code Relics of early life, preceding even the last universal common ancestor of all life on Earth, are present in the structure of the modern-day canonical genetic code, the map between DNA sequence and the amino acids that form proteins. The code is not random, as often assumed, but instead is now known to have certain error minimisation properties. How could such a code evolve, when it would seem that mutations to the code itself would cause the wrong proteins to be translated, thus killing the organism? Using digital life simulations, I show how a unique and optimal genetic code can emerge over evolutionary time, but only if horizontal gene transfer, a network effect, was a much stronger characteristic of early life than it is now. These results suggest a natural scenario in which evolution exhibits three distinct dynamical regimes, differentiated respectively by the way in which information flow, genetic novelty and complexity emerge. Possible observational signatures of these predictions are discussed. 
21 November 2014  Chris Holmes Oxford 
The impact of Big Data on statistical science "Big data" is a buzzword that has generated significant attention in recent times. While the scientific value of the term is questionable, it does relate to a real-world phenomenon: the increased ability and availability of "measurement" and wide-scale data capture, coupled with advances in high-performance computing. One area of impact is biomedical research, where we are entering an era of routine genetic sequencing of patients, pathogens and cancers, coupled with electronic health records, images and environmental data. This is generating large, high-dimensional, heterogeneous data sets. The analysis of such data objects presents unique challenges to statisticians and applied mathematicians. In this talk I will discuss some of these challenges and how they impact on the foundations of statistics. I will describe recent work on the use of approximate models and on formal methods to study the robustness of decisions made using misspecified models. 
5 December 2014  Harry Yserentant TU Berlin, Germany 
On the complexity and approximability of electronic wave functions The electronic Schrödinger equation describes the motion of N electrons under Coulomb interaction forces in a field of clamped nuclei. The solutions of this equation, the electronic wave functions, depend on 3N variables, three spatial dimensions for each electron. Approximating them is thus inordinately challenging, and it is conventionally believed that a reduction to simplified models, such as those of the Hartree–Fock method or density functional theory, is the only tenable approach. We indicate why this conventional wisdom need not be ironclad: the unexpectedly high regularity of the solutions, which increases with the number of electrons, the decay behavior of their mixed derivatives, and their antisymmetry enforced by the Pauli principle contribute properties that allow these functions to be approximated with an order of complexity which comes arbitrarily close to that for a system of two electrons. It is even possible to reach almost the same complexity as in the one-electron case by adding a simple regularizing factor that depends explicitly on the interelectronic distances. 
Date  Speaker  Title/Abstract 

7 February 2014  Thanasis Fokas Cambridge 
Boundary value problems, medical imaging and beyond A new powerful method for analyzing boundary value problems, often referred to as the "unified transform", will be presented. The implementation of this method to linear problems, which is based on "synthesis" as opposed to separation of variables, has been acclaimed by the late I.M. Gelfand as "the most important development in the exact analysis of linear PDEs since the classical works of the 18th century". The unified transform has led to the analytical solution of several non-separable, as well as non-self-adjoint, boundary value problems. Furthermore, it has led to new numerical techniques for solving linear elliptic PDEs in the interior as well as in the exterior of polygons. Unexpected connections with the Riemann hypothesis will also be mentioned. A related development is the emergence of new analytical methods for solving inverse problems arising in medicine, including techniques for PET (Positron Emission Tomography) and SPECT (Single Photon Emission Computerized Tomography). 
14 February 2014  David Parkin Lecture André Nachbin IMPA, Brazil and David Parkin Visiting Professor, Bath 
Problems with tsunamis We are aware of the devastating problems tsunamis cause. Understanding the evolution of these large water wave disturbances in different scenarios is important, and some of these scenarios motivate interesting new problems in applied mathematics: from the theory of partial differential equations (PDEs) to the accurate computation of solutions in domains with complicated geometric features, as will be described. The mathematical topics that come into play range from reduced modeling, probabilistic modeling and the asymptotic analysis of PDEs with oscillatory (deterministic or random) coefficients, to the related numerical issues. I will present an overview of how these topics come together, with emphasis on a remarkable dynamical feature related to the scattering of long waves in a randomly varying medium, namely a disordered topography. I will then address a very recent problem on nonlinear waves on graphs, a theme related to tsunamis in fjords, such as those in Norway. 
28 February 2014  Julia Wolf Bristol 
Approximate groups An approximate group is a subset S of a group that is almost closed under the group operation (say multiplication), in the sense that the set of pairwise products of elements in S is not much larger than S itself. Evidently a true subgroup forms an approximate group, and much effort has gone into investigating to what extent sets satisfying the aforementioned relaxed closure condition resemble actual subgroups. The study of approximate groups in the abelian case goes back to the 1970s, but a strongly quantitative as well as a rich nonabelian theory have recently become available, the latter having applications to many other areas of mathematics where groups play a vital role. We shall start from the abelian basics before surveying some of the more recent developments, illustrating the richness of the subject and highlighting a couple of the remaining challenges. 
21 March 2014  Angela McLean Oxford 
How fast does HIV evolve? Studies of the natural history of HIV infection within infected individuals show how the virus evolves to escape from the selection pressures imposed by immune responses. Carefully interpreted, such data can teach us about the strength of those immune responses. Data on the within-host evolutionary dynamics of HIV is hard to gather and contingent on having samples from the precise weeks and months when new variants are growing. It is much easier to gather data on the population-level prevalence of immune escape mutants. I will present a mathematical model that allows us to "integrate up" the impact of many within-host events to find the expected epidemiological patterns of escape mutant prevalence, both in hosts of different types and changing through time. This is a model that lets us make inferences about the rate of evolutionary change within individuals from data on the prevalence of escape mutants within populations. 
28 March 2014  Richard D. James University of Minnesota, USA 
Materials from mathematics We present some recent examples of new materials whose synthesis was guided by some essentially mathematical ideas on the origins of the reversibility of phase transformations. Theory preceded synthesis, and also gave the synthesis recipe. They are hard materials, but nevertheless show liquid-like changes of microstructure under a fraction of a degree change of temperature. The best example, Zn_{45}Au_{30}Cu_{25}, shows unprecedented levels of reversibility, but also raises fundamental new questions for mathematical theory. Another of these alloys, found using the same recipe, has a nonmagnetic phase and a phase that is a strong magnet. This one can be used to convert heat to electricity without the need for a separate electrical generator, and suggests a way to recover the vast amounts of energy stored on earth at small temperature differences. (http://www.aem.umn.edu/~james/research/) 
11 April 2014  Jeffrey Steif Chalmers University of Technology, Sweden 
Noise Sensitivity of Boolean Functions and Critical Percolation Noise sensitivity concerns the phenomenon that certain types of events (Boolean functions) of many variables are sensitive to small noise. These concepts become especially interesting in the context of critical percolation, where certain so-called critical exponents have been rigorously computed in the last decade. Some of the important tools in this area are discrete Fourier analysis on the hypercube and randomized algorithms from theoretical computer science. In this lecture, I will give an overview of this subject. No background concerning any of the above topics will be assumed. 
16 May 2014  Vasileios Maroulas University of Tennessee, USA and Leverhulme Trust Visiting Fellow, Bath 
Navigating the stochastic filtering landscape This talk navigates the landscape of stochastic filtering, its computational implementations and their applications in science and engineering. We start by exploring the idea of identifying the best estimate of the true values based on noisy data, within a plethora of applications in ecology, biology and defense. The best estimate is in turn linked to the filtering distribution, which in general does not have a closed-form solution. Employing several methods, e.g. particle filters, we approximate the filtering distribution in order to estimate the behavior of a pertinent latent spatiotemporal process. On the other hand, the dynamic process encapsulates several parameters whose engagement leads us to a research path involving a novel algorithm of particle filters with an MCMC scheme and an Empirical Bayes method. The last stop of this talk is the Fukushima power plant, whose data we use to draw inferences about the radioactive material released by the disastrous accident in 2011. The talk is based on joint works with E. Evangelou, K. Kang, R. Mahler, P. Stinis and J. Xiong. 
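To make the particle-filter idea mentioned in this abstract concrete, here is a minimal bootstrap particle filter sketch (an editorial illustration, not the speaker's algorithm; the linear-Gaussian toy model and all parameters are invented for the example).

```python
import math
import random

random.seed(0)

def bootstrap_filter(obs, n_particles=500, q=1.0, r=1.0):
    """Generic bootstrap particle filter for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, q),  y_t = x_t + N(0, r).
    Returns the filtering means E[x_t | y_1..t]."""
    parts = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propagate each particle through the state dynamics
        parts = [0.9 * x + random.gauss(0.0, math.sqrt(q)) for x in parts]
        # weight by the observation likelihood (Gaussian density up to a constant)
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(w)
        w = [wi / s for wi in w]
        means.append(sum(wi * x for wi, x in zip(w, parts)))
        # multinomial resampling to avoid weight degeneracy
        parts = random.choices(parts, weights=w, k=n_particles)
    return means

print(bootstrap_filter([0.5, 1.0, 1.2, 0.8, 0.3]))
```

The weight-then-resample loop is the core of every particle filter; the MCMC and Empirical Bayes machinery of the talk sits on top of schemes like this.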
Date  Speaker  Title/Abstract 

4 October 2013  LMS Aitken Lectureship 2013 Robert McLachlan Massey University, New Zealand 
Successes and prospects of geometric numerical integration Geometric numerical integration emerged in the 1990s. Its roots lie in successful and widely used algorithms of computational physics, especially the symplectic leapfrog method, and in the numerical analysis of classical families of numerical integrators such as Runge–Kutta methods. Combining these two strands has led to better algorithms for physical simulations and also to a better understanding of the process of numerical integration. Today the behaviour of integration algorithms is studied with respect to a range of geometric properties, including preservation of invariant (symplectic and volume) forms and invariant sets, including those that emerge in an asymptotic limit. The seminar will serve as an introduction to geometric numerical integration, its practical and theoretical successes, and its open questions. 
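The symplectic leapfrog method named in this abstract is easy to demonstrate (an editorial sketch, not material from the lecture): applied to a harmonic oscillator, its energy error stays bounded over long times instead of drifting, which is the hallmark geometric property.

```python
def leapfrog(q, p, dt, steps, force=lambda q: -q):
    """Leapfrog / velocity-Verlet for H = p^2/2 + q^2/2 (unit mass and spring).
    Returns the final state and the energy at every step."""
    energies = []
    for _ in range(steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift
        p += 0.5 * dt * force(q)   # half kick
        energies.append(0.5 * (p * p + q * q))
    return q, p, energies

q, p, energies = leapfrog(1.0, 0.0, dt=0.01, steps=10_000)
drift = max(abs(e - 0.5) for e in energies)
print(f"max energy error over t = 100: {drift:.2e}")  # bounded, O(dt^2)
```

A non-symplectic scheme such as explicit Euler would show a steadily growing energy on the same problem.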
25 October 2013  Tom Leinster Edinburgh 
The many faces of magnitude The magnitude of a square matrix is the sum of all the entries of its inverse. This strange definition, suitably used, produces a family of invariants in different contexts across mathematics. All of them can be loosely understood as "size". For example, the magnitude of a convex set is an invariant from which one can conjecturally recover many important classical quantities: volume, surface area, perimeter, and so on. The magnitude of a graph is a new invariant sharing features with the Tutte polynomial. The magnitude of a category is very closely related to the Euler characteristic of a topological space. Magnitude also appears in the difficult problem of quantifying biological diversity: under certain circumstances, the greatest possible diversity of an ecosystem is exactly its magnitude. I will give an aerial view of this landscape. 
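The matrix definition in this abstract is concrete enough to compute directly. A hedged sketch (my illustration; the similarity matrix Z_ij = exp(−d(i, j)) for a finite metric space is the standard choice in this theory but is not spelled out in the abstract): for two points at distance t the magnitude works out to 2/(1 + e^{−t}), interpolating between 1 point (t → 0) and 2 points (t → ∞), which is the "effective number of points" reading of magnitude.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def magnitude(dist):
    """Magnitude of a finite metric space: the sum of the entries of Z^{-1},
    where Z_ij = exp(-d(i, j)).  Computed as 1^T w with Z w = 1."""
    Z = [[math.exp(-dij) for dij in row] for row in dist]
    return sum(solve(Z, [1.0] * len(dist)))

# Two points at distance 1: magnitude is 2 / (1 + exp(-1)) ~ 1.4621
print(magnitude([[0, 1], [1, 0]]))
```

Solving Z w = 1 and summing w avoids forming the full inverse; the vector w is the "weighting" used in the diversity application mentioned at the end of the abstract.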
8 November 2013  Imre Leader Cambridge 
Partition Regular Equations A finite or infinite matrix M is called `partition regular' if whenever the natural numbers are finitely coloured there exists a vector x, with all of its entries the same colour, such that Mx=0. Many of the classical results of Ramsey theory, such as van der Waerden's theorem or Schur's theorem, may be naturally rephrased as assertions that certain matrices are partition regular. While the structure of finite partition regular matrices is well understood, little is known in the infinite case. In this talk we will review some known results and then proceed to some recent developments. No knowledge of anything at all will be assumed. 
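Schur's theorem, cited in this abstract, is the statement that the 1×3 matrix (1 1 −1) is partition regular: any finite colouring of the naturals contains monochromatic x, y, z with x + y = z. A brute-force check (my illustration, not from the talk) confirms the first non-trivial case: {1,...,4} admits a 2-colouring with no monochromatic solution, but {1,...,5} does not, i.e. the 2-colour Schur number is 4.

```python
from itertools import product

def has_mono_solution(colour):
    """Does this colouring of {1..n} contain same-coloured x, y, z with x + y = z?
    (colour[i-1] is the colour of i; x = y is allowed, as in Schur's theorem.)"""
    n = len(colour)
    return any(
        colour[x - 1] == colour[y - 1] == colour[x + y - 1]
        for x in range(1, n + 1)
        for y in range(x, n + 1)
        if x + y <= n
    )

def schur_forced(n, colours=2):
    """True if EVERY colouring of {1..n} with the given number of colours
    contains a monochromatic solution of x + y = z."""
    return all(has_mono_solution(c) for c in product(range(colours), repeat=n))

print(schur_forced(4), schur_forced(5))
```

The witness for n = 4 is the colouring {1, 4} / {2, 3}: neither class contains x, y and x + y simultaneously.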
22 November 2013  Mark Girolami University College London 
Bayesian Uncertainty Quantification for Differential Equations A Bayesian inferential framework to quantify uncertainty in solving the forward and inverse problems for models defined by general systems of analytically intractable differential equations is suggested. Viewing solution estimation of differential equations as an inference problem allows one to quantify numerical uncertainty using the tools of Bayesian function estimation, which may then be propagated through to uncertainty in the model parameters and subsequent predictions. Regularity assumptions are incorporated by modelling system states in a Hilbert space with Gaussian measure, and through iterative model-based sampling a posterior measure on the space of possible solutions is provided. Consistency proofs for the probabilistic solution are provided and an efficient computational implementation is used to demonstrate the methodology on a wide range of challenging forward and inverse problems. For the inverse problem we incorporate the approach into a fully Bayesian framework for state and parameter inference from incomplete observations of the states. The approach is assessed on ordinary and partial differential equation models with chaotic dynamics, ill-conditioned mixed boundary value problems, and an example characterising parameter and state uncertainty in a biochemical signalling pathway which incorporates a nonlinear delay-feedback mechanism. This is joint work with Oksana Chkrebtii, David A. Campbell, and Ben Calderhead. 
13 December 2013  Chris Rogers Cambridge 
Optimal Investment: what we can and can't do This talk will present some very classical material about optimal investment/consumption problems, showing how the problems may in principle be solved by the methods of stochastic optimal control. Then we will go on to discuss some natural and important problems which lie way beyond the reach of existing techniques, calling for radical new methods. 
Date  Speaker  Title/Abstract 

8 February 2013  Stefan Vandewalle Catholic University of Leuven (Belgium) 
Postponed. 
1 March 2013  Christl Donnelly Imperial College London 
Badger culling and the science-led policy challenge Bovine TB is a disease that is currently costing UK taxpayers £90 million a year to control. Last year some 26,000 cattle were compulsorily slaughtered after testing positive for the disease. In 1997 a committee led by (now Lord) Prof John Krebs was asked to review the scientific evidence relating to badgers and badger culling to control TB in cattle. The committee recommended a large-scale randomised trial of two badger culling policies. It was subsequently designed, overseen and analysed by the Independent Scientific Group on Cattle TB, which issued its final report in 2007. Whether to cull badgers to control cattle disease remains controversial despite considerable agreement on the science base underpinning the discussions. The evidence (with emphasis on the statistical and mathematical evidence) will be reviewed, putting into context the various arguments being put forward. Finally, questions are posed about the role of science and scientists in the policy-making sphere. 
15 March 2013  Nick Gould STFC  Rutherford Appleton Laboratory 
Modern methods for quadratic programming In this talk I shall review the important advances in quadratic programming (the optimization of a quadratic function of many variables within a polyhedral feasible region) that have occurred since its inception in the late 1940s. I will consider both the convex and nonconvex cases, and illustrate the significant difficulties that arise in the latter. I shall describe the most successful current approaches, and highlight two new approaches in the convex case that overcome significant defects of current methods. 
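The simplest corner of the convex case is worth seeing concretely (an editorial sketch, not one of the methods from the talk): with only equality constraints, the first-order (KKT) conditions for minimising ½xᵀQx + cᵀx subject to Ax = b reduce to one symmetric linear solve of the system [[Q, Aᵀ], [A, 0]](x, λ) = (−c, b).

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for tiny systems)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def eq_qp(Q, c, A, b):
    """Solve min 0.5 x^T Q x + c^T x  s.t.  A x = b  (Q positive definite)
    via the KKT system  [[Q, A^T], [A, 0]] [x; lam] = [-c; b]."""
    n, m = len(Q), len(A)
    K = [Q[i] + [A[j][i] for j in range(m)] for i in range(n)]
    K += [A[j] + [0.0] * m for j in range(m)]
    sol = solve(K, [-ci for ci in c] + list(b))
    return sol[:n], sol[n:]   # primal x, Lagrange multipliers

# min x1^2 + x2^2  s.t.  x1 + x2 = 1   ->   x = (0.5, 0.5)
x, lam = eq_qp([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0], [[1.0, 1.0]], [1.0])
print(x, lam)
```

Inequality constraints are what make the subject hard: active-set and interior-point methods, of the kind surveyed in the talk, essentially decide which inequalities to treat as equalities in a system like this one.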
12 April 2013  Walter Gilks University of Leeds 
Genomes in 3D: some issues in multidimensional scaling There is growing evidence that the three-dimensional (3D) layout of genomes (the set of chromosomes) within the nucleus of a cell is conserved over millions of years. Over recent years, laboratory techniques have been developed that provide increasingly high-resolution information on the 3D structure of the genome of any species. The latest of these techniques, called 5C and Hi-C, use high-throughput DNA sequencing to produce millions of pairs of short DNA reads, each pair identifying two small pieces of the genome which are juxtaposed within the nuclear space. This allows a matrix of DNA–DNA contacts within the genome to be compiled. In principle, these contact matrices hold information on the 3D configuration of the genome. Extracting this information may be done using multidimensional scaling. However, this is not entirely straightforward. The data contain substantial amounts of noise, and the experimental procedures require aggregation of contacts over millions of cells, introducing an additional source of variability. After briefly introducing the biological, experimental and bioinformatic background, I will present data analyses, simulation results and statistical theory which highlight difficulties in analysing the resulting contact matrices, and describe some preliminary statistical methods to address some of the issues arising. 
26 April 2013  John Toland Isaac Newton Institute 
Problems with a Variational Theory of Water Waves By minimising an energy functional, a global variational theory of waves on the surface of a perfect fluid with a prescribed distribution of vorticity has been obtained when the surface is in contact with a strong hyperelastic membrane. It will be shown that a minimiser does not exist when surface energy effects are absent. Thus classical Stokes waves, the existence of which is known for other reasons, are not minimisers of the energy, and their relationship with the energy functional remains a mystery. A variational theory of Stokes waves seems a long wave off. 
Date  Speaker  Title/Abstract 

12 October 2012  Nick Trefethen University of Oxford 
Six myths of polynomial interpolation and quadrature Computation with polynomials is a powerful tool for all kinds of mathematical problems, but the subject has been held back by longstanding confusion on a number of key points. In this talk I'll attempt to sort things out as systematically as possible by focusing on six widespread misconceptions. In each case I will explain how the myth took hold, for all of them contain some truth, and then I'll give theorems and numerical demonstrations to explain why it is mostly false. 
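One of the classic confusions in this area can be demonstrated in a few lines (my generic illustration; I am not asserting it is one of the speaker's six myths as stated): polynomial interpolation of Runge's function in equispaced points diverges near the endpoints, while interpolation of the same function in Chebyshev points converges nicely.

```python
import math

def lagrange_eval(xs, ys, t):
    """Evaluate the interpolating polynomial through (xs, ys) at t (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (t - xj) / (xi - xj)
        total += yi * li
    return total

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)     # Runge's function on [-1, 1]
n = 21                                        # number of interpolation points
equi = [-1.0 + 2.0 * k / (n - 1) for k in range(n)]
cheb = [math.cos(math.pi * k / (n - 1)) for k in range(n)]   # Chebyshev points

grid = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
err = lambda xs: max(abs(f(t) - lagrange_eval(xs, [f(x) for x in xs], t)) for t in grid)
print(f"equispaced max error: {err(equi):.2f}, Chebyshev max error: {err(cheb):.4f}")
```

The equispaced error is orders of magnitude larger than the Chebyshev error at the same degree, and the gap grows with n.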
26 October 2012  Richard Pinch Cheltenham 
Primes and Pseudoprimes "The problem of distinguishing prime numbers from composite ... is known to be one of the most important and useful in arithmetic." (Gauss) This talk will describe some modern algorithms for determining primality which are fast but are not always correct. Studying their behaviour, and failure, gives rise to interesting problems in number theory of practical and theoretical importance. 
2 November 2012  Ernst Wit University of Groningen (The Netherlands) 
Reproducing kernel Hilbert space based estimation of ordinary differential equations Nonlinear systems of differential equations have attracted the interest of researchers in fields such as systems biology, ecology and biochemistry, due to their flexibility and their ability to describe dynamical systems. Despite the importance of such models in many branches of science, they have not been the focus of systematic statistical analysis until recently. In this work we propose a general approach to estimate the parameters of systems of differential equations measured with noise. Our methodology is based on the maximization of a penalized likelihood where the differential system of equations is used as a penalty. To do so, we use a reproducing kernel Hilbert space that allows us to formulate the estimation problem as an unconstrained, easy-to-solve numeric maximization problem. The proposed method is tested on real and simulated examples showing its utility in a wide range of scenarios. We implemented the method as a general-purpose package in R. This work is joint with Javier Gonzalez and Ivan Vujacic. 
9 November 2012  Jörn Behrens KlimaCampus, University of Hamburg (Germany) 
! NOTE unusual location: 2 East 3.1, at the usual time 4:15pm. Tea/coffee from 3:45pm in the foyer of 2 East. Scientific Computing Methods behind Tsunami Early Warning (Isaac Newton Institute 20th Anniversary Lecture Series) Since December 2004, when the Great Andaman–Sumatra Earthquake and Tsunami devastated vast coastal areas rimming the Indian Ocean and took approximately 230,000 lives, a number of similarly catastrophic events have demonstrated how tsunami early warning measures can help to alleviate the aftermath. It is not very well known, however, that these systems rely heavily on mathematical and computational methods. In this presentation some of these methods, including solution techniques for the multiscale problem of far-field and near-field wave dispersion, the assimilation of (scarce) measurement data, and the propagation of uncertainty, will be introduced. The rewarding result of this research is the ability to contribute to life-saving operations. 
23 November 2012  Grégoire Allaire Ecole Polytechnique (France) 
Homogenization of convection-diffusion equations and application to Taylor dispersion in porous media We review the application of periodic homogenization theory to convection-diffusion equations in porous media, with possibly additional reaction terms. The main goal, from an application point of view, is to obtain formulas for homogenized (or effective, or upscaled) coefficients: this is the so-called Taylor dispersion problem. In particular, we shall discuss a special scaling of the problem (large Péclet and Damköhler numbers), for which the homogenized problem is obtained in a moving frame of reference. The analysis requires new mathematical tools, including the notion of two-scale convergence with drift. Eventually, the case of a bounded porous medium yields an intriguing phenomenon of concentration and localization: the "hot spot" problem. 
Date  Speaker  Title/Abstract 

17 Feb 2012  Oliver Riordan University of Oxford 
Explosive percolation? Random graphs are the basic mathematical models for large-scale disordered networks in many different fields (e.g., physics, biology, sociology). One of their most interesting features, both mathematically and in terms of applications, is the "phase transition": as the ratio of the number of edges to vertices increases past a certain critical point, the global structure changes radically, from only small components to a single macroscopic ("giant") component plus small ones. Recent work on Achlioptas processes (a natural type of evolving random graph model) suggested that in some such processes the transition is particularly radical: more or less as soon as the macroscopic component appears, it is already extremely large. The talk will include general background on Achlioptas and other random graph processes, plus a discussion of joint work with Lutz Warnke on the explosive percolation phenomenon. 
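A rough simulation sketch of the process family in this abstract (my illustration, with arbitrary parameters): in the product-rule Achlioptas process, each step presents two candidate random edges and keeps the one minimising the product of its endpoint component sizes, which delays the giant component well past the Erdős–Rényi critical density of n/2 edges.

```python
import random

random.seed(1)

class DSU:
    """Union-find with component sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            if self.size[ra] < self.size[rb]:
                ra, rb = rb, ra
            self.parent[rb] = ra
            self.size[ra] += self.size[rb]

def largest_component(n, m, product_rule):
    """Largest component after m edge-insertion steps on n vertices."""
    dsu = DSU(n)
    for _ in range(m):
        e = (random.randrange(n), random.randrange(n))
        if product_rule:
            e2 = (random.randrange(n), random.randrange(n))
            # keep the candidate whose endpoint components have the smaller size product
            key = lambda uv: dsu.size[dsu.find(uv[0])] * dsu.size[dsu.find(uv[1])]
            e = min((e, e2), key=key)
        dsu.union(*e)
    return max(dsu.size[dsu.find(v)] for v in range(n))

n, m = 10_000, 7_500   # m = 0.75 n: supercritical for ER, still subcritical for the product rule
print("Erdos-Renyi largest component:", largest_component(n, m, False))
print("product-rule largest component:", largest_component(n, m, True))
```

At this edge density the plain random graph already has a giant component of more than half the vertices, while the product-rule graph is still fragmented, which is the delayed (and, near its own critical point, seemingly abrupt) transition the talk examines rigorously.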
2 Mar 2012  Bálint Tóth Technical University Budapest (Hungary) 
Brownian motion and "Brownian motion" The physical phenomenon called Brownian motion is the apparently random motion of a particle suspended in a fluid, driven by collisions and interactions with the molecules of the fluid, which are in permanent thermal agitation. One of the idealised mathematical models of this random drifting is the stochastic process commonly called "Brownian motion", or the Wiener process. A dynamical theory of Brownian motion should link these two: it should derive, in a mathematically satisfactory way and as a kind of macroscopic scaling limit, the idealised mathematical description from physical principles. The first attempt was made by Einstein in his celebrated 1905 paper, and we are still very far from the end of this endeavour. I will survey some attempts. 
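The simplest instance of the "macroscopic scaling limit" in this abstract is Donsker-type diffusive scaling (an editorial sketch with illustrative parameters, far simpler than the interacting models of the talk): a simple ±1 random walk of n steps, rescaled by √n, has approximately the N(0, 1) distribution of the Wiener process at time 1.

```python
import random

random.seed(2)

def scaled_endpoint(n):
    """Endpoint of a simple +/-1 random walk of n steps, diffusively rescaled by sqrt(n)."""
    return sum(random.choice((-1, 1)) for _ in range(n)) / n ** 0.5

# By the central limit theorem the rescaled endpoint is approximately N(0, 1),
# matching W(1) for the Wiener process W.
samples = [scaled_endpoint(100) for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(f"sample mean {mean:+.3f}, sample variance {var:.3f}")
```

The hard part of the programme surveyed in the talk is obtaining such a limit not for independent steps but for a particle genuinely interacting with a fluid.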
16 Mar 2012  Angus Macintyre Queen Mary, University of London 
Logic and the exponential functions of analysis I will give an overview of how a model-theoretic analysis of the real and complex exponential functions has assisted in the solution of purely analytic problems, and the formulation of novel ones. 
Date  Speaker  Title/Abstract 

14 Oct 2011  Mark Chaplain University of Dundee 
Multiscale mathematical modelling of cancer growth Cancer growth is a complicated phenomenon involving many interrelated processes across a wide range of spatial and temporal scales, and as such presents the mathematical modeller with a correspondingly complex set of problems to solve. In this talk we will present multiscale mathematical models for the growth and spread of cancer, focusing on three main scales of interest: the subcellular, cellular and macroscopic. The subcellular scale refers to activities that take place within the cell or at the cell membrane, e.g. DNA synthesis, gene expression, cell cycle mechanisms, absorption of vital nutrients, activation or inactivation of receptors, and transduction of chemical signals. The cellular scale refers to the main activities of the cells, e.g. statistical description of the progression and activation state of the cells, interactions among tumour cells and the other types of cells present in the body (such as endothelial cells, macrophages, lymphocytes), proliferative and destructive interactions, and aggregation and disaggregation properties. The macroscopic scale refers to those phenomena which are typical of continuum systems, e.g. cell migration, diffusion and transport of nutrients and chemical factors, mechanical responses, interactions between different tissues, and tissue remodelling. The models presented will be either systems of nonlinear partial differential equations or individual force-based models and, in addition to presenting our computational simulation results, we will discuss some analytical and numerical results and issues. Finally, we will present an overview of recent analytical results in the area concerning a new notion of multiscale convergence, called "three-scale convergence". 
28 Oct 2011  Yves Balasko University of York 
Some mathematical aspects of economic theory Abstract in PDF 
11 Nov 2011  Terry Lyons University of Oxford 
Expected signature and all that... 
25 Nov 2011  Angus Macintyre Queen Mary, University of London 
Rescheduled for 16 Mar 2012. 
9 Dec 2011  Mark Lewis University of Alberta (Canada) 
The Mathematics Behind Biological Invasion Processes Models for invasions track the front of an expanding wave of population density. They take the form of parabolic partial differential equations and related integral formulations. These models can be used to address questions ranging from the rate of spread of introduced invaders and diseases to the ability of vegetation to shift in response to climate change. In this talk I will focus on scientific questions that have led to new mathematics and on mathematics that have led to new biological insights. I will investigate the mathematical and empirical basis for multispecies invasions, for accelerating invasion waves, and for nonlinear stochastic interactions that can determine spread rates. 
Date  Speaker  Title/Abstract 

4 Mar 2011  Roger Heath-Brown University of Oxford 
Counting Solutions of Diophantine Equations Given a Diophantine equation f(x_1,...,x_n)=0, where f is an integer polynomial, let N(B) be the number of integral solutions in which x_i is at most B for each index i. The talk will be about the growth of N(B) as B tends to infinity. Why should one be interested in this? What phenomena affect the growth rate? How does one prove anything about N(B)? 
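The counting function N(B) is concrete enough to experiment with directly. As a hypothetical illustration (the specific equation and the brute-force approach are ours, not from the talk), the following sketch counts integer solutions of x^2 + y^2 = z^2 with every variable of absolute value at most B:

```python
# Brute-force computation of N(B) for the illustrative equation
# f(x, y, z) = x^2 + y^2 - z^2 = 0. Feasible only for small B;
# the talk concerns the asymptotic growth of N(B) as B grows.
def count_solutions(B):
    """Count integer triples (x, y, z), each of absolute value
    at most B, satisfying x^2 + y^2 = z^2."""
    count = 0
    for x in range(-B, B + 1):
        for y in range(-B, B + 1):
            s = x * x + y * y
            z = round(s ** 0.5)
            if z <= B and z * z == s:
                # z = 0 gives one solution; otherwise z and -z both work
                count += 1 if z == 0 else 2
    return count

for B in (1, 10, 100):
    print(B, count_solutions(B))
```

Even this toy experiment exhibits the questions in the abstract: one can plot the counts against B and ask what power of B (with what logarithmic corrections) governs the growth.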
18 Mar 2011  Gero Friesecke Technical University of Munich (Germany) 
How good is the quantum mechanical explanation of the periodic table? Abstract in PDF 
1 Apr 2011  Elmer Rees University of Bristol 
Real Linear Algebra and Topology I will review the history of the construction of real division algebras starting with Hamilton, Graves and Cayley. I will also discuss some of the attempts to prove that there are none of dimension more than eight. The proof of the final non-existence theorem needs methods from topology. These methods can also be used to give restrictions on the spaces of real matrices of given rank. One of the first published results of this kind was the Adams, Lax, Phillips theorem, motivated by the construction of hypoelliptic operators. Most of these results could easily be stated in a first course on Linear Algebra, but the only known proofs for several of them use topology. 
15 Apr 2011  Andrew Stuart University of Warwick 
Bayes' Theorem and Inverse Problems Inverse problems arise in many areas of science and technology, and are interesting for mathematicians because they are typically ill-posed. The Bayesian approach to inverse problems is useful in many of these arenas for several reasons, in particular because it provides a clean and transparent approach to regularization and because it allows for the quantification of uncertainty. I will introduce the Bayesian approach to inverse problems via the simple (to state) problem of predicting the next number in a sequence. I will then develop these ideas for more complex PDE-based models arising in the physical sciences. Finally I will point to open questions and new directions in the field. 
Date  Speaker  Title/Abstract 

14 Jan 2011  Mark Peletier 
Special Landscape Seminar - Leverhulme Lecture Gradient flows, optimal transport, and fresh bread In 1997, Jordan, Kinderlehrer, and Otto pioneered a new way of looking at age-old equations for diffusion, thus giving an exact mathematical description of the sense in which 'diffusion is driven by entropy'. This sense revolves around the concept of optimal transport. Introduced by Monge in 1781, this theory focuses on optimal ways to transport given quantities from A to B. Its development took off after Kantorovich improved the formulation in 1942, and in recent years the theory has exploded, with applications in differential geometry, probability theory, functional analysis, analysis on non-smooth spaces, and many more. In this talk I will revisit the original connection between the diffusive problems on one hand and the theory of optimal transport on the other. I will show how the two are connected, discuss many consequences of this, and describe recent insight into the deeper meaning of this connection. 
Date  Speaker  Title/Abstract 

15 Oct 2010  Dorothy Buck Imperial College 
The Topology of DNA-Protein Interactions The central axis of the famous DNA double helix is often constrained or even circular. The shape of the axis can influence which proteins interact with the underlying DNA. Consequently, in all cells there are proteins whose primary function is to change the DNA axis topology, for example converting a torus link into an unknot. Additionally, there are several protein families that change the axis topology as a by-product of their interaction with DNA. 
29 Oct 2010  Timothy J. Hollowood Swansea University 
Solitons and Integrability in Quantum Field Theory Since John Scott Russell observed a soliton-like wave in a canal in 1834, the theory of solitons and integrable systems has been a fruitful area of research in mathematical physics. In the area of Quantum Field Theory and Statistical Mechanics, integrability has proved to be a very powerful idea, and often such models can be solved exactly, leading to insights that take us beyond the perturbative regime. The classic example is the sine-Gordon theory, whose exact spectrum and scattering matrix were written down over 30 years ago. More recent work has shown that the sine-Gordon theory is the simplest model of a class associated to (pseudo-)Riemannian Symmetric Spaces whose integrability is linked to underlying mathematical structures involving affine Lie algebras and their deformations known as quantum groups. These generalized sine-Gordon theories also play a key role in the integrability underlying string theory and quantum gravity. 
12 Nov 2010  Richard Gill Leiden University (The Netherlands) 
Murder by numbers In March 2003, Dutch nurse Lucia de Berk was sentenced to life imprisonment by a court in The Hague for 5 murders and 2 attempted murders of patients in her care at a number of hospitals where she had worked in The Hague between 1996 and 2001. The only hard evidence against her was a statistical analysis resulting in a p-value of 1 in 342 million, which purported to show that it could not be chance that so many incidents and deaths occurred on her ward while she was on duty. On appeal in 2003 the life sentence was confirmed, this time for 7 murders and 3 attempts. This time, no statistical evidence was used at all: all the deaths were proven to be unnatural, and Lucia was shown to have caused them, using scientific medical evidence only. However, after growing media attention and pressure from concerned scientists, including many statisticians, new forensic investigations were made which showed that the conviction was unsafe. After a new trial, Lucia was spectacularly and completely exonerated in 2010. I'll discuss the statistical evidence and show how it became converted into incontrovertible medical-scientific proof in order to secure the second, and as far as the Dutch legal system was concerned, definitive conviction. I'll also show how statisticians were instrumental in convincing the legal establishment that Lucia should be given a completely new trial. The history of Lucia de Berk brought to light a number of deficiencies in the way in which scientific evidence is evaluated in criminal courts. Similar cases to that of Lucia occur regularly all over the world. The question of how that kind of data should be statistically analyzed is still problematic. I believe that there are also important lessons to be learned by the medical world; however, the Dutch medical community, where most people still believe Lucia is a terrible serial killer, is resisting all attempts to uncover what really happened. 
Date  Speaker  Title/Abstract 

12 Feb 2010  Brian Conrey American Institute of Mathematics (USA) 
The Riemann Hypothesis 150 years ago B. Riemann discovered a pathway to understanding the prime numbers. But today we still have not completed his vision. I will give an introduction to Riemann's Hypothesis, one of the most compelling mathematics problems of all time, and describe some of its colorful history. 
5 Mar 2010  Richard Thomas Imperial College 
Counting curves in algebraic geometry One can study "complex manifolds" or "algebraic varieties" via invariants that "count holomorphic curves in them". Without assuming prior knowledge of geometry (just holomorphic functions), this talk will explain the notions in inverted commas. In particular, there are at least four different ways to define curve counting. 
19 Mar 2010  Endre Süli University of Oxford 
Mathematical challenges in kinetic models of dilute polymers We shall review recent developments concerning the existence of global weak solutions to coupled Navier-Stokes-Fokker-Planck systems of partial differential equations that arise in kinetic models for dilute polymers. We shall also survey some recent developments concerning the numerical analysis of high-dimensional Fokker-Planck equations with unbounded drift terms featuring in these models. 
30 Apr 2010  James Davenport University of Bath 
The mathematics of the internet There is a lot of mathematics underpinning daily use of the Internet, be it Google or Internet shopping. In general, but not always, the mathematics has come first, and the application later. In this talk, we will sketch some of the applications, and describe, as one non-specialist to others, part of the mathematics. As a side-effect, we will explain why Google Scholar is much more reliable than "Impact Factors". 
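One well-known piece of this mathematics, which may or may not be among the talk's examples, is the eigenvector calculation behind Google's ranking of web pages. A minimal sketch of PageRank by power iteration, on an invented four-page link graph:

```python
# Power-iteration sketch of PageRank. The link graph below is
# invented for illustration; d is the usual damping factor.
def pagerank(links, d=0.85, iterations=50):
    """links[i] is the list of pages that page i links to.
    Returns the stationary rank vector (sums to 1)."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iterations):
        new = [(1.0 - d) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = d * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:
                # Dangling page: redistribute its rank evenly
                for j in range(n):
                    new[j] += d * rank[i] / n
        rank = new
    return rank

# Page 0 is linked to by every other page, so it should rank highest.
ranks = pagerank([[1, 2], [0], [0, 3], [0]])
```

The iteration is just repeated multiplication by a stochastic matrix, so the ranks converge to its dominant eigenvector: a piece of classical linear algebra that predates the application by a century.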
Date  Speaker  Title/Abstract 

30 Oct 2009  Sir John Kingman University of Bristol 
Forbidden transitions: continuous time versus discrete space Markov processes are the stochastic analogues of ordinary dynamical systems governed by first order differential equations. Almost all the theory of such processes deals with the autonomous case, in which the equations do not explicitly involve time, but this is an unnatural restriction, for instance in applications to biology or operational research. One of the few known results about the general case came from an ingenious argument of former Bath Professor David Williams, and has led to a complete analysis for processes taking only finitely many values. Much deeper problems arise when there is a countable infinity of possible states, as often occurs in applications. 
13 Nov 2009  Raphaël Rouquier University of Oxford 
Dunkl operators: from analysis to algebra and back We will introduce deformations of partial derivatives (Dunkl operators) and discuss their actions on polynomial functions. This brings in special functions and gives rise to CalogeroMoser integrable systems. Various flavours of Hecke algebras control these structures. The space of families of points in the plane provides a rich background. Eventually, the Dunkl operators can be studied via microlocal methods. 
27 Nov 2009  Claude Le Bris Ecole Nationale des Ponts et Chaussées (Paris, France) 
Random media in computational material science The talk will overview some recent contributions on several theoretical aspects and numerical approaches in stochastic homogenization, for the modelling of random materials. In particular, some variants of the classical theory will be introduced. The relation between stochastic homogenization problems and other multiscale problems in materials science will be emphasized. On the numerical front, some approaches will be presented, for acceleration of convergence as well as for approximation of the stochastic problem when the random character is only a perturbation of a deterministic model. 
11 Dec 2009  Reidun Twarock University of York 
Viruses and geometry Viruses have protein containers that encapsulate, and hence provide protection for, their genomic material. For a significant number of viruses these containers are organised according to icosahedral symmetry, which allows us to model their structural organisation via group theory. We show here that a wide spectrum of distinct viral features can be predicted in striking detail via a classification of affine extensions of the icosahedral group. Examples discussed in this talk include the sizes and shapes of the protein building blocks of the containers, the double-shelled genomic RNA structure in MS2, the dodecahedral RNA cage in Pariacoto virus, and the heterogeneity in the genomic organisation of Picornaviridae. Some of the implications of this fundamental geometric principle of virus architecture for virus assembly and evolution are also discussed. 
Date  Speaker  Title/Abstract 

27 Feb 2009  Nick Evans University of Southampton 
The Large Hadron Collider and the Search for the Higgs Boson The Large Hadron Collider project is underway at CERN, Geneva: this 27 km ring will collide protons with ten times higher energy than has been achieved before. The goal is to probe the structure of matter at scales of 10^{-18} m. I will review the mathematical structure of the Standard Model of particle physics and the crucial unconfirmed mechanism for symmetry breaking, the Higgs mechanism. LHC discovery signals for the Higgs boson will be discussed. The simplest model is fine-tuned, though, and we believe the Higgs is liable to be found in conjunction with other new physics, from compositeness, to supersymmetry, to extra dimensions of space. 
20 Mar 2009  Vincent Beffara Ecole Normale Supérieure de Lyon (France) 
Isotropic embeddings of planar lattices In recent years, huge progress has been made in the understanding of critical 2D models of statistical physics, and especially of their scaling limits (through the use of SLE processes). However, one question remains widely open, namely the reason for universality, i.e. the belief that similar systems on different lattices, even though they have different critical points, nevertheless converge to the same scaling limit as the lattice mesh goes to 0. It seems that a key question along the way to understanding it is, given the combinatorics of a lattice, how to embed it in the plane in order to obtain a conformally invariant scaling limit; a surprising fact is that the "right" embedding depends not only on the lattice but also on the model. I will present the few results I was recently able to obtain in this direction. 
1 May 2009  Ulrike Tillmann University of Oxford 
Topological Field Theory, via configuration spaces of points and moduli spaces of surfaces In this introductory talk, I will give an outline of some of the ideas that have led to some remarkable theorems describing the topology of moduli spaces of Riemann surfaces (Madsen-Weiss) and a very recent classification theorem for Topological Field Theories (Hopkins-Lurie). 
Date  Speaker  Title/Abstract 

10 Oct 2008  Heinz Engl University of Vienna 
Regularization of Inverse Problems: Convergence Analysis, New Applications and Challenges We illustrate, via a parameter identification problem from systems biology, the numerical difficulties arising when solving inverse problems, and give an overview of problem areas where inverse problems appear and of regularization methods for their stable solution. Recent emphasis has been on nonlinear problems in a non-Hilbert-space setting, e.g. in connection with TV-regularization and sparsity-enforcing regularization. We illustrate the importance of the latter via an inverse bifurcation problem from systems biology. Finally, we mention some inverse problems that appear in finance and illustrate the effect of regularization. 
7 Nov 2008  Martin Bridson University of Oxford 
Dimension, rigidity and fixed-point theorems The first half of the lecture will recall three strands of thought from 20th century geometry/topology: (super)rigidity, which constrains the way in which lattices in certain Lie groups, e.g. SL(n,Z), can act; Smith theory, wherein one studies the fixed-point sets of finite groups acting on spheres and contractible spaces; and finally Helly's theorem, which characterises the dimension of a Euclidean space in terms of the possible intersection patterns of convex subsets. In the second half I will describe in outline some recent theorems that draw inspiration from the foregoing ideas. For example, I shall sketch a proof that SL(n,Z) admits no nontrivial actions by homeomorphisms on a sphere of dimension less than n-1 or a contractible manifold of dimension less than n. I shall also motivate and explain a fixed-point theorem that settles a longstanding question of Kropholler. 
21 Nov 2008  John Ball University of Oxford 
Mathematical Problems of Liquid Crystals Most mathematical work on nematic liquid crystals has been in the context of the Oseen-Frank theory, which models the mean orientation of the constituent rod-like molecules by means of a director field consisting of unit vectors. However, nowadays most physicists use the Landau-de Gennes theory, whose basic variable is a tensor-valued order parameter. Unlike the Oseen-Frank theory, that of Landau-de Gennes does not assign an unphysical orientation to the director field. The lecture will describe the two theories and the relationship between them, as well as other interesting mathematical problems related to the Landau-de Gennes theory. 
Date  Speaker  Title/Abstract 

8 Feb 2008  Alexander Bobenko Technical University Berlin (Germany) 
Discrete differential geometry: from organizing principles to applications Discrete differential geometry aims at the development and application of discrete equivalents of the geometric notions and methods of differential geometry. The latter then appears as a limit of refinements of the discretization. Current progress in this field is to a large extent stimulated by its relevance for applications (computer graphics etc.). Concrete examples considered in the talk include discrete curvature line parametrizations, discrete Willmore energy, and applications to architecture. 
22 Feb 2008  Stephen Senn University of Glasgow 
Why I hate minimisation If publicly funded collaborative trials are compared to those run in the pharmaceutical industry, it will be found that the former generally use more complex allocation algorithms but simpler approaches to analysis. For example, a popular design and analysis combination is to control covariate imbalance using an approach known as 'minimisation' but then ignore the covariates altogether in the analysis. In this talk I will explain why minimisation is not even a sensible approach to controlling covariate imbalance, and also why sophistication in analysis can make a bigger contribution to improving the efficiency of inferences than complication in allocation. 
7 Mar 2008  Julian Besag University of Bath 
Geographical analysis and ethical dilemmas in the study of childhood leukaemias in Great Britain The causes of childhood leukaemias are largely unknown, but it is often claimed that cases occur in geographical clusters. This has led to various possible explanations, including the existence of a virus and the effects of radiation from nearby nuclear installations. It is also true that apparent clusters may occur purely by chance, even if there is no underlying cause, especially if the database is extensive. Thus, "cluster busting" has been highly controversial. This talk will describe a very simple method (Besag and Newell, 1991, "The detection of clusters in rare diseases", Journal of the Royal Statistical Society, A 154, 143-155) designed specifically to trawl a database of more than 100,000 census enumeration districts (EDs) in Great Britain for evidence of clusters in childhood leukaemias. Results for a five-year period centred on a census date will be given in detail, together with reasons for not publishing them or releasing them to the press. The talk is intended to be accessible to anyone familiar with the Poisson distribution. 
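For a feel for the Poisson calculation that such cluster tests rest on, here is a minimal sketch (the counts below are invented, not taken from the study): given the number of cases expected in a group of enumeration districts under the null hypothesis of no clustering, how surprising is the number actually observed?

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) for X ~ Poisson(lam): the chance of seeing k or
    more cases when lam cases are expected under the null."""
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

# Hypothetical numbers: 5 observed cases where only 1.2 were expected.
p = poisson_tail(5, 1.2)
```

The subtlety in the talk's setting is that many overlapping potential clusters are examined at once, so a single raw tail probability like this cannot be read directly as a significance level; that multiple-testing problem is part of what makes "cluster busting" controversial.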
11 Apr 2008  Kenneth Falconer University of St Andrews 
Symmetry and enumeration of self-similar fractals We describe a general method for enumerating the distinct self-similar sets that arise as attractors of certain families of iterated function systems, using a little group theory to analyse the symmetries of the attractors. The talk will be illustrated by a range of pictorial examples and is suitable for a broad audience. 
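To make "attractors of iterated function systems" concrete, here is a sketch of our own (not from the talk) that approximates the most familiar example, the Sierpinski triangle: the attractor of three maps, each contracting the plane by a factor of 1/2 towards one vertex of a triangle.

```python
import random

# "Chaos game": repeatedly apply a randomly chosen map of the
# iterated function system; the orbit accumulates on the attractor
# (here, the Sierpinski triangle with the vertices below).
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n, seed=0):
    rng = random.Random(seed)
    x, y = 0.2, 0.2  # any starting point in the plane will do
    points = []
    for _ in range(n):
        vx, vy = rng.choice(VERTICES)
        # Each IFS map halves the distance to one vertex
        x, y = (x + vx) / 2.0, (y + vy) / 2.0
        points.append((x, y))
    return points

pts = chaos_game(5000)
```

Plotting `pts` reveals the familiar triangular gasket; the enumeration question of the talk is, roughly, how many genuinely distinct attractors arise as the contractions in such a family are composed with symmetries.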
25 Apr 2008  Rob Stevenson University of Amsterdam (The Netherlands) 
Adaptive wavelet methods for solving high dimensional PDEs We discuss a nonstandard numerical method for solving (linear) PDEs. Such equations can be written in the form Lu=f, where L is a boundedly invertible operator between Hilbert spaces. By equipping these spaces with (wavelet) bases, an equivalent infinite matrix-vector equation is obtained. We will show that the adaptive wavelet scheme produces a sequence of approximations to the solution vector that converges with the best possible rate, where the cost of producing these approximations is proportional to their length. Finally, we discuss the application of this scheme to problems on product domains, where we will obtain rates that are independent of the space dimension. 
2 May 2008  Peter Bickel University of California, Berkeley (USA) 
Low dimensional features which enable high dimensional inference Theoretical analysis seems to suggest that standard problems such as estimating a function of high dimensional variables with noisy data (regression or classification) should be impossible without detailed knowledge or absurdly large amounts of data. Nevertheless, algorithms to perform classification of images or other high dimensional objects are remarkably successful. The generally held explanation is the presence of sparsity/low dimensional structure. I'll discuss analytically and with examples why this may be right. 
7 May 2008  Jerry L. Bona University of Illinois at Chicago (USA) 

9 May 2008  Trevor Wooley University of Bristol 
Tales from the wild Diophantine west: integral solutions of polynomial equations Diophantine equations (polynomial equations to be solved in integers) with few variables have attracted the enthusiastic attention of number theorists for millennia, and indeed the work of Wiles concerning Fermat's Last Theorem even attracted the attention of the mass media. In contrast, the solubility of Diophantine equations in many variables is a wild frontier with, for the most part, only sketchy knowledge and speculative conjectures. We will provide an overview of the latter area, illustrating our discussion with an exotic tale: the story of the little-known race half a century ago to solve cubic equations in many variables. The ideas underlying this story continue to inspire modern developments. 