Guy McCusker, University of Bath
\[ \def\coh{\overset{\textstyle\frown}{\smile}} \] My visit to Marseille this month (November 2023) was funded by grants held by Pierre Clairambault and Lionel Vaux Auclair, and we discussed some of this material during the visit.
Thanks Pierre and Lionel – I’ve had a great time.
I’ve also chatted with Jim Laird (Bath) about some of this material.
With thanks to John Power…
“I understood that too”
Dana Scott, Data Types as Lattices (1976):
A continuous function \(f: \mathbf{P}\omega \rightarrow \mathbf{P}\omega\) is determined by the relation \[ \{ (S,n) \mid S \subseteq_f \omega, n \in f(S) \} \]
Since \(\mathbf{P}_f \omega \times \omega\) is countable, such relations can be encoded as sets of naturals.
This gives an embedding \[ (\mathbf{P}\omega \rightarrow \mathbf{P}\omega) \lhd \mathbf{P}\omega \] and hence a model of the \(\lambda\)-calculus.
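A small Python sketch of how the graph determines the function: applying a graph to an input set collects every output whose finite premise is contained in the input. (The name `apply_graph` is mine, not Scott's.)

```python
# A continuous function f : Pω → Pω is recovered from its graph
# {(F, n) | F a finite subset of ω, n ∈ f(F)} by:
#   apply_graph(graph, S) = { n | (F, n) ∈ graph, F ⊆ S }

def apply_graph(graph, S):
    """Recover f(S) from the graph of f."""
    return {n for (F, n) in graph if F <= S}

# Example: f(S) = {n + 1 | n ∈ S} has graph {({n}, n+1) | n ∈ ω};
# a finite fragment of the graph suffices for small inputs.
graph = {(frozenset({n}), n + 1) for n in range(10)}
assert apply_graph(graph, {1, 2, 3}) == {2, 3, 4}
```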
But how should we understand the construction?
(Try to) define a category as follows:
The enumeration \(\{n_1,\ldots,n_k\}\) is not unique because set union is idempotent.
This means that there is no identity in the definition as given above.
There are lots of ways to fix this problem:
We’ll explore a range of models of computation which have much in common with this construction.
We advocate a view of these categories as opposites of categories of algebras.
(cf. Stone duality)
Most of this was presaged by Lamarche (1992)
Take Scott’s idea and replace sets with multisets:
This works just fine!
Before
After
Recall that \(\mathbf{Rel}\) is the Kleisli category for the powerset monad on \(\mathbf{Set}\): relations are maps \(A \rightarrow \mathbf{P}B\).
The opposite category described above can be seen as consisting of functions \[ B \rightarrow \mathbf{P}\mathbf{M}_f A \]
The functor \(\mathbf{P}\mathbf{M}_f\) is a monad thanks to a distributive law of monads \[ \mathbf{M}_f \mathbf{P} \rightarrow \mathbf{P}\mathbf{M}_f \] mapping \[ [S_1,\ldots,S_k] \mapsto \{ [a_1,\ldots,a_k] \mid a_i \in S_i \} \]
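The distributive law is concrete enough to compute: a finite multiset of sets goes to the set of all multisets obtained by choosing one element from each set. A sketch, representing finite multisets as sorted tuples:

```python
from itertools import product

# Distributive law M_f P → P M_f:
# [S1, ..., Sk]  ↦  { [a1, ..., ak] | ai ∈ Si }.
# Finite multisets are represented as sorted tuples.

def dist(multiset_of_sets):
    return {tuple(sorted(choice)) for choice in product(*multiset_of_sets)}

assert dist([{1, 2}, {3}]) == {(1, 3), (2, 3)}
```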
The Scott model — using \(\mathbf{P}_f\) rather than \(\mathbf{M}_f\) — cannot be constructed in this way.
There is no distributive law for \(\mathbf{P}_f\) over \(\mathbf{P}\) and in fact no monad structure on \(\mathbf{P}\mathbf{P}_f\) at all (Klin and Salamanca, 2018)
Related: Zwart and Marsden (2022) characterise the availability and non-availability of such distributive laws for a range of monads.
This category is:
We’ll expand on this last point of view.
Aside: the idea of using relations between monoids underpins Uday Reddy’s relational semantics of imperative computation (1996). We’ll see this later if there’s time.
Idea: replace \(\mathbf{P}X = 2^X\) with the monad \(R^X\) where \(R\) is an appropriate structure of coefficients:
Easy reading: each entry \((M, b)\) in a relation now comes with a coefficient. Coefficients could track:
When composing, the coefficient of an entry \((M,c)\) is computed by:
(Laird et al., 2013; cf. Ehrhard & Regnier; cf. also Lamarche, obviously).
This is a simplification of Girard’s quantitative semantics, where the coefficients were taken to be sets, and of profunctor models, etc.
The monad \(\mathbf{M}_f\) distributes over \(R^X\) so we obtain a composite monad \(R^{\mathbf{M}_f X}\).
Yet another viewpoint is that the free commutative monoid construction extends to the Kleisli category for \(R^X\).
This all holds for general, abstract reasons: see e.g. Porst, 2008.
With our preferred view, the model looks like this:
Monoids in a category of modules are algebras.
The fact that relational semantics is based around algebras is not a new observation: it underpins Ehrhard’s development of finiteness spaces, and his work with Regnier on differential lambda-calculus, Taylor expansion etc.
And Lamarche’s paper was entirely based on algebras like the ones we’ll discuss.
But I understood that too. Finally. Although there is lots for me still to understand.
Ehrhard’s line of work profitably uses the viewpoints of linear logic.
We’ll advocate that, sometimes, heading straight for the (opposite of the) Cartesian closed category – as Lamarche did – is helpful, because we can work directly with the algebras.
As we will see, the quantitative relational model may be described as follows:
This gives an alternative view, using arguably more standard mathematics.
A monomial over \(A\) is a formal product \(a_1\cdots a_n\) of elements of \(A\), identified up to commutativity.
That is, it is a finite multiset of elements of \(A\).
A polynomial is a linear combination \[ p = \sum_{m \in \mathbf{M}_f A} p_m \cdot m \] with each \(p_m \in R\).
(We will typically work with semiring structures that admit infinite sums, so we don’t draw a distinction between polynomials and power series. Maybe we should.)
Polynomials form an algebra:
\[ (pq)_m = \sum_{m = m_1\cdot m_2} p_{m_1}q_{m_2} \]
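The multiplication above is just convolution along multiset union. A minimal sketch, with monomials as sorted tuples and polynomials as coefficient dictionaries (representation choices are mine):

```python
from collections import defaultdict

# Polynomial multiplication: (pq)_m = Σ_{m = m1·m2} p_{m1} q_{m2},
# where monomial product m1·m2 is multiset union (tuple concatenation
# followed by sorting).

def mul(p, q):
    r = defaultdict(int)
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            r[tuple(sorted(m1 + m2))] += c1 * c2
    return dict(r)

# (x + y)^2 = x^2 + 2xy + y^2, with x = ('x',) and y = ('y',)
p = {('x',): 1, ('y',): 1}
assert mul(p, p) == {('x', 'x'): 1, ('x', 'y'): 2, ('y', 'y'): 1}
```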
The polynomial ring \(R[A]\) is the symmetric algebra over the free module with basis \(A\).
Perhaps more familiar: \(R[A]\) is the free commutative monoid over \(R^A\).
Algebra homomorphisms are module homomorphisms (linear maps) which also preserve multiplication.
So we get natural isomorphisms
This was the perspective introduced by Lamarche (1992). (Note that when \(R = 2 = \{0,1\}\), \(R^A\) is \(\mathbf{P}A\) and we are in the traditional relational setting.)
Everything above is a free construction.
Thought: can we get something interesting by looking at other algebras i.e. by applying some quotients?
There are two equivalent ways to define algebras: as monoids in the category of \(R\)-modules, or as semirings \(A\) equipped with a semiring homomorphism \(R \rightarrow A\).
The second view gives us easy access to a way of quotienting our structures: a semiring \(A\) can be quotiented by an ideal \(I\) via the Bourne equivalence (Bourne, 1951) \[ x \equiv y \Leftrightarrow \exists i,j\in I. x + i = y+j . \]
We obtain a new algebra \[ R \longrightarrow A \longrightarrow A/I \]
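A toy example of the Bourne equivalence, worked in the semiring \(\mathbb{N}\) with \(I\) the ideal of multiples of 3: the quotient recovers congruence mod 3. (A brute-force bounded search, purely for illustration.)

```python
# Bourne equivalence: x ≡ y  iff  ∃ i, j ∈ I. x + i = y + j.

def bourne_equiv(x, y, ideal_elems):
    return any(x + i == y + j for i in ideal_elems for j in ideal_elems)

I = [3 * k for k in range(50)]      # finite fragment of the ideal (3)
assert bourne_equiv(4, 10, I)       # 4 + 6 = 10 + 0
assert not bourne_equiv(4, 9, I)    # 4 and 9 differ mod 3
```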
Another standard algebra construction will come in handy: we can use any monoid as the collection of monomials.
The polynomials (power series) \[ \sum_{m\in M} p_m \cdot m \] form an \(R\)-algebra.
Working over the Boolean semiring, Scott’s construction starts to look like the algebra (actually, free module) over the powerset monoid: the free commutative idempotent monoid on a set.
Step back and consider algebras formed from idempotent commutative monoids.
Let \(f:A \rightarrow B\) be an algebra homomorphism and \(a\) any monomial in \(A\).
Because of idempotence in the underlying monoid \[ f(a) = f(a^2) = f(a)^2 \]
Suppose \(f(a) = \sum_i b_i\).
Then we require
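Expanding the square, the constraint on the homomorphism is:

```latex
\sum_i b_i \;=\; \Big(\sum_i b_i\Big)^{\!2} \;=\; \sum_{i,j} b_i b_j
```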
The secret to obtaining the closed structure in the relational model was freeness: any map from singleton-monomials to an algebra generates a well-defined algebra homomorphism.
That is no longer the case; how can we restore it?
Option 1: consider only those elements of our algebras which are up-closed: if monomial \(a\) has coefficient 1 then so does \(ab\) for every \(b\).
This leads to a category, but it is still tricky to obtain a closed structure.
Option 2: quotient our algebras so that each \(b_ib_j = 0\).
Consider instead idempotent commutative monoids with a zero, which we write as \(\bot\) to avoid clash with the semiring zero.
Form algebras which are sums \(\sum_i a_i\) such that \[ i \not= j \Rightarrow a_i a_j = \bot \] (NB sums are now partial).
These algebras will be idempotent — \((\sum_i a_i)^2 = \sum_i a_i\) — if we can ignore \(\bot\).
We don’t want to care about coefficients of zero in our algebra: we want \(a = a + \bot\).
Easy! Quotient out by the ideal generated by \(\bot\).
We can see this algebra as containing sums of non-\(\bot\) elements. Multiplication discards any \(\bot\)s that are generated.
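A sketch of this multiplication: monomials are finite sets, products are unions, and any product containing a declared "bad" pair collapses to \(\bot\) and is discarded from the sum. The set `bad` below is a hypothetical choice of incoherent pairs; Boolean coefficients are assumed, so a sum is just a set of monomials.

```python
bad = {frozenset({'a', 'b'})}          # declare a · b = ⊥

def mul_monomials(S, T):
    U = S | T
    if any(frozenset({x, y}) in bad for x in U for y in U if x != y):
        return None                    # the product is ⊥
    return frozenset(U)

def mul_sums(ps, qs):
    """Multiply two sums of monomials, silently dropping ⊥ terms."""
    out = set()
    for S in ps:
        for T in qs:
            U = mul_monomials(S, T)
            if U is not None:
                out.add(U)
    return out

# Idempotence: ({a} + {b})^2 = {a} + {b}, since the cross term {a,b} = ⊥.
p = {frozenset({'a'}), frozenset({'b'})}
assert mul_sums(p, p) == p
```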
Let’s apply this idea to the Scott-like model and see what we get.
We have to decide which multiplications (unions) in \(\mathbf{P}_f A\) will yield \(\bot\).
To make life easy, just declare that certain pairs of elements \(a\), \(a'\) yield \(\bot\).
As above we generate an algebra whose monomials are the elements of \(\mathbf{P}_f A\) generated by this monoid operation.
What do algebra homomorphisms look like?
We’ll focus on the action of an algebra homomorphism on singletons \[ f(a) = \sum_i S_i \] where each \(S_i \subseteq_f B\).
Write \(a\mathbin f S_i\) when \(S_i\) is in the support of \(f(a)\).
Let’s see what properties this relation has.
Because of the definition of our algebras we have:
If \(a \cdot a'= \bot\) then \(f(a\cdot a') = 0\). So if \(f(a) = \sum_i S_i\) and \(f(a') = \sum_j T_j\) we get \[ \sum_{i,j} S_i \cdot T_j = 0 \] which means each \(S_i \cdot T_j = \bot\).
So we have:
Write \(a\coh a'\) to mean \(a\cdot a'\not = \bot\).
Note that any set \(S\) appearing as a monomial in our algebra satisfies \[ x, y\in S \Rightarrow x \coh y \]
So the monomials are precisely cliques.
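The clique condition is easy to state directly: a finite set survives the \(\bot\)-quotient exactly when all its elements are pairwise coherent. A sketch, with a hypothetical coherence relation making only `a` and `b` incoherent:

```python
# A monomial (finite set) is a clique for a ⌢ a'  (i.e. a · a' ≠ ⊥)
# when every pair of its distinct elements is coherent.

def is_clique(S, coh):
    return all(coh(x, y) for x in S for y in S if x != y)

coh = lambda x, y: frozenset({x, y}) != frozenset({'a', 'b'})
assert is_clique({'a', 'c'}, coh)
assert not is_clique({'a', 'b', 'c'}, coh)
```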
Reversing the arrows, to look at the opposite category, the conditions above become:
These are exactly the maps of coherence spaces!
I guess this is not Scott’s model any more.
Theorem The category of coherence spaces and stable maps is a full subcategory of the opposite of this category of partial algebras and algebra homomorphisms.
(Other than Lamarche, Ehrhard, Regnier…)
Tsukada and Asada (2022) have also shown how to reconstruct coherence spaces in categories of modules; they construct the linear maps, and find a free commutative comonoid. Their construction appears to be quite different to this one.
Reddy (Global state considered unnecessary, 1996) introduced another revolutionary idea.
Stateful computation can be modelled using traces of behaviour, rather than an explicit representation of state.
(This presaged the game semantics of imperative computation.)
In this interpretation, a term such as \[ x:\mathsf{Var}, y:\mathsf{Var} \vdash x:=0; x := !x + 1 ; y := !x :\mathsf{Comm} \] is modelled as a set of (tuples of) traces: \[ \mathsf{write}~0\cdot\mathsf{read}~n \cdot \mathsf{write}~(n+1) \cdot \mathsf{read}~m, \quad\mathsf{write}~m , \quad \checkmark \]
We model the allocation of (local) store to \(x\) by:
So the semantics of \[ y:\mathsf{Var} \vdash \mathsf{new}~x~\mathsf{in}~x:=0; x := !x + 1 ; y := !x :\mathsf{Comm} \] contains just \[ \mathsf{write}~1, \quad \checkmark. \]
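The hiding step can be sketched concretely: keep only those \(x\)-traces that a memory cell can actually realise (every read returns the most recently written value), then erase them. Event encoding and the name `cell_accepts` are mine:

```python
# An x-trace survives local allocation iff a storage cell can play it:
# each ('read', n) must see the value of the latest ('write', n).

def cell_accepts(trace):
    val = None                       # cell starts uninitialised
    for op, n in trace:
        if op == 'write':
            val = n
        elif op == 'read' and n != val:
            return False
    return True

# x-trace of  x:=0; x:=!x+1; y:=!x  for the choice n = 0, m = 1:
t = [('write', 0), ('read', 0), ('write', 1), ('read', 1)]
assert cell_accepts(t)                               # survives hiding
assert not cell_accepts([('write', 0), ('read', 5)])  # cell can't realise it
```

So only the instance with \(n = 0\), \(m = 1\) survives, leaving the \(y\)-trace \(\mathsf{write}~1\) as in the semantics above.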
Reddy defined a multicategory. In its simplest form:
(Reddy, 1996) The relational semantics is sound for interference-controlled Idealised Algol (SCI).
(2002) The relational semantics is fully abstract for SCI with active expressions.
The non-multicategory variant of this construction can be achieved through our algebraic viewpoint:
The free algebra is the noncommutative analogue of the symmetric algebra (polynomial ring): monomials are now finite sequences rather than finite multisets.
Active expressions are terms that return values but may affect the store, e.g. the preincrement operator \[ x := !x + 1; \mathsf{return}~!x \] (also known as ++x).
Reddy refined the model with a notion of passive type where such side-effects cannot happen.
In the syntax, passivity is easy to achieve: simply do not provide access to any state-changing operations at passive types.
A consequence is that, in a term such as \[ \Gamma, c : \mathsf{Comm} \vdash M: P \] where \(P\) is passive, we know (intuitively, or via an inductive proof) that the command \(c\) cannot be used.
In the semantics this property will appear as a notion of passivity reflection.
Idea: events in the traces are either active (state-changing) or passive (not state-changing).
Passive events commute with one another and are idempotent.
An A/P-monoid \(A\) is given by two sets, \(A_a\) and \(A_p\), and is the quotient of the free monoid over \(A_a + A_p\) by the relations \[ p \cdot p' = p' \cdot p, \qquad p \cdot p = p \] for \(p,p' \in A_p\).
Concretely this can be described as the monoid of sequences \[ s_0 \cdot \alpha_0 \cdot s_1 \cdots s_n \cdot \alpha_n \] where each \(s_i\) is a finite set of passive events and each \(\alpha_i\) is an active event.
As above, we can construct an algebra over this monoid, using linear combinations \[ \sum_{a\in A} k_a \cdot a \] with coefficients \(k_a\) drawn from some semiring.
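The normal form is easy to compute: since passive events between two active events commute and are idempotent, only the *set* of passives in each run matters. A sketch (representation and the name `normalise` are mine):

```python
# Normal forms in an A/P-monoid: alternating lists
# [P0, a0, P1, a1, ..., Pn] where each Pi is a frozenset of passive
# events and each ai is active.

def normalise(seq, passive):
    out, cur = [], set()
    for e in seq:
        if e in passive:
            cur.add(e)               # passives commute and are idempotent
        else:
            out.append(frozenset(cur))
            out.append(e)
            cur = set()
    out.append(frozenset(cur))
    return out

passive = {'p', 'q'}
# p·q·p·A·q and q·p·A·q are identified by the monoid relations:
assert normalise(['p', 'q', 'p', 'A', 'q'], passive) == \
       normalise(['q', 'p', 'A', 'q'], passive)
```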
Reddy’s description of the model is based on coherence spaces:
Observations
WARNING!
There is now a distinction between incoherence and multiplying to zero.
The coherence relation describes when events can appear in parallel while the monoid multiplication puts things together in sequence.
For the passive events, the two coincide.
This needs some study!
In our reverse direction, passivity reflection would mean: given \(b \in B_p\), the support of \(f(b)\) contains only elements made from \(A_p\).
Algebra homomorphisms are passivity reflecting.
For \(b\in B_p\) we have \(b\cdot b = b\). Hence \(f(b) = f(b\cdot b) = f(b)\cdot f(b)\).
Let \(s\) be a minimal sequence in the support of \(f(b)\) that contains an element of \(A_a\). The equation above means that \(s = s' \cdot s''\) for sequences \(s', s''\) in the support of \(f(b)\), at least one of which contains an element of \(A_a\); this is impossible by minimality.
Nice! A desirable semantic property emerges from the homomorphism property in our algebraic view.
We can single out those objects \((\emptyset, A_p)\) where the set of active elements is empty, and immediately notice:
The full subcategory of objects with no active elements is the category of coherence spaces.
Then passivity-reflection may be expressed as:
The inclusion functor from coherence spaces to the opposite of our category of algebras has a left adjoint which maps an object \((A_a,A_p)\) to \((\emptyset, A_p)\).