Previously: Identity Types.

Let me first explain why the naive categorical model of dependent types doesn’t work very well for identity types. The problem is that, in such a model, any arrow can be considered a fibration, and therefore interpreted as a dependent type. In our definition of path induction, with an arrow d that makes the outer square commute, there are no additional constraints on \rho:

So we are free to substitute \text{refl} \colon A \to \text{Id}_A for \rho, and use the identity for d:

We are guaranteed that there is a unique J that makes the two triangles commute. But these commutation conditions tell us that J is the inverse of \text{refl} and, consequently, \text{Id}_A is isomorphic to A. It means that there are no paths in \text{Id}_A other than those given by reflexivity.
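
Explicitly, the two triangle conditions in this degenerate case read:

J \circ \text{refl} = \text{id}_A, \qquad \text{refl} \circ J = \text{id}_{\text{Id}_A}

so J is a two-sided inverse of \text{refl}, and therefore \text{Id}_A \cong A.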

This is not satisfactory from the point of view of homotopy type theory, since the existence of non-trivial paths is necessary to make sense of things like higher inductive types (HITs) or univalence. If we want to model dependent types as fibrations, we cannot allow \text{refl} to be a fibration. Thus we need a more restrictive model in which not all mappings are fibrations.

The solution is to apply the lessons from topology, in particular from homotopy theory. Recall that we distinguish three classes of maps between topological spaces: fibrations, cofibrations, and homotopy equivalences. These classes usually overlap.

A fibration that is also an equivalence is called a trivial fibration or an acyclic fibration. Similarly, we have trivial (acyclic) cofibrations. We’ve also seen that, roughly speaking, cofibrations generalize injections. So, if we want a non-trivial model of identity types, it makes sense to model \text{refl} as a cofibration. We want different values a \colon A to be mapped to different “paths” \text{refl}(a) \colon \text{Id}_A, and we’d also like to have some non-trivial paths left over to play with.

We know what a path is in a topological space A: it’s a continuous mapping from the unit interval, \gamma \colon I \to A. In a cartesian category with weak equivalences, we can use an abstract definition of the object representing the space of all paths in A. A path space object P_A is an object that factorizes the diagonal morphism as \Delta = \pi \circ s, such that s is a weak equivalence.

In a category that supports a factorization system, we can tighten this definition by requiring that \pi be a fibration, defining a good path object; and that s be a cofibration (and therefore a trivial cofibration), defining a very good path object.

The intuition behind this diagram is that \pi produces the start and the end of a path, and s sends a point to a constant path.
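
Concretely, for the topological path space P_A = A^I, these maps are (a sketch):

s(a) = (t \mapsto a), \qquad \pi(\gamma) = (\gamma\,0, \gamma\,1)

and indeed \pi(s(a)) = (a, a), so \pi \circ s = \Delta.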

Compare this with the diagram that illustrates the introduction rule for the identity type:

We want \text{refl} to be a trivial cofibration, that is, a cofibration that is also a weak homotopy equivalence. Weak equivalences let us shrink all paths. Imagine a path as a rubber band. We fix one end of the path and let the other end slide all the way back until the path becomes constant. (In the compact-open topology of P_A, this works also for closed loops.)

Thus the existence of very good path objects takes care of the introduction rule for identity types.

The elimination rule is harder to wrap our minds around. How is it possible that any mapping that is defined on trivial paths, which are given by \text{refl} over the diagonal of A \times A, can be uniquely extended to all paths?

We might take a hint from complex analysis, where we can uniquely define the analytic continuation of a real function.

In topology, our continuations are restricted by continuity. When you are lifting a path in A, you might not have much choice as to which value of type D(a) to pick next. In fact, there might be only one value available to you — the “closest one” in the direction you’re going. To choose any other value, you’d have to “jump,” which would break continuity.

You can visualize building J(a, b, p) by growing it from its initial value J(a, a, \text{refl}(a)) = d(a). You gradually extend it above the path p until you reach the point directly above b. (In fact, this is the rough idea behind cubical type theory.)

A more abstract way of putting it is that identity types are path spaces in some factorization system, and dependent types over them are fibrations that satisfy the lifting property. Any path from a to b in \text{Id}_A has a unique lifting J in D that starts at d(a).

This is exactly the property illustrated by the lifting diagram, in which paths are elements of the identity type \text{Id}_A:

This time \text{refl} is not a fibration.

The general framework in which to build models of homotopy type theory is called a Quillen model category; it was originally used to model homotopies in topological spaces. It can be described as a weak factorization system in which any morphism can be written as a trivial cofibration followed by a fibration. It must satisfy the lifting property for any square in which i is a cofibration, p is a fibration, and one of them is a weak equivalence:

The full definition of a model category has more moving parts, but this is the gist of it.

We model dependent types as fibrations in a model category. We construct path objects as factorizations of the diagonal, and this lets us define non-trivial identity types.

One of the consequences of the fact that there could be multiple identity paths between two points is that we can, in turn, compare these paths for equality. There is, for instance, an identity type of identity types \text{Id}_{\text{Id}_A(a, b)}. Such iterated types correspond to higher homotopies: paths between paths, and so on, ad infinitum. The structure that arises is called a weak infinity groupoid. It’s a category in which every morphism has an inverse (just follow the same path backwards), and the category laws (identity and associativity) are satisfied up to higher morphisms (higher homotopies).


Previously: Models of (Dependent) Type Theory.

There is a deep connection between mathematics and programming. Computer programs deal with such mathematical objects as numbers, vectors, monoids, functors, algebras, and many more. We can implement such structures in most programming languages. For instance, here’s the definition of natural numbers in Haskell:

data Nat where
  Z :: Nat           -- zero
  S :: Nat -> Nat    -- successor

There is a problem, though, with encoding the laws that they are supposed to obey. These laws are expressed using equalities. But not all equalities are created equal. Take, for instance, this definition of addition:

plus :: Nat -> Nat -> Nat
plus Z n = n
plus (S n) m = S (plus n m)

The first clause tells us that 0 + n is equal to n. This is called definitional equality, since it’s just part of the definition. However, if we reverse the order of the addends and try to prove n + 0 = n, there is no obvious shortcut. We have to prove it the hard way! This second type of equality is called propositional.
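
To make the distinction concrete, here’s a sketch in Haskell (reusing the Nat above, promoted with DataKinds; the names Plus, SNat, plusZeroLeft, and plusZeroRight are purely illustrative). The left identity 0 + n = n holds by computation alone, whereas the right identity n + 0 = n has to be proved by induction, as a term inhabiting an equality type:

{-# LANGUAGE DataKinds, GADTs, TypeFamilies, TypeOperators #-}
import Data.Type.Equality   -- for (:~:) and Refl, the equality type

-- Type-level addition, mirroring the value-level plus
type family Plus (n :: Nat) (m :: Nat) :: Nat where
  Plus 'Z     m = m
  Plus ('S n) m = 'S (Plus n m)

-- Singletons let us do induction on a type-level Nat at runtime
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- Definitional: the type checker reduces Plus 'Z n to n all by itself
plusZeroLeft :: SNat n -> Plus 'Z n :~: n
plusZeroLeft _ = Refl

-- Propositional: we must prove it by induction on n
plusZeroRight :: SNat n -> Plus n 'Z :~: n
plusZeroRight SZ     = Refl
plusZeroRight (SS n) = case plusZeroRight n of Refl -> Refl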

In traditional mathematics equality is treated as a relation. Two things are either equal or not. In type theory, on the other hand, equality is a type. It’s a type of proofs of equality. When you postulate that two things are equal, you specify the type of proofs that you are willing to accept.

Equality is a dependent type, because it depends on the values that are being compared. For a pair of values of the type A, the identity (or equality) type is denoted by \text{Id}_A:

\frac{a, b \colon A}{\text{Id}_A (a, b) \colon \text{Type}}

(In this notation, assumptions are listed above the horizontal line.)

You’ll also see \text{Id}_A(a, b) written as a =_A b, or even a = b, if the type A is obvious from the context.

When two values are not equal, this type is uninhabited. Otherwise it’s inhabited by proofs, or witnesses, of equality. This is consistent with the propositions as types interpretation of type theory.

For the identity type to be useful, we have to have a way to populate it with values — proofs of equality. We do that by providing an introduction rule:

\frac{a \colon A}{\text{refl}(a) \colon \text{Id}_A(a, a)}

The constructor \text{refl} generates a witness to the obvious fact that, for every a, a = a. Here, \text{refl} is the abbreviation of reflexivity.

At first sight it doesn’t seem like we have gained anything by turning equality into a type. Indeed, if \text{refl} were the only inhabitant of \text{Id}_A, we’d just buy ourselves a lot of headaches for nothing. The trouble is that, if there is a type \text{Id}_A(a, b), then there’s also a type \text{Id}_{\text{Id}_A(a,b)} (the type of equality of proofs of equality), and so on, ad infinitum. This used to be a bane of Martin-Löf type theory, but it became a bounty for Homotopy Type Theory. So let’s imagine that the identity type may have non-trivial inhabitants. We’ll call these inhabitants paths. The trivial paths generated by \text{refl} are degenerate do-nothing loops.

But first, let’s consider the identity elimination rule: the mapping out of the identity type. What data do we need to provide in order to define a function from a path p \colon \text{Id}_A(x, y) to some type D? In full generality, D should be a dependent type family parameterized by the path p as well as its endpoints x and y:

x, y \colon A, \; p \colon \text{Id}_A (x, y) \vdash D(x, y, p) \colon \text{Type}

Our goal is to define a D-valued mapping J that works for any path in A:

x, y \colon A, \; p \colon \text{Id}_A(x, y) \vdash J(x, y, p) \colon D(x, y, p)

The trick is to realize that there is only one family of constructors of the identity type, \text{refl}(a), so it’s enough to specify how J is supposed to act on it. We do this by providing a dependent function d (a dependent function is a function whose return type depends on the value of its argument):

a \colon A \vdash d(a) \colon D(a, a, \text{refl}(a))

This is very similar to constructing a mapping out of a Boolean by specifying its action on the two constructors, \text{True} and \text{False}. Except that here we have a whole family of constructors \text{refl}(a) parameterized by a.

This is enough information to uniquely determine the action of J on any path, provided it agrees with d along \text{refl}:

J(a, a, \text{refl}(a)) = d(a)

This procedure for defining mappings out of identity types is called path induction.
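
In Haskell, pattern matching on Refl plays the role of a (very restricted) path induction: we only say what the function does on the refl constructor, and that determines it everywhere. For example, symmetry and transitivity of equality come out as one-liners (a sketch using (:~:) from Data.Type.Equality):

{-# LANGUAGE GADTs, TypeOperators #-}
import Data.Type.Equality   -- (:~:) and Refl

-- Symmetry: the motive D(x, y, p) is "y :~: x"; on refl we return Refl
sym :: a :~: b -> b :~: a
sym Refl = Refl

-- Transitivity: matching the first proof identifies a with b
trans :: a :~: b -> b :~: c -> a :~: c
trans Refl q = q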

Categorical Model

We’ll build a categorical interpretation of identity types making as few assumptions as possible. We’ll model \text{Id}_A as a fibration:

\text{Id}_A \xrightarrow{\pi} A \times A

The introduction rule asserts the existence of the arrow:

A \xrightarrow{\text{refl}} \text{Id}_A

For any given a \colon A, this arrow selects a diagonal element of the identity type, \text{Id}_A(a, a). When projected down to A \times A using \pi, it lands on the diagonal of A \times A. In a cartesian category, the diagonal is defined by the arrow \Delta \colon A \to A \times A, so we have:

\Delta = \pi \circ \text{refl}

In other words, there is a factorization of the diagonal arrow.

The elimination rule starts with defining a dependent type D, which can be modeled as a fibration \rho:

D \xrightarrow{\rho} \text{Id}_A \xrightarrow{\pi} A \times A

We provide a dependent function d, which is interpreted as an arrow d \colon A \to D that maps a to an element d(a) of D(a, a, \text{refl}(a)). So, when projected back to \text{Id}_A using \rho, it produces the same element as \text{refl}(a):

\rho \circ d = \text{refl}

Given these assumptions, there exists a mapping:

x, y \colon A, p \colon \text{Id}_A(x, y) \vdash J(x, y, p) \colon D(x, y, p)

which is interpreted as a section \text{Id}_A \to D. The section condition is:

\rho \circ J = id_{\text{Id}_A}

This mapping is unique if we assume that it agrees with d when restricted to \text{refl}:

J \circ \text{refl} = d

Consequently, we can illustrate the path induction rule using a single commuting diagram:
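
Schematically (a sketch, with J as the diagonal arrow):

\begin{array}{ccc} A & \xrightarrow{\;d\;} & D \\ \text{refl} \big\downarrow & \overset{J}{\nearrow} & \big\downarrow \rho \\ \text{Id}_A & \xrightarrow{\;\text{id}\;} & \text{Id}_A \end{array}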

If the outer square commutes then there is a unique diagonal arrow that makes the two triangles commute. The similarity to the lifting property of homotopy theory is not accidental, as we’ll see next.

Next: Modeling Identity Types.


Previously: (Weak) Factorization Systems.

It’s been known since Lambek that typed lambda calculus can be modeled in a cartesian closed category, CCC. Cartesian means that you can form products, and closed means that you can form function types. Loosely speaking, types are modeled as objects and functions as arrows.

More precisely, objects correspond to contexts, which are cartesian products of types, with the terminal object representing the empty context. Arrows correspond to terms in those contexts.

Mathematicians and computer scientists use special syntax to formalize the type theory underlying lambda calculus. For instance, a typing judgment:

\vdash A \colon \text{Type}

states that A is a type. The turnstile symbol is used to separate contexts from judgments. Here, the context is empty (nothing on the left of the turnstile). Categorically, this means that A is an object.

A more general context \Gamma is a product of objects A_1 \times A_2 \times ... . The judgment:

\Gamma \vdash A \colon \text{Type}

tells us that the type A may depend on the types A_1, A_2, \dots, in which case A is a type constructor. The canonical example of a type constructor is a list.

A term t of type A in some context \Gamma is written as:

\Gamma \vdash t \colon A

We interpret it as an arrow from the cartesian product of types that form \Gamma to the object A:

\Gamma \xrightarrow{t} A

Things get more interesting when we allow types to not only depend on other types, but also on values. This is the real power of dependent types à la Martin-Löf.

For instance, the judgment:

a \colon A \vdash B(a) \colon \text{Type}

defines a single-argument type family. For every argument a of type A it produces a new type B(a). The canonical example of a dependent type is a counted vector, which encodes length (a value of type \mathbb N) in its type.

Categorically, we model a type family as a fibration given by an arrow \pi \colon B \to A. The type B(a) is a fiber \pi^{-1} a.

In the case of counted vectors, the projection gives the length of the vector.
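
In Haskell, the closest approximation of a counted vector is a length-indexed GADT (a sketch; Vec and the promoted Nat below are illustrative names, not taken from any particular library):

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A counted vector: the length n is part of the type
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a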

In general, a type family may depend on multiple arguments, as in:

a_1 \colon A_1, a_2 \colon A_2, ... , a_n \colon A_n \vdash B(a_1, a_2, ..., a_n) \colon \text{Type}

Here, each type in the context may depend on the values defined by its predecessor. For instance, A_2 may depend on a_1, A_3 may depend on a_1 and a_2, and so on.

Categorically, we model such a judgment as a tower of fibrations:

B \xrightarrow{\pi_n} A_n ... \xrightarrow{\pi_2} A_2 \xrightarrow{\pi_1} A_1

Notice that the context is no longer a cartesian product of types but rather an ordered series of fibrations.

A term in a dependently typed system has to satisfy an additional condition: it has to map into the correct fiber. For instance, in the judgment:

a \colon A \vdash t(a) \colon B(a)

not only does t depend on a, but so does its type B(a). If we model t as an arrow A \to B, we have to make sure that t(a) is projected back to a. Categorically, we call such maps sections:

\pi \circ t = id_A

In other words, a section is an arrow that is a right inverse of the projection.

Thus a well-formed vector-valued term produces, for every n \colon \mathbb N, a vector of length n.
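
Continuing the Haskell sketch from above, such a term is a function that, for every length n (passed as a runtime witness SNat n), lands in the fiber Vec n a:

-- Singleton naturals: runtime witnesses of type-level lengths
data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

-- A section of the length projection: the result has exactly length n
replicateV :: SNat n -> a -> Vec n a
replicateV SZ     _ = VNil
replicateV (SS n) a = VCons a (replicateV n a)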

In general, a term:

a_1 \colon A_1, a_2 \colon A_2, ... , a_n \colon A_n \vdash t \colon B

is interpreted as a section A_n \xrightarrow{t} B of the projection \pi_n.

At this point one usually introduces the definitions of dependent sums and products, which requires the modeling category to be locally cartesian closed. We’ll instead concentrate on the identity types, which are closely related to homotopy theory.

Next: Identity Types.


Previously: Fibrations and Cofibrations.

In topology, we say that two shapes are the same if there is a homeomorphism between them: a continuous map with a continuous inverse. Continuity means that nothing is broken and nothing is glued together. This is how we can turn a coffee cup into a torus. A homeomorphism, however, won’t let us shrink a (solid) torus to a circle. So if we are only interested in how many holes the shapes have, we have to relax our notion of equivalence.

Let’s go back to the definition of homeomorphism. It is defined as a pair of continuous functions between two topological spaces, f \colon X \to Y and g \colon Y \to X, such that their compositions are identity maps, f \circ g = id_Y and g \circ f = id_X, respectively.

If we want to allow for extreme shrinkage, as with a (solid) torus shrinking down to a circle, we have to relax the identity conditions.

A homotopy equivalence is a pair of continuous functions, but their compositions don’t have to be equal to identities. It’s enough that they are homotopic to identities. In other words, it’s possible to create an animation that transforms one to another.

Take the example of a line \mathbb R and a point. The point is a trivial topological space where only the whole space (here, the singleton \{*\}) and the empty set are open. \mathbb R and \{*\} are obviously not homeomorphic, but they don’t have any holes, so they should be homotopy equivalent.

Indeed, let’s construct the equivalence as a pair of constant functions: f \colon x \mapsto * and g \colon * \mapsto 0 (the origin of \mathbb R). Both are continuous: The pre-image f^{-1} \{ * \} is the whole real line, which is open. The pre-image g^{-1} of any open set in \mathbb R is \{*\}, which is also open.

The composition f \circ g \colon * \mapsto * is equal to the identity on \{*\}, so it’s automatically homotopic to it. The interesting part is the composition g \circ f \colon x \mapsto 0, which is emphatically not equal to the identity on \mathbb R. We can, however, construct a homotopy between the two, a function that interpolates between them:

h \colon  \mathbb R \times I \to \mathbb R

h (x, 0) = id_{\mathbb R} x = x

h(x, 1) = (g \circ f) x = 0

(I is the unit interval [0, 1].) Such a function exists:

h (x, t) = (1 - t) \,x

When a space is homotopy equivalent to a point, we say that it’s contractible. Thus \mathbb R is contractible. Similarly, n-dimensional spaces, \mathbb R^n, as well as n-dimensional balls are all contractible. A circle, however, is not contractible, because it has a hole.

Another way of looking for holes in spaces is by trying to shrink loops. If there is a hole inside a loop, it cannot be continuously shrunk to a point. Imagine a loop going around a circle. There is no way you can “unwind” it without breaking something. In fact, you can have loops that wind n times around a circle. They can’t be homotopied into each other if their winding numbers are different.

In general, paths in a topological space X, i.e. continuous mappings \gamma \colon I \to X, naturally split into equivalence classes with respect to homotopy. Two paths, \gamma and \gamma', sharing the same endpoints are in the same class if there is a homotopy between them:

h \colon I \times I \to X

h (0, t) = \gamma t

h (1, t) = \gamma' t

Moreover, two paths can be composed, as long as the endpoint of one coincides with the start point of the other (after performing appropriate reparametrizations).
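
Concretely, one common choice of reparametrization traverses the first path at double speed, then the second:

(\gamma' \cdot \gamma)(t) = \begin{cases} \gamma(2t) & t \leq \tfrac{1}{2} \\ \gamma'(2t - 1) & t > \tfrac{1}{2} \end{cases}

This is continuous precisely because \gamma(1) = \gamma'(0).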

There is a unit of such composition, a constant path; and every path has its inverse, a path that traces the original but goes in the opposite direction.

It’s easy to see that path composition induces a groupoid structure on the set of equivalence classes of paths. A groupoid is a category in which every morphism (here, path) is invertible. This particular groupoid is called the fundamental groupoid of the topological space X.

If we pick a base point x_0 in X, the paths that start and end at this point form closed loops. These loops then form a fundamental group \pi_1(X, x_0).

Notice that, as long as there is a path \gamma from x_0 to x_1, the fundamental groups at both points are isomorphic. This is because every loop l at x_1 induces a loop at x_0 given by the concatenation l' = \gamma^{-1} \circ l \circ \gamma.

So there is essentially a single fundamental group for the whole space, as long as X is path-connected, i.e., it doesn’t split into multiple disconnected chunks.

Going back to our example of a circle, its fundamental group is the group of integers \mathbb Z with addition. All loops that wind n-times around the circle in one direction correspond to the integer n.

Negative integers correspond to loops winding in the opposite direction. If you follow an n-loop with an m-loop, you get an (m+n)-loop. Zero corresponds to loops that can be shrunk to a point.

To tie these two notions of hole-counting together, it can be shown that two spaces that are homotopy equivalent also have the same fundamental groups. This makes sense, since equivalent spaces have the same holes.

Not only that, they also have the same higher homotopy groups (as long as both are path-connected).

We define the n-th homotopy group by replacing simple loops (which are homotopic to a circle, or a 1-dimensional sphere) with n-dimensional spheres. Attempts at shrinking those may detect higher-dimensional holes.

For instance, imagine an Earth-like ball with its inner core scooped out. Any 1-loop inside its bulk can be shrunk to a point (it will glide off the core). But a 2-sphere that envelops the core cannot be shrunk.

In fact such a 2-sphere can be wrapped around the core an arbitrary number of times. In math notation, we say:

\pi_2 (B^3 \setminus \{ 0 \}) = \mathbb Z

Corresponding to these higher homotopy groups there is also a higher homotopy groupoid, in which there are invertible paths, surfaces between paths, volumes between surfaces, etc. Taken together, these form an infinity groupoid. It is exactly the inspiration behind the infinity groupoid in homotopy type theory, HoTT.

Spaces that are homotopy equivalent have the same homotopy groups, but the converse is not true. There are spaces that are not homotopy equivalent even though they have the same homotopy groups. This is why a weaker version of equivalence was introduced.

A map between topological spaces is called a weak homotopy equivalence if it induces isomorphisms between all homotopy groups.

There is a subtle difference between strong and weak equivalences. Strong equivalence can be broken by a local anomaly, like in the following example: Take X = \{0, 1, 2, ... \}, the set of natural numbers in which every number is considered an open set. Take Y = \{ 0 \} \cup  \{ 1, \frac{1}{2}, \frac{1}{3}, ... \} with the topology inherited from the real line.

The singleton set \{ 0 \} in Y is the obstacle to constructing a homeomorphism or a homotopy equivalence between X and Y. That’s because it is not open in Y (there is no open set in \mathbb R whose intersection with Y contains 0 and nothing else). However, it’s impossible to have a non-trivial loop containing it, so X and Y are weakly equivalent.

Grothendieck conjectured that the infinity groupoid captures all information about a topological space up to weak homotopy equivalence.

Weak equivalences, together with fibrations and cofibrations, form the foundation of weak factorization systems and Quillen model categories.
Next: (Weak) Factorization Systems.


Previously: Subfunctor Classifier.

We are used to thinking of a mapping as either being invertible or not. It’s a yes or no question. A mapping between sets is invertible if it’s both injective and surjective. It means that it never merges two elements into one, and it covers the whole target set.

But if you are willing to look closer, the failures of invertibility are a whole fascinating area of research. Things get a lot more interesting if you consider mapping between topological spaces, which have to skillfully navigate around various holes and tears in the target space, and shrink or glue together parts of the source space. I will be glossing over some of the topological considerations, concentrating on the big-picture intuitions (both culinary and cinematographic).

Fibrations

In what follows, we’ll be considering a function p \colon E \to B. We’ll call p a projection from the total set E to the base set B.

Let’s start by considering the first reason for a failure of invertibility: multiple elements being mapped into one. Even though we can’t invert such a mapping, we can use it to fibrate the source E.

To each element y \in B we’ll assign a fiber, the set of elements of E that are mapped to y. By abuse of notation, we call such a fiber p^{-1} y:

p^{-1} y = \{ x |\; p x = y \}

For some y‘s, this set may be empty \emptyset; for others, it may contain lots of elements.

Notice that p is an isomorphism if and only if every fiber is a singleton set. This property gives rise to a very useful definition of equivalence in homotopy type theory, where we ask for every fiber to be contractible.

A set-theoretic union of all fibers reconstructs the total set E.

Things get more interesting when we move to topological spaces and continuous functions. To begin with, we can define a path in B as a continuous mapping from the unit interval I = [0, 1] to B.

\gamma \colon I \to B

We can then ask if it’s possible to lift this path to E, that is to construct \tilde{\gamma} \colon I \to E that lies above \gamma. To do this, we pick the starting point e \in E that lies above the starting point of \gamma, that is p \,e = \gamma \,0.

This setup can be summarized in the commuting square:

The lifting \tilde{\gamma} is then the diagonal arrow that makes both triangles commute:

It’s helpful to imagine a path as a trajectory of a point moving through a topological space. The parameter t \in I is then interpreted as time.

In homotopy theory we generalize this idea to the movement, or deformation, of arbitrary shapes, not just points.

If we describe such a shape as the image of some topological space X, its deformation is a mapping h \colon X \times I \to B. Such potentially “fat” paths are called homotopies. In particular, if we replace X by a single point, we recover our “thin” paths.

A homotopy lifting property is expressed as the existence of the diagonal function \tilde h in this commuting square, such that the resulting triangles commute:

In other words, given a homotopy h of the shape X in the base, and an embedding e of this shape in E above h \, 0, we can construct a homotopy in E whose projection is h.
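
In symbols: given h \colon X \times I \to B and an embedding e \colon X \to E with p \circ e = h(-, 0), the lift is a map

\tilde h \colon X \times I \to E, \qquad p \circ \tilde h = h, \qquad \tilde h(x, 0) = e \, x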

Cofibrations

In category theory, every construction has its dual, which is obtained by reversing all the arrows. In topology, there is an analogous relation called the Eckmann-Hilton duality. Besides reversing the arrows, it also dualizes products to exponentials (using the currying adjunction), and injections to surjections.

When dualizing the homotopy lifting diagram, we replace the trivial injection of X \times \{0\} \cong X into X \times I by a surjection \varepsilon_0 of the exponential X^I onto X. Here, X^I is the set of functions I \to X, or the path space of X. The surjection \varepsilon_0 maps every path \gamma \colon I \to X to its starting point by evaluating it at zero. (The evaluation map \varepsilon_t \colon \gamma \mapsto \gamma \,t is continuous in the compact-open topology of X^I.)

The homotopy lifting property is thus dualized to the homotopy extension property:

Corresponding to the fibration that deals with the failure of the injectivity of p, the homotopy extension property deals with the failure of the surjectivity of i.

If the mapping i has the extension property for all topological spaces X, it is called a cofibration.

Intuitively, a cofibration is an embedding of B in E such that any “spaghettification” of B, that is embedding it in the path space X^I, can be extended to a spaghettification of E. X plays the role of an ambient space where these operations can be visualized. Later we’ll see how to construct a minimal such ambient space called a mapping cylinder.

Let’s deconstruct the meaning of the outer commuting square.

  • i embeds B into E.
  • e further embeds E into X, and we end up with the embedding of B inside X given by e \circ i.
  • h embeds B into the path space of X (the dotted paths below).

The commuting condition for this square means that the starting points of all these paths, the result of applying \varepsilon_0 to each of them, coincide with the embedding of B into X given by e \circ i. It’s like extruding a bunch of spaghetti, each strand corresponding to a point in B.

With this setup, we postulate the existence of the diagonal mapping \tilde h \colon E \to X^I that makes the two triangles commute. In other words, E is mapped to a family of paths in X which, when restricted to the image of B, coincides with the original mapping h.

The spaghetti extruded through the E shape contain the spaghetti extruded through the B shape.

Another intuitive description of this situation uses the idea of homotopy as animation. The strands of spaghetti become trajectories of particles.

We start by setting up the initial scene. We embed E in the big space X using e \colon E \to X.

We have the embedding i \colon B \to E, which induces the embedding of B into X:

b = e \circ i

Then we animate the embedding of B using the homotopy

h \colon B \times I \to X

The initial frame of this animation is given by b:

h(x, 0) = b \, x

We say that i is a cofibration if every such animation can be extended to the bigger animation:

\tilde h \colon E \times I \to X

whose starting frame is given by e:

\tilde h (x, 0) = e \, x.

The commuting condition (for the lower triangle) means that the two animations coincide for all x \in B and t \in I:

h (x, t) = \tilde h (i \,x, t)

Just like a fiber is an epitome of non-injectiveness, one can define a cofiber as an epitome of non-surjectiveness. It’s essentially the part of E that is not covered by i.

As a topological space it’s the result of shrinking the image of B inside E to a point (the resulting topology is called the quotient topology).

Notice that, unlike with fibers, there is just one cofiber for a given cofibration (up to homotopy equivalence).

Lifting Property

A category theorist looking at the two diagrams that define, respectively, homotopy lifting and extension, will immediately ignore all the superfluous details. She will turn the second diagram upside down and merge it with the first diagram to get:

This is how we read the new diagram: if, for any morphisms f and g that make the outer square commute, there exists a diagonal morphism h that makes the two triangles commute, then we say that i has the left lifting property with respect to p. Equivalently, p has the right lifting property with respect to i.

Or we can say that the two morphisms are orthogonal to each other.

The motivation for this nomenclature is interesting. In the category of sets, the archetypal non-surjective function is the “absurd” 0 \to 1 (or, in set notation, \emptyset \to \{*\}). It turns out that all surjective functions are its right liftings. In other words, all surjections are right-orthogonal to the simplest non-surjective function.

Indeed, the function at the bottom y \colon 1 \to Y picks an element y \in Y. Similarly, the diagonal function picks an element x \in X. The commuting triangle tells us that for every y there exists an x such that y = p \, x.

Similarly, we can show that all injective functions are orthogonal to the archetypal non-injective function 2 \to 1 (or \{x_1, x_2\} \to \{*\}).

Indeed, assume that i maps two different elements a_1, a_2 \in A to a single element b \in B. We could then pick f such that f \, a_1 = x_1 and f \, a_2 = x_2. The diagonal h can map b to either x_1 or x_2 but not both, so we couldn’t make the upper triangle commute.

Incidentally, injections are also right-orthogonal to the archetypal non-injection.

Next: (Weak) Homotopy Equivalences.


Previously: Subobject Classifier.

In category theory, objects are devoid of internal structure. We’ve seen however that in certain categories we can define relationships between objects that mimic the set-theoretic idea of one set being the subset of another. We do this using the subobject classifier.

We would like to define a subobject classifier in the category of presheaves, so we could easily characterize subfunctors of a given presheaf. This will help us work with sieves, which are subfunctors of the hom-functor C(-, a); and coverages, which are special kinds of sieves.

Recall that a presheaf S is a subfunctor of another presheaf P \colon C^{op} \to Set if it satisfies two conditions.

  • For every object a, we have a set inclusion: S a \subseteq P a,
  • For every morphism f \colon c \to a, the function S f \colon S a \to S c is a restriction of the function P f \colon P a \to P c. In other words, P f and S f must agree on the subset S a.

As category theory goes, this is a very low-level definition. We need something more abstract: We need to construct a subobject classifier in the category of presheaves. Recall that a subobject classifier is defined by the following pullback diagram:

This time, however, the objects are presheaves and the arrows are natural transformations.

To begin with we have to define a terminal presheaf, 1 \colon C^{op} \to Set that satisfies the condition that, for any presheaf P, there is a unique natural transformation ! \colon P \to 1. This will work if every component !_a \colon P a \to 1 a of this natural transformation is unique, which is true if we choose 1 a to be the terminal singleton set \{ * \}. Thus the terminal presheaf maps all objects to the terminal set, and all morphisms to the identity on \{ * \}.

Next, let’s instantiate the subobject classifier diagram at a particular object a.

Here, the component true_a picks a special “True” element in the set \Omega_a. If the presheaf S is a subfunctor of P, the set S a is a subset of P a. The function \chi_a must therefore map the whole subset S a to “True”. This is consistent with our definition of the subobject classifier for sets.

The second condition in the definition of a subfunctor is more interesting. It involves the mapping of morphisms.

The restriction condition

We have to consider all morphisms converging on our object of interest a. For instance, let’s take f \colon c \to a. The presheaf P lifts it to a function P f \colon P a \to P c. If S is a subfunctor of P, S f is a restriction of P f.

Specifically, the restriction condition tells us that, if we pick an element x \in S a, then both P f and S f will map it to the same element of S c. In fact, when defining a subobject, we only care that the embedding of S c in P c is injective (monomorphic). It’s okay if it permutes the elements of S c. So it’s enough that, for all x \in S a, the following condition is satisfied:

(P f) x \in S c

Now consider an arbitrary x \in P a (not necessarily an element of S a). We can gather all arrows f converging on a for which the subset-mapping condition is satisfied:

(P f) x \in S c

If S is a subfunctor of P, these arrows form a sieve on a, as any composition f \circ g also satisfies the subset-mapping condition:

Moreover, if x is in fact an element of S a, this sieve is the maximal sieve. The maximal sieve on a is the collection of all arrows converging on a.

We can now define a function \chi_a that assigns to each x \in P a the sieve of arrows that satisfy the subset-mapping condition.

\chi_a x = \{f \colon c \to a \, |  \, (P f) x \in S c\}

The function \chi_a has the property that, if x is an element of S a, the result is the maximal sieve on a.

It makes sense then to define \Omega_a as a set of sieves on a, and “True” as the maximal sieve on a. (Thus \Omega_a is a set whose elements are sets.)

The mapping \Omega \colon a \mapsto \Omega_a can be made into a presheaf by defining its action on morphisms. The lifting of f \colon c \to a takes a sieve s_a \in \Omega_a to a sieve s'_c \in \Omega_c, defined as the set of arrows h \colon c' \to c such that f \circ h \in s_a.
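
Explicitly:

(\Omega f) \, s_a = \{ h \colon c' \to c \, | \, f \circ h \in s_a \}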

Notice that the resulting sieve s'_c is maximal if and only if f \in s_a. (Hint: If a sieve is maximal, then it contains the identity.)

It can be shown that the functions \chi_a combine to form a natural transformation \chi \colon P \to \Omega.

What remains to be shown is that this \chi is a unique such natural transformation that completes the pullback:

To show that, let’s assume that there is another natural transformation \theta \colon P \to \Omega making this diagram into a pullback. Let’s redraw the subfunctor condition for arrows, replacing \chi with \theta:

Let’s pick an x \in P a and call y = (P f) x. We’ll follow a set of equivalences.

The pullback condition:

tells us that y \in S c is equivalent to \theta_c y = true_c. In other words:

\theta_c ((P f) x) = true_c

Using naturality of \theta:

we can rewrite it as:

(\Omega f) (\theta_a x) = true_c.

Both sides of this equation are sieves. By definition, the lifting of f, \Omega f, acting on \theta_a x is a sieve defined by the following set of arrows:

(\Omega f) (\theta_a x) = \{ h \colon c' \to c \, | \, f \circ h \in \theta_a x \}

Since true_c is a maximal sieve, it must be that f \in \theta_a x.

We have shown that the condition (P f) x \in S c is equivalent to f \in \theta_a x. But the first condition is exactly the one we used to define \chi_a x. Therefore \chi is the only natural transformation that makes the subobject classifier diagram into a pullback.

Subfunctor classifier

The subobject classifier in the category of presheaves is thus a presheaf \Omega that maps objects to sieves, together with the natural transformation true \colon 1 \to \Omega that picks maximal sieves.

Every natural transformation \chi \colon P \to \Omega defines a subfunctor of the presheaf P. The components of this natural transformation serve as characteristic functions for the sets P a. A given element x is in the subset S a iff \chi_a maps it to the maximal sieve on a.

But there’s not one but many different ways of failing the subset test. They are given by non-maximal sieves. We may think of them as satisfying the Anna Karenina principle, “All happy families are alike; each unhappy family is unhappy in its own way.”

Why sieves? Because once an element of a set P a is mapped by P f to an element of a subset S c, it will continue to be mapped into consecutive subsets S c', etc. The network of “happy” morphisms keeps growing outward. By contrast, the “unhappy” elements of x \in P a have at least one morphism f \colon c \to a, whose lifting maps it outside the subset S c. That’s the morphism that’s absent from the non-maximal sieve \chi_a. Finally, naturality of \chi ensures that subset conditions propagate coherently from object to object.

Next: Fibrations and Cofibrations.


Previously: Sieves and Sheaves.

We have seen how topology can be defined by working with sets of continuous functions over coverages. Categorically speaking, a coverage is a special case of a sieve, which is defined as a subfunctor of the hom-functor C(-, a).

We’d like to characterize the relationship between a functor and its subfunctor by looking at them as objects in the category of presheaves. For that we need to introduce the idea of a subobject.

We’ll start by defining subobjects in the category of sets in a way that avoids talking about elements. Here we have two options.

The first one uses a characteristic function. It’s a predicate that answers the question: Is some element x a member of a given subset or not? Notice that any Boolean-valued function uniquely defines a subset of its domain, so we don’t really need to talk about elements, just a function.

But we still have to define a Boolean set. Let’s call this set \Omega, and designate one of its elements as “True.” Selecting “True” can be done by defining a function true \colon 1 \to \Omega, where 1 is the terminal object (here, a singleton set). In principle we should insist that \Omega contains exactly two elements, “True” and “False,” but that would make it more difficult to generalize.

The second way to define a subset S \subseteq P is to provide an injective function m \colon S \rightarrowtail P that embeds S in P. Injectivity guarantees that no two elements are mapped to the same element. The image of m then defines the subset of P. In a general category, injective functions are replaced by monics (monomorphisms).

Notice that there can be many injections that define the same subset. It’s okay for them to permute the image of m as long as it covers exactly the same subset of P. (These injections form an equivalence class.)
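
As a toy Haskell sketch (with Bool standing in for \Omega, and purely illustrative names), the even numbers inside the integers can be carved out either way:

-- Characteristic function: Bool plays the role of Omega, True picks the subset
chiEven :: Integer -> Bool
chiEven = even

-- Injection: an injective function whose image is the same subset
injectEven :: Integer -> Integer
injectEven k = 2 * k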

The fact that the two definitions coincide can be summarized by one commuting diagram. In the category of sets, given a characteristic function \chi, the subset S and the monic m are uniquely (up to isomorphism) defined as a pullback of this diagram.

We can now turn the tables and use this diagram to define the object \Omega called the subobject classifier, together with the monic true \colon 1 \rightarrowtail \Omega. We do it by means of a universal construction. We postulate that: for every monic S \rightarrowtail P between two arbitrary objects there exists a unique arrow \chi \colon P \to \Omega such that the above diagram constitutes a pullback.

This is a slightly unusual definition. Normally we think of a pullback as defining the northwest part of the diagram given its southeast part. Here, we are solving a sudoku puzzle, trying to fill the southeast part to uniquely complete a pullback diagram.

Let’s see how this works for sets. To construct a pullback (a.k.a., a fibered product P \times_{\Omega} 1) we first create a set of pairs (x, *) where x \in P and * \in 1 (the only element of the singleton set). But not all x‘s are acceptable, because we have a pullback condition, which says that \chi x = \text{True}, where \text{True} is the element of \Omega pointed to by true. This tells us that S is isomorphic to the subset of P for which \chi is \text{True}.

The question is: What happens to the other elements of P? They cannot be mapped to \text{True}, so \Omega must contain at least one more element (in case m is not an isomorphism). Can it contain more?

This is where the universal construction comes into play. Any monic m (here, an injective function) must uniquely determine a \chi that completes the pullback. In particular, we can pick S to be a singleton set and P to be a two-element set. We see that if \Omega contained only \text{True} and nothing else, no \chi would complete the pullback. And if \Omega contained more than two elements, there would be not one but at least two such \chi‘s. So, by the Goldilocks principle, \Omega must have exactly two elements.

We’ll see later that this is not necessarily true in a more general category.

Next: Subfunctor Classifier.


The yearly Advent of Code is always a source of interesting coding challenges. You can often solve them the easy way, or spend days trying to solve them “the right way.” I personally prefer the latter. This year I decided to do some yak shaving with a puzzle that involved looking for patterns in a grid. The pattern was the string XMAS, and it could start at any location and go in any direction whatsoever.

My immediate impulse was to elevate the grid to a comonad. The idea is that a comonad describes a data structure in which every location is a center of some neighborhood, and it lets you apply an algorithm to all neighborhoods in one fell swoop. Common examples of comonads are infinite streams and infinite grids.

Why would anyone use an infinite grid to solve a problem on a finite grid? Imagine you’re walking through a neighborhood. At every step you may hit the boundary of a grid. So a function that retrieves the current state is allowed to fail. You may implement it as returning a Maybe value. So why not pre-fill the infinite grid with Maybe values, padding it with Nothing outside of bounds? This might sound crazy, but in a lazy language it makes perfect sense to trade code for data.

I won’t bore you with the details, they are available at my GitHub repository. Instead, I will discuss a similar program, one that I worked out some time ago, but wasn’t satisfied with the solution: the famous Conway’s Game of Life. This one actually uses an infinite grid, and I did implement it previously using a comonad. But this time I was more ambitious: I wanted to generate this two-dimensional comonad by composing a pair of one-dimensional ones.

The idea is simple. Each row of the grid is an infinite bidirectional stream. Since it has a specific “current position,” we’ll call it a cursor. Such a cursor can be easily made into a comonad. You can extract the current value; and you can duplicate a cursor by creating a cursor of cursors, each shifted by the appropriate offset (increasing in one direction, decreasing in the other).

A two-dimensional grid can then be implemented as a cursor of cursors–the inner one extending horizontally, and the outer one vertically.

It should be a piece of cake to define a comonad instance for it: extract should be a composition of (extract . extract) and duplicate a composition of (duplicate . fmap duplicate), right? It typechecks, so it must be right. But, just in case, like every good Haskell programmer, I decided to check the comonad laws. There are three of them:

extract . duplicate = id
fmap extract . duplicate = id
duplicate . duplicate = fmap duplicate . duplicate

And they failed! I must have done something illegal, but what?
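
For the record, the naive instance looked roughly like this (a sketch, using Compose from Data.Functor.Compose):

{-# LANGUAGE FlexibleInstances #-}
import Control.Comonad
import Data.Functor.Compose

-- Typechecks, but fails the comonad laws
instance Comonad w => Comonad (Compose w w) where
  extract = extract . extract . getCompose
  duplicate = fmap Compose . Compose
            . duplicate . fmap duplicate
            . getCompose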

In cases like this, it’s best to turn to basics–which means category theory. Compared to Haskell, category theory is much less verbose. A comonad is a functor W equipped with two natural transformations:

\varepsilon \colon W \to \text{Id}

\delta \colon W \to W \circ W

In Haskell, we write the components of these transformations as:

extract :: w a -> a
duplicate :: w a -> w (w a)

The comonad laws are illustrated by the following commuting diagrams. Here are the two counit laws:

and one associativity law:

These are the same laws we’ve seen above, but the categorical notation makes them look more symmetric.

So the problem is: Given a comonad W, is the composition W \circ W also a comonad? Can we implement the two natural transformations for it?

\varepsilon_c \colon W \circ W \to \text{Id}

\delta_c \colon W \circ W \to W \circ W \circ W \circ W

The straightforward implementation would be:

W \circ W \xrightarrow{\varepsilon \circ W} W \xrightarrow{\varepsilon} \text{Id}

corresponding to (extract . extract), and:

W \circ W \xrightarrow{W \circ \delta} W \circ W \circ W \xrightarrow{\delta \circ W \circ W} W \circ W \circ W \circ W

corresponding to (duplicate . fmap duplicate).

To see why this doesn’t work, let’s ask a more general question: When is a composition of two comonads, say W_2 \circ W_1, again a comonad? We can easily define a counit:

W_2 \circ W_1 \xrightarrow{\varepsilon_2 \circ W_1} W_1 \xrightarrow{\varepsilon_1} \text{Id}

The comultiplication, though, is tricky:

W_2 \circ W_1 \xrightarrow{W_2 \circ \delta_1} W_2 \circ W_1 \circ W_1 \xrightarrow{\delta_2 \circ W_1 \circ W_1} W_2 \circ W_2 \circ W_1 \circ W_1

Do you see the problem? The result is W_2^2 \circ W_1^2 but it should be (W_2 \circ W_1)^2. To make it a comonad, we have to be able to push W_2 through W_1 in the middle. We need W_2 to distribute over W_1 through a natural transformation:

\lambda \colon W_2 \circ W_1 \to W_1 \circ W_2

But isn’t that only relevant when we compose two different comonads? Surely any functor distributes over itself! And there’s the rub: not every comonad distributes over itself lawfully, because the distributive law must preserve the comonad laws. In particular, to restore the counit law we need this diagram to commute:

and for the comultiplication law, we require:

Even if the two comonads are the same, the counit condition is still non-trivial:

The two whiskerings of \varepsilon are in general not equal. All we can get from the original comonad laws is that they become equal when applied to the result of comultiplication:

(\varepsilon \circ W) \cdot \delta = (W \circ \varepsilon) \cdot \delta.

Equipped with the distributive mapping \lambda we can complete our definition of comultiplication for a composition of two comonads:

W_2 \circ W_1 \xrightarrow{W_2 \circ \delta_1} W_2 \circ W_1^2 \xrightarrow{\delta_2 \circ W_1^2} W_2^2 \circ W_1^2 \xrightarrow{W_2 \circ \lambda \circ W_1} (W_2 \circ W_1)^2

Going back to our Haskell code, we need to impose the distributivity condition on our comonad. There is a type class for it defined in Data.Distributive:

class Functor w => Distributive w where
  distribute :: Functor f => f (w a) -> w (f a)

Thus the general formula for composing two comonads is:

instance (Comonad w2, Comonad w1, Distributive w1) =>
         Comonad (Compose w2 w1) where
  extract = extract . extract . getCompose
  duplicate = fmap Compose . Compose
            . fmap distribute
            . duplicate . fmap duplicate
            . getCompose

In particular, it works for composing a comonad with itself, as long as the comonad distributes over itself.

Equipped with these new tools, let’s go back to implementing a two-dimensional infinite grid. We start with an infinite stream:

data Stream a = (:>) { headS :: a
                     , tailS :: Stream a}
  deriving Functor

infixr 5 :>

What does it mean for a stream to be distributive? It means that we can transpose a “matrix” whose rows are streams. The functor f is used to organize these rows. It could, for instance, be a list functor, in which case you’d have a list of (infinite) streams.

  [   1 :>   2 :>   3 .. 
  ,  10 :>  20 :>  30 ..
  , 100 :> 200 :> 300 .. 
  ]

Transposing a list of streams means creating a stream of lists. The first row is a list of heads of all the streams, the second row is a list of second elements of all the streams, and so on.

  [1, 10, 100] :>
  [2, 20, 200] :>
  [3, 30, 300] :>
  ..

Because streams are infinite, we end up with an infinite stream of lists. For a general functor, we use a recursive formula:

instance Distributive Stream where
    distribute :: Functor f => f (Stream a) -> Stream (f a)
    distribute stms = (headS <$> stms) :> distribute (tailS <$> stms)

(Notice that, if we wanted to transpose a list of lists, this procedure would fail. Interestingly, the list functor is not distributive. We really need either fixed size or infinity in the picture.)

We can build a cursor from two streams, one going backward to infinity, and one going forward to infinity. The head of the forward stream will serve as our “current position.”

data Cursor a = Cur { bwStm :: Stream a
                    , fwStm :: Stream a }
  deriving Functor

Because streams are distributive, so are cursors. We just flip them about the diagonal:

instance Distributive Cursor where
    distribute :: Functor f => f (Cursor a) -> Cursor (f a)
    distribute fCur = Cur (distribute (bwStm <$> fCur))
                          (distribute (fwStm <$> fCur))

A cursor is also a comonad:

instance Comonad Cursor where
  extract (Cur _ (a :> _)) = a
  duplicate bi = Cur (iterateS moveBwd (moveBwd bi)) 
                     (iterateS moveFwd bi)

duplicate creates a cursor of cursors that are progressively shifted backward and forward. The forward shift is implemented as:

moveFwd :: Cursor a -> Cursor a
moveFwd (Cur bw (a :> as)) = Cur (a :> bw) as

and similarly for the backward shift.
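
For completeness, here is one way the helpers used above could be implemented (a sketch; the actual code is in the repository):

-- Infinite unfolding of a shift, analogous to iterate for lists
iterateS :: (a -> a) -> a -> Stream a
iterateS f a = a :> iterateS f (f a)

-- The backward shift, symmetric to moveFwd
moveBwd :: Cursor a -> Cursor a
moveBwd (Cur (a :> as) fw) = Cur as (a :> fw)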

Finally, the grid is defined as a cursor of cursors:

type Grid a = Compose Cursor Cursor a

And because Cursor is a distributive comonad, Grid is automatically a lawful comonad. We can now use the comonadic extend to advance the state of the whole grid:

generations :: Grid Cell -> [Grid Cell]
generations = iterate $ extend nextGen

using a local function:

nextGen :: Grid Cell -> Cell
nextGen grid
  | cnt == 3 = Full
  | cnt == 2 = extract grid
  | otherwise = Empty
  where
      cnt = countNeighbors grid
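
The Cell type and the neighbor count are not shown above; a possible sketch (the real implementation is on GitHub) counts the live cells among the eight neighbors by shifting the outer cursor vertically and the inner cursors horizontally:

data Cell = Empty | Full deriving Eq

countNeighbors :: Grid Cell -> Int
countNeighbors (Compose g) = livingAround - livingHere
  where
    livingHere   = fromEnum (extract (extract g) == Full)
    livingAround = length
      [ ()
      | v <- [moveBwd, id, moveFwd]   -- vertical shifts of the outer cursor
      , h <- [moveBwd, id, moveFwd]   -- horizontal shifts within each row
      , extract (extract (v (fmap h g))) == Full
      ]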

You can find the full implementation of the Game of Life and the solution of the Advent of Code puzzle, both using comonad composition, on my GitHub.


Previously: Covering Sieves.

We’ve seen an intuitive description of presheaves as virtual objects. We can use the same trick to visualize natural transformations.

A natural transformation can be drawn as a virtual arrow \alpha between two virtual objects corresponding to two presheaves S and P. Indeed, for every s_a \in S a, seen as an arrow a \to S, we get an arrow a \to P simply by composition \alpha \circ s_a. Notice that we are thus defining the composition with \alpha, because we are outside of the original category. A component \alpha_a of a natural transformation is a mapping between two arrows.


This composition must be associative and, indeed, associativity is guaranteed by the naturality condition. For any arrow f \colon a \to b, consider a zigzag path from a to P given by \alpha \circ s_b \circ f. The two ways of associating this composition give us \alpha_a \circ S f = P f \circ \alpha_b.


Let’s now recap our previous definitions: A cover of u is a bunch of arrows converging on u satisfying certain conditions. These conditions are defined in terms of a coverage. For every object u we define a whole family of covers, and then combine them into one big collection that we call the coverage.

A sheaf is a presheaf that is compatible with a coverage. It means that for every cover \{u_i\} , if we pick a compatible family of x_i \in P u_i that agrees on all overlaps, then this uniquely determines the element (virtual arrow) x \in P u.


A covering sieve of u is a presheaf that extends a cover \{u_i\} . It assigns a singleton set to each u_i and all its open subsets (that is objects that have arrows pointing to u_i); and an empty set otherwise. In particular, the sieve includes all the overlaps, like u_i \cap u_j, even if they are not present in the original cover.

The key observation here is that a sieve can serve as a blueprint for, or a skeleton of, a compatible family \{ x_i \}. Indeed, S_u maps all objects either to singletons or to empty sets. In terms of virtual arrows, there is at most one arrow going to S_u from any object. This is why a natural transformation from S_u to any presheaf P produces a family of arrows x_i \in P u_i. It picks a single arrow from each of the hom-sets u_i \to P.


The sieve includes all intersections, and all diagrams involving those intersections necessarily commute. They commute because the category we’re working with is thin, and so is the category extended by adding the virtual object S_u. Thus a family generated by a natural transformation \alpha \in Nat (S_u, P) is automatically a compatible family. Therefore, if P is a sheaf, it determines a unique element x \in P u.

This lets us define a sheaf in terms of sieves, rather than coverages.

A presheaf P is a sheaf if and only if, for every covering sieve S_u of every u, there is a one-to-one correspondence between the set of natural transformations Nat (S_u, P) and the set P u.

In terms of virtual arrows, this means that there is a one-to-one correspondence between arrows \alpha \colon S_u \to P and x \colon u \to P.

Next: Subobject Classifier.


Previously: Sheaves as Virtual Objects.

In order to define a sheaf, we have to start with a coverage. A coverage defines, for every object u, a family of covers that satisfy the sub-coverage conditions. Granted, we can express a coverage using objects and arrows, but it would be much nicer if we could use the language of functors and natural transformations.

Let’s start with the idea that, categorically, a cover of u is a bunch of arrows converging on u. Each arrow p_i \colon u_i \to u is a member of the hom-set \mathcal C (u_i, u). Now consider the fact that \mathcal C (-, u) is a presheaf, \mathcal C^{op} \to \mathbf{Set}, and ask the question: Is a cover a “subfunctor” of \mathcal C (-, u)?

A subfunctor of a presheaf P is defined as a functor S such that, for each object v, S v is a subset of P v and, for each arrow f \colon v \to w, the function S f \colon S w \to S v is a restriction of P f.


In general, a cover does not correspond to a subfunctor of the hom-functor. Let’s see why, and how we can fix it.

Let’s try to define S, such that S u_i is non-empty for any object u_i that’s in the cover of u, and empty otherwise. As a presheaf, we could represent it as a virtual object with arrows coming from all \{ u_i \}‘s.


Now consider an object v that is not in the cover, but it has an arrow f \colon v \to u_k connecting it to some element u_k of the cover. Functoriality requires the (virtual) composition s_k \circ f to exist.

Thus v must be included in the cover if we want S to be a functor.

In particular, if we are looking at a category of open sets with inclusions, this condition means that all (open) sub-sets of the covering sets must also be included in the cover. Such a “downward closed” family of sets is called a sieve.

Imagine sets in the cover as holes in a sieve. Smaller sets that can “pass through” these holes must also be parts of the sieve.

If you start with a cover, you can always extend it to a covering sieve by adding more arrows. It’s as if you started with a few black holes, and everything that could fall into them, would fall.

We have previously defined sheaves in terms of coverings. In the next installment we’ll see that they can equally well be defined using covering sieves.

Next: Sieves and Sheaves.