


There is a lot of folklore about various data types that pop up in discussions about lenses. For instance, it’s known that FunList and Bazaar are equivalent, although I haven’t seen a proof of that. Since both data structures appear in the context of Traversable, which is of great interest to me, I decided to do some research. In particular, I was interested in translating these data structures into constructs in category theory. This is a continuation of my previous blog posts on free monoids and free applicatives. Here’s what I have found out:

  • FunList is a free applicative generated by the Store functor. This can be shown by expressing the free applicative construction using Day convolution.
  • Using the Yoneda lemma in the category of applicative functors, I can show that Bazaar is equivalent to FunList.

Let’s start with some definitions. FunList was first introduced by Twan van Laarhoven in his blog. Here’s a (slightly generalized) Haskell definition:

data FunList a b t = Done t 
                   | More a (FunList a b (b -> t))

It’s a non-regular inductive data structure, in the sense that its data constructor is recursively called with a different type, here the function type b -> t. FunList is a functor in t, which can be written categorically as:

L_{a b} t = t + a \times L_{a b} (b \to t)

where b \to t is a shorthand for the hom-set Set(b, t).
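
For concreteness, here’s the Functor instance witnessing functoriality in t (a standard definition, not spelled out in the original):

instance Functor (FunList a b) where
  fmap f (Done t)   = Done (f t)
  fmap f (More a l) = More a (fmap (f .) l)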

Strictly speaking, a recursive data structure is defined as an initial algebra for a higher-order functor. I will show that the higher order functor in question can be written as:

A_{a b} g = I + \sigma_{a b} \star g

where \sigma_{a b} is the (indexed) store comonad, which can be written as:

\sigma_{a b} s = \Delta_a s \times C(b, s)

Here, \Delta_a is the constant functor, and C(b, -) is the hom-functor. In Haskell, this is equivalent to:

newtype Store a b s = Store (a, b -> s)

The standard (non-indexed) Store comonad is obtained by identifying a with b and it describes the objects of the slice category C/s (morphisms are functions f : a \to a' that make the obvious triangles commute).
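
As a quick sketch, here are the comonadic operations of the non-indexed version, written for the Store type defined above:

extract :: Store s s x -> x
extract (Store (s, f)) = f s

duplicate :: Store s s x -> Store s s (Store s s x)
duplicate (Store (s, f)) = Store (s, \s' -> Store (s', f))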

If you’ve read my previous blog posts, you may recognize in A_{a b} the functor that generates a free applicative functor (or, equivalently, a free monoidal functor). Its fixed point can be written as:

L_{a b} = I + \sigma_{a b} \star L_{a b}

The star stands for Day convolution–in Haskell expressed as an existential data type:

data Day f g s where
  Day :: f a -> g b -> ((a, b) -> s) -> Day f g s

Intuitively, L_{a b} is a “list of” Store functors concatenated using Day convolution. An empty list is the identity functor, a one-element list is the Store functor, a two-element list is the Day convolution of two Store functors, and so on…

In Haskell, we would express it as:

data FunList a b t = Done t 
                   | More ((Day (Store a b) (FunList a b)) t)

To show the equivalence of the two definitions of FunList, let’s expand the definition of Day convolution inside A_{a b}:

(A_{a b} g) t = t + \int^{c d} (\Delta_a c \times C(b, c)) \times g d \times C(c \times d, t)

The coend \int^{c d} corresponds, in Haskell, to the existential data type we used in the definition of Day.

Since we have the hom-functor C(b, c) under the coend, the first step is to use the co-Yoneda lemma to “perform the integration” over c, which replaces c with b everywhere. We get:

t + \int^d \Delta_a b \times g d \times C(b \times d, t)

We can then evaluate the constant functor and use the currying adjunction:

C(b \times d, t) \cong C(d, b \to t)

to get:

t + \int^d a \times g d \times C(d, b \to t)

Applying the co-Yoneda lemma again, we replace d with b \to t:

t + a \times g (b \to t)

This is exactly the functor that generates FunList. So FunList is indeed the free applicative generated by Store.

All transformations in this derivation were natural isomorphisms.
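
These isomorphisms can also be witnessed directly in Haskell. Here’s a sketch (the helper names fromDay and toDay are mine) relating the Day-based constructor to the original More, using the Functor instance given earlier:

fromDay :: Day (Store a b) (FunList a b) t -> FunList a b t
fromDay (Day (Store (a, bc)) l cdt) =
  More a (fmap (\d b -> cdt (bc b, d)) l)

toDay :: a -> FunList a b (b -> t) -> Day (Store a b) (FunList a b) t
toDay a l = Day (Store (a, id)) l (\(b, bt) -> bt b)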

Now let’s switch our attention to Bazaar, which can be defined as:

type Bazaar a b t = forall f. Applicative f => (a -> f b) -> f t

(The actual definition of Bazaar in the lens library is even more general–it’s parameterized by a profunctor in place of the arrow in a -> f b.)
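
Before the categorical argument, here’s a direct Haskell witness of the equivalence, as a sketch. It relies on the standard Applicative instance for FunList:

instance Applicative (FunList a b) where
  pure = Done
  Done f   <*> l = fmap f l
  More a g <*> l = More a (fmap flip g <*> l)

fromFunList :: FunList a b t -> Bazaar a b t
fromFunList (Done t)   _ = pure t
fromFunList (More a l) k = fromFunList l k <*> k a

toFunList :: Bazaar a b t -> FunList a b t
toFunList baz = baz (\a -> More a (Done id))

The categorical proof below explains why these two functions are mutually inverse.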

The universal quantification in the definition of Bazaar immediately suggests the application of my favorite double Yoneda trick in the functor category: The set of natural transformations (morphisms in the functor category) between two functors (objects in the functor category) is isomorphic, through Yoneda embedding, to the following end in the functor category:

Nat(h, g) \cong \int_{f \colon [C, Set]} Set(Nat(g, f), Nat(h, f))

The end is equivalent (modulo parametricity) to Haskell forall. Here, the sets of natural transformations between pairs of functors are just hom-functors in the functor category and the end over f is a set of higher-order natural transformations between them.

In the double Yoneda trick we carefully select the two functors g and h to be either representable, or somehow related to representables.

The universal quantification in Bazaar is limited to applicative functors, so we’ll pick our two functors to be free applicatives. We’ve seen previously that the higher-order functor that generates free applicatives has the form:

F g = Id + g \star F g

Here’s the version of the Yoneda embedding in which f varies over all applicative functors in the category App, and g and h are arbitrary functors in [C, Set]:

App(F h, F g) \cong \int_{f \colon App} Set(App(F g, f), App(F h, f))

The free functor F is the left adjoint to the forgetful functor U:

App(F g, f) \cong [C, Set](g, U f)

Using this adjunction, we arrive at:

[C, Set](h, U (F g)) \cong \int_{f \colon App} Set([C, Set](g, U f), [C, Set](h, U f))

We’re almost there–we just need to carefully pick the functors g and h. In order to arrive at the definition of Bazaar we want:

g = \sigma_{a b} = \Delta_a \times C(b, -)

h = C(t, -)

The right hand side becomes:

\int_{f \colon App} Set\big(\int_c Set (\Delta_a c \times C(b, c), (U f) c), \int_c Set (C(t, c), (U f) c)\big)

where I represented natural transformations as ends. The first term can be curried:

Set \big(\Delta_a c \times C(b, c), (U f) c\big) \cong Set\big(C(b, c), \Delta_a c \to (U f) c \big)

and the end over c can be evaluated using the Yoneda lemma. So can the second term. Altogether, the right hand side becomes:

\int_{f \colon App} Set\big(a \to (U f) b, (U f) t\big)

In Haskell notation, this is just the definition of Bazaar:

forall f. Applicative f => (a -> f b) -> f t

The left hand side can be written as:

\int_c Set(h c, (U (F g)) c)

Since we have chosen h to be the hom-functor C(t, -), we can use the Yoneda lemma to “perform the integration” and arrive at:

(U (F g)) t

With our choice of g = \sigma_{a b}, this is exactly the free applicative generated by Store–in other words, FunList.

This proves the equivalence of Bazaar and FunList. Notice that this proof is only valid for Set-valued functors, although a generalization to the enriched setting is relatively straightforward.

There is another family of functors, Traversable, that uses universal quantification over applicatives:

class (Functor t, Foldable t) => Traversable t where
  traverse :: forall f. Applicative f => (a -> f b) -> t a -> f (t b)

The same double Yoneda trick can be applied to it to show that it’s related to Bazaar. There is, however, a much simpler derivation, suggested to me by Derek Elkins, which starts by changing the order of arguments:

traverse :: t a -> (forall f. Applicative f => (a -> f b) -> f (t b))

which is equivalent to:

traverse :: t a -> Bazaar a b (t b)

In view of the equivalence between Bazaar and FunList, we can also write it as:

traverse :: t a -> FunList a b (t b)

Note that this is somewhat similar to the definition of toList:

toList :: Foldable t => t a -> [a]

In a sense, FunList is able to freely accumulate the effects from a traversable container, so that they can be interpreted later.
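
As a small illustration (a sketch, using the Applicative instance for FunList given earlier; collect is a hypothetical helper):

collect :: Traversable t => t a -> FunList a b (t b)
collect = traverse (\a -> More a (Done id))

The result records each element next to a hole of type b; interpreting it with fromFunList replays the accumulated effects in any applicative.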

Acknowledgments

I’m grateful to Edward Kmett and Derek Elkins for many discussions and valuable insights.


Abstract

The use of free monads, free applicatives, and cofree comonads lets us separate the construction of (often effectful or context-dependent) computations from their interpretation. In this paper I show how the ad hoc process of writing interpreters for these free constructions can be systematized using the language of higher order algebras (coalgebras) and catamorphisms (anamorphisms).

Introduction

Recursion schemes [meijer] are an example of a successful application of concepts from category theory to programming. The idea is that recursive data structures can be defined as initial algebras of functors. This allows a separation of concerns: the functor describes the local shape of the data structure, and the fixed point combinator builds the recursion. Operations over data structures can be likewise separated into shallow, non-recursive computations described by algebras, and generic recursive procedures described by catamorphisms. In this way, data structures often replace control structures in driving computations.

Since functors also form a category, it’s possible to define functors acting on functors. Such higher order functors show up in a number of free constructions, notably free monads, free applicatives, and cofree comonads. These free constructions have good composability properties and they provide means of separating the creation of effectful computations from their interpretation.

This paper’s contribution is to systematize the construction of such interpreters. The idea is that free constructions arise as fixed points of higher order functors, and therefore can be approached with the same algebraic machinery as recursive data structures, only at a higher level. In particular, interpreters can be constructed as catamorphisms or anamorphisms of higher order algebras/coalgebras.

Initial Algebras and Catamorphisms

The canonical example of a data structure that can be described as an initial algebra of a functor is a list. In Haskell, a list can be defined recursively:

data List a = Nil | Cons a (List a)

There is an underlying non-recursive functor:

data ListF a x = NilF | ConsF a x
instance Functor (ListF a) where
  fmap f NilF = NilF
  fmap f (ConsF a x) = ConsF a (f x)

Once we have a functor, we can define its algebras. An algebra consists of a carrier c and a structure map (evaluator). An algebra can be defined for an arbitrary functor f:

type Algebra f c = f c -> c

Here’s an example of a simple list algebra, with Int as its carrier:

sum :: Algebra (ListF Int) Int
sum NilF = 0
sum (ConsF a c) = a + c

Algebras for a given functor form a category. The initial object in this category (if it exists) is called the initial algebra. In Haskell, we call the carrier of the initial algebra Fix f. Its structure map is a function:

f (Fix f) -> Fix f

By Lambek’s lemma, the structure map of the initial algebra is an isomorphism. In Haskell, this isomorphism is given by a pair of functions: the constructor In and the destructor out of the fixed point combinator:

newtype Fix f = In { out :: f (Fix f) }

When applied to the list functor, the fixed point gives rise to an alternative definition of a list:

type List a = Fix (ListF a)

The initiality of the algebra means that there is a unique algebra morphism from it to any other algebra. This morphism is called a catamorphism and, in Haskell, can be expressed as:

cata :: Functor f => Algebra f a -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

A list catamorphism is known as a fold. Since the list functor is a sum type, its algebra consists of a value—the result of applying the algebra to NilF—and a function of two variables that corresponds to the ConsF constructor. You may recognize those two as the arguments to foldr:

foldr :: (a -> c -> c) -> c -> [a] -> c
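
To see the pieces fit together, here’s a small usage sketch of the sum algebra defined above (which shadows the Prelude’s sum):

xs :: Fix (ListF Int)
xs = In (ConsF 1 (In (ConsF 2 (In NilF))))

total :: Int
total = cata sum xs  -- 3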

The list functor is interesting because its fixed point is a free monoid. In category theory, monoids are special objects in monoidal categories—that is, categories that define a product of two objects. In Haskell, a pair type plays the role of such a product, with the unit type as its unit (up to isomorphism).

As you can see, the list functor is the sum of a unit and a product. This formula can be generalized to an arbitrary monoidal category with a tensor product \otimes and a unit 1:

L\, a\, x = 1 + a \otimes x

Its initial algebra is a free monoid.

Higher Algebras

In category theory, once you’ve performed a construction in one category, it’s easy to perform it in another category that shares similar properties. In Haskell, this might require reimplementing the construction.

We are interested in the category of endofunctors, where objects are endofunctors and morphisms are natural transformations. Natural transformations are represented in Haskell as polymorphic functions:

type f :~> g = forall a. f a -> g a
infixr 0 :~>

In the category of endofunctors we can define (higher order) functors, which map functors to functors and natural transformations to natural transformations:

class HFunctor hf where
  hfmap :: (g :~> h) -> (hf g :~> hf h)
  ffmap :: Functor g => (a -> b) -> hf g a -> hf g b

The first function, hfmap, lifts a natural transformation; the second, ffmap, witnesses the fact that the result of a higher order functor is again a functor.

An algebra for a higher order functor hf consists of a functor f (the carrier object in the functor category) and a natural transformation (the structure map):

type HAlgebra hf f = hf f :~> f

As with regular functors, we can define an initial algebra using the fixed point combinator for higher order functors:

newtype FixH hf a = InH { outH :: hf (FixH hf) a }

Similarly, we can define a higher order catamorphism:

hcata :: HFunctor h => HAlgebra h f -> FixH h :~> f
hcata halg = halg . hfmap (hcata halg) . outH

The question is, are there any interesting examples of higher order functors and algebras that could be used to solve real-life programming problems?

Free Monad

We’ve seen the usefulness of lists, or free monoids, for structuring computations. Let’s see if we can generalize this concept to higher order functors.

The definition of a list relies on the cartesian structure of the underlying category. It turns out that there are multiple monoidal structures of interest that can be defined in the category of functors. The simplest one defines a product of two endofunctors as their composition. Any two endofunctors can be composed. The unit of functor composition is the identity functor.

If you picture endofunctors as containers, you can easily imagine a tree of lists, or a list of Maybes.

A monoid based on this particular monoidal structure in the endofunctor category is a monad. It’s an endofunctor m equipped with two natural transformations representing unit and multiplication:

class Monad m where
  eta :: Identity    :~> m
  mu  :: Compose m m :~> m

In Haskell, the components of these natural transformations are known as return and join.
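
For reference, Identity and Compose are the standard wrappers from Data.Functor.Identity and Data.Functor.Compose:

newtype Identity a    = Identity { runIdentity :: a }
newtype Compose f g a = Compose { getCompose :: f (g a) }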

A straightforward generalization of the list functor to the functor category can be written as:

L\, f\, g = 1 + f \circ g

or, in Haskell,

type FunctorList f g = Identity :+: Compose f g

where we used the operator :+: to define the coproduct of two functors:

data (f :+: g) e = Inl (f e) | Inr (g e)
infixr 7 :+:

Using more conventional notation, FunctorList can be written as:

data MonadF f g a = 
    DoneM a 
  | MoreM (f (g a))

We’ll use it to generate a free monoid in the category of endofunctors. First of all, let’s show that it’s indeed a higher order functor in the second argument g:

instance Functor f => HFunctor (MonadF f) where
  hfmap _   (DoneM a)  = DoneM a
  hfmap nat (MoreM fg) = MoreM $ fmap nat fg
  ffmap h (DoneM a)    = DoneM (h a)
  ffmap h (MoreM fg)   = MoreM $ fmap (fmap h) fg

In category theory, because of size issues, this functor doesn’t always have a fixed point. For most common choices of f (e.g., for algebraic data types), the initial higher order algebra for this functor exists, and it generates a free monad. In Haskell, this free monad can be defined as:

type FreeMonad f = FixH (MonadF f)

We can show that FreeMonad is indeed a monad by implementing return and bind:

instance Functor f => Monad (FreeMonad f) where
  return = InH . DoneM
  (InH (DoneM a))    >>= k = k a
  (InH (MoreM ffra)) >>= k = 
        InH (MoreM (fmap (>>= k) ffra))

Free monads have many applications in programming. They can be used to write generic monadic code, which can then be interpreted in different monads. A very useful property of free monads is that they can be composed using coproducts. This follows from a theorem in category theory which states that left adjoints preserve coproducts (or, more generally, colimits). Free constructions are, by definition, left adjoints to forgetful functors. This property of free monads was explored by Swierstra [swierstra] in his solution to the expression problem. I will use an example based on his paper to show how to construct monadic interpreters using higher order catamorphisms.

Free Monad Example

A stack-based calculator can be implemented directly using the state monad. Since this is a very simple example, it will be instructive to re-implement it using the free monad approach.

We start by defining a functor, in which the free parameter k represents the continuation:

data StackF k  = Push Int k
               | Top (Int -> k)
               | Pop k            
               | Add k
               deriving Functor

We use this functor to build a free monad:

type FreeStack = FreeMonad StackF

You may think of the free monad as a tree with nodes that are defined by the functor StackF. The unary constructors, like Add or Pop, create linear list-like branches; but the Top constructor branches out with one child per integer.

The level of indirection we get by separating recursion from the functor makes constructing free monad trees syntactically challenging, so it makes sense to define a helper function:

liftF :: (Functor f) => f r -> FreeMonad f r
liftF fr = InH $ MoreM $ fmap (InH . DoneM) fr

With this function, we can define smart constructors that build leaves of the free monad tree:

push :: Int -> FreeStack ()
push n = liftF (Push n ())

pop :: FreeStack ()
pop = liftF (Pop ())

top :: FreeStack Int
top = liftF (Top id)

add :: FreeStack ()
add = liftF (Add ())

All these preparations finally pay off when we are able to create small programs using do notation:

calc :: FreeStack Int
calc = do
  push 3
  push 4
  add
  x <- top
  pop
  return x

Of course, this program does nothing but build a tree. We need a separate interpreter to do the calculation. We’ll interpret our program in the state monad, with state implemented as a stack (list) of integers:

type MemState = State [Int]

The trick is to define a higher order algebra for the functor that generates the free monad and then use a catamorphism to apply it to the program. Notice that implementing the algebra is a relatively simple procedure because we don’t have to deal with recursion. All we need is to case-analyze the shallow constructors for the free monad functor MonadF, and then case-analyze the shallow constructors for the functor StackF.

runAlg :: HAlgebra (MonadF StackF) MemState
runAlg (DoneM a)  = return a
runAlg (MoreM ex) = 
  case ex of
    Top  ik  -> get >>= ik  . head
    Pop  k   -> get >>= put . tail   >> k
    Push n k -> get >>= put . (n : ) >> k
    Add  k   -> do (a: b: s) <- get
                   put (a + b : s)
                   k

The catamorphism converts the program calc into a state monad action, which can be run over an empty initial stack:

runState (hcata runAlg calc) []  -- evaluates to (7, [])

The real bonus is the freedom to define other interpreters by simply switching the algebras. Here’s an algebra whose carrier is the Const functor:

showAlg :: HAlgebra (MonadF StackF) (Const String)
showAlg (DoneM a) = Const "Done!"
showAlg (MoreM ex) = Const $
  case ex of
    Push n k -> 
      "Push " ++ show n ++ ", " ++ getConst k
    Top ik -> 
      "Top, " ++ getConst (ik 42)
    Pop k -> 
      "Pop, " ++ getConst k
    Add k -> 
      "Add, " ++ getConst k

Running the catamorphism over this algebra will produce a listing of our program:

getConst $ hcata showAlg calc

> "Push 3, Push 4, Add, Top, Pop, Done!"

Free Applicative

There is another monoidal structure that exists in the category of functors. In general, this structure will work for functors from an arbitrary monoidal category C to Set. Here, we’ll restrict ourselves to endofunctors on Set. The product of two functors is given by Day convolution, which can be implemented in Haskell using an existential type:

data Day f g c where
  Day :: f a -> g b -> ((a, b) -> c) -> Day f g c

The intuition is that a Day convolution contains a container of some as, and another container of some bs, together with a function that can convert any pair (a, b) to c.

Day convolution is a higher order functor:

instance HFunctor (Day f) where
  hfmap nat (Day fx gy xyt) = Day fx (nat gy) xyt
  ffmap h   (Day fx gy xyt) = Day fx gy (h . xyt)

In fact, because Day convolution is symmetric up to isomorphism, it is automatically functorial in both arguments.

To complete the monoidal structure, we also need a functor that could serve as a unit with respect to Day convolution. In general, this would be the hom-functor from the monoidal unit:

C(1, -)

In our case, since 1 is the singleton set, this functor reduces to the identity functor.

We can now define monoids in the category of functors with the monoidal structure given by Day convolution. These monoids are equivalent to lax monoidal functors which, in Haskell, form the class:

class Functor f => Monoidal f where
  unit  :: f ()
  (>*<) :: f x -> f y -> f (x, y)

Lax monoidal functors are equivalent to applicative functors [mcbride], as seen in this implementation of pure and <*>:

  pure  :: a -> f a
  pure a = fmap (const a) unit
  (<*>) :: f (a -> b) -> f a -> f b
  fs <*> as = fmap (uncurry ($)) (fs >*< as)
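
The translation works in the other direction as well; as a sketch, unit and >*< can be recovered from pure and <*>:

  unit  = pure ()
  fx >*< fy = (,) <$> fx <*> fy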

We can now use the same general formula, but with Day convolution as the product:

L\, f\, g = 1 + f \star g

to generate a free monoidal (applicative) functor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

This is indeed a higher order functor:

instance HFunctor (FreeF f) where
    hfmap _ (DoneF x)     = DoneF x
    hfmap nat (MoreF day) = MoreF (hfmap nat day)
    ffmap f (DoneF x)     = DoneF (f x)
    ffmap f (MoreF day)   = MoreF (ffmap f day)

and it generates a free applicative functor as its initial algebra:

type FreeA f = FixH (FreeF f)

Free Applicative Example

The following example is taken from the paper by Capriotti and Kaposi [capriotti]. It’s an option parser for a command line tool, whose result is a user record of the following form:

data User = User
  { username :: String 
  , fullname :: String
  , uid      :: Int
  } deriving Show

A parser for an individual option is described by a functor that contains the name of the option, an optional default value for it, and a reader from string:

data Option a = Option
  { optName    :: String
  , optDefault :: Maybe a
  , optReader  :: String -> Maybe a 
  } deriving Functor

Since we don’t want to commit to a particular parser, we’ll create a parsing action using a free applicative functor:

userP :: FreeA Option User
userP  = pure User 
  <*> one (Option "username" (Just "John")  Just)
  <*> one (Option "fullname" (Just "Doe")   Just)
  <*> one (Option "uid"      (Just 0)       readInt)

where readInt is a reader of integers:

readInt :: String -> Maybe Int
readInt s = readMaybe s

and we used the following smart constructors:

one :: f a -> FreeA f a
one fa = InH $ MoreF $ Day fa (done ()) fst

done :: a -> FreeA f a
done a = InH $ DoneF a
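
The example uses pure and <*> at the type FreeA Option; they come from the Monoidal structure of FreeA. Here’s a minimal sketch of one possible implementation, building on the generic definitions above:

instance HFunctor hf => Functor (FixH hf) where
  fmap h (InH x) = InH (ffmap h x)

instance Monoidal (FreeA f) where
  unit = done ()
  InH (DoneF x) >*< fry = fmap (\y -> (x, y)) fry
  InH (MoreF (Day fa frb p)) >*< fry =
    InH (MoreF (Day fa (frb >*< fry)
                    (\(a, (b, y)) -> (p (a, b), y))))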

We are now free to define different algebras to evaluate the free applicative expressions. Here’s one that collects all the defaults:

alg :: HAlgebra (FreeF Option) Maybe
alg (DoneF a) = Just a
alg (MoreF (Day oa mb f)) = 
  fmap f (optDefault oa >*< mb)

I used the monoidal instance for Maybe:

instance Monoidal Maybe where
  unit = Just ()
  Just x >*< Just y = Just (x, y)
  _ >*< _ = Nothing

This algebra can be run over our little program using a catamorphism:

parserDef :: FreeA Option a -> Maybe a
parserDef = hcata alg
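
For example, running it on userP collects the three defaults (assuming the Monoidal instance for FreeA sketched above):

parserDef userP
-- Just (User {username = "John", fullname = "Doe", uid = 0})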

And here’s an algebra that collects the names of all the options:

alg2 :: HAlgebra (FreeF Option) (Const String)
alg2 (DoneF a) = Const "."
alg2 (MoreF (Day oa bs f)) = 
  fmap f (Const (optName oa) >*< bs)

Again, this uses a monoidal instance for Const:

instance Monoid m => Monoidal (Const m) where
  unit = Const mempty
  Const a >*< Const b = Const (a <> b)

We can also define the Monoidal instance for IO:

instance Monoidal IO where
  unit = return ()
  ax >*< ay = do a <- ax
                 b <- ay
                 return (a, b)

This allows us to interpret the parser in the IO monad:

alg3 :: HAlgebra (FreeF Option) IO
alg3 (DoneF a) = return a
alg3 (MoreF (Day oa bs f)) = do
    putStrLn $ optName oa
    s <- getLine
    let ma = optReader oa s
        a = fromMaybe (fromJust (optDefault oa)) ma
    fmap f $ return a >*< bs

Cofree Comonad

Every construction in category theory has its dual—the result of reversing all the arrows. The dual of a product is a coproduct, the dual of an algebra is a coalgebra, and the dual of a monad is a comonad.
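
For completeness, here’s the comonad class dual to the Monad class given earlier (the standard formulation in terms of extract and duplicate):

class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)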

Let’s start by defining a higher order coalgebra consisting of a carrier f, which is a functor, and a natural transformation:

type HCoalgebra hf f = f :~> hf f

An initial algebra is dualized to a terminal coalgebra. In Haskell, both are the results of applying the same fixed point combinator, reflecting the fact that Lambek’s lemma is self-dual. The dual to a catamorphism is an anamorphism. Here is its higher order version:

hana :: HFunctor hf 
     => HCoalgebra hf f -> (f :~> FixH hf)
hana hcoa = InH . hfmap (hana hcoa) . hcoa

The formula we used to generate free monoids:

1 + a \otimes x

dualizes to:

1 \times a \otimes x

and can be used to generate cofree comonoids.

A cofree functor is the right adjoint to the forgetful functor. Just like the left adjoint preserved coproducts, the right adjoint preserves products. One can therefore easily combine comonads using products (if the need arises to solve the coexpression problem).

Just like the monad is a monoid in the category of endofunctors, a comonad is a comonoid in the same category. The functor that generates a cofree comonad has the form:

type ComonadF f g = Identity :*: Compose f g

where the product of functors is defined as:

data (f :*: g) e = Both (f e) (g e)
infixr 6 :*:

Here’s the more familiar form of this functor:

data ComonadF f g e = e :< f (g e)

It is indeed a higher order functor, as witnessed by this instance:

instance Functor f => HFunctor (ComonadF f) where
  hfmap nat (e :< fge) = e :< fmap nat fge
  ffmap h (e :< fge) = h e :< fmap (fmap h) fge

A cofree comonad is the terminal coalgebra for this functor and can be written as a fixed point:

type Cofree f = FixH (ComonadF f)

Indeed, for any functor f, Cofree f is a comonad:

instance Functor f => Comonad (Cofree f) where
  extract (InH (e :< fge)) = e
  duplicate fr@(InH (e :< fge)) = 
                InH (fr :< fmap duplicate fge)

Cofree Comonad Example

The canonical example of a cofree comonad is an infinite stream:

type Stream = Cofree Identity

We can use this stream to sample a function. We’ll encapsulate this function inside the following functor (in fact, itself a comonad):

data Store a x = Store a (a -> x) 
    deriving Functor

We can use a higher order coalgebra to unpack the Store into a stream:

streamCoa :: HCoalgebra (ComonadF Identity) (Store Int)
streamCoa (Store n f) = 
    f n :< (Identity $ Store (n + 1) f)

The actual unpacking is a higher order anamorphism:

stream :: Store Int a -> Stream a
stream = hana streamCoa

We can use it, for instance, to generate a list of squares of natural numbers:

stream (Store 0 (^2))

Since, in Haskell, the same fixed point defines a terminal coalgebra as well as an initial algebra, we are free to construct algebras and catamorphisms for streams. Here’s an algebra that converts a stream to an infinite list:

listAlg :: HAlgebra (ComonadF Identity) []
listAlg (a :< Identity as) = a : as

toList :: Stream a -> [a]
toList = hcata listAlg
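
For instance (a sketch):

take 5 (toList (stream (Store 0 (^2))))
-- [0, 1, 4, 9, 16]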

Future Directions

In this paper I concentrated on one type of higher order functor:

1 + a \otimes x

and its dual. This would be equivalent to studying folds for lists and unfolds for streams. But the structure of the functor category is richer than that. Just like basic data types can be combined into algebraic data types, so can functors. Moreover, besides the usual sums and products, the functor category admits at least two additional monoidal structures generated by functor composition and Day convolution.

Another potentially fruitful area of exploration is the profunctor category, which is also equipped with two monoidal structures, one defined by profunctor composition, and another by Day convolution. A free monoid with respect to profunctor composition is the basis of the Haskell Arrow library [jaskelioff]. Profunctors also play an important role in the Haskell lens library [kmett].

Bibliography

  1. Erik Meijer, Maarten Fokkinga, and Ross Paterson, Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire
  2. Conor McBride, Ross Paterson, Idioms: applicative programming with effects
  3. Paolo Capriotti, Ambrus Kaposi, Free Applicative Functors
  4. Wouter Swierstra, Data types a la carte
  5. Exequiel Rivas and Mauro Jaskelioff, Notions of Computation as Monoids
  6. Edward Kmett, Lenses, Folds and Traversals
  7. Richard Bird and Lambert Meertens, Nested Datatypes
  8. Patricia Johann and Neil Ghani, Initial Algebra Semantics is Enough!

Abstract: I derive free monoidal profunctors as fixed points of a higher order functor acting on profunctors. Monoidal profunctors play an important role in defining traversals.

The beauty of category theory is that it lets us reuse concepts at all levels. In my previous post I derived a free monoidal functor that goes from a monoidal category C to Set. The current post may then be shortened to: Since profunctors are just functors from C^{op} \times C to Set, with the obvious monoidal structure induced by the tensor product in C, we automatically get free monoidal profunctors.

Let me fill in the details.

Profunctors in Haskell

Here’s the definition of a profunctor from Data.Profunctor:

class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

The idea is that, just like a functor acts on objects, a profunctor p acts on pairs of objects \langle a, b \rangle. In other words, it’s a type constructor that takes two types as arguments. And just like a functor acts on morphisms, a profunctor acts on pairs of morphisms. The only tricky part is that the first morphism of the pair is reversed: instead of going from a to s, as one would expect, it goes from s to a. This is why we say that the first argument comes from the opposite category C^{op}, where all morphisms are reversed with respect to C. Thus a morphism from \langle a, b \rangle to \langle s, t \rangle in C^{op} \times C is a pair of morphisms \langle s \to a, b \to t \rangle.

Just like functors form a category, profunctors form a category too. In this category profunctors are objects, and natural transformations are morphisms. A natural transformation between two profunctors p and q is a family of functions which, in Haskell, can be approximated by a polymorphic function:

type p ::~> q = forall a b. p a b -> q a b

If the category C is monoidal (has a tensor product \otimes and a unit object 1), then the category C^{op} \times C has a trivially induced tensor product:

\langle a, b \rangle \otimes \langle c, d \rangle = \langle a \otimes c, b \otimes d \rangle

and unit \langle 1, 1 \rangle

In Haskell, we’ll use cartesian product (pair type) as the underlying tensor product, and () type as the unit.

Notice that the induced product does not have the usual exponential as the right adjoint. Indeed, the hom-set:

(C^{op} \times C) \, ( \langle a, b  \rangle \otimes  \langle c, d  \rangle,  \langle s, t  \rangle )

is a set of pairs of morphisms:

\langle s \to a \otimes c, b \otimes d \to t  \rangle

If the right adjoint existed, it would be a pair of objects \langle X, Y  \rangle, such that the following hom-set would be isomorphic to the previous one:

\langle X \to a, b \to Y  \rangle

While Y could be the internal hom, there is no candidate for X that would produce the isomorphism:

s \to a \otimes c \cong X \to a

(Consider, for instance, unit () for a.) This lack of the right adjoint is the reason why we can’t define an analog of Applicative for profunctors. We can, however, define a monoidal profunctor:

class Monoidal p where
  punit :: p () ()
  (>**<) :: p a b -> p c d -> p (a, c) (b, d)

This profunctor is a map between two monoidal structures. For instance, punit can be seen as mapping the unit in Set to the unit in C^{op} \times C:

punit :: () -> p <1, 1>

Operator >**< maps the product in Set to the induced product in C^{op} \times C:

(>**<) :: (p <a, b>, p <c, d>) -> p (<a, b> × <c, d>)
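
As a simple example (a sketch), the function arrow is a monoidal profunctor:

instance Monoidal (->) where
  punit = id
  f >**< g = \(a, c) -> (f a, g c)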

Day convolution, which works with monoidal structures, generalizes naturally to the profunctor category:

data PDay p q s t = forall a b c d. 
     PDay (p a b) (q c d) ((b, d) -> t) (s -> (a, c))

Higher Order Functors

Since profunctors form a category, we can define endofunctors in that category. This is a no-brainer in category theory, but it requires some new definitions in Haskell. Here’s a higher-order functor that maps a profunctor to another profunctor:

class HPFunctor pp where
  hpmap :: (p ::~> q) -> (pp p ::~> pp q)
  ddimap :: (s -> a) -> (b -> t) -> pp p a b -> pp p s t

The function hpmap lifts a natural transformation, and ddimap shows that the result of the mapping is also a profunctor.

An endofunctor in the profunctor category may have a fixed point:

newtype FixH pp a b = InH { outH :: pp (FixH pp) a b }

which is also a profunctor:

instance HPFunctor pp => Profunctor (FixH pp) where
    dimap f g (InH pp) = InH (ddimap f g pp)

Finally, our Day convolution is a higher-order endofunctor in the category of profunctors:

instance HPFunctor (PDay p) where
  hpmap nat (PDay p q from to) = PDay p (nat q) from to
  ddimap f g (PDay p q from to) = PDay p q (g . from) (to . f)

We’ll use this fact to construct a free monoidal profunctor next.

Free Monoidal Profunctor

In the previous post, I defined the free monoidal functor as a fixed point of the following endofunctor:

data FreeF f g t =
      DoneF t
    | MoreF (Day f g t)

Replacing the functors f and g with profunctors is straightforward:

data FreeP p q s t = 
      DoneP (s -> ()) (() -> t) 
    | MoreP (PDay p q s t)

The only tricky part is realizing that the first term in the sum comes from the unit of Day convolution, which is the type () -> t, and it generalizes to an appropriate pair of functions (we’ll simplify this definition later).

FreeP is a higher order endofunctor acting on profunctors:

instance HPFunctor (FreeP p) where
    hpmap _ (DoneP su ut) = DoneP su ut
    hpmap nat (MoreP day) = MoreP (hpmap nat day)
    ddimap f g (DoneP au ub) = DoneP (au . f) (g . ub)
    ddimap f g (MoreP day) = MoreP (ddimap f g day)

We can, therefore, define its fixed point:

type FreeMon p = FixH (FreeP p)

and show that it is indeed a monoidal profunctor. As before, the trick is to first show the following property of Day convolution:

cons :: Monoidal q => PDay p q a b -> q c d -> PDay p q (a, c) (b, d)
cons (PDay pxy quv yva bxu) qcd = 
      PDay pxy (quv >**< qcd) (bimap yva id . reassoc) 
                              (assoc . bimap bxu id)

where

assoc ((a,b),c) = (a,(b,c))
reassoc (a, (b, c)) = ((a, b), c)

Using this function, we can show that FreeMon p is monoidal for any p:

instance Profunctor p => Monoidal (FreeMon p) where
  punit = InH (DoneP id id)
  (InH (DoneP au ub)) >**< frcd = dimap snd (\d -> (ub (), d)) frcd
  (InH (MoreP dayab)) >**< frcd = InH (MoreP (cons dayab frcd))

FreeMon can also be rewritten as a recursive data type:

data FreeMon p s t where
     DoneFM :: t -> FreeMon p s t
     MoreFM :: p a b -> FreeMon p c d -> 
                        (b -> d -> t) -> 
                        (s -> (a, c)) -> FreeMon p s t
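
In this form, the Profunctor instance can be written directly. Here’s a sketch:

instance Profunctor (FreeMon p) where
  dimap f g (DoneFM t) = DoneFM (g t)
  dimap f g (MoreFM pab fmcd bdt sac) =
    MoreFM pab fmcd (\b d -> g (bdt b d)) (sac . f)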

Categorical Picture

As I mentioned before, from the categorical point of view there isn’t much to talk about. We define a functor in the category of profunctors:

A_p q = (C^{op} \times C) (1, -) + \int^{ a b c d } p a b \times q c d \times (C^{op} \times C) (\langle a, b \rangle \otimes \langle c, d \rangle, -)

As previously shown in the general case, its initial algebra defines a free monoidal profunctor.

Acknowledgments

I’m grateful to Eugenia Cheng not only for talking to me about monoidal profunctors, but also for getting me interested in category theory in the first place through her Catsters video series. Thanks also go to Edward Kmett for numerous discussions on this topic.


The Free Theorem for Ends

In Haskell, the end of a profunctor p is defined as a product of all diagonal elements:

forall c. p c c

together with a family of projections:

pi :: Profunctor p => forall c. (forall a. p a a) -> p c c
pi e = e

In category theory, the end must also satisfy the wedge condition which, in (type-annotated) Haskell, could be written as:

dimap f id_b . pi_b = dimap id_a f . pi_a

for any f :: a -> b.

Using a suitable formulation of parametricity, this equation can be shown to be a free theorem. Let’s first review the free theorem for functors before generalizing it to profunctors.

Functor Characterization

You may think of a functor as a container that has a shape and contents. You can manipulate the contents without changing the shape using fmap. In general, when applying fmap, you not only change the values stored in the container, you change their type as well. To really capture the shape of the container, you have to consider not only all possible mappings, but also more general relations between different contents.

A function is directional, and so is fmap, but relations don’t favor either side. They can map multiple values to the same value, and they can map one value to multiple values. Any relation on values induces a relation on containers. For a given functor F, if there is a relation a between type A and type A':

A <=a=> A'

then there is a relation between type F A and F A':

F A <=(F a)=> F A'

We call this induced relation F a.

For instance, consider the relation between students and their grades. Each student may have multiple grades (if they take multiple courses) so this relation is not a function. Given a list of students and a list of grades, we would say that the lists are related if and only if they match at each position. It means that they have to be of equal length, and the first grade on the list of grades must belong to the first student on the list of students, and so on. Of course, a list is a very simple container, but this property can be generalized to any functor we can define in Haskell using algebraic data types.

The fact that fmap doesn’t change the shape of the container can be expressed as a “theorem for free” using relations. We start with two related containers:

xs :: F A
xs':: F A'

where A and A' are related through some relation a. We want related containers to be fmapped to related containers. But we can’t use the same function to map both containers, because they contain different types. So we have to use two related functions instead. Related functions map related types to related types, so if we have:

f :: A -> B
f':: A'-> B'

and A is related to A' through a, we want B to be related to B' through some relation b. Also, we want the two functions to map related elements to related elements. So if x is related to x' through a, we want f x to be related to f' x' through b. In that case, we’ll say that f and f' are related through the relation that we call a->b:

f <=(a->b)=> f'

For instance, if f is mapping students’ SSNs to last names, and f' is mapping letter grades to numerical grades, the results will be related through the relation between students’ last names and their numerical grades.

To summarize, we require that for any two relations:

A <=a=> A'
B <=b=> B'

and any two functions:

f :: A -> B
f':: A'-> B'

such that:

f <=(a->b)=> f'

and any two containers:

xs :: F A
xs':: F A'

we have:

if         xs <=(F a)=> xs'
then (F f) xs <=(F b)=> (F f') xs'

This characterization can be extended, with suitable changes, to contravariant functors.

Profunctor Characterization

A profunctor is a functor of two variables. It is contravariant in the first variable and covariant in the second. A profunctor can lift two functions simultaneously using dimap:

class Profunctor p where
    dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

We want dimap to preserve relations between profunctor values. We start by picking any relations a, b, c, and d between types:

A <=a=> A'
B <=b=> B'
C <=c=> C'
D <=d=> D'

For any functions:

f  :: A -> B
f' :: A'-> B'
g  :: C -> D
g' :: C'-> D'

that are related through the following relations induced by function types:

f <=(a->b)=> f'
g <=(c->d)=> g'

we define:

xs :: p B C
xs':: p B'C'

The following condition must be satisfied:

if             xs <=(p b c)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

where p f g stands for the lifting of the two functions by the profunctor p.

Here’s a quick sanity check. If b and c are functions:

b :: B'-> B
c :: C -> C'

then the relation:

xs <=(p b c)=> xs'

becomes:

xs' = dimap b c xs

If a and d are functions:

a :: A'-> A
d :: D -> D'

then these relations:

f <=(a->b)=> f'
g <=(c->d)=> g'

become:

f . a = b . f'
d . g = g'. c

and this relation:

(p f g) xs <=(p a d)=> (p f' g') xs'

becomes:

(p f' g') xs' = dimap a d ((p f g) xs)

Substituting xs', we get:

dimap f' g' (dimap b c xs) = dimap a d (dimap f g xs)

and using functoriality:

dimap (b . f') (g'. c) = dimap (f . a) (d . g)

which is identically true.

Special Case of Profunctor Characterization

We are interested in the diagonal elements of a profunctor. Let’s first specialize the general case to:

C = B
C'= B'
c = b

to get:

xs :: p B B
xs':: p B'B'

and

if             xs <=(p b b)=> xs'
then   (p f g) xs <=(p a d)=> (p f' g') xs'

Choosing the following substitutions:

A = A'= B
D = D'= B'
a = id
d = id
f = id
g'= id
f'= g

we get:

if              xs <=(p b b)=> xs'
then   (p id g) xs <=(p id id)=> (p g id) xs'

Since p id id is the identity relation, we get:

(p id g) xs = (p g id) xs'

or

dimap id g xs = dimap g id xs'

Free Theorem

We apply the free theorem to the term xs:

xs :: forall c. p c c

It must be related to itself through the relation that is induced by its type:

xs <=(forall b. p b b)=> xs

for any relation b:

B <=b=> B'

Universal quantification translates to a relation between different instantiations of the polymorphic value:

xsB <=(p b b)=> xsB'

Notice that we can write:

xsB = piB xs
xsB'= piB'xs

using the projections we defined earlier.

We have just shown that this relation leads to:

dimap id g xsB = dimap g id xsB'

which shows that the wedge condition is indeed a free theorem.

Natural Transformations

Here’s another quick application of the free theorem. Given two functors F and G, the set of natural transformations between them may be represented as an end of the following profunctor:

type NatP a b = F a -> G b
instance Profunctor NatP where
    dimap f g alpha = fmap g . alpha . fmap f

The free theorem tells us that for any mu :: NatP c c:

(dimap id g) mu = (dimap g id) mu

which is the naturality condition:

mu . fmap g = fmap g . mu

It’s been known for some time that, in Haskell, naturality follows from parametricity, so this is not surprising.

Acknowledgment

I’d like to thank Edward Kmett for reviewing the draft of this post.


This is part 24 of Categories for Programmers.

We’ve seen several formulations of a monoid: as a set, as a single-object category, as an object in a monoidal category. How much more juice can we squeeze out of this simple concept?

Let’s try. Take this definition of a monoid as a set m with a pair of functions:

μ :: m × m -> m
η :: 1 -> m

Here, 1 is the terminal object in Set — the singleton set. The first function defines multiplication (it takes a pair of elements and returns their product), the second selects the unit element from m. Not every choice of two functions with these signatures results in a monoid. For that we need to impose additional conditions: associativity and unit laws. But let’s forget about that for a moment and just consider “potential monoids.” A pair of functions is an element of a cartesian product of two sets of functions. We know that these sets may be represented as exponential objects:

μ ∈ m^(m×m)
η ∈ m^1

The cartesian product of these two sets is:

m^(m×m) × m^1

Using some high-school algebra (which works in every cartesian closed category), we can rewrite it as:

m^(m×m + 1)

The plus sign stands for the coproduct in Set. We have just replaced a pair of functions with a single function — an element of the set:

m × m + 1 -> m

Any element of this set of functions is a potential monoid.

The beauty of this formulation is that it leads to interesting generalizations. For instance, how would we describe a group using this language? A group is a monoid with one additional function that assigns the inverse to every element. The latter is a function of the type m->m. As an example, integers form a group with addition as a binary operation, zero as the unit, and negation as the inverse. To define a group we would start with a triple of functions:

m × m -> m
m -> m
1 -> m

As before, we can combine all these triples into one set of functions:

m × m + m + 1 -> m

We started with one binary operator (addition), one unary operator (negation), and one nullary operator (identity — here zero). We combined them into one function. All functions with this signature define potential groups.

We can go on like this. For instance, to define a ring, we would add one more binary operator and one nullary operator, and so on. Each time we end up with a function type whose left-hand side is a sum of powers (possibly including the zeroth power — the terminal object), and the right-hand side being the set itself.

Now we can go crazy with generalizations. First of all, we can replace sets with objects and functions with morphisms. We can define n-ary operators as morphisms from n-ary products. It means that we need a category that supports finite products. For nullary operators we require the existence of the terminal object. So we need a cartesian category. In order to combine these operators we need exponentials, so that’s a cartesian closed category. Finally, we need coproducts to complete our algebraic shenanigans.

Alternatively, we can just forget about the way we derived our formulas and concentrate on the final product. The sum of products on the left hand side of our morphism defines an endofunctor. What if we pick an arbitrary endofunctor F instead? In that case we don’t have to impose any constraints on our category. What we obtain is called an F-algebra.

An F-algebra is a triple consisting of an endofunctor F, an object a, and a morphism

F a -> a

The object is often called the carrier, an underlying object or, in the context of programming, the carrier type. The morphism is often called the evaluation function or the structure map. Think of the functor F as forming expressions and the morphism as evaluating them.

Here’s the Haskell definition of an F-algebra:

type Algebra f a = f a -> a

It identifies the algebra with its evaluation function.

In the monoid example, the functor in question is:

data MonF a = MEmpty | MAppend a a

This is Haskell for 1 + a × a (remember algebraic data structures).
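
As a sketch, here’s an algebra for this functor whose carrier is Int, evaluating expressions in the additive monoid of integers:

evalSum :: Algebra MonF Int
evalSum MEmpty        = 0
evalSum (MAppend m n) = m + n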

A ring would be defined using the following functor:

data RingF a = RZero
             | ROne
             | RAdd a a 
             | RMul a a
             | RNeg a

which is Haskell for 1 + 1 + a × a + a × a + a.

An example of a ring is the set of integers. We can choose Integer as the carrier type and define the evaluation function as:

evalZ :: Algebra RingF Integer
evalZ RZero      = 0
evalZ ROne       = 1
evalZ (RAdd m n) = m + n
evalZ (RMul m n) = m * n
evalZ (RNeg n)   = -n

There are more F-algebras based on the same functor RingF. For instance, polynomials form a ring and so do square matrices.

As you can see, the role of the functor is to generate expressions that can be evaluated using the evaluator of the algebra. So far we’ve only seen very simple expressions. We are often interested in more elaborate expressions that can be defined using recursion.

Recursion

One way to generate arbitrary expression trees is to replace the variable a inside the functor definition with recursion. For instance, an arbitrary expression in a ring is generated by this tree-like data structure:

data Expr = RZero
          | ROne
          | RAdd Expr Expr 
          | RMul Expr Expr
          | RNeg Expr

We can replace the original ring evaluator with its recursive version:

evalZ :: Expr -> Integer
evalZ RZero        = 0
evalZ ROne         = 1
evalZ (RAdd e1 e2) = evalZ e1 + evalZ e2
evalZ (RMul e1 e2) = evalZ e1 * evalZ e2
evalZ (RNeg e)     = -(evalZ e)

This is still not very practical, since we are forced to represent all integers as sums of ones, but it will do in a pinch.

But how can we describe expression trees using the language of F-algebras? We have to somehow formalize the process of replacing the free type variable in the definition of our functor, recursively, with the result of the replacement. Imagine doing this in steps. First, define a depth-one tree as:

type RingF1 a = RingF (RingF a)

We are filling the holes in the definition of RingF with depth-zero trees generated by RingF a. Depth-2 trees are similarly obtained as:

type RingF2 a = RingF (RingF (RingF a))

which we can also write as:

type RingF2 a = RingF (RingF1 a)

Continuing this process, we can write a symbolic equation:

type RingF^(n+1) a = RingF (RingF^n a)

Conceptually, after repeating this process infinitely many times, we end up with our Expr. Notice that Expr does not depend on a. The starting point of our journey doesn’t matter, we always end up in the same place. This is not always true for an arbitrary endofunctor in an arbitrary category, but in the category Set things are nice.

Of course, this is a hand-waving argument, and I’ll make it more rigorous later.

Applying an endofunctor infinitely many times produces a fixed point, an object defined as:

Fix f = f (Fix f)

The intuition behind this definition is that, since we applied f infinitely many times to get Fix f, applying it one more time doesn’t change anything. In Haskell, the definition of a fixed point is:

newtype Fix f = Fix (f (Fix f))

Arguably, this would be more readable if the constructor’s name were different than the name of the type being defined, as in:

newtype Fix f = In (f (Fix f))

but I’ll stick with the accepted notation. The constructor Fix (or In, if you prefer) can be seen as a function:

Fix :: f (Fix f) -> Fix f

There is also a function that peels off one level of functor application:

unFix :: Fix f -> f (Fix f)
unFix (Fix x) = x

The two functions are the inverse of each other. We’ll use these functions later.

Category of F-Algebras

Here’s the oldest trick in the book: Whenever you come up with a way of constructing some new objects, see if they form a category. Not surprisingly, algebras over a given endofunctor F form a category. Objects in that category are algebras — pairs consisting of a carrier object a and a morphism F a -> a, both from the original category C.

To complete the picture, we have to define morphisms in the category of F-algebras. A morphism must map one algebra (a, f) to another algebra (b, g). We’ll define it as a morphism m that maps the carriers — it goes from a to b in the original category. Not any morphism will do: we want it to be compatible with the two evaluators. (We call such a structure-preserving morphism a homomorphism.) Here’s how you define a homomorphism of F-algebras. First, notice that we can lift m to the mapping:

F m :: F a -> F b

we can then follow it with g to get to b. Equivalently, we can use f to go from F a to a and then follow it with m. We want the two paths to be equal:

g ∘ F m = m ∘ f


It’s easy to convince yourself that this is indeed a category (hint: identity morphisms from C work just fine, and a composition of homomorphisms is a homomorphism).

An initial object in the category of F-algebras, if it exists, is called the initial algebra. Let’s call the carrier of this initial algebra i and its evaluator j :: F i -> i. It turns out that j, the evaluator of the initial algebra, is an isomorphism. This result is known as Lambek’s theorem. The proof relies on the definition of the initial object, which requires that there be a unique homomorphism m from it to any other F-algebra. Since m is a homomorphism, the following diagram must commute:

[diagram: the commuting square f ∘ F m = m ∘ j, for the unique m : (i, j) → (a, f)]

Now let’s construct an algebra whose carrier is F i. The evaluator of such an algebra must be a morphism from F (F i) to F i. We can easily construct such an evaluator simply by lifting j:

F j :: F (F i) -> F i

Because (i, j) is the initial algebra, there must be a unique homomorphism m from it to (F i, F j). The following diagram must commute:

[diagram: the commuting square F j ∘ F m = m ∘ j, for m : (i, j) → (F i, F j)]

But we also have this trivially commuting diagram (both paths are the same!):

[diagram: the trivially commuting square j ∘ F j = j ∘ F j]

which can be interpreted as showing that j is a homomorphism of algebras, mapping (F i, F j) to (i, j). We can glue these two diagrams together to get:

[diagram: the two squares glued together, showing j ∘ m as a homomorphism from (i, j) to itself]

This diagram may, in turn, be interpreted as showing that j ∘ m is a homomorphism of algebras. Only in this case the two algebras are the same. Moreover, because (i, j) is initial, there can only be one homomorphism from it to itself, and that’s the identity morphism id_i — which we know is a homomorphism of algebras. Therefore j ∘ m = id_i. Using this fact and the commuting property of the left diagram we can prove that m ∘ j = id_{F i}. This shows that m is the inverse of j and therefore j is an isomorphism between F i and i:

F i ≅ i

But that is just saying that i is a fixed point of F. That’s the formal proof behind the original hand-waving argument.

Back to Haskell: We recognize i as our Fix f, j as our constructor Fix, and its inverse as unFix. The isomorphism in Lambek’s theorem tells us that, in order to get the initial algebra, we take the functor f and replace its argument a with Fix f. We also see why the fixed point does not depend on a.

Natural Numbers

Natural numbers can also be defined as an F-algebra. The starting point is the pair of morphisms:

zero :: 1 -> N
succ :: N -> N

The first one picks the zero, and the second one maps all numbers to their successors. As before, we can combine the two into one:

1 + N -> N

The left hand side defines a functor which, in Haskell, can be written like this:

data NatF a = ZeroF | SuccF a

The fixed point of this functor (the initial algebra that it generates) can be encoded in Haskell as:

data Nat = Zero | Succ Nat

A natural number is either zero or a successor of another number. This is known as the Peano representation for natural numbers.

Catamorphisms

Let’s rewrite the initiality condition using Haskell notation. We call the initial algebra Fix f. Its evaluator is the constructor Fix. There is a unique morphism m from the initial algebra to any other algebra over the same functor. Let’s pick an algebra whose carrier is a and the evaluator is alg.

[diagram: the commuting square alg ∘ fmap m = m ∘ Fix]
By the way, notice what m is: It’s an evaluator for the fixed point, an evaluator for the whole recursive expression tree. Let’s find a general way of implementing it.

Lambek’s theorem tells us that the constructor Fix is an isomorphism. We called its inverse unFix. We can therefore flip one arrow in this diagram to get:

[diagram: the same square with the arrow Fix reversed to unFix]

Let’s write down the commutation condition for this diagram:

m = alg . fmap m . unFix

We can interpret this equation as a recursive definition of m. The recursion is bound to terminate for any finite tree created using the functor f. We can see that by noticing that fmap m operates underneath the top layer of the functor f. In other words, it works on the children of the original tree. The children are always one level shallower than the original tree.

Here’s what happens when we apply m to a tree constructed using Fix f. The action of unFix peels off the constructor, exposing the top level of the tree. We then apply m to all the children of the top node. This produces results of type a. Finally, we combine those results by applying the non-recursive evaluator alg. The key point is that our evaluator alg is a simple non-recursive function.

Since we can do this for any algebra alg, it makes sense to define a higher order function that takes the algebra as a parameter and gives us the function we called m. This higher order function is called a catamorphism:

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . unFix

Let’s see an example of that. Take the functor that defines natural numbers:

data NatF a = ZeroF | SuccF a

Let’s pick (Int, Int) as the carrier type and define our algebra as:

fib :: NatF (Int, Int) -> (Int, Int)
fib ZeroF = (1, 1)
fib (SuccF (m, n)) = (n, m + n)

You can easily convince yourself that the catamorphism for this algebra, cata fib, calculates Fibonacci numbers.
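
Here’s a quick sanity check of that claim. The Functor instance and the conversion toNat are small helpers I’m adding for illustration; they’re not part of the construction itself:

instance Functor NatF where
  fmap _ ZeroF     = ZeroF
  fmap g (SuccF a) = SuccF (g a)

-- Encode an Int as a Peano numeral over the fixed point.
toNat :: Int -> Fix NatF
toNat 0 = Fix ZeroF
toNat n = Fix (SuccF (toNat (n - 1)))

fib' :: Int -> Int
fib' = fst . cata fib . toNat

-- map fib' [0..6]  ==>  [1,1,2,3,5,8,13]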

In general, an algebra for NatF defines a recurrence relation: the value of the current element in terms of the previous element. A catamorphism then evaluates the n-th element of that sequence.

Folds

A list of e is the initial algebra of the following functor:

data ListF e a = NilF | ConsF e a

Indeed, replacing the variable a with the result of recursion, which we’ll call List e, we get:

data List e = Nil | Cons e (List e)

An algebra for a list functor picks a particular carrier type and defines a function that does pattern matching on the two constructors. Its value for NilF tells us how to evaluate an empty list, and its value for ConsF tells us how to combine the current element with the previously accumulated value.

For instance, here’s an algebra that can be used to calculate the length of a list (the carrier type is Int):

lenAlg :: ListF e Int -> Int
lenAlg (ConsF e n) = n + 1
lenAlg NilF = 0

Indeed, the resulting catamorphism cata lenAlg calculates the length of a list. Notice that the evaluator is a combination of (1) a function that takes a list element and an accumulator and returns a new accumulator, and (2) a starting value, here zero. The type of the value and the type of the accumulator are given by the carrier type.

Compare this to the traditional Haskell definition:

length = foldr (\e n -> n + 1) 0

The two arguments to foldr are exactly the two components of the algebra.
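
To actually run cata lenAlg, we need a Functor instance for ListF e and a way to convert a built-in list to its fixed-point form. Both are small helpers of my own naming:

instance Functor (ListF e) where
  fmap _ NilF        = NilF
  fmap g (ConsF e a) = ConsF e (g a)

toFixList :: [e] -> Fix (ListF e)
toFixList = foldr (\e t -> Fix (ConsF e t)) (Fix NilF)

-- cata lenAlg (toFixList "abc")  ==>  3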

Let’s try another example:

sumAlg :: ListF Double Double -> Double
sumAlg (ConsF e s) = e + s
sumAlg NilF = 0.0

Again, compare this with:

sum = foldr (\e s -> e + s) 0.0

As you can see, foldr is just a convenient specialization of a catamorphism to lists.

Coalgebras

As usual, we have a dual construction of an F-coalgebra, where the direction of the morphism is reversed:

a -> F a

Coalgebras for a given functor also form a category, with homomorphisms preserving the coalgebraic structure. The terminal object (t, u) in that category is called the terminal (or final) coalgebra. For every other coalgebra (a, f) there is a unique homomorphism m that makes the following diagram commute:

[diagram: the commuting square for the unique homomorphism m from the coalgebra (a, f) to the terminal coalgebra (t, u)]

A terminal coalgebra is a fixed point of the functor, in the sense that the morphism u :: t -> F t is an isomorphism (Lambek’s theorem for coalgebras):

F t ≅ t

A terminal coalgebra is usually interpreted in programming as a recipe for generating (possibly infinite) data structures or transition systems.

Just like a catamorphism can be used to evaluate an initial algebra, an anamorphism can be used to coevaluate a terminal coalgebra:

ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = Fix . fmap (ana coalg) . coalg

A canonical example of a coalgebra is based on a functor whose fixed point is an infinite stream of elements of type e. This is the functor:

data StreamF e a = StreamF e a
  deriving Functor

and this is its fixed point:

data Stream e = Stream e (Stream e)

A coalgebra for StreamF e is a function that takes the seed of type a and produces a pair (StreamF is a fancy name for a pair) consisting of an element and the next seed.

You can easily generate simple examples of coalgebras that produce infinite sequences, like the list of squares, or reciprocals.
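
For instance, here’s a sketch of the coalgebra for squares; the seed simply tracks the next natural number (the names are mine):

squaresCoalg :: Int -> StreamF Int Int
squaresCoalg n = StreamF (n * n) (n + 1)

squares :: Fix (StreamF Int)
squares = ana squaresCoalg 1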

A more interesting example is a coalgebra that produces a list of primes. The trick is to use an infinite list as a carrier. Our starting seed will be the list [2..]. The next seed will be the tail of this list with all multiples of 2 removed. It’s a list of odd numbers starting with 3. In the next step, we’ll take the tail of this list and remove all multiples of 3, and so on. You might recognize the makings of the sieve of Eratosthenes. This coalgebra is implemented by the following function:

era :: [Int] -> StreamF Int [Int]
era (p : ns) = StreamF p (filter (notdiv p) ns)
    where notdiv p n = n `mod` p /= 0

The anamorphism for this coalgebra generates the list of primes:

primes = ana era [2..]

A stream is an infinite list, so it should be possible to convert it to a Haskell list. To do that, we can use the same functor StreamF to form an algebra, and we can run a catamorphism over it. For instance, this is a catamorphism that converts a stream to a list:

toListC :: Fix (StreamF e) -> [e]
toListC = cata al
   where al :: StreamF e [e] -> [e]
         al (StreamF e a) = e : a
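
Putting the two together gives a quick smoke test (the name firstPrimes is mine):

firstPrimes :: [Int]
firstPrimes = take 10 (toListC primes)
-- ==> [2,3,5,7,11,13,17,19,23,29]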

Here, the same fixed point is simultaneously an initial algebra and a terminal coalgebra for the same endofunctor. It’s not always like this in an arbitrary category. In general, an endofunctor may have many (or no) fixed points. The initial algebra is the so-called least fixed point, and the terminal coalgebra is the greatest fixed point. In Haskell, though, both are defined by the same formula, and they coincide.

The anamorphism for lists is called unfold. To create finite lists, the functor is modified to produce a Maybe pair:

unfoldr :: (b -> Maybe (a, b)) -> b -> [a]

The value of Nothing will terminate the generation of the list.
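
For instance, here’s a one-line countdown built with unfoldr (a small example of my own):

import Data.List (unfoldr)

countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))
-- countdown 5  ==>  [5,4,3,2,1]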

An interesting case of a coalgebra is related to lenses. A lens can be represented as a pair of a getter and a setter:

set :: a -> s -> a
get :: a -> s

Here, a is usually some product data type with a field of type s. The getter retrieves the value of that field and the setter replaces this field with a new value. These two functions can be combined into one:

a -> (s, s -> a)

We can rewrite this function further as:

a -> Store s a

where we have defined a functor:

data Store s a = Store (s -> a) s

Notice that this is not a simple algebraic functor constructed from sums of products. It involves an exponential, a^s.

A lens is a coalgebra for this functor with the carrier type a. We’ve seen before that Store s is also a comonad. It turns out that a well-behaved lens corresponds to a coalgebra that is compatible with the comonad structure. We’ll talk about this in the next section.
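
As a concrete sketch, here’s the lens coalgebra focusing on the first component of a pair (the name fstLens is mine):

fstLens :: (Int, String) -> Store Int (Int, String)
fstLens (x, s) = Store (\x' -> (x', s)) x

The second component of the Store is the getter’s result; running the stored function at a new value is the setter.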

Challenges

  1. Implement the evaluation function for a ring of polynomials of one variable. You can represent a polynomial as a list of coefficients in front of powers of x. For instance, 4x²-1 would be represented as (starting with the zeroth power) [-1, 0, 4].
  2. Generalize the previous construction to polynomials of many independent variables, like x²y-3y³z.
  3. Implement the algebra for the ring of 2×2 matrices.
  4. Define a coalgebra whose anamorphism produces a list of squares of natural numbers.
  5. Use unfoldr to generate a list of the first n primes.

Next: Algebras for Monads.


Unlike monads, which came into programming straight from category theory, applicative functors have their origins in programming. McBride and Paterson introduced applicative functors as a programming pearl in their paper Applicative programming with effects. They also provided a categorical interpretation of applicatives in terms of strong lax monoidal functors. It’s been accepted that, just like “a monad is a monoid in the category of endofunctors,” so “an applicative is a strong lax monoidal functor.”

The so-called “tensorial strength” seems to be important in categorical semantics, and in his seminal paper Notions of computation and monads, Moggi argued that effects should be described using strong monads. It makes sense, considering that a computation is done in a context, and you should be able to make the global context available under the monad. The fact that we don’t talk much about strong monads in Haskell is due to the fact that all functors in the category Set, which underlies Haskell’s type system, have canonical strength. So why do we talk about strength when dealing with applicative functors? I looked into this question and came to the conclusion that there is no fundamental reason, and that it’s okay to just say:

An applicative is a lax monoidal functor

In this post I’ll discuss different equivalent categorical definitions of the applicative functor. I’ll start with a lax closed functor, then move to a lax monoidal functor, and show the equivalence of the two definitions. Then I’ll introduce the calculus of ends and show that the third definition of the applicative functor as a monoid in a suitable functor category equipped with Day convolution is equivalent to the previous ones.

Applicative as a Lax Closed Functor

The standard definition of the applicative functor in Haskell reads:

class Functor f => Applicative f where
    (<*>) :: f (a -> b) -> (f a -> f b)
    pure :: a -> f a

At first sight it doesn’t seem to involve a monoidal structure. It looks more like preserving function arrows (I added some redundant parentheses to suggest this interpretation).

Categorically, functors that “preserve arrows” are known as closed functors. Let’s look at a definition of a closed functor f between two categories C and D. We have to assume that both categories are closed, meaning that they have internal hom-objects for every pair of objects. Internal hom-objects are also called function objects or exponentials. They are normally defined through the right adjoint to the product functor:

C(z × a, b) ≅ C(z, a => b)

To distinguish between sets of morphisms and function objects (they are the same thing in Set), I will temporarily use double arrows for function objects.

We can take a functor f and act with it on the function object a=>b in the category C. We get an object f (a=>b) in D. Or we can map the two objects a and b from C to D and then construct the function object in D: f a => f b.

[diagram: f applied to the function object a=>b in C, versus the function object f a => f b formed in D]

We call a functor closed if the two results are isomorphic (I have subscripted the two arrows with the categories where they are defined):

f (a =>_C b) ≅ (f a =>_D f b)

and if the functor preserves the unit object:

i_D ≅ f i_C

What’s the unit object? Normally, this is the unit with respect to the same product that was used to define the function object using the adjunction. I’m saying “normally,” because it’s possible to define a closed category without a product.

Note: The two arrows and the two i’s are defined with respect to two different products. The first isomorphism must be natural in both a and b. Also, to complete the picture, there are some diagrams that must commute.

The two isomorphisms that define a closed functor can be relaxed and replaced by unidirectional morphisms. The result is a lax closed functor:

f (a => b) -> (f a => f b)
i -> f i

This looks almost like the definition of Applicative, except for one problem: how can we recover the natural transformation we call pure from a single morphism i -> f i?

One way to do it is from the position of strength. An endofunctor f has tensorial strength if there is a natural transformation:

st_{c a} :: c ⊗ f a -> f (c ⊗ a)

Think of c as the context in which the computation f a is performed. Strength means that we can use this external context inside the computation.

In the category Set, with the tensor product replaced by cartesian product, all functors have canonical strength. In Haskell, we would define it as:

st (c, fa) = fmap ((,) c) fa

The morphism in the definition of the lax closed functor translates to:

unit :: () -> f ()

Notice that, up to isomorphism, the unit type () is the unit with respect to cartesian product. The relevant isomorphisms are:

λ_a :: ((), a) -> a
ρ_a :: (a, ()) -> a

Here’s the derivation from Rivas and Jaskelioff’s Notions of Computation as Monoids:

    a
≅  (a, ())   -- unit law, ρ⁻¹
-> (a, f ()) -- lax unit
-> f (a, ()) -- strength
≅  f a       -- lifted unit law, f ρ

Strength is necessary if you’re starting with a lax closed (or monoidal — see the next section) endofunctor in an arbitrary closed (or monoidal) category and you want to derive pure within that category — not after you restrict it to Set.

There is, however, an alternative derivation using the Yoneda lemma:

f ()
≅ forall a. (() -> a) -> f a  -- Yoneda
≅ forall a. a -> f a -- because: (() -> a) ≅ a

We recover the whole natural transformation from a single value. The advantage of this derivation is that it generalizes beyond endofunctors and it doesn’t require strength. As we’ll see later, it also ties nicely with the Day-convolution definition of applicative. The Yoneda lemma only works for Set-valued functors, but so does Day convolution (there are enriched versions of both Yoneda and Day convolution, but I’m not going to discuss them here).
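
In Haskell, this Yoneda-based recovery of pure is a one-liner; a minimal sketch (the name pureFromUnit is mine):

pureFromUnit :: Functor f => f () -> (a -> f a)
pureFromUnit funit = \a -> fmap (const a) funit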

We can define the categorical version of Haskell’s applicative functor as a lax closed functor going from a closed category C to Set. It’s a functor equipped with a natural transformation:

f (a => b) -> (f a -> f b)

where a=>b is the internal hom-object in C (the second arrow is a function type in Set), and a function:

1 -> f i

where 1 is the singleton set and i is the unit object in C.

The importance of a categorical definition is that it comes with additional identities or “axioms.” A lax closed functor must be compatible with the structure of both categories. I will not go into details here, because we are really only interested in closed categories that are monoidal, where these axioms are easier to express.

The definition of a lax closed functor is easily translated to Haskell:

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    unit :: f ()

Applicative as a Lax Monoidal Functor

Even though it’s possible to define a closed category without a monoidal structure, in practice we usually work with monoidal categories. This is reflected in the equivalent definition of Haskell’s applicative functor as a lax monoidal functor. In Haskell, we would write:

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    unit :: f ()
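
For a concrete example, here’s one possible instance for Maybe (a sketch):

instance Monoidal Maybe where
    (>*<) (Just a, Just b) = Just (a, b)
    (>*<) _                = Nothing
    unit                   = Just ()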

This definition is equivalent to our previous definition of a closed functor. That’s because, as we’ve seen, a function object in a monoidal category is defined in terms of a product. We can show the equivalence in a more general categorical setting.

This time let’s start with a symmetric closed monoidal category C, in which the function object is defined through the right adjoint to the tensor product:

C(z ⊗ a, b) ≅ C(z, a => b)

As usual, the tensor product is associative and unital — with the unit object i — up to isomorphism. The symmetry is defined through natural isomorphism:

γ :: a ⊗ b -> b ⊗ a

A functor f between two monoidal categories is lax monoidal if there exist: (1) a natural transformation

f a ⊗ f b -> f (a ⊗ b)

and (2) a morphism

i -> f i

Notice that the products and units on either side of the two mappings are from different categories.

A (lax-) monoidal functor must also preserve associativity and unit laws.

For instance a triple product

f a ⊗ (f b ⊗ f c)

may be rearranged using an associator α to give

(f a ⊗ f b) ⊗ f c

then converted to

f (a ⊗ b) ⊗ f c

and then to

f ((a ⊗ b) ⊗ c)

Or it could be first converted to

f a ⊗ f (b ⊗ c)

and then to

f (a ⊗ (b ⊗ c))

These two should be equivalent under the associator in C.

[diagram: the associativity coherence condition for a lax monoidal functor]

Similarly, f a ⊗ i can be simplified to f a using the right unitor ρ in D. Or it could be first converted to f a ⊗ f i, then to f (a ⊗ i), and then to f a, using the right unitor in C. The two paths should be equivalent. (Similarly for the left identity.)

[diagram: the unit coherence condition for a lax monoidal functor]

We will now consider functors from C to Set, with Set equipped with the usual cartesian product, and the singleton set as unit. A lax monoidal functor is defined by: (1) a natural transformation:

(f a, f b) -> f (a ⊗ b)

and (2) a choice of an element of the set f i (a function from 1 to f i picks an element from that set).

We need the target category to be Set because we want to be able to use the Yoneda lemma to show equivalence with the standard definition of applicative. I’ll come back to this point later.

The Equivalence

The definitions of a lax closed and a lax monoidal functors are equivalent when C is a closed symmetric monoidal category. The proof relies on the existence of the adjunction, in particular the unit and the counit of the adjunction:

η_a :: a -> (b => (a ⊗ b))
ε_b :: (a => b) ⊗ a -> b

For instance, let’s assume that f is lax-closed. We want to construct the mapping

(f a, f b) -> f (a ⊗ b)

First, we apply the lifted pair (unit, identity), that is (f η, id), whose components have the types:

(f a -> f (b => a ⊗ b), f b -> f b)

to the left hand side. We get:

(f (b => a ⊗ b), f b)

Now we can use (the uncurried version of) the lax-closed morphism:

(f (b => x), f b) -> f x

to get:

f (a ⊗ b)

Conversely, assuming the lax-monoidal property we can show that the functor is lax-closed, that is to say, implement the following function:

(f (a => b), f a) -> f b

First we use the lax monoidal morphism on the left hand side:

f ((a => b) ⊗ a)

and then use the counit (a.k.a. the evaluation morphism) to get the desired result f b.
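
In Haskell, where currying and evaluation are built in, both directions of this proof shrink to one-liners. Here’s a minimal sketch; I’ve renamed the two clashing units apart so the snippet forms a single module:

import Prelude hiding ((<*>))

class Functor f => Closed f where
    (<*>) :: f (a -> b) -> f a -> f b
    cunit :: f ()

class Functor f => Monoidal f where
    (>*<) :: (f a, f b) -> f (a, b)
    munit :: f ()

-- Lax monoidal to lax closed: pair up, then evaluate (the counit).
apFromPair :: Monoidal f => f (a -> b) -> f a -> f b
apFromPair ff fa = fmap (\(g, a) -> g a) ((>*<) (ff, fa))

-- Lax closed to lax monoidal: pair via (f η, id), i.e. fmap (,).
pairFromAp :: Closed f => (f a, f b) -> f (a, b)
pairFromAp (fa, fb) = fmap (,) fa <*> fb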

There is yet another presentation of applicatives using Day convolution. But before we get there, we need a little refresher on calculus.

Calculus of Ends

Ends and coends are very useful constructs generalizing limits and colimits. They are defined through universal constructions. They have a few fundamental properties that are used over and over in categorical calculations. I’ll just introduce the notation and a few important identities. We’ll be working in a symmetric monoidal category C with functors from C to Set and profunctors from Cop×C to Set. The end of a profunctor p is a set denoted by:

∫ a p a a

The most important thing about ends is that a set of natural transformations between two functors f and g can be represented as an end:

[C, Set](f, g) = ∫a C(f a, g a)

In Haskell, the end corresponds to universal quantification over a functor of mixed variance. For instance, the natural transformation formula takes the familiar form:

forall a. f a -> g a

The Yoneda lemma, which deals with natural transformations, can also be written using an end:

∫ z (C(a, z) -> f z) ≅ f a

In Haskell, we can write it as the equivalence:

forall z. ((a -> z) -> f z) ≅ f a

which is a generalization of the continuation passing transform.
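
Both directions of this isomorphism can be written down explicitly (this needs the RankNTypes extension; the names are mine):

{-# LANGUAGE RankNTypes #-}

toYoneda :: Functor f => f a -> (forall z. (a -> z) -> f z)
toYoneda fa = \g -> fmap g fa

fromYoneda :: Functor f => (forall z. (a -> z) -> f z) -> f a
fromYoneda k = k id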

The dual notion of coend is similarly written using an integral sign, with the “integration variable” in the superscript position:

∫ a p a a

In pseudo-Haskell, a coend is represented by an existential quantifier. It’s possible to define existential data types in Haskell by converting existential quantification to universal quantification. The relevant identity in terms of coends and ends reads:

(∫ z p z z) -> y ≅ ∫ z (p z z -> y)

In Haskell, this formula is used to turn functions that take existential types to functions that are polymorphic:

(exists z. p z z) -> y ≅ forall z. (p z z -> y)

Intuitively, it makes perfect sense. If you want to define a function that takes an existential type, you have to be prepared to handle any type.

The equivalent of the Yoneda lemma for coends reads:

∫ z f z × C(z, a) ≅ f a

or, in pseudo-Haskell:

exists z. (f z, z -> a) ≅ f a

(The intuition is that the only thing you can do with this pair is to fmap the function over the first component.)
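
Here’s a sketch of the two directions, using a GADT for the existential (the names are mine):

{-# LANGUAGE GADTs #-}

data CoYoneda f a where
  CoYoneda :: f z -> (z -> a) -> CoYoneda f a

toCoYoneda :: f a -> CoYoneda f a
toCoYoneda fa = CoYoneda fa id

fromCoYoneda :: Functor f => CoYoneda f a -> f a
fromCoYoneda (CoYoneda fz g) = fmap g fz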

There is also a contravariant version of this identity:

∫ z C(a, z) × f z ≅ f a

where f is a contravariant functor (a.k.a., a presheaf). In pseudo-Haskell:

exists z. (a -> z, f z) ≅ f a

(The intuition is that the only thing you can do with this pair is to apply the contramap of the first component to the second component.)

Using coends we can define a tensor product in the category of functors [C, Set]. This product is called Day convolution:

(f ★ g) a = ∫ x y f x × g y × C(x ⊗ y, a)

It is a bifunctor in that category (read, it can be used to lift natural transformations). It’s associative and symmetric up to isomorphism. It also has a unit — the hom-functor C(i, -), where i is the monoidal unit in C. In other words, Day convolution imbues the category [C, Set] with monoidal structure.

Let’s verify the unit laws.

(C(i, -) ★ g) a = ∫ x y C(i, x) × g y × C(x ⊗ y, a)

We can use the contravariant Yoneda to “integrate over x” to get:

∫ y g y × C(i ⊗ y, a)

Considering that i is the unit of the tensor product in C, we get:

∫ y g y × C(y, a)

Covariant Yoneda lets us “integrate over y” to get the desired g a. The same method works for the right unit law.

Applicative as a Monoid

Given a monoidal category, we can always define a monoid as an object m equipped with two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

satisfying the laws of associativity and unitality.

We have shown that the functor category [C, Set] (with C a symmetric monoidal category) is monoidal under Day convolution. An object in this category is a functor f. The two morphisms that would make it a candidate for a monoid are natural transformations:

μ :: f ★ f -> f
η :: C(i, -) -> f

The a component of the natural transformation μ can be rewritten as:

(∫ x y f x × f y × C(x ⊗ y, a)) -> f a

which is equivalent to:

∫ x y (f x × f y × C(x ⊗ y, a) -> f a)

or, upon currying:

∫ x y (f x, f y) -> C(x ⊗ y, a) -> f a

It turns out that a monoid defined this way is equivalent to a lax monoidal functor. This was shown by Rivas and Jaskelioff. The following derivation is due to Bob Atkey.

The trick is to start with the whole set of natural transformations from f ★ f to f. The multiplication μ is just one of them. We’ll express the set of natural transformations as an end:

∫ a ((f ★ f) a -> f a)

Plugging in the formula for the a component of μ, we get:

∫ a ∫ x y (f x, f y) -> C(x ⊗ y, a) -> f a

The end over a does not involve the first argument, so we can move the integral sign:

∫ x y (f x, f y) -> ∫ a C(x ⊗ y, a) -> f a

Then we use the Yoneda lemma to “perform the integration” over a:

∫ x y (f x, f y) -> f (x ⊗ y)

You may recognize this as a set of natural transformations that define a lax monoidal functor. We have established a one-to-one correspondence between these natural transformations and the ones defining monoidal multiplication using Day convolution.

The remaining part is to show the equivalence between the unit with respect to Day convolution and the second part of the definition of the lax monoidal functor, the morphism:

1 -> f i

We start with the set of natural transformations that contains our η:

∫ a (i -> a) -> f a

By Yoneda, this is just f i. Picking an element from a set is equivalent to defining a morphism from the singleton set 1, so for any choice of η we get:

1 -> f i

and vice versa. The two definitions are equivalent.

Notice that the monoidal unit η under Day convolution becomes the definition of pure in the Haskell version of applicative. Indeed, when we replace the category C with Set, f becomes an endofunctor, and the unit of Day convolution C(i, -) becomes the identity functor Id. We get:

η :: Id -> f

or, in components:

pure :: a -> f a

So, strictly speaking, the Haskell definition of Applicative mixes the elements of the lax closed functor and the monoidal unit under Day convolution.

Acknowledgments

I’m grateful to Mauro Jaskelioff and Exequiel Rivas for correspondence and to Bob Atkey, Dimitri Chikhladze, and Mike Shulman for answering my questions on Math Overflow.


This is part 23 of Categories for Programmers. Previously: Monads Categorically. See the Table of Contents.

Now that we have covered monads, we can reap the benefits of duality and get comonads for free simply by reversing the arrows and working in the opposite category.

Recall that, at the most basic level, monads are about composing Kleisli arrows:

a -> m b

where m is a functor that is a monad. If we use the letter w (upside down m) for the comonad, we can define co-Kleisli arrows as morphisms of the type:

w a -> b

The analog of the fish operator for co-Kleisli arrows is defined as:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)

For co-Kleisli arrows to form a category we also have to have an identity co-Kleisli arrow, which is called extract:

extract :: w a -> a

This is the dual of return. We also have to impose the laws of associativity as well as left- and right-identity. Putting it all together, we could define a comonad in Haskell as:

class Functor w => Comonad w where
    (=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
    extract :: w a -> a

In practice, we use slightly different primitives, as we’ll see shortly.

The question is, what’s the use for comonads in programming?

Programming with Comonads

Let’s compare the monad with the comonad. A monad provides a way of putting a value in a container using return. It doesn’t give you access to a value or values stored inside. Of course, data structures that implement monads might provide access to their contents, but that’s considered a bonus. There is no common interface for extracting values from a monad. And we’ve seen the example of the IO monad that prides itself in never exposing its contents.

A comonad, on the other hand, provides the means of extracting a single value from it. It does not give the means to insert values. So if you want to think of a comonad as a container, it always comes pre-filled with contents, and it lets you peek at it.

Just as a Kleisli arrow takes a value and produces some embellished result — it embellishes it with context — a co-Kleisli arrow takes a value together with a whole context and produces a result. It’s an embodiment of contextual computation.

The Product Comonad

Remember the reader monad? We introduced it to tackle the problem of implementing computations that need access to some read-only environment e. Such computations can be represented as pure functions of the form:

(a, e) -> b

We used currying to turn them into Kleisli arrows:

a -> (e -> b)

But notice that these functions already have the form of co-Kleisli arrows. Let’s massage their arguments into the more convenient functor form:

data Product e a = Prod e a
  deriving Functor

We can easily define the composition operator by making the same environment available to the arrows that we are composing:

(=>=) :: (Product e a -> b) -> (Product e b -> c) -> (Product e a -> c)
f =>= g = \(Prod e a) -> let b = f (Prod e a)
                             c = g (Prod e b)
                         in c

The implementation of extract simply ignores the environment:

extract (Prod e a) = a

Not surprisingly, the product comonad can be used to perform exactly the same computations as the reader monad. In a way, the comonadic implementation of the environment is more natural — it follows the spirit of “computation in context.” On the other hand, monads come with the convenient syntactic sugar of the do notation.

The connection between the reader monad and the product comonad goes deeper, having to do with the fact that the reader functor is the right adjoint of the product functor. In general, though, comonads cover different notions of computation than monads. We’ll see more examples later.

It’s easy to generalize the Product comonad to arbitrary product types including tuples and records.

Dissecting the Composition

Continuing the process of dualization, we could go ahead and dualize monadic bind and join. Alternatively, we can repeat the process we used with monads, where we studied the anatomy of the fish operator. This approach seems more enlightening.

The starting point is the realization that the composition operator must produce a co-Kleisli arrow that takes w a and produces a c. The only way to produce a c is to apply the second function to an argument of the type w b:

(=>=) :: (w a -> b) -> (w b -> c) -> (w a -> c)
f =>= g = g ... 

But how can we produce a value of type w b that could be fed to g? We have at our disposal the argument of type w a and the function f :: w a -> b. The solution is to define the dual of bind, which is called extend:

extend :: (w a -> b) -> w a -> w b

Using extend we can implement composition:

f =>= g = g . extend f

Can we next dissect extend? You might be tempted to say, why not just apply the function w a -> b to the argument w a, but then you quickly realize that you’d have no way of converting the resulting b to w b. Remember, the comonad provides no means of lifting values. At this point, in the analogous construction for monads, we used fmap. The only way we could use fmap here would be if we had something of the type w (w a) at our disposal. If we could only turn w a into w (w a)… And, conveniently, that would be exactly the dual of join. We call it duplicate:

duplicate :: w a -> w (w a)

So, just like with the definitions of the monad, we have three equivalent definitions of the comonad: using co-Kleisli arrows, extend, or duplicate. Here’s the Haskell definition taken directly from the Control.Comonad library:

class Functor w => Comonad w where
  extract :: w a -> a
  duplicate :: w a -> w (w a)
  duplicate = extend id
  extend :: (w a -> b) -> w a -> w b
  extend f = fmap f . duplicate

Provided are the default implementations of extend in terms of duplicate and vice versa, so you only need to override one of them.
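
For instance, the Product comonad from earlier fits this interface directly (a sketch):

instance Comonad (Product e) where
  extract (Prod _ a)   = a
  duplicate (Prod e a) = Prod e (Prod e a)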

The intuition behind these functions is based on the idea that, in general, a comonad can be thought of as a container filled with values of type a (the product comonad was a special case of just one value). There is a notion of the “current” value, one that’s easily accessible through extract. A co-Kleisli arrow performs some computation that is focused on the current value, but it has access to all the surrounding values. Think of Conway’s game of life. Each cell contains a value (usually just True or False). A comonad corresponding to the game of life would be a grid of cells focused on the “current” cell.

So what does duplicate do? It takes a comonadic container w a and produces a container of containers w (w a). The idea is that each of these containers is focused on a different a inside w a. In the game of life, you would get a grid of grids, each cell of the outer grid containing an inner grid that’s focused on a different cell.

Now look at extend. It takes a co-Kleisli arrow and a comonadic container w a filled with as. It applies the computation to all of these as, replacing them with bs. The result is a comonadic container filled with bs. extend does it by shifting the focus from one a to another and applying the co-Kleisli arrow to each of them in turn. In the game of life, the co-Kleisli arrow would calculate the new state of the current cell. To do that, it would look at its context — presumably its nearest neighbors. The default implementation of extend illustrates this process. First we call duplicate to produce all possible foci and then we apply f to each of them.

The Stream Comonad

This process of shifting the focus from one element of the container to another is best illustrated with the example of an infinite stream. Such a stream is just like a list, except that it doesn’t have the empty constructor:

data Stream a = Cons a (Stream a)

It’s trivially a Functor:

instance Functor Stream where
    fmap f (Cons a as) = Cons (f a) (fmap f as)

The focus of a stream is its first element, so here’s the implementation of extract:

extract (Cons a _) = a

duplicate produces a stream of streams, each focused on a different element.

duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

The first element is the original stream, the second element is the tail of the original stream, the third element is its tail, and so on, ad infinitum.

Here’s the complete instance:

instance Comonad Stream where
    extract (Cons a _) = a
    duplicate (Cons a as) = Cons (Cons a as) (duplicate as)

This is a very functional way of looking at streams. In an imperative language, we would probably start with a method advance that shifts the stream by one position. Here, duplicate produces all shifted streams in one fell swoop. Haskell’s laziness makes this possible and even desirable. Of course, to make a Stream practical, we would also implement the analog of advance:

tail :: Stream a -> Stream a
tail (Cons a as) = as

but it’s never part of the comonadic interface.

If you have any experience with digital signal processing, you’ll see immediately that a co-Kleisli arrow for a stream is just a digital filter, and extend produces a filtered stream.

As a simple example, let’s implement the moving average filter. Here’s a function that sums n elements of a stream:

sumS :: Num a => Int -> Stream a -> a
sumS n (Cons a as) = if n <= 0 then 0 else a + sumS (n - 1) as

Here’s the function that calculates the average of the first n elements of the stream:

average :: Fractional a => Int -> Stream a -> a
average n stm = (sumS n stm) / (fromIntegral n)

Partially applied average n is a co-Kleisli arrow, so we can extend it over the whole stream:

movingAvg :: Fractional a => Int -> Stream a -> Stream a
movingAvg n = extend (average n)

The result is the stream of running averages.
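
A quick smoke test, with a helper stream of naturals of my own naming:

nats :: Stream Double
nats = go 0
  where go n = Cons n (go (n + 1))

-- extract (movingAvg 3 nats)  ==>  (0 + 1 + 2) / 3 = 1.0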

A stream is an example of a unidirectional, one-dimensional comonad. It can be easily made bidirectional or extended to two or more dimensions.

Comonad Categorically

Defining a comonad in category theory is a straightforward exercise in duality. As with the monad, we start with an endofunctor T. The two natural transformations, η and μ, that define the monad are simply reversed for the comonad:

ε :: T -> I
δ :: T -> T²

The components of these transformations correspond to extract and duplicate. Comonad laws are the mirror image of monad laws. No big surprise here.

Then there is the derivation of the monad from an adjunction. Duality reverses an adjunction: the left adjoint becomes the right adjoint and vice versa. And, since the composition R ∘ L defines a monad, L ∘ R must define a comonad. The counit of the adjunction:

ε :: L ∘ R -> I

is indeed the same ε that we see in the definition of the comonad — or, in components, as Haskell’s extract. We can also use the unit of the adjunction:

η :: I -> R ∘ L

to insert an R ∘ L in the middle of L ∘ R and produce L ∘ R ∘ L ∘ R. Making T² from T defines the δ, and that completes the definition of the comonad.

We’ve also seen that the monad is a monoid. The dual of this statement would require the use of a comonoid, so what’s a comonoid? The original definition of a monoid as a single-object category doesn’t dualize to anything interesting. When you reverse the direction of all endomorphisms, you get another monoid. Recall, however, that in our approach to a monad, we used a more general definition of a monoid as an object in a monoidal category. The construction was based on two morphisms:

μ :: m ⊗ m -> m
η :: i -> m

The reversal of these morphisms produces a comonoid in a monoidal category:

δ :: m -> m ⊗ m
ε :: m -> i

One can write a definition of a comonoid in Haskell:

class Comonoid m where
  split   :: m -> (m, m)
  destroy :: m -> ()

but it is rather trivial. Obviously destroy ignores its argument.

destroy _ = ()

split is just a pair of functions:

split x = (f x, g x)

Now consider comonoid laws that are dual to the monoid unit laws.

lambda . bimap destroy id . split = id
rho . bimap id destroy . split = id

Here, lambda and rho are the left and right unitors, respectively (see the definition of monoidal categories). Plugging in the definitions, we get:

lambda (bimap destroy id (split x))
= lambda (bimap destroy id (f x, g x))
= lambda ((), g x)
= g x

which proves that g = id. Similarly, the second law expands to f = id. In conclusion:

split x = (x, x)

which shows that in Haskell (and, in general, in the category Set) every object is a trivial comonoid.

Fortunately there are other more interesting monoidal categories in which to define comonoids. One of them is the category of endofunctors. And it turns out that, just like the monad is a monoid in the category of endofunctors,

The comonad is a comonoid in the category of endofunctors.

The Store Comonad

Another important example of a comonad is the dual of the state monad. It’s called the costate comonad or, alternatively, the store comonad.

We’ve seen before that the state monad is generated by the adjunction that defines the exponentials:

L z = z × s
R a = s ⇒ a

We’ll use the same adjunction to define the costate comonad. A comonad is defined by the composition L ∘ R:

L (R a) = (s ⇒ a) × s

Translating this to Haskell, we start with the adjunction between the Product functor on the left and the Reader functor on the right. Composing Product after Reader is equivalent to the following definition:

data Store s a = Store (s -> a) s

The counit of the adjunction taken at the object a is the morphism:

ε_a :: ((s ⇒ a) × s) -> a

or, in Haskell notation:

counit (Prod (Reader f) s) = f s

This becomes our extract:

extract (Store f s) = f s

The unit of the adjunction is:

unit :: a -> Reader s (Product a s)
unit a = Reader (\s -> Prod a s)

We construct δ, or duplicate, as the horizontal composition:

δ :: L ∘ R -> L ∘ R ∘ L ∘ R
δ = L ∘ η ∘ R

We have to sneak η through the leftmost L, which is the Product functor. It means acting with η, or Store f, on the left component of the pair (that’s what fmap for Product would do). We get:

duplicate (Store f s) = Store (Store f) s

(Remember that, in the formula for δ, L and R stand for identity natural transformations whose components are identity morphisms.)

Here’s the complete definition of the Store comonad:

instance Comonad (Store s) where
  extract (Store f s) = f s
  duplicate (Store f s) = Store (Store f) s

You may think of the Reader part of Store as a generalized container of as that are keyed using elements of the type s. For instance, if s is Int, Reader Int a is an infinite bidirectional stream of as. Store pairs this container with a value of the key type, here an Int, and extract uses this integer to index into the infinite stream. You may think of the second component of Store as the current position.

Continuing with this example, duplicate creates a new infinite stream indexed by an Int. This stream contains streams as its elements. In particular, at the current position, it contains the original stream. But if you use some other Int (positive or negative) as the key, you’d obtain a shifted stream positioned at that new index.

In general, you can convince yourself that when extract acts on the duplicated Store it produces the original Store (in fact, the identity law for the comonad states that extract . duplicate = id).
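
A tiny example of this intuition, with s = Int (the name is mine):

squaresStore :: Int -> Store Int Int
squaresStore = Store (\n -> n * n)

-- extract (squaresStore 5)              ==>  25
-- extract (duplicate (squaresStore 5))  ==>  squaresStore 5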

The Store comonad plays an important role as the theoretical basis for the Lens library. Conceptually, the Store s a comonad encapsulates the idea of “focusing” (like a lens) on a particular substructure of the data type a using the type s as an index. In particular, a function of the type:

a -> Store s a

is equivalent to a pair of functions:

set :: a -> s -> a
get :: a -> s

If a is a product type, set could be implemented as setting the field of type s inside of a while returning the modified version of a. Similarly, get could be implemented to read the value of the s field from a. We’ll explore these ideas more in the next section.
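
A minimal sketch of this splitting in code (the name toLens is mine):

toLens :: (a -> Store s a) -> (a -> s, a -> s -> a)
toLens coalg = (get, set)
  where
    get a   = let Store _ s = coalg a in s
    set a s = let Store f _ = coalg a in f s

Going the other way, \a -> Store (set a) (get a) packs a getter and a setter back into a coalgebra.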

Challenges

  1. Implement Conway’s Game of Life using the Store comonad. Hint: What type do you pick for s?

Acknowledgments

I’m grateful to Edward Kmett for reading the draft of this post and pointing out flaws in my reasoning.

Next: F-Algebras.
