Categories for Programmers. Previously Products and Coproducts. See the Table of Contents.
We’ve seen two basic ways of combining types: using a product and a coproduct. It turns out that a lot of data structures in everyday programming can be built using just these two mechanisms. This fact has important practical consequences. Many properties of data structures are composable. For instance, if you know how to compare values of basic types for equality, and you know how to generalize these comparisons to product and coproduct types, you can automate the derivation of equality operators for composite types. In Haskell you can automatically derive equality, comparison, conversion to and from string, and more, for a large subset of composite types.
Let’s have a closer look at product and sum types as they appear in programming.
Product Types
The canonical implementation of a product of two types in a programming language is a pair. In Haskell, a pair is a primitive type constructor; in C++ it’s a relatively complex template defined in the Standard Library.
Pairs are not strictly commutative: a pair (Int, Bool) cannot be substituted for a pair (Bool, Int), even though they carry the same information. They are, however, commutative up to isomorphism — the isomorphism being given by the swap function (which is its own inverse):

swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)
You can think of the two pairs as simply using a different format for storing the same data. It’s just like big endian vs. little endian.
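To see this concretely, here is a small self-contained sketch (the standard library's Data.Tuple exports the same swap) showing that swapping twice gives back the original pair:

```haskell
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = do
  print (swap (1 :: Int, True))        -- (True,1)
  print (swap (swap (1 :: Int, True))) -- (1,True)
```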
You can combine an arbitrary number of types into a product by nesting pairs inside pairs, but there is an easier way: nested pairs are equivalent to tuples. It’s the consequence of the fact that different ways of nesting pairs are isomorphic. If you want to combine three types in a product, a, b, and c, in this order, you can do it in two ways:

((a, b), c)

or

(a, (b, c))
These types are different — you can’t pass one to a function that expects the other — but their elements are in one-to-one correspondence. There is a function that maps one to another:
alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))
and this function is invertible:
alpha_inv :: (a, (b, c)) -> ((a, b), c)
alpha_inv (x, (y, z)) = ((x, y), z)
so it’s an isomorphism. These are just different ways of repackaging the same data.
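As a quick sanity check, a sketch (using the two functions defined above) showing that the round trip is the identity on a sample nested pair:

```haskell
alpha :: ((a, b), c) -> (a, (b, c))
alpha ((x, y), z) = (x, (y, z))

alpha_inv :: (a, (b, c)) -> ((a, b), c)
alpha_inv (x, (y, z)) = ((x, y), z)

main :: IO ()
main = do
  let nested = ((1 :: Int, True), "three")
  print (alpha nested)             -- (1,(True,"three"))
  print (alpha_inv (alpha nested)) -- ((1,True),"three")
```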
You can interpret the creation of a product type as a binary operation on types. From that perspective, the above isomorphism looks very much like the associativity law we’ve seen in monoids:
(a * b) * c = a * (b * c)
Except that, in the monoid case, the two ways of composing products were equal, whereas here they are only equal “up to isomorphism.”
If we can live with isomorphisms, and don’t insist on strict equality, we can go even further and show that the unit type, (), is the unit of the product the same way 1 is the unit of multiplication. Indeed, the pairing of a value of some type a with a unit doesn’t add any information. The type:

(a, ())

is isomorphic to a. Here’s the isomorphism:

rho :: (a, ()) -> a
rho (x, ()) = x

rho_inv :: a -> (a, ())
rho_inv x = (x, ())
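Again, a small sketch checking both round trips on a sample value (the character 'x' is an arbitrary choice):

```haskell
rho :: (a, ()) -> a
rho (x, ()) = x

rho_inv :: a -> (a, ())
rho_inv x = (x, ())

main :: IO ()
main = do
  print (rho (rho_inv 'x'))       -- 'x'
  print (rho_inv (rho ('x', ()))) -- ('x',())
```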
These observations can be formalized by saying that Set (the category of sets) is a monoidal category. It’s a category that’s also a monoid, in the sense that you can multiply objects (here, take their cartesian product). I’ll talk more about monoidal categories, and give the full definition in the future.
There is a more general way of defining product types in Haskell — especially, as we’ll see soon, when they are combined with sum types. It uses named constructors with multiple arguments. A pair, for instance, can be defined alternatively as:
data Pair a b = P a b
Here, Pair a b is the name of the type parameterized by two other types, a and b; and P is the name of the data constructor. You define a pair type by passing two types to the Pair type constructor. You construct a pair value by passing two values of appropriate types to the constructor P. For instance, let’s define a value stmt as a pair of String and Bool:

stmt :: Pair String Bool
stmt = P "This statement is" False
The first line is the type declaration. It uses the type constructor Pair, with String and Bool replacing a and b in the generic definition of Pair. The second line defines the actual value by passing a concrete string and a concrete Boolean to the data constructor P. Type constructors are used to construct types; data constructors, to construct values.
Since the name spaces for type and data constructors are separate in Haskell, you will often see the same name used for both, as in:
data Pair a b = Pair a b
And if you squint hard enough, you may even view the built-in pair type as a variation on this kind of declaration, where the name Pair is replaced with the binary operator (,). In fact you can use (,) just like any other named constructor and create pairs using prefix notation:

stmt = (,) "This statement is" False
Similarly, you can use (,,) to create triples, and so on.
Instead of using generic pairs or tuples, you can also define specific named product types, as in:
data Stmt = Stmt String Bool
which is just a product of String and Bool, but it’s given its own name and constructor. The advantage of this style of declaration is that you may define many types that have the same content but different meaning and functionality, and which cannot be substituted for each other.
Programming with tuples and multi-argument constructors can get messy and error prone — keeping track of which component represents what. It’s often preferable to give names to components. A product type with named fields is called a record in Haskell, and a struct in C.
Records
Let’s have a look at a simple example. We want to describe chemical elements by combining two strings, the name and the symbol, and an integer, the atomic number, into one data structure. We can use a tuple (String, String, Int) and remember which component represents what. We would extract components by pattern matching, as in this function that checks if the symbol of the element is the prefix of its name (as in He being the prefix of Helium):

startsWithSymbol :: (String, String, Int) -> Bool
startsWithSymbol (name, symbol, _) = isPrefixOf symbol name
This code is error prone, and is hard to read and maintain. It’s much better to define a record:
data Element = Element { name         :: String
                       , symbol       :: String
                       , atomicNumber :: Int }
The two representations are isomorphic, as witnessed by these two conversion functions, which are the inverse of each other:
tupleToElem :: (String, String, Int) -> Element
tupleToElem (n, s, a) = Element { name         = n
                                , symbol       = s
                                , atomicNumber = a }
elemToTuple :: Element -> (String, String, Int)
elemToTuple e = (name e, symbol e, atomicNumber e)
Notice that the names of record fields also serve as functions to access these fields. For instance, atomicNumber e retrieves the atomicNumber field from e. We use atomicNumber as a function of the type:

atomicNumber :: Element -> Int
With the record syntax for Element, our function startsWithSymbol becomes more readable:

startsWithSymbol :: Element -> Bool
startsWithSymbol e = isPrefixOf (symbol e) (name e)
We could even use the Haskell trick of turning the function isPrefixOf into an infix operator by surrounding it with backquotes, and make it read almost like a sentence:

startsWithSymbol e = symbol e `isPrefixOf` name e
The parentheses could be omitted in this case, because an infix operator has lower precedence than a function call.
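Putting the record and the function together, here is a complete, runnable sketch; the helium value is a made-up example, not taken from the text:

```haskell
import Data.List (isPrefixOf)

data Element = Element { name         :: String
                       , symbol       :: String
                       , atomicNumber :: Int }

startsWithSymbol :: Element -> Bool
startsWithSymbol e = symbol e `isPrefixOf` name e

-- a hypothetical sample value, constructed with record syntax
helium :: Element
helium = Element { name = "Helium", symbol = "He", atomicNumber = 2 }

main :: IO ()
main = print (startsWithSymbol helium) -- True
```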
Sum Types
Just as the product in the category of sets gives rise to product types, the coproduct gives rise to sum types. The canonical implementation of a sum type in Haskell is:
data Either a b = Left a | Right b
And like pairs, Eithers are commutative (up to isomorphism), can be nested, and the nesting order is irrelevant (up to isomorphism). So we can, for instance, define a sum equivalent of a triple:
data OneOfThree a b c = Sinistral a | Medial b | Dextral c
and so on.
It turns out that Set is also a (symmetric) monoidal category with respect to coproduct. The role of the binary operation is played by the disjoint sum, and the role of the unit element is played by the initial object. In terms of types, we have Either as the monoidal operator and Void, the uninhabited type, as its neutral element. You can think of Either as plus, and Void as zero. Indeed, adding Void to a sum type doesn’t change its content. For instance:

Either a Void

is isomorphic to a. That’s because there is no way to construct a Right version of this type — there isn’t a value of type Void. The only inhabitants of Either a Void are constructed using the Left constructor, and they simply encapsulate a value of type a. So, symbolically, a + 0 = a.
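This isomorphism can be spelled out in code. Data.Void in the base library provides the empty type Void and the function absurd :: Void -> a, which lets us dismiss the impossible Right case (a sketch; the function names are made up for illustration):

```haskell
import Data.Void (Void, absurd)

-- The only way out of Either a Void is through Left;
-- the Right case is dismissed with absurd.
fromEither :: Either a Void -> a
fromEither (Left x)  = x
fromEither (Right v) = absurd v

toEither :: a -> Either a Void
toEither = Left

main :: IO ()
main = print (fromEither (toEither (5 :: Int))) -- 5
```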
Sum types are pretty common in Haskell, but their C++ equivalents, unions or variants, are much less common. There are several reasons for that.
First of all, the simplest sum types are just enumerations and are implemented using enum in C++. The equivalent of the Haskell sum type:
data Color = Red | Green | Blue
is the C++:
enum { Red, Green, Blue };
An even simpler sum type:
data Bool = True | False
is the primitive bool in C++.
Simple sum types that encode the presence or absence of a value are variously implemented in C++ using special tricks and “impossible” values, like empty strings, negative numbers, null pointers, etc. This kind of optionality, if deliberate, is expressed in Haskell using the Maybe type:
data Maybe a = Nothing | Just a
The Maybe type is a sum of two types. You can see this if you separate the two constructors into individual types. The first one would look like this:

data NothingType = Nothing

It’s an enumeration with one value called Nothing. In other words, it’s a singleton, which is equivalent to the unit type (). The second part:

data JustType a = Just a

is just an encapsulation of the type a. We could have encoded Maybe as:

type Maybe a = Either () a
More complex sum types are often faked in C++ using pointers. A pointer can be either null, or point to a value of specific type. For instance, a Haskell list type, which can be defined as a (recursive) sum type:
data List a = Nil | Cons a (List a)
can be translated to C++ using the null pointer trick to implement the empty list:
template<class A>
class List {
    Node<A> * _head;
public:
    List() : _head(nullptr) {}  // Nil
    List(A a, List<A> l)        // Cons
      : _head(new Node<A>(a, l)) {}
};
Notice that the two Haskell constructors Nil and Cons are translated into two overloaded List constructors with analogous arguments (none, for Nil; and a value and a list for Cons). The List class doesn’t need a tag to distinguish between the two components of the sum type. Instead it uses the special nullptr value for _head to encode Nil.
The main difference, though, between Haskell and C++ types is that Haskell data structures are immutable. If you create an object using one particular constructor, the object will forever remember which constructor was used and what arguments were passed to it. So a Maybe object that was created as Just "energy" will never turn into Nothing. Similarly, an empty list will forever be empty, and a list of three elements will always have the same three elements.
It’s this immutability that makes construction reversible. Given an object, you can always disassemble it down to parts that were used in its construction. This deconstruction is done with pattern matching and it reuses constructors as patterns. Constructor arguments, if any, are replaced with variables (or other patterns).
The List data type has two constructors, so the deconstruction of an arbitrary List uses two patterns corresponding to those constructors. One matches the empty Nil list, and the other a Cons-constructed list. For instance, here’s the definition of a simple function on Lists:

maybeTail :: List a -> Maybe (List a)
maybeTail Nil        = Nothing
maybeTail (Cons _ t) = Just t
The first part of the definition of maybeTail uses the Nil constructor as pattern and returns Nothing. The second part uses the Cons constructor as pattern. It replaces the first constructor argument with a wildcard, because we are not interested in it. The second argument to Cons is bound to the variable t (I will call these things variables even though, strictly speaking, they never vary: once bound to an expression, a variable never changes). The return value is Just t. Now, depending on how your List was created, it will match one of the clauses. If it was created using Cons, the two arguments that were passed to it will be retrieved (and the first discarded).
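Here is the function in action on a couple of sample lists. This is a self-contained sketch; deriving Show is added only so the results can be printed:

```haskell
data List a = Nil | Cons a (List a) deriving Show

maybeTail :: List a -> Maybe (List a)
maybeTail Nil        = Nothing
maybeTail (Cons _ t) = Just t

main :: IO ()
main = do
  print (maybeTail (Cons 1 (Cons 2 Nil) :: List Int)) -- Just (Cons 2 Nil)
  print (maybeTail (Nil :: List Int))                 -- Nothing
```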
Even more elaborate sum types are implemented in C++ using polymorphic class hierarchies. A family of classes with a common ancestor may be understood as one variant type, in which the vtable serves as a hidden tag. What in Haskell would be done by pattern matching on the constructor, and by calling specialized code, in C++ is accomplished by dispatching a call to a virtual function based on the vtable pointer.
You will rarely see union used as a sum type in C++ because of severe limitations on what can go into a union. Before C++11, you couldn’t even put a std::string into a union because it has a copy constructor.
Algebra of Types
Taken separately, product and sum types can be used to define a variety of useful data structures, but the real strength comes from combining the two. Once again we are invoking the power of composition.
Let’s summarize what we’ve discovered so far. We’ve seen two commutative monoidal structures underlying the type system: We have the sum types with Void as the neutral element, and the product types with the unit type, (), as the neutral element. We’d like to think of them as analogous to addition and multiplication. In this analogy, Void would correspond to zero, and unit, (), to one.
Let’s see how far we can stretch this analogy. For instance, does multiplication by zero give zero? In other words, is a product type with one component being Void isomorphic to Void? For example, is it possible to create a pair of, say, Int and Void?

To create a pair you need two values. Although you can easily come up with an integer, there is no value of type Void. Therefore, for any type a, the type (a, Void) is uninhabited — has no values — and is therefore equivalent to Void. In other words, a * 0 = 0.
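The same reasoning can be expressed in code: any function out of (a, Void) must go through absurd from Data.Void, and no total function can produce a value of (a, Void). A compile-time sketch (the name noPair is made up for illustration):

```haskell
import Data.Void (Void, absurd)

-- (Int, Void) has no values; a function out of it can use
-- absurd to produce a result of any type whatsoever.
noPair :: (Int, Void) -> b
noPair (_, v) = absurd v

main :: IO ()
main = putStrLn "typechecks" -- the point is that noPair compiles
```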
Another thing that links addition and multiplication is the distributive property:
a * (b + c) = a * b + a * c
Does it also hold for product and sum types? Yes, it does — up to isomorphisms, as usual. The left hand side corresponds to the type:
(a, Either b c)
and the right hand side corresponds to the type:
Either (a, b) (a, c)
Here’s the function that converts them one way:
prodToSum :: (a, Either b c) -> Either (a, b) (a, c)
prodToSum (x, e) = case e of
  Left  y -> Left  (x, y)
  Right z -> Right (x, z)
and here’s one that goes the other way:
sumToProd :: Either (a, b) (a, c) -> (a, Either b c)
sumToProd e = case e of
  Left  (x, y) -> (x, Left  y)
  Right (x, z) -> (x, Right z)
The case of statement is used for pattern matching inside functions. Each pattern is followed by an arrow and the expression to be evaluated when the pattern matches. For instance, if you call prodToSum with the value:

prod1 :: (Int, Either String Float)
prod1 = (2, Left "Hi!")

the e in case e of will be equal to Left "Hi!". It will match the pattern Left y, substituting "Hi!" for y. Since the x has already been matched to 2, the result of the case of clause, and the whole function, will be Left (2, "Hi!"), as expected.
I’m not going to prove that these two functions are the inverse of each other, but if you think about it, they must be! They are just trivially re-packing the contents of the two data structures. It’s the same data, only different format.
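For a concrete check, here is a round trip on the prod1 value defined above (a sketch; the Eq instances of the component types let us compare the result with the original):

```haskell
prodToSum :: (a, Either b c) -> Either (a, b) (a, c)
prodToSum (x, e) = case e of
  Left  y -> Left  (x, y)
  Right z -> Right (x, z)

sumToProd :: Either (a, b) (a, c) -> (a, Either b c)
sumToProd e = case e of
  Left  (x, y) -> (x, Left  y)
  Right (x, z) -> (x, Right z)

main :: IO ()
main = do
  let prod1 = (2, Left "Hi!") :: (Int, Either String Float)
  print (sumToProd (prodToSum prod1) == prod1) -- True
```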
Mathematicians have a name for such two intertwined monoids: it’s called a semiring. It’s not a full ring, because we can’t define subtraction of types. That’s why a semiring is sometimes called a rig, which is a pun on “ring without an n” (negative). But barring that, we can get a lot of mileage from translating statements about, say, natural numbers, which form a rig, to statements about types. Here’s a translation table with some entries of interest:
Numbers   | Types
----------|----------------------------------
0         | Void
1         | ()
a + b     | Either a b = Left a | Right b
a * b     | (a, b) or Pair a b = Pair a b
2 = 1 + 1 | data Bool = True | False
1 + a     | data Maybe a = Nothing | Just a
The list type is quite interesting, because it’s defined as a solution to an equation. The type we are defining appears on both sides of the equation:
data List a = Nil | Cons a (List a)

If we do our usual substitutions, and also replace List a with x, we get the equation:

x = 1 + a * x
We can’t solve it using traditional algebraic methods because we can’t subtract or divide types. But we can try a series of substitutions, where we keep replacing x on the right hand side with (1 + a*x), and use the distributive property. This leads to the following series:

x = 1 + a*x
x = 1 + a*(1 + a*x) = 1 + a + a*a*x
x = 1 + a + a*a*(1 + a*x) = 1 + a + a*a + a*a*a*x
...
x = 1 + a + a*a + a*a*a + a*a*a*a...
We end up with an infinite sum of products (tuples), which can be interpreted as: a list is either empty, 1; or a singleton, a; or a pair, a*a; or a triple, a*a*a; etc… Well, that’s exactly what a list is — a string of as!
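The correspondence can be sketched in code: each term of the series names the lists of one particular length (the helper names below are made up for illustration):

```haskell
data List a = Nil | Cons a (List a)

-- the "1" term: the single empty list
empty :: List a
empty = Nil

-- the "a" term: singleton lists, one for each value of a
single :: a -> List a
single x = Cons x Nil

-- the "a*a" term: two-element lists, one for each pair of values
fromPair :: (a, a) -> List a
fromPair (x, y) = Cons x (Cons y Nil)

len :: List a -> Int
len Nil        = 0
len (Cons _ t) = 1 + len t

main :: IO ()
main = print (map len [empty, single 'a', fromPair ('a', 'b')]) -- [0,1,2]
```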
There’s much more to lists than that, and we’ll come back to them and other recursive data structures after we learn about functors and fixed points.
Solving equations with symbolic variables — that’s algebra! It’s what gives these types their name: algebraic data types.
Finally, I should mention one very important interpretation of the algebra of types. Notice that a product of two types a and b must contain both a value of type a and a value of type b, which means both types must be inhabited. A sum of two types, on the other hand, contains either a value of type a or a value of type b, so it’s enough if one of them is inhabited. Logical and and or also form a semiring, and it too can be mapped into type theory:
Logic  | Types
-------|-------------------------------
false  | Void
true   | ()
a || b | Either a b = Left a | Right b
a && b | (a, b)
This analogy goes deeper, and is the basis of the Curry-Howard isomorphism between logic and type theory. We’ll come back to it when we talk about function types.
Challenges
- Show the isomorphism between Maybe a and Either () a.
- Here’s a sum type defined in Haskell:

  data Shape = Circle Float
             | Rect Float Float

  When we want to define a function like area that acts on a Shape, we do it by pattern matching on the two constructors:

  area :: Shape -> Float
  area (Circle r) = pi * r * r
  area (Rect d h) = d * h

  Implement Shape in C++ or Java as an interface and create two classes: Circle and Rect. Implement area as a virtual function.
- Continuing with the previous example: We can easily add a new function circ that calculates the circumference of a Shape. We can do it without touching the definition of Shape:

  circ :: Shape -> Float
  circ (Circle r) = 2.0 * pi * r
  circ (Rect d h) = 2.0 * (d + h)

  Add circ to your C++ or Java implementation. What parts of the original code did you have to touch?
- Continuing further: Add a new shape, Square, to Shape and make all the necessary updates. What code did you have to touch in Haskell vs. C++ or Java? (Even if you’re not a Haskell programmer, the modifications should be pretty obvious.)
- Show that a + a = 2 * a holds for types (up to isomorphism). Remember that 2 corresponds to Bool, according to our translation table.
Next: Functors.
Acknowledgments
Thanks go to Gershom Bazerman for reviewing this post and helpful comments.
January 14, 2015 at 1:41 am
Really great post. Looking forward to more like this.
January 18, 2015 at 1:47 pm
Very interesting stuff. One nitpick if I may: shouldn’t the correspondence of the Either type in logic be the logical XOR as opposed to the logical OR? As far as I know, Either a b can never be both a and b.
January 18, 2015 at 3:07 pm
@Peter: I guess I have oversimplified it. Let me rephrase it: The type Either a b is inhabited if any of the two types a or b are inhabited. Which is also true if both are inhabited. The only uninhabited case is Either Void Void, which corresponds to False || False. I’ll edit the post accordingly.

April 7, 2015 at 9:36 pm
This post mentions that coproducts in Set show that set is a symmetric monoidal category, but there is no more mention of what it means to be symmetric. I’d suggest that should either be: left out, explained, cross referenced to prior writing that I missed 🙂
May 3, 2015 at 9:43 am
Does anybody have a solution to the idea to make an ADT in C++? The best I come with is having a base class that plays a role of type constructor, i.e. it have a constructor that calls a constructor of «derived» classes, and adds the created class to an internal union with pointers. The method «area()» in the exercise would then check which one member active in the union, and call their «area()».
One can’t do area() to be a pure virtual, because then you couldn’t use a constructor of the «base» class. And no need for inheritance here, actually.
And it is a horrible solution, at least because one could just use a «derived» class separately. Also, if data constructors signature the same (i.e. it’s two constructors that takes exactly one integer), you probably need to add one more argument to find which one ought to construct the «type constructor».
August 6, 2015 at 5:29 pm
Since C++11, std::string and other types with non-trivial special member functions are allowed in unions.
July 29, 2016 at 8:09 am
I’m struggling with one little thing. Suppose I have a coproduct «Either a b». Thus I have two arrows, “a → Either” and “b → Either”.
So far so good, but the problem is that by definition of coproduct we should have at least one “NotACoproduct” type, with arrows a → NotACoproduct, b → NotACoproduct, and Either → NotACoproduct.
What is this type NotACoproduct, and most importantly, why don’t you mention it? Could it be easily omitted?
July 29, 2016 at 10:32 am
Take, for instance, Either Int Bool. Almost every type is not-a-coproduct of Int and Bool. Try Bool for instance as a candidate with these two injections:

You can check that the following is the unique refactoring:
You see the pattern? You branch on the two possibilities and apply the corresponding injections.
July 29, 2016 at 2:22 pm
Ah, I see. Thank you.
I’m wondering though: why is the requirement to have that second not-a-coproduct type? Does it have an actual reason, or it’s just happened to be like this? To me it seems like we could remove that requirement, and Either Int Bool still would have worked.
July 29, 2016 at 2:37 pm
The requirement is not that a not-a-coproduct type must exist. The requirement is that for every such type (if it exists) there is a unique factoring through the actual coproduct. That’s the essence of every universal construction.
August 3, 2016 at 9:29 am
Thank you! Category Theory is pretty inaccessible for me as a software engineer without a solid math background, and your posts are incredibly indispensable to me! I’m working my way through them sequentially and I am excited about working through them all!
Thank you!
April 24, 2017 at 4:03 pm
Currently C++17 offers a type-safe union / sum type with std::variant. For instance
variant<string, int> v("abc");
v = 12;
will work just fine with the compiler validating whether the types are correct. And yes, variants are mutable if not const.
August 23, 2017 at 11:10 am
A family of classes with a common ancestor may be understood as one variant type, in which the vtable serves as a hidden tag. What in Haskell would be done by pattern matching on the constructor, and by calling specialized code, in C++ is accomplished by dispatching a call to a virtual function based on the vtable pointer
It seems to me that these are very different from each other because pattern matching needs the real type at the call site to work but vtable have as you say a hidden tag which means the caller doesn’t need to know the tag to use the object. The decision was taken at initialization.
I think in haskell, existentials are the nearest concept to this kind of construction.
pattern matching would be more of a switch on steroid.
Am I missing something ?
Thanks,
Kevin
August 23, 2017 at 1:15 pm
In Haskell, pattern matching needs the type, but the variant information is not encoded in the type. It’s encoded in the hidden tag. In C++ the information is originally encoded in the type, but once you erase the actual type and cast the pointer to the base type, the information is stored in the hidden tag — the v-pointer.
The theories behind sum types and subtype polymorphism may be different, but they serve similar purpose.
September 17, 2017 at 2:33 pm
Great article and a great series of articles!
I think this sentence is incorrect: “But barring that, we can get a lot of mileage from translating statements about, say, natural numbers, which form a ring, to statements about types.”
From what I think I have understood, natural numbers also form a semiring, not a ring, as additive inverse is not defined for natural numbers. As in, it is also a ring without n.
Do let me know if I’m wrong, as that means I have misunderstood something important.
September 17, 2017 at 3:23 pm
Must have been a typo, right after explaining what a rig was 😉
Fixed!
February 2, 2018 at 1:11 pm
Regarding your challenge “Show the isomorphism between “Maybe a” and “Either () a,” you define isomorphism in the “Products and coproducts” post this way:
“An isomorphism is an invertible morphism; or a pair of morphisms, one being the inverse of the other.”
By “invertible” you mean just that, two arrows that link two objects, but in opposite directions, correct? It has nothing to do with being an invertible function, because “isomorphism” applies to any category, including those that do not involve functions. Just a pair of arrows in opposite directions.
However, in this challenge, it seems you want to see not only an invertible morphism, but an invertible function, a one-to-one correspondence between “Maybe a” and “Either () a”.
I say that because, if you just mean a pair of morphisms in opposite directions, then the fact that these types are “Maybe a” and “Either () a” seems completely irrelevant. Any two types A and B that are not Void are isomorphic because there is always an isomorphism between A and B, namely that consisting of a pair of functions f: A -> B and another g: B -> A. g does not need to be the inverse of f. Any such pair of function will do because they will form a pair of morphism in opposite directions and this will, according to your definition, be isomorphic. So you might as well have stated the challenge simply as: “Show the isomorphism between any two non-empty types”.
But, like I said, it seems patently obvious that you want a proof that there is a one-to-one correspondence between the elements of “Maybe a” and “Either () a”, but at the same time you are asking for an isomorphism and isomorphisms have nothing do to with invertible functions.
Am I missing something? Thanks again.
February 3, 2018 at 1:47 am
Invertible means they are the inverse of each other, that is g.h = id and h.g = id. This definition works in any category.
February 3, 2018 at 3:45 am
Hmm… Sorry, I don’t see how that answers my question.
Yes, that is the definition of invertible morphism. But your challenge seems to refer to a different notion of invertible, namely invertible function, rather than morphism, because it provides the specific types. Or are the specific types irrelevant? Could the challenge have been stated as “Show the isomorphism between any two non-empty types” and involve the exact same steps in its proof?
February 4, 2018 at 8:50 am
The definition of isomorphism works in any category, that means also in the category of types and functions. Implementing the two functions is pretty straightforward. The challenge is to show, using equational reasoning, that they are the inverse of each other.
February 4, 2018 at 9:35 am
“The challenge is to show, using equational reasoning, that they are the inverse of each other.”
But that’s the thing! To show they are isomorphic in category theory, there is no need to do equational reasoning or even to consider what types they are!
Let me spell out an answer to the challenge that makes my point more clear:
Both types are non-empty. Therefore, there is some function Maybe a -> Either () a., for example the function mapping any element in Maybe a to the value (). So there is a morphism from Maybe a to Either () a. Likewise, there is a function Either () a -> Maybe a, for example the function mapping any element to (). Therefore there is a morphism from Either () a to Maybe a. Therefore, there is a pair of morphisms between these types and they are isomorphic.
Note that at no point in the above proof did I need to use any knowledge about Maybe and Either. I just used the fact that they have at least one value. I could have written it for any other non-empty types A and B.
Note that I picked trivial functions that are not the inverse of each other, and it still all works at the category theory level.
Sorry to insist, but it seems we may be talking past each other here and maybe I am missing something important.
February 4, 2018 at 9:45 am
What do you mean “it works at the category theory level”? What’s the composition of the two morphisms? Can you show it’s id?
February 4, 2018 at 4:59 pm
Maybe this is where the misunderstanding lies…
My understanding is that an isomorphism “at the category theory level” is simply a pair of morphisms on two objects, going in opposite directions. The definition doesn’t mention “ids”.
In fact, I am very surprised to even hear about an id at all. I didn’t know arrows (morphisms) had ids. Isn’t the whole point of category theory that we just have “objects and arrows” without regard for their inner structure?
If I understand what you mean by id, the identity of the morphisms I was talking about are what I described above: a pair of functions mapping one element in each of the types to some (arbitrary) element in the other type. This forms a pair of morphisms in opposite directions and therefore an isomorphism, according to my understanding of the definition of isomorphism in categories.
But these functions are not the inverse of each other and we don’t need to know that these types are Maybe a and Either () a in order to define them.
Hopefully that clarifies things further. Thanks.
February 8, 2018 at 1:37 pm
I thought some more and identified the source of my misunderstanding.
You say “An isomorphism is an invertible morphism; or a pair of morphisms, one being the inverse of the other”, but what “invertible” means in categories (not just in the category of sets and functions) was not explicitly defined. I looked up the definition of isomorphism and found out that it is a morphism f such that there is another morphism g and the composition of f and g is the identity morphism. But my understanding of “identity morphism” was that it was any morphism linking an object to itself.
I understood it like that because of the passage:
“For every object A there is an arrow which is a unit of composition. This arrow loops from the object to itself. Being a unit of composition means that, when composed with any arrow that either starts at A or ends at A, respectively, it gives back the same arrow.”
and then interpreting “same arrow” to mean an arrow linking the same nodes.
And then this led me to believe that any pair of morphisms between two objects would form an isomorphism, which in turn led me to conclude that any two non-empty types are isomorphic because there is always such a pair between them, which in turn led me to believe that, in that challenge, whether the types were Maybe a and Either () a was a completely irrelevant fact.
But now I see that not every morphism linking a node to itself is an identity morphism, and that in the category of types and functions this is the identity function, so things make sense again…
Actually this hints at something deeper, which is that saying “a category is just objects and arrows” is misleading. Just giving someone some objects linked by arrows does not tell someone, for example, about the isomorphisms in it. One needs to also be told what the composition operator is, and which morphisms linking a node to itself are to be considered identity morphisms, to check whether two morphisms do compose into the identity morphism.
I hope this is not too long, but this was a pretty tricky confusion for me, especially the paragraph above, so it might also be for someone else.
February 9, 2018 at 1:32 pm
Rodrigo, In the chapter Category: The essence of composition I explicitly say: “But the essence of a category is composition.” Maybe I should have stressed the fact that there may be many looping arrows, and only one of them is identity. This becomes obvious in the next chapter that describes monoids.
February 9, 2018 at 5:50 pm
Yes, now I see it. 🙂 Thanks!
November 30, 2018 at 7:23 am
Hi Bartosz,
Thank you for such a great article, my question is:
Is it correct to show the isomorphism using the algebra of types:
Maybe a ~ 1 + a
Either () a ~ 1 + a
?
December 2, 2018 at 12:11 pm
Yes.
June 13, 2019 at 1:37 am
Hi Bartosz, thank you for the series!
I don’t follow the example with the boolean type:
I get that Maybe is done this way, because Nothing has exactly one value, and thus corresponds to unit, and the (+) corresponds to “logical or”.
However, how can you make a statement about 2 = 1 + 1? I see that Bool can have two values, which are both units, so 1 or 1. But how that can equal to “2” ? Of course 1+1 = 2, but how does that translate to the type system? Is there some value of Bool, which could be considered as 2? (I hardly think so for binary 0/1 values).
In the table you do not make any other expressions of equality.
June 13, 2019 at 9:13 am
Consider it just a new short name for 1+1.
January 23, 2021 at 7:53 am
Hello Bartosz,
When you write I’ll talk more about monoidal categories, and give the full definition in the future, is it didactically intentional that you vaguely write “the future” rather then giving a hard reference (which I guess would be “Section 22.1”)?
If this is the case, then I’ll trust in your approach (and won’t peep into the future, ahahah). If it is not, and you’d appreciate cross references instead of “later”s, “in the future”s, et similia, I can make some PR on GitHub.
January 23, 2021 at 8:29 am
Obviously, when I was blogging, I couldn’t reference the future, but feel free to fix it on GitHub.
May 20, 2023 at 7:16 am
The alternative formulation of Maybe given as:

data Maybe a = Either () a

is slightly confusing IMO. Either is clearly not a value constructor.

I get that you might not want to bring up new syntax for type aliases yet, but perhaps just saying Maybe a is isomorphic to Either () a, or writing it as:

data Maybe a = MkMaybe (Either () a)

would be clearer?
May 20, 2023 at 7:53 am
Good point. I changed it to a type alias.