Implications of Easy October 16, 2005 | 12:49 pm

Chia's jibe about Ocaml reminded me of what I was thinking earlier today- that there is a difference, in terms of the Sapir-Whorf hypothesis, between something being possible and something being easy in a given computer programming language.

OK, a quick side trip- for those who don't know about the Sapir-Whorf hypothesis, that's the theory that the language(s) you speak influence what thoughts you can think. Basically, if you can't express it, you can't think it. While it may or may not be true for natural languages, it is most undoubtedly true for computer languages. Because it's not about doing the thing that is easy- it's about doing the things that are now possible (if difficult) because that thing is easy.

The classic example of this is implementing objects. You can implement objects in C- I have. While not "difficult" in some absolute sense, it is significantly more difficult to do in C than in a language that supports objects, like Java. And inheriting objects, and handling virtual functions, is significantly clunkier in C than in Java or Python or etc. Not to mention bugs can be significantly harder to track down, as you now have to verify not only that the function is correct, but that the function is being called correctly (something you can just assume with an object oriented language).

But once the mechanics of handling objects are easy, a whole new design world opens up- and with it, whole new ways to think about and approach problems. The best expression of this whole new way of thinking (that I know of) is still the Gang of Four's "Design Patterns". Object oriented programming isn't about the mechanics of inheritance, it's about the next level up- it's about the design pattern space objects allow for. Take an experienced Object programmer and drop him back into a procedural group, and he will write "incomprehensible" code to the procedural programmers. Not because they won't understand the mechanics of what his code is doing (in fact, they are likely to understand the mechanics better than the Object programmer), but because they won't understand the idioms his program is using. A fellow Object programmer would very rapidly start going "OK- that's a decorator, and that's a memento, and …", but the old-style imperative programmers will not be able to see the forest for all the trees in the way.

That is the problem that Chia is running into- I've introduced him to new ways of thinking about and approaching problems, which has "infected" his previous programming habits. And is now making the Java programmers look at him weirdly, much the way those old-fashioned procedural programmers would look at a transplanted Java programmer. Note that I'm going through the same process myself, even as we speak- this isn't a wise and experienced sage passing on the knowledge of the ages to a child-apprentice, this is the kid from the next grade up going "oh, yeah- you do that this way…" Which is why I'm acutely aware of the transformation process.

What I wanted to comment on was some of the things that, while possible in other languages, Ocaml makes easy, and thus encourages the exploration of new pattern spaces. By the way, if you don’t already know it, I’m a huge fan of the Ocaml programming language.

The first thing Ocaml makes easy is expressing the of relationship. OO programmers all know about the is-a and has-a relationships- a car is-a vehicle while a car has-a radio. This relationship is generally taught as a way to decide whether to use inheritance or member variables to express the relationship. Both are expressing a relation between two types- does type A inherit from type B, or contain a type B as a member? But there is also an of relationship between types. We want a list of ints, or an array of floats, etc. The of relationship is not static- we want to be able to express a level of generality- we want to be able to express the concept of a list of whatever- I don't care what type the list contains, just that it is a list. The code to calculate the length of a list doesn't care what type the list holds.
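To make that last point concrete, here's a quick sketch of a length function in Ocaml (mine, nothing fancy)- its inferred type is 'a list -> int, which says nothing at all about the element type:

```ocaml
(* The "of" relationship in action: this works on a list of any
   element type -- Ocaml infers the type 'a list -> int. *)
let rec length = function
    | [] -> 0
    | _ :: t -> 1 + length t

let () =
    assert (length [1; 2; 3] = 3);     (* an int list *)
    assert (length ["a"; "b"] = 2)     (* a string list -- same function *)
```

The same compiled function handles both calls- no casting, no templates, no loss of type information.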

This is the great failing of most "popular" programming languages, they don't have a good way to express the of relationship. Well, you can do it in most languages, using templates in C++, or casting to Object in Java (or, coming soon, using generics in Java- which is just syntactic sugar over casting to type Object). This is because both Java and C++ are effectively still using the Algol-68 type system (with a few extensions and now a few kludges), and because Algol-68 didn't have a good way to express the of relationship, neither does C++ nor Java.

Ocaml has the concept of universal types. A universal type starts with an apostrophe and then follows with a name, so 'a, 'key, 'some_type are all universal types. This isn't that big of a change on the lowest level, but it has a bunch of profound implications. First of all, it makes it easy for me to talk about 'a lists- literally, "a list of some type, call it a". It allows me to easily express type concepts, such as "the map function takes a reference to a function that converts a type a to a type b, and a list of type a, and returns a list of type b". Actually, saying it in English is significantly more verbose than saying it in Ocaml, where it's just ('a -> 'b) -> 'a list -> 'b list.

This is writable in Java and C++, but it’s not “natural”- it’s not a function that most Java and C++ programmers will write immediately for a new data structure (while it’s one of the first three your average Ocaml programmer will write- the other two being iter and fold). Worse yet, it’s one they are not likely to ever write, as they are likely to say “if you need to do that, it’s easy enough to do with a loop and an iterator”- and thus they miss the whole design pattern space waiting for them.
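For reference, map, iter, and fold are all sitting right there in Ocaml's standard List module- here's a quick sketch of what the other two look like in use:

```ocaml
(* iter : ('a -> unit) -> 'a list -> unit
     -- do something (a side effect) to each element.
   fold_left : ('b -> 'a -> 'b) -> 'b -> 'a list -> 'b
     -- boil a list down to a single value. *)
let () =
    List.iter print_endline ["one"; "two"];
    assert (List.fold_left (+) 0 [1; 2; 3] = 6)
```

Like map, each of these understands lists and nothing else- the function argument supplies the other half of the problem.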

I haven't seen much talk about this, so I've privately named the design pattern "half functions"- and they're all over Ocaml code. Functions which understand one half (or one third, or one thirty-second) of the problem, which are waiting to be married with another function (or functions) which understand the other half (or two thirds, etc.). The map function is a great example of this- it understands lists, but not anything else. Pair it with a function that knows how to convert, say, ints to floats, and now you have a useful function that can convert a list of ints into a list of floats.

"So what?" says the C++ programmer. Well, say I have 6 different data structures- lists, arrays, trees, heaps, queues, and stacks. I define map on all six of them, for 6 functions. Then I define six different conversion functions, say int to float, float to int, int to string, string to int, float to string, and string to float. So I've now written 12 different functions- but, by pairing the two halves up different ways, I have 36 different functions. I pick the data structure I want to convert from column A, and the conversion I want to apply from column B, and I have my data structure conversion function. We have, in essence, factored out and reused the common code from all the list conversion functions, and factored out and reused the common code from all the int to float conversion functions, and thus have achieved a significant reduction in code size. This is something the loop with an iterator pattern can not achieve.
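A minimal sketch of the column A/column B pairing, using two of the standard library's map functions and a couple of the conversion functions named above:

```ocaml
(* Column B: plain conversion functions, knowing nothing of structure. *)
let int_to_float = float_of_int
let float_to_string = string_of_float

(* Column A: List.map and Array.map, knowing nothing of conversions.
   Any pairing of the two columns yields a working conversion. *)
let () =
    assert (List.map int_to_float [1; 2] = [1.0; 2.0]);
    assert (Array.map float_to_string [| 1.5 |] = [| "1.5" |])
```

Each half is written once; the pairings come for free.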

Note that we've also opened up dramatic possibilities for performance enhancements. For example, the naive loop over an iterator converting a tree to a new type has an O(N log N) complexity- enumerating the elements of the old tree is O(N) cost, but after converting the element into the new type, you have to insert it into the new tree, at a cost of O(log N). With the half-function approach, I can optimize the conversion routine to operate in O(N) time, instead of O(N log N). Now that I've explained the idea to you, you're probably going "yeah- that could be written". But it's clunky and "unnatural". Here, by the way, are three implementations of the map function, on lists, in Ocaml, C++, and Java:


let rec map f = function
    | h :: t -> (f h) :: (map f t)
    | [] -> []


template<class a, class b> std::list<b> map(b (*f)(const a &), const std::list<a> &alst) {
    std::list<b> blst;
    for (typename std::list<a>::const_iterator i = alst.begin(); i != alst.end(); ++i) {
        blst.push_back(f(*i));
    }
    return blst;
}

import java.util.Iterator;
import java.util.LinkedList;

class Map {
    interface Convert {
        public Object fn(Object o);
    }
    public static LinkedList map(Convert f, LinkedList a) {
        Iterator i = a.listIterator(0);
        LinkedList b = new LinkedList();
        while (i.hasNext()) {
            b.add(f.fn(i.next()));
        }
        return b;
    }
}
The Java version of the code brings up an important point. To write the map function in Java, I basically have to completely subvert the type system. I didn't have to do it explicitly in this code, but all the code that touches this code is going to have to cast everything up to and down from Object all over the place. We have completely lost the concept that we're converting a list of type a to a list of type b, as the types a and b have themselves been lost. This has led a large number of people to assume that static type checking is fundamentally broken- after all, you can't write the simplest data structure in Java without completely subverting it. So why have it at all? The answer is that you shouldn't have to subvert the type system for this. If the type system is sufficiently expressive, you don't have to subvert it- at which point the type system is no longer an enemy to be done away with, but an ally, finding bugs in your code for you.
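Going back to the tree claim for a moment, here's a sketch of what the O(N) version looks like- a structure-preserving map over a hypothetical binary tree type (not any of the structures above), which visits each node exactly once instead of enumerating and re-inserting. One caveat: for a search tree this is only valid when the conversion function preserves the ordering of the keys.

```ocaml
(* A hypothetical binary tree, and a map that copies its shape.
   Each node is visited once, hence O(N) rather than O(N log N). *)
type 'a tree = Leaf | Node of 'a tree * 'a * 'a tree

let rec tree_map f = function
    | Leaf -> Leaf
    | Node (l, x, r) -> Node (tree_map f l, f x, tree_map f r)

let () =
    let t = Node (Node (Leaf, 1, Leaf), 2, Node (Leaf, 3, Leaf)) in
    assert (tree_map float_of_int t
            = Node (Node (Leaf, 1.0, Leaf), 2.0, Node (Leaf, 3.0, Leaf)))
```

Note the shape of tree_map exactly mirrors the list version- the half-function pattern carries over unchanged.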

Another thing that being able to express the of relationship encourages you to do is to build complex data structures out of simple data structures. If you can have a list of arrays of trees of queues of stacks of whatevers easily, you are more likely to go for the complicated data structure. It all of a sudden seems natural, if not even a bit naive, to represent a sparse matrix as a list of tuples of an int and a list of tuples of an int and a float, or in Ocaml, a (int * ((int * float) list)) list. This is a medium-complicated data structure in Ocaml. After all, it's only two structures deep (not counting the tuples). In code I just wrote, I was using a map of strings to maps of ints to multidimensional maps of strings to strings. In Ocaml code, this is:

module StringMap = Map.Make(String);;
module OrdInt = struct
    type t = int
    let compare (x: int) y = if x < y then -1 else if x > y then 1 else 0
end;;
module IntMap = Map.Make(OrdInt);;
type node_t =
    | Leaf of string
    | Fork of string * (node_t list)
type t = node_t list IntMap.t StringMap.t;;

11 lines, of which 6 should be (IMHO) in the standard library (all the module stuff- everything before the "type node_t =" line).
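As a self-contained sketch (re-declaring the modules so it compiles on its own), here's what building and reading one entry of that nested map might look like- the names and contents are hypothetical:

```ocaml
(* Re-declare the modules and types from the post so this stands alone. *)
module StringMap = Map.Make(String)
module OrdInt = struct
    type t = int
    let compare (x: int) y = if x < y then -1 else if x > y then 1 else 0
end
module IntMap = Map.Make(OrdInt)

type node_t =
    | Leaf of string
    | Fork of string * (node_t list)

type t = node_t list IntMap.t StringMap.t

(* Build one entry: "greetings" -> (1 -> [Leaf "hello"]). *)
let example : t =
    StringMap.add "greetings" (IntMap.add 1 [Leaf "hello"] IntMap.empty)
        StringMap.empty

(* Read it back out, two lookups deep. *)
let () =
    match IntMap.find 1 (StringMap.find "greetings" example) with
    | [Leaf s] -> assert (s = "hello")
    | _ -> assert false
```

Note how the lookups compose the same way the type does- a StringMap.find, then an IntMap.find, then an ordinary pattern match on the tree.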
This is just one aspect of the different ways Ocaml encourages you to approach problems differently. I could just as easily have written about lazy evaluation, or modules and functors, or higher order functions, or any of a dozen other examples. All of which you can do in other languages, but because the implementations are clunky, error prone, and generally unnatural, they are discouraged- and thus the pattern space they encourage remains unexplored.

Which is why I don’t accept the blame for introducing Chia to this, I claim responsibility. Brag about it, even. Because by being able to think in these new ways, and in these new patterns, both Chia and I are better programmers.

And that is something to be proud of, IMHO.


  • Candide

    My bad — you certainly deserve the responsibility, and you’re right: because of OCaml, we’re both better programmers. On the other hand, it’s a kind of Plato’s Cave issue: we see the light, and we try to share that, but it’s frustrating because what we’re doing seems “awkward” and “overcomplicated” to everyone else.

    The most common data structure I'm using these days is a lazy map: I just put in the parameters as an object, and the map resolves itself to the right answer. The Jakarta Commons Collections package is really nice for this (see the LazyMap class), and is especially powerful when combined with the WeakHashMap class in the core of the language ("It's cached — oh, is it gone? Here's another one."). The real danger is that you can get into some abusive object creation with any kind of nested results.

    The other thing that's difficult is concepts like: "I'll test using some function, and do some standard processing while it's true." I just did this the other day. These pluggable hunks of code normally get handled in an object-oriented manner by extending the class or (for the bad programmer) by copy-and-paste, but that's actually pretty awkward, and makes refactoring difficult.