Why learn Haskell

Dirk van Deun, dirk at dinf.vub.ac.be
April 2011

This is quite a hard question to answer off the cuff, so I've written down an extensive answer here. I mostly take my second favourite family of languages, the Lisps, as a starting point for comparison: the contrast with C-like languages is of course even greater.

There are many reasons to learn Haskell, but I will pick just one and explain it in some detail. I hope that this will be more appealing than a complete but dry enumeration. I will pick "programming in the small": you should learn Haskell because it is great for programming in the small. When it comes to programming in the large, I also feel that the advantages of Haskell outweigh its disadvantages, although that is debatable; but when it comes to programming algorithms as opposed to whole programs, the advantages are just about all there is.

In Haskell, the most natural way to write code is at a high level of abstraction. Of course you can write code at a high level of abstraction in any language, but programming with higher order functions in C, for instance, is not really taking the path of least resistance. And if your more abstract code is also harder to write and harder to debug, you have not gained much.

Programming at a high level of abstraction should reduce the number of opportunities you have to introduce bugs. In my experience, when it comes to programming in the small, the number of bugs correlates with the number of loops or recursions; also with the number of branches introduced by conditionals; and with the number of identifiers (variables) to keep track of, especially in the presence of assignments.

That is why the Lisp map function is so brilliant. What would otherwise require a loop (over all elements of the list), a conditional (checking for the end condition, the empty list), and an extra identifier (to hold the current node) can be written as one simple map expression, making it almost impossible to introduce a bug. But the uses of map are limited, as are the uses of its siblings, like filter and reduce. First, these functions can only be used over finite lists, and they always process the complete list. So for "find the first element in a list l that satisfies property p", you still have to write:

   (let loop ((lst l))
     (if (p (car lst))
         (car lst)
         (loop (cdr lst))))

or grow a collection of ad hoc functions; whereas in Haskell, you would write:

   head (filter p l)

(Well, you would probably write head $ filter p l, with the dollar operator, which eases the burden on the reader's mental stack in expressions with many subexpressions; but the equivalent expression above will look more familiar to most people.)

Because of Haskell's laziness, this expression will not process the whole list; and because Haskell is purely functional, laziness will not change the semantics (which it could do in a non-pure language, by dropping side effects that should happen when the unused part of the list is generated). This makes higher order functions much more useful in Haskell than they are in Lisp. (Sceptics who wonder what is wrong with needing a separate library function find-first are invited to locate the find-second function in their language manual.)
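
To see this laziness at work, here is a minimal sketch (the predicate is just an arbitrary example): the list [1..] is infinite, yet the expression below terminates, because filter produces elements only on demand and head demands exactly one.

   head (filter (> 1000) [1..])   -- evaluates to 1001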

I will of course grant that the same programming style is possible in other languages, for instance with delay and force in Scheme; but there it is not the path of least resistance, and people simply do not program in Scheme in this style. (Also, the ordering of side effects becomes hard to predict if delay and force are used pervasively.)

The second limitation is that such higher order functions are traditionally used only for iterating over data structures. You still use explicit loops or recursions to iterate over, for instance, the integers. You could construct lists of integers, of course, but that seems like a bad idea. One reason is that you would often have to start from the endless list of all integers, which would take endless time to construct; but in Haskell, laziness once again solves that problem. The other issue is efficiency. Let us generate a list of the values 1 to 100 and sum them:

   sum [1..100]

We would expect this to be very inefficient, because this expression first builds a list of integers and then destroys it again, only to return a single number. The amazing thing is that it will probably be about as efficient as:

   (let loop ((n 1))
     (if (> n 100)
         0
         (+ n (loop (+ n 1)))))

This is because building a list and then destroying it again can be optimized away in very many cases (GHC calls this list fusion; the technique is also known as deforestation); and once again, this is only possible because Haskell is purely functional and lazy (as the building and destroying operations have to be interleaved). This allows us to write clear and simple code in terms of lists and higher order functions without sacrificing efficiency. Granted, a library exists in Common Lisp that allows you to do similar things under certain conditions (which the programmer has to check), but using that library is not the natural thing for a Lisper to do.
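
As a small sketch of what the optimizer can do (assuming GHC, whose Prelude list functions are written to fuse; the function name is mine), the following pipeline will typically compile to a single accumulating loop, with no intermediate list allocated:

   -- map and the enumeration fuse with sum into one loop (under GHC's
   -- usual optimizations); no 1..n list is ever built in memory.
   sumOfSquares :: Int -> Int
   sumOfSquares n = sum (map (^ 2) [1 .. n])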

Similarly, and a boon for thinking at a high level of abstraction, higher order functions on actual lists also compose in Haskell without loss of efficiency. A map over a map is just fine: there is no need to combine the two mapped functions by hand, nor to use a mildly obscure Common Lisp library.
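
A sketch, with two arbitrary functions: the compiler can rewrite the nested maps below into a single traversal of xs, so the composed version costs no more than a hand-fused one.

   map (* 2) (map (+ 1) xs)   -- behaves like map ((* 2) . (+ 1)) xs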

The end result is that in Haskell you will use considerably less explicit recursion: you can map and filter and fold your way out of almost anything. Fewer branching conditionals and fewer variables follow. This is the main reason idiomatic Haskell code is so extremely concise, and why going back to any other language makes programming seem like busywork. More importantly, it is much easier to convince yourself that there are no bugs in the two lines of Haskell in the examples above than in the eight lines of equivalent Scheme.

There are other causes for the concision of Haskell code: superficial ones, like pattern matching on function arguments, as well as more fundamental ones, like monads and list comprehensions. Simple but cool, for instance, is passing partially applied functions: being able to pass get "id" instead of (lambda (obj) (get "id" obj)) avoids clutter and allows you to be more creative with higher order functions without making your code unreadable. The same goes for point-free programming: uniq . sort is much easier on the reader than (lambda (lst) (uniq (sort lst))). Incidentally, Haskell programmers tend to get cocky and compose very complex expressions of this type (because they can), which substitutes one source of bugs for another; but fortunately the type inferencer detects this kind of bug very well. Why static typing is such a great fit for purely functional languages is, however, a separate topic.
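
For a concrete, runnable version of these two idioms (nub from Data.List is the standard library's duplicate remover, standing in for the hypothetical uniq above; the names dedupe and incrementAll are just for illustration):

   import Data.List (nub, sort)

   dedupe :: Ord a => [a] -> [a]
   dedupe = nub . sort        -- point-free composition, no lambda needed

   incrementAll :: [Int] -> [Int]
   incrementAll = map (+ 1)   -- (+ 1) is a partially applied operator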

So we can use a higher level of abstraction when iterating over ranges of values, as well as over the elements of data structures. That leaves one thing: filling arrays in one swoop. Immutable arrays, known from the APL family of languages, support a very high-level programming style in which arrays are constructed by a single expression and never updated. This not only reduces the number of explicit loops or recursions, but also does away with lots of assignments that might make your program hard to understand. Immutable arrays are well supported in Haskell, and unlike APL arrays, their elements are evaluated lazily as well, so unused elements never actually get computed. In other words, this array is a cache for a function f over the domain [a..z]:

   array (a,z) [(i, f i) | i <- [a..z]]
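
As a slightly larger sketch of this caching idea (using the familiar Fibonacci numbers as an arbitrary example), here is a memoized fib built on Data.Array: each cell is filled lazily, at most once, and the recursive calls read earlier cells instead of recomputing them.

   import Data.Array

   fib :: Int -> Integer
   fib n = cache ! n
     where
       -- the array is defined in terms of itself; laziness makes this legal
       cache = array (0, n) [(i, go i) | i <- [0 .. n]]
       go 0 = 0
       go 1 = 1
       go i = cache ! (i - 1) + cache ! (i - 2)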

In conclusion: in Haskell, writing algorithms at a high level of abstraction (in what can often be called "executable specification style") is not just possible, but actually the path of least resistance: it leads to very concise and clear code, which can be compiled into efficient binaries thanks to laziness and purity.
