Saturday, December 20, 2014

A commentary on 24 days of GHC extensions, part 3

It's time for some more Haskell opinions.

Day 13: Functional Dependencies

See day 12.

The addition of functional dependencies to multi-parameter type classes all of a sudden opened up the possibility of doing type level computations that had not been there before. The type level programming industry seems to have started with Thomas Hallgren's 2001 paper Fun with Functional Dependencies (which got some inspiration from the type level numeric computation available in (classic) Bluespec).
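
Here's a small illustration (my own, in the style of Hallgren's paper) of the kind of type-level computation functional dependencies enable: Peano numerals at the type level, where the dependency makes Add act like a function from the first two arguments to the third.

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
  {-# LANGUAGE EmptyDataDecls, FlexibleInstances, UndecidableInstances #-}

  data Zero
  data Succ n

  -- 'a b -> c' says the result c is determined by a and b
  class Add a b c | a b -> c
  instance Add Zero b b
  instance Add a b c => Add (Succ a) b (Succ c)

  -- the result type is computed by instance resolution
  add :: Add a b c => a -> b -> c
  add = undefined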

Day 14: Deriving

I don't know the history of deriving Functor et al. I believe Simon PJ added it after enough nagging.

I do have an intense dislike for GeneralizedNewtypeDeriving as implemented in GHC. As illustrated by GHC bug #1496 (found by Stefan O'Rear, who was 14-15 at the time), it leads to an inconsistent type system. Instead of removing the root cause of the problem, GHC HQ has decided to add a rather ingenious, but grotesque, solution where type variables are given roles. It's complex and it's going to get worse. It also brings its own problems (e.g., join can no longer be made a method in the Monad class. WAT?!?!).

In the two compilers where I've implemented GND I've chosen a different route. The compiler tries to construct a proper instance declaration for the derived class. If it can be constructed and type checks then all is well. If not, there's something tricky going on and the instance has to be written by hand. I claim this covers 99% of all GND uses. And it makes me sleep better at night having had the instance declaration approved by the type checker.
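
To make the idea concrete, here's a sketch (mine, not actual compiler output) of what such a compiler would try behind the scenes for deriving Num: the derivation is accepted exactly when this mechanically generated instance type checks.

  newtype Age = Age Int

  -- what 'newtype Age = Age Int deriving Num' would elaborate to
  instance Num Age where
    Age x + Age y  = Age (x + y)
    Age x * Age y  = Age (x * y)
    negate (Age x) = Age (negate x)
    abs (Age x)    = Age (abs x)
    signum (Age x) = Age (signum x)
    fromInteger n  = Age (fromInteger n)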

I also prefer to make the compiler recognise and remove the identity functions that newtype gives rise to, rather than forcing the user to explicitly use the new Coercible class.

Day 15: Derive Generic

The generic deriving extension is a clever idea from Clean, where all the meta-data about a data type (constructor names, types, etc.) is encoded at the type level, and thus available at compile time.

The details of this encoding are rather gruesome, but I think it could be a lot nicer using the new type level strings, type level numbers, and type families. Time for a small redesign?
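
To give a flavour of what I mean, here's a hypothetical fragment (my sketch, not the actual GHC.Generics encoding) where the metadata is carried by type-level strings instead of generated empty data types:

  {-# LANGUAGE DataKinds, KindSignatures #-}

  import GHC.TypeLits (Symbol)

  -- names live at the type level instead of in per-type declarations
  data Meta = MetaData Symbol | MetaCons Symbol | MetaSel Symbol

  -- tags a representation with its metadata
  newtype M1 (m :: Meta) f p = M1 (f p)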

Day 16: Overloaded Strings

Overloaded strings was another extension I added to GHC because of my work on the Paradise DSL. Since Paradise is deeply embedded using DSL-level strings would be ugly unless the string literals were overloaded.

It was quite easy to add to GHC. The least easy part was adapting the defaulting rules to strings as well.

Worth noting is that pattern matching on overloaded strings turns into an equality comparison (just as with numeric literals), and so requires the string type to be an instance of Eq, which is not a superclass of IsString. Oh, and the name IsString for the class with the fromString method was just temporary since I couldn't think of a good name. I guess it stuck.
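
A little example (mine) showing both halves: fromString builds the value from the literal, and matching on a literal needs Eq.

  {-# LANGUAGE OverloadedStrings #-}

  import Data.String (IsString(..))

  newtype Name = Name String deriving (Eq, Show)

  instance IsString Name where
    fromString = Name

  greet :: Name -> String
  greet "world"  = "Hello, world!"     -- literal pattern: needs Eq Name
  greet (Name n) = "Hello, " ++ n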

Day 17: Rank-N Types

Rank-n types are, of course, a rather old idea, but it took a while before they made it into Haskell. At the Haskell Workshop in La Jolla in 1995 Mark Jones had a very nice paper, From Hindley-Milner Types to First-Class Structures. It shows how to add a form of records to Haskell, where the record components can be polymorphic. This has the same expressivity as current rank-n types, but looks a little different.

For example, the archetypical Haskell rank-2 function has this type now:

  runST :: (forall s . ST s a) -> a

With records with polymorphic components we have to introduce a new record type to write this. With modern Haskell it would look like:

  data ArgST a = ArgST (forall s . ST s a)
  runST :: ArgST a -> a

Or with Mark's suggested anonymous record types:

  runST :: { arg :: forall s . ST s a } -> a

The 1996 release of HBC contained polymorphic components of this kind, but it did not have the anonymous records. GHC got rank-n (or was it rank-2?) types in 2000. The GHC rank-n types are somewhat more convenient, but require function type signatures, which the hbc version did not.

In La Jolla we also had a Haskell committee meeting about the next Haskell release, 1.3. If memory serves me, after a lot of discussion we decided that Haskell was going to get record types, something along the lines of what Mark's paper suggested, but I think they were going to be named. The records would support polymorphic components (thus general universal quantification) and data types would support existentials. But after we split up, I guess the czar got cold feet and we ended up with the current Haskell records (I didn't really like them when I first saw them, and I still don't). Haskell could have been rather different if we had followed through on the decision (I'm not sure about better, but at least different).

Day 18: Existential Types

The idea of existential types is old, but the first time I saw it in an ML-like programming language was Läufer & Odersky's An Extension of ML with First-Class Abstract Types at the ML workshop in San Francisco in June 1992. I promptly implemented it and it was in version 0.998.1 of HBC in July 1992.

Adding existentials to ML is very easy; it's just a matter of tweaking the type checker to accept some more programs. Adding it to Haskell is a little more involved, since the existential variables may appear in a context, and so a value of an existential type has to be a pair of the dictionaries and the actual value (ML only needs the actual value). The dictionaries are needed to use overloaded operations on the value.
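
A small example in today's GHC syntax (mine, not from the paper) of a dictionary travelling with the value:

  {-# LANGUAGE ExistentialQuantification #-}

  -- the constructor packages a Show dictionary with the value
  data Showable = forall a . Show a => MkShowable a

  render :: [Showable] -> [String]
  render xs = [ show x | MkShowable x <- xs ]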

What about syntax? I did it exactly like Läufer & Odersky, namely free variables in a data declaration are existentially quantified. E.g., in data E = C a (a -> Int), since the type variable a does not appear in the head of the data declaration it is existentially quantified.

Some people (I remember Mark Jones and Simon PJ) argued fiercely against this, saying that it would be too easy to make mistakes and forget a type variable and get it existentially quantified. So when GHC finally got them they looked like data E = forall a . C a (a -> Int), using the confusing (but entirely logical!) forall to indicate an existential type. But then GADTs came along, and now it's suddenly no longer important to use the (pointless) forall, i.e., now we can write data E where { C :: a -> (a -> Int) -> E }. Oh, well.

Some people say that existential types are an anti-pattern in Haskell. I don't agree. They have a few perfectly legitimate uses; the problem is just that if you come from an OO background you'll probably reach for existentials in all kinds of cases where there are better solutions.


Friday, December 19, 2014

A commentary on 24 days of GHC extensions, part 2

Day 7: Type Operators

I understand the usefulness of the type operator GHC extension, but I don't like it. For value identifiers any operator not starting with a ':' is treated the same way as identifiers starting with a lower case letter. This is how it used to be on the type level too. But with the latest version of the TypeOperators extension these operators now behave like upper case identifiers and can be used to name types. An ugly wart, IMO.
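
For example (a sketch of the current behaviour):

  {-# LANGUAGE TypeOperators #-}

  -- '+' names a type here, behaving like an upper case identifier
  type a + b = Either a b

  classify :: Int + String -> String
  classify (Left  _) = "number"
  classify (Right _) = "string"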

Day 8: Recursive do

This extension stems from Levent Erkök's PhD thesis Value Recursion in Monadic Computations. The motivation was to be able to describe circuits with feedback using monadic notation.

There are actually two ways to get recursive do-notation. You can either use the rec keyword inside the do, or you can use the keyword mdo instead of do. Why mdo, you ask? It should really be μdo, where μ is the recursion operator. But people at GHC HQ are scared of Unicode, so instead we got mdo.
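
Here's a minimal example (mine) of the notation; the forward reference to xs is tied up by MonadFix:

  {-# LANGUAGE RecursiveDo #-}

  import Data.IORef

  main :: IO ()
  main = mdo
    r  <- newIORef (1 : xs)   -- xs is bound later in the block
    xs <- readIORef r
    print (take 5 xs)         -- prints [1,1,1,1,1]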

Day 9: Nullary Type Classes

Once you have accepted multi-parameter type classes then nullary type classes are a natural consequence; zero is also a number, after all.

In 1994 I worked on the pH language when I visited MIT. The acronym pH stands for "parallel Haskell", which is what it is. It had some imperative features that were not in a monad; the ST monad was just being invented at the time, so we were trying out other things. To keep track of what parts of the program used imperative features I used a nullary type class Imperative that tainted any code that used assignment et al. (As far as I know, this is the earliest use of a nullary type class.) When mixing imperative code and pure code it is important to not be too polymorphic. ML-like languages use the value restriction to accomplish this. Since Haskell's monomorphism restriction is essentially the same as the value restriction, having the Imperative type class neatly accomplished the value restriction for pH.
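
The shape of it was something like this (a reconstruction; the Ref and assign names are made up, only the nullary class itself is the point):

  {-# LANGUAGE NullaryTypeClasses #-}

  class Imperative                 -- no parameters, no methods

  data Ref a = Ref a

  -- any code using assignment gets tainted by the constraint
  assign :: Imperative => Ref a -> a -> IO ()
  assign _ _ = return ()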

Day 10: Implicit Parameters

Implicit parameters started as an attempt to simplify Haskell overloading. Haskell overloading, when implemented by dictionary passing, really has two parts: first, there is a way to implicitly pass dictionaries around, and second, dictionaries are constructed automatically by using the instance declarations as a logical inference system.

Implicit parameters do the automagic passing of parameters, and are thus half of what is needed for overloading. But they were never used for overloading; instead it was discovered that they can be made to behave like dynamic binding, but with static types. This is quite cool! Even if one doesn't like dynamic binding (I don't).

The idea is presented in Erik Meijer et al.'s paper Implicit Parameters: Dynamic Scoping with Static Types. It's worth reading.
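
A small example (mine) of the dynamic-binding flavour with static types:

  {-# LANGUAGE ImplicitParams #-}

  logMsg :: (?verbosity :: Int) => Int -> String -> IO ()
  logMsg lvl msg | lvl <= ?verbosity = putStrLn msg
                 | otherwise         = return ()

  main :: IO ()
  main = let ?verbosity = 1
         in do logMsg 1 "important"   -- printed
               logMsg 2 "chatty"      -- suppressed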

Day 11: Type Families

Type families grew out of the need to have type classes with associated types. The latter is not strictly necessary, since it can be emulated with multi-parameter type classes, but it gives a much nicer notation in many cases. The same is true for type families; they too can be emulated by multi-parameter type classes. But MPTC gives a very logic-programming style of doing type computation, whereas type families (which are just type functions that can pattern match on the arguments) are like functional programming.

Using closed type families adds some extra strength that cannot be achieved by type classes. To get the same power from type classes we would need to add closed type classes, which would be quite useful; this is what instance chains give you.
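
For example (my illustration), a closed type family may have overlapping equations because they are tried top to bottom, which an open type class cannot directly express:

  {-# LANGUAGE TypeFamilies, DataKinds #-}

  type family IsBool t :: Bool where
    IsBool Bool = 'True
    IsBool t    = 'False       -- the catch-all is fine: equations are ordered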

Day 12: Multi-Parameter Type Classes

Even the original paper on Haskell type classes mentions multiple parameters, so this idea has been with us from the start. But it's not present in Stefan Kaes's original work on type classes.

Unfortunately, multiple type class parameters very quickly lead to type ambiguities in the code, and we are forced to add type signatures. So MPTC was not really useful until Mark Jones introduced functional dependencies. Mark got the idea from database theory, where functional dependencies are used to constrain the relations in a database.
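
The standard illustration (my rendering) is a collection class: without the dependency the element type in empty is ambiguous; with c -> e it is determined by the collection type.

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
  {-# LANGUAGE FlexibleInstances #-}

  class Collection c e | c -> e where
    empty  :: c                -- ambiguous without the dependency
    insert :: e -> c -> c

  instance Collection [a] a where
    empty  = []
    insert = (:)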


Monday, December 15, 2014

A commentary on 24 days of GHC extensions

Ollie Charles has continued his excellent series of Christmas Haskell blogs. This year he talks about 24 Days of GHC Extensions. His blog posts are an excellent technical introduction to various Haskell extensions. Reading them inspired me to write some non-technical comments about the extensions, giving a little bit of history and some personal remarks about them.

Day 1: View Patterns

View patterns have a long history. As far as I know views were first suggested by Phil Wadler in 1987, Views: a way for pattern matching to cohabit with data abstraction. As the title of the paper suggests, it was about being able to pattern match on abstract data types. Variations of Phil's suggestions have been implemented several times (I did it both in LML and for Haskell in hbc). In all these early suggestions the conversion between the concrete type and the view type was implicit, whereas in the ViewPatterns extension the conversion is explicit.
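
A small example (mine) with the conversion function spelled out in the pattern:

  {-# LANGUAGE ViewPatterns #-}

  -- the conversion, reads, appears explicitly in the pattern
  parseInt :: String -> Maybe Int
  parseInt (reads -> [(n, "")]) = Just n
  parseInt _                    = Nothing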

The addition of view patterns was partly spurred by the fact that F# has a very neat way of introducing pattern matching for abstract types, called active patterns. Since Simon Peyton Jones is in the same research lab as Don Syme it's natural that Simon couldn't let F# have this advantage over Haskell. :)

Day 2: Pattern Synonyms

Pattern synonyms are a more recent addition. The first time I remember hearing the idea was in the paper Abstract Value Constructors, and ever since then I've wished that Haskell would have something like that. Patterns were one of the few constructs in Haskell that could not be named and reused. The first time they were added to Haskell was in the SHE preprocessor.

At a Haskell hackathon in Cambridge, August 2011, Simon Peyton Jones, Simon Marlow and I met to hash out how they could be added to GHC. I wanted to go beyond simple pattern synonyms. The simplest kind is the uni-directional synonym, which can only occur in patterns. The simple bi-directional synonyms can be used both in patterns and expressions, but are limited in what they can do. The explicit bi-directional synonyms allow the full power of both patterns and expressions. With explicit bi-directional synonyms and view patterns we can finally implement views as envisioned by Phil, but now broken down into smaller reusable pieces. The result of our discussions at the hackathon can be found here. But we were all too busy to implement any of it. All the hard implementation work, and fixing all the additional snags found, was done by Gergő Érdi.
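
Here's a small example (mine) of an explicit bi-directional synonym combined with a view pattern:

  {-# LANGUAGE PatternSynonyms, ViewPatterns #-}

  toPolar :: (Double, Double) -> (Double, Double)
  toPolar (x, y) = (sqrt (x*x + y*y), atan2 y x)

  -- matching converts to polar form; construction converts back
  pattern Polar r t <- (toPolar -> (r, t))
    where Polar r t = (r * cos t, r * sin t)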

I find this a very exciting extension, and you can take it even further, like Associated Pattern Synonyms in class declarations.

Day 3: Record Wildcards

The record wildcard extension allows you to open a record and get access to all the fields without using qualified names for them. The first time I encountered this idea was in Pascal, which has a with-statement. It looks like with expr do stmt, where the expr is a record value and inside the stmt all the fields of the record can be accessed directly. The construct later appeared in Ada as well, where it's called use.

So having something similar in Haskell doesn't seem so far fetched. But it was actually the dual, namely constructing values using record wildcard notation, that inspired me to make this extension.

In 2006 I was developing the Paradise EDSL which is a DSL embedded in Haskell for describing computations and user interfaces. I wanted a way to make the EDSL less verbose and that's why record wildcards came about. Here's a simple example to illustrate the idea. We want to be able to input a coordinate record.

  data Coord = Coord { x :: Double, y :: Double }

  coord :: P Coord
  coord = do
      x <- input 0.0
      y <- input 0.0
      return Coord{..}
This says that x and y are input and that their default value is 0. We need to have an input for every field in the Coord record and at the end we need to construct the record itself. Without the record wildcard the construction would have been Coord{x=x,y=y}. This isn't too bad, but for hundreds of inputs it gets tedious.

I made a quick and dirty implementation of this in GHC, but it was too ugly to get into the official release. Instead Dan Licata reimplemented (and extended) it under Simon PJ's supervision into what we have today.

I'm actually quite ambivalent about this extension. It's incredibly convenient (especially in the pattern form), but it introduces new bound names without an explicit mention of the name in the binder. This makes it harder to read code. The same objection can be made about the Haskell import declaration when it lacks an import list.
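
For reference, the pattern form I mentioned looks like this (reusing the Coord type from above):

  {-# LANGUAGE RecordWildCards #-}

  data Coord = Coord { x :: Double, y :: Double }

  norm :: Coord -> Double
  norm Coord{..} = sqrt (x*x + y*y)   -- x and y silently brought into scope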

Day 4: Bang Patterns

I don't have much to say about BangPatterns. One day they simply appeared in a new GHC version. :)

The Clean language has long had something similar, but in Clean the annotation goes on the type instead of on the pattern. In Haskell it seems like a nice duality to have it on the pattern, since there are also lazy patterns, introduced by ~.
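
A typical use (my example): the bang keeps the accumulator evaluated, where a lazy accumulator would build a long chain of thunks.

  {-# LANGUAGE BangPatterns #-}

  sumTo :: Int -> Int
  sumTo n = go 0 1
    where
      go !acc i | i > n     = acc
                | otherwise = go (acc + i) (i + 1)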

Day 5: Rebindable Syntax

Rebindable syntax is almost like going back to the older Haskell reports. They lacked an exact specification of where the identifiers introduced by desugaring were bound. Of course, it was resolved so that they always refer to the Prelude identifiers. But when experimenting with other preludes it's very useful to be able to override this. Which is exactly what RebindableSyntax allows you to do.

In my opinion, this extension should be a little different. It ought to be possible to give a module name for where the names should be found. So normally this would be Prelude, but I'd like to be able to say RebindableSyntax=MyPrelude, and then all names introduced by desugaring will be found in MyPrelude (even if they are not in scope).
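
As it stands today, the extension simply picks up whatever is in scope. A tiny (contrived) example of mine: the do-notation below desugars using the local (>>), not the Prelude's.

  {-# LANGUAGE RebindableSyntax #-}

  import Prelude (String, (++))

  (>>) :: String -> String -> String
  (>>) = (++)

  hello :: String
  hello = do
    "Hello, "
    "world!"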

Day 6: List comprehensions

This bundles together a number of list comprehension extensions.

First, MonadComprehensions, which allows list comprehension syntax for any monad. This is simply going back to Haskell 1.4, where monad comprehensions were introduced. The monad comprehension syntax had been used by Phil Wadler before that. Monad comprehensions were removed in Haskell 98 because they were deemed too confusing for beginners. They were brought back to GHC by George Giorgidze et al.
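
For example (mine), comprehension syntax over the Maybe monad:

  {-# LANGUAGE MonadComprehensions #-}

  addMaybes :: Maybe Int -> Maybe Int -> Maybe Int
  addMaybes mx my = [ x + y | x <- mx, y <- my ]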

The ParallelListComp extension allows zip-like operations to be expressed in the comprehension. The idea originates from Galois in the design of Cryptol. John Launchbury, Jeff Lewis, and Thomas Nordin were turning crypto networks into recursive equations and they wanted a nicer notation than using zipWith. (Thanks to Andy Adams-Moran for the history lesson.)
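
For example (mine), a dot product without zipWith; the two branches run in lockstep:

  {-# LANGUAGE ParallelListComp #-}

  dot :: [Double] -> [Double] -> Double
  dot xs ys = sum [ x * y | x <- xs | y <- ys ]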

The origins of TransformListComp are simple. It's just a way to bring more of the power of SQL into list comprehensions. It's an extension that introduces far too much special syntax for my taste, but the things you can do are quite neat.
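
A small taste of the syntax (my example), with sortWith imported from GHC.Exts:

  {-# LANGUAGE TransformListComp #-}

  import GHC.Exts (sortWith)

  -- an SQL-like ORDER BY inside a comprehension
  byAge :: [(String, Int)] -> [String]
  byAge people = [ name | (name, age) <- people, then sortWith by age ]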


Saturday, April 05, 2014

Haskell error reporting with locations, update

Since some people (I'm among them) dislike impure features in Haskell I thought I'd present a slight variation on the error location feature that is "pure".

First, the __LOCATION__ variable gets an abstract type. So

  data Location
  __LOCATION__ :: Location
It's defined in the Prelude and always in scope. The type cannot be compared, shown, or anything. There's just one thing that can be done, namely:
  extractLocation :: Location -> IO String
The error function needs a new exception to throw
  data ErrorCallLoc = ErrorCallLoc Location String

  {-# LOCATIONTRANSPARENT error #-}
  error :: String -> a
  error s = throw (ErrorCallLoc __LOCATION__ s)
This means that the location string cannot be used when we throw the error. But it can be used where the error is caught, since this can only be done in the IO monad.
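
Catching it could look something like this (a sketch; it assumes ErrorCallLoc is also given an Exception instance):

  import Control.Exception

  main :: IO ()
  main = error "boom" `catch` \ (ErrorCallLoc loc msg) -> do
    s <- extractLocation loc
    putStrLn ("error at " ++ s ++ ": " ++ msg)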

Under the hood everything is just as before: Location is just a string. It just can't be manipulated except in the IO monad, so we can pretend it's pure.

  newtype Location = Location String
  extractLocation (Location s) = return s
It now looks a lot like Michael Snoyman's proposal.

Friday, April 04, 2014

Haskell error reporting with locations

Error reporting in GHC is not always the nicest. For example, I often develop code by using undefined as a placeholder for code I've not written yet. Here's a simple example:

  import System.Environment

  main = do
    args <- getArgs
    if null args then
      undefined
     else
      undefined
Running this looks like this:
$ ./H
  H: Prelude.undefined
Which undefined caused that? Looking at the error message we have no idea. Wouldn't it be nice with some location information?

We can actually get location information by using Control.Exception.assert:

  import Control.Exception (assert)
  import System.Environment

  main = do
    args <- getArgs
    if null args then
      assert False undefined
     else
      assert False undefined
Now running it is much more informative:
$ ./H
  H: H.hs:7:9-14: Assertion failed
How is assert able to report the location? If we dig deep enough we discover that it's because the ghc compiler contains a special hack to recognize this function and give it location information.

A generalized hack

In a Haskell compiler that I've implemented I've taken this compiler hack and extended it so it can be used for any function.  It comes in two parts, location information and location transparent definitions.

__LOCATION__

The __LOCATION__ identifier is always defined and utterly magical. Its value is a string that describes the location of that very identifier. This is the very opposite of a referentially transparent name. In fact its value varies with where it is placed in the code! So it's definitely not for purists. But I'm a practical man, so I sometimes have to resort to the ugliness of reality. And in reality we want to report locations in errors.

Enough philosophy, here's an example:

main = do
    print __LOCATION__
    print   __LOCATION__
And running it prints:
"test/Test.hs:2:11"
  "test/Test.hs:3:13"
And to illustrate the impurity:
main = do
    let loc = __LOCATION__
    print loc
    print loc
And running this:
"test/Test.mu:2:15"
  "test/Test.mu:2:15"

Location transparency

The __LOCATION__ identifier gives the location of itself. This is of little use on its own. Imagine the definition we could give for undefined. Somewhere in the Prelude module it could say something like
undefined = error ("undefined: " ++ __LOCATION__)
But if we use this all that it will tell us is where the definition of undefined is, not where it was used.

To get the point of use instead of the definition I've introduced location transparent definitions. In a location transparent definition the __LOCATION__ identifier will not refer to its own position, but to the position of the reference to the definition. Location transparency is introduced with a pragma.

{-# LOCATIONTRANSPARENT undefined #-}
  undefined = error ("undefined: " ++ __LOCATION__)
With this definition our initial example looks like this when we run it:
undefined: test/H.hs:6:9
In fact, the real definition of undefined doesn't look like that. The __LOCATION__ identifier is only used in the definition of error, so it looks something like this:
{-# LOCATIONTRANSPARENT error #-}
  error :: String -> a
  error s = throw (ErrorCall (__LOCATION__ ++ ": " ++ s))

  {-# LOCATIONTRANSPARENT undefined #-}
  undefined = error "undefined"
Since both error and undefined are transparent any use of undefined will be reported with the location of the use.

Furthermore, we can make a few more functions location transparent, e.g.,

{-# LOCATIONTRANSPARENT head #-}
  head :: [a] -> a
  head [] = error "Empty list"
  head (x:xs) = x
A simple example:
main = putStr (head [])
Which will print:
test/Head.hs:1:16: Empty list
which is the location where head was called.

Implementation

There are different ways to implement this feature, and I'm going to sketch two of them.

First: Every function that has the LOCATIONTRANSPARENT pragma will be inlined at the point of use, and the __LOCATION__ identifier in the inlined code will be updated to reflect the call site. The definitions must be processed in a bottom-up fashion for this to work. It's fairly simple to implement, but will cause some code bloat due to inlining.

Second: Every function that has LOCATIONTRANSPARENT pragma will be rewritten (by the compiler) to have an extra location argument, and each use of this function will be rewritten to pass in the current location. For example (using $$f for the location version of f):

main = putStr ($$head __LOCATION__ [])

  $$head __LOCATION__ [] = $$error __LOCATION__ "Empty list"
  $$head __LOCATION__ (x:xs) = x
  $$error __LOCATION__ s = throw (ErrorCall (__LOCATION__ ++ ": " ++ s))
This should be fairly straightforward to implement, but I've not tried it. (It's somewhat like dynamic binding, so maybe ghc could reuse that mechanism for locations.)

And, of course, the global __LOCATION__ identifier has to be recognized by the compiler and replaced by a string that is its location.

Conclusion

I implemented the __LOCATION__ hack quite a while ago, and I like the much improved reporting of error locations. I hope someone will add it to ghc as well.


Thursday, April 03, 2014

A small Haskell extension

The extension

In Haskell you can give a type to an expression by writing expr :: type.  To an untrained eye the :: looks just like an infix operator, even if it is described as a special syntactic construct in the Haskell report.  But let's think about it as an infix operator for a moment.

For an infix operator you can form a section, i.e., a use of the operator with one operand left out.  For instance (* 2) leaves out the first operand, and Haskell defines this to be the same as (\ x -> x * 2).  Regarding :: as an operator, we should be able to write (:: type) and it should have the obvious meaning (\ x -> x :: type).

I suggest, and I plan to send this proposal to the haskell-prime mailing list, that Haskell should adopt this small extension.

Why?

First, the extension is very light weight and has almost no extra intellectual weight for anyone learning Haskell.  I'd argue it makes the language simpler because it allows :: to be treated more like an infix operator.  But without use cases this would probably not be enough of an argument.

Example 1

We want to make a function, canonDouble, that takes a string representing a Double and changes it to the standard Haskell string representing this Double.  E.g. canonDouble "0.1e1" == "1.0".  A first attempt might look like this:

  canonDouble :: String -> String
  canonDouble = show . read         -- WRONG!

This is, of course, wrong since the compiler cannot guess that the type between read and show should be a Double.  We can convey this type information in different ways, e.g.:

  canonDouble :: String -> String
  canonDouble = show . asDouble . read
    where asDouble :: Double -> Double
          asDouble x = x

This is somewhat clumsy.  Using my proposed extension we can instead write:

  canonDouble :: String -> String
  canonDouble = show . (:: Double) . read

This has the obvious meaning, and succinctly describes what we want.

Example 2

In ghc 7.8 there is a new, better implementation of Data.Typeable.  It used to be (before ghc 7.8) that to get a TypeRep for some type you would have to have a value of that type.  E.g., typeOf True gives the TypeRep for the Bool type.  If we don't have a value handy of the type, then we will have to make one, e.g., by using undefined.  So we could write typeOf (undefined :: Bool).

This way of using undefined is rather ugly, and relies on non-strictness to work.  Ghc 7.8 adds a new, cleaner way of doing it.

  typeRep :: proxy a -> TypeRep

The typeRep function does not need an actual value, but just a proxy for the value.  A common proxy is the Proxy type from Data.Proxy:

  data Proxy a = Proxy

Using this type we can now get the TypeRep of a Bool by writing typeRep (Proxy :: Proxy Bool).  Note that in the type signature of typeRep the proxy is a type variable.  This means we can use other values as proxies, e.g., typeRep ([] :: [Bool]).

We can in fact use anything as a proxy that has a structure that unifies with proxy a.  For instance, if we want a proxy for the type T we could use T -> T, which is the same as (->) T T.  The (->) T part makes up the proxy and the last T makes up the a.

The extension I propose provides an easy way to write a function of type T -> T, just write (:: T).  So to get a TypeRep for Bool we can simply write typeRep (:: Bool).  Doesn't that look (deceptively) simple?

In fact, my driving force for coming up with this language extension was to get an easy and natural way to write type proxies, and I think using (:: T) for a type proxy is as easy and natural as it gets (even if the reason it works is rather peculiar).

Implementation

I've implemented the extension in one Haskell compiler and it was very easy to add and it works as expected.  Since it was so easy, I'll implement it for ghc as well, and the ghc maintainers can decide if they want to merge it.  I suggest this new feature be made available using the language extension name SignatureSections.

Extensions

Does it make sense to do a left section of ::?  I.e., does (expr ::) make sense?  In current Haskell that does not make sense, since it would be an expression that lacks an argument that is a type.  Haskell doesn't currently allow explicit type arguments, but if it ever will this could be considered.

With the definition that (:: T) is the same as (\ x -> x :: T), any use of quantified or qualified types as T will give a type error.  E.g., (:: [a]), which is (\ x -> x :: [a]), is a type error.  You could imagine a different desugaring of (:: T), namely (id :: T -> T).  Now (:: [a]) desugars to (id :: [a] -> [a]), which is type correct.  In general, we have to keep quantifiers and qualifiers at the top, i.e., (:: forall a . a) turns into (id :: forall a . a -> a).

Personally, I'm not convinced this more complex desugaring is worth the extra effort.


Saturday, July 09, 2011

Impredicative polymorphism, a use case

In a recent question on stackoverflow I made a comment about how Haskell can be considered a good imperative language because you can define abstractions to make it convenient. When I was going to make my point by implementing a simple example of it I found that what I wanted to do no longer works in ghc-7.0.4. Here's a simple example of what I wanted to do (which works in ghc-6.12.3). It's a simple library that makes Haskell code look a bit like Python.

{-# LANGUAGE ExtendedDefaultRules, TupleSections #-}
module Main where
import qualified Prelude
import Boa

last_elt(xs) = def $: do
    assert xs "Empty list"
    lst <- var xs              -- Create a new variable, lst
    ret <- var (xs.head)
    while lst $: do
        ret #= lst.head        -- Assign variable ret
        lst #= lst.tail
    return ret

first_elt(xs) = def $: do
    l <- var xs
    l.reverse                  -- Destructive reverse
    return (last_elt(l))

factorial(n) = def $: do
    assert (n>=0) "Negative factorial"
    ret <- var 1
    i <- var n
    while i $: do
        ret *= i
        i -= 1
    return ret

test = def $: do
    print "Hello"
    print ("factorial 10 =", factorial(10))

main = do
    test
    l <- varc [1, 2, 3]
    print ("first and last:",)
    print (first_elt(l),)
    print (last_elt(l))

On the whole it's pretty boring, except for one thing. In imperative languages there is (usually) a distinction between l-values and r-values. An l-value represents a variable, i.e., something that can be assigned, whereas an r-value is simply a value. So in Python, in the statement x = x + 1 the x on the left is an l-value (it's being assigned), whereas on the right it's an r-value. You can use the same notation for both in most imperative languages. If you want to do the same in Haskell you have (at least) two choices. First, you can unify the concepts of l-value and r-value and have a runtime test that you only try to assign variables. So, e.g., 5 = x would type check but have a runtime failure. I will not dwell further on this since it's not very Haskelly.
Instead, we want to have l-values and r-values being statically type checked. Here's the interesting bit of the types for a simple example.
 
data LValue a
data RValue a
instance (Num a) => Num (RValue a)

class LR lr
instance LR RValue
instance LR LValue

var :: RValue a -> IO (forall lr . (LR lr) => lr a)
(#=) :: LValue a -> RValue a -> IO ()

foo = do
    x <- var 42
    x #= x + 1

We have two type constructors, LValue and RValue, representing l-values and r-values of some type a. The r-value type is an instance of Num. Furthermore, there is a class LR whose only instances are LValue and RValue.
The var function creates a new variable given a value. The return type of var is the interesting part. It says that the return value is polymorphic; it can be used either as an l-value or as an r-value. Just as we want.
The assignment operator, (#=), takes an l-value, an r-value, and returns nothing.

So in the example we expect x to have type forall lr . (LR lr) => lr a, in which case the assignment will type check.
If we try to compile this we get
    Illegal polymorphic or qualified type:
      forall (lr :: * -> *). LR lr => lr a
    Perhaps you intended to use -XImpredicativeTypes
    In the type signature for `var':
      var :: RValue a -> IO (forall lr. LR lr => lr a)

The problem is that universal quantification is not normally allowed as the argument to a type constructor. This requires the impredicative polymorphism extension to ghc. If we turn it on it compiles fine in ghc-6.12.3.
But, with ghc-7.0.4 we get
    Couldn't match expected type `LValue a0'
                with actual type `forall (lr :: * -> *). LR lr => lr a1'
    In the first argument of `(#=)', namely `x'
    In the expression: x #= x + 1

I can't really explain the rationale behind the change in the ghc type system (Simon says it's simpler now), but I feel this has really made ghc worse for defining DSELs. When you define a DSEL you usually use the do-notation for the binding construct in the embedded language. This is the only programmable binding construct (by overloading (>>=)), so there is little choice. With the change in ghc-7 it means you can no longer make DSELs with a polymorphic binding construct (like I wanted here), because the binder seems to be monomorphic now.

Please Simon, can we have polymorphic do bindings back?