So when I read the "Syntactic Sugar for Arrows" proposal, my initial
reaction was "Wow, that's a little complicated.  It doesn't look like
syntactic sugar to me."  (Err, no offense, I hope.)  This contrasts
with the do-notation, which does look like syntactic sugar: you can
rewrite any do expression in terms of the basic combinators with a
bounded amount of pain.[1]  Somehow, with arrows, the point-free style
you are forced into is extraordinarily unwieldy, and the arrow
"syntactic sugar" is much handier.  I presume people have tried, and
failed, to come up with a more concise set of combinators?  Any
thoughts as to why the arrow combinators need to be so unwieldy?
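
To make the contrast concrete, here is a sketch of my own (the code
below is just my illustration, not taken from the proposal).  In
do-notation the translation is local and roughly size-preserving:

    sumTwo :: Monad m => m Int -> m Int -> m Int
    sumTwo mx my = do x <- mx
                      y <- my
                      return (x + y)

    -- desugars directly to:
    sumTwo' :: Monad m => m Int -> m Int -> m Int
    sumTwo' mx my = mx >>= \x -> my >>= \y -> return (x + y)

The analogous arrow computation, which (if I read the proposal right)
the new notation would let me write as something like

    addA f g = proc x -> do y <- f -< x
                            z <- g -< x
                            returnA -< y + z

comes out with the raw combinators (assuming the Control.Arrow
interface) as

    import Control.Arrow

    addA :: Arrow a => a b Int -> a b Int -> a b Int
    addA f g = arr (\x -> (x, x))      -- duplicate the environment
           >>> first f                 -- run f on one copy
           >>> arr (\(y, x) -> (x, y)) -- swap to expose the other copy
           >>> first g                 -- run g on it
           >>> arr (\(z, y) -> y + z)  -- combine the results

where all the plumbing of intermediate values through pairs has to be
done by hand.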

A possibly related question: are there any general results comparing
the verbosity of the lambda calculus with that of combinators?
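
(The classical comparison I have in mind is bracket abstraction into S
and K.  A toy sketch of my own, not from any paper:

    data Term = Var String | App Term Term | Lam String Term
    data Comb = S | K | I | CVar String | CApp Comb Comb

    -- Translate a lambda term into S/K/I combinators.
    compile :: Term -> Comb
    compile (Var x)   = CVar x
    compile (App m n) = CApp (compile m) (compile n)
    compile (Lam x m) = abstract x (compile m)

    -- abstract x c implements the bracket [x]c.  The application
    -- case roughly triples the size of the term each time, so
    -- nested lambdas blow up quickly.
    abstract :: String -> Comb -> Comb
    abstract x (CVar y) | x == y = I
    abstract x (CApp m n) = CApp (CApp S (abstract x m)) (abstract x n)
    abstract _ c          = CApp K c

As far as I know this naive translation can be exponential in the
number of nested lambdas; I vaguely recall that cleverer combinator
bases do much better, but I don't know the precise bounds, hence the
question.)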

Incidentally, it seems to me that this is one case where a Lisp-like
macro facility might be useful.  In Haskell there is no way to define
new binding forms (which is what the proposed notation really adds),
while presumably a good Lisp macro system lets you do exactly that.

Best,
        Dylan Thurston

Footnotes: 
[1] A quick look at the Haskell report reveals that named fields,
pattern matching, and deriving declarations are not "syntactic sugar"
in this sense.  Of these, pattern matching is fundamental, named
fields have clear semantics, and deriving declarations are more iffy
[though very handy].  Did I miss any?
