Ben wrote:
however, this does bring up a general question: why are applicative
functors (often) faster than monads?  malcolm wallace mentioned this
is true for polyparse, and heinrich mentioned this is true more
generally.  is there a yoga by which one can write monadic functors
which have a specialized, faster applicative implementation?

I'm not knowledgeable enough on the multicore stuff, but I can comment on the monad vs applicative issue.

Applicative functors are not faster than monads per se; rather, the former can support static analysis while the latter cannot. As you can see from the type of "bind"

   (>>=) :: m a -> (a -> m b) -> m b

the structure of the computation in the second argument, i.e. which side effects it performs, can depend on a value of type a, which is only available at "run-time".
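To illustrate this dependence concretely, here is a minimal sketch (the Parser type and the combinators are hypothetical, not from any particular library): a length-prefixed parser where the number of parsers run afterwards depends on the value parsed first, so the parser's structure cannot be inspected before running it.

```haskell
import Control.Monad (replicateM)

-- A bare-bones backtracking parser over String.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure x = Parser $ \s -> Just (x, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

instance Monad Parser where
  Parser p >>= k = Parser $ \s -> do
    (a, s') <- p s
    runParser (k a) s'

-- Parse a single decimal digit.
digit :: Parser Int
digit = Parser $ \s -> case s of
  (c:cs) | c >= '0' && c <= '9' -> Just (fromEnum c - fromEnum '0', cs)
  _                             -> Nothing

-- The first digit determines how many further digits are parsed:
-- the computation's structure is only known at run-time.
lengthPrefixed :: Parser [Int]
lengthPrefixed = digit >>= \n -> replicateM n digit

main :: IO ()
main = print (runParser lengthPrefixed "3123rest")
-- prints Just ([1,2,3],"rest")
```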

In contrast, the type of "apply"

   (<*>) :: m (a -> b) -> m a -> m b

makes it clear that the side effects are the same, no matter what the value of a will be at run-time. In other words, the structure of the computation is known statically.


For parsers, interesting analyses are

* Does a parser accept the empty string?
* What are the first characters that a parser can accept?

The answers can be obtained easily enough from an applicative functor, for instance

    acceptsEmpty (pure x)  = True
    acceptsEmpty (f <*> g) = acceptsEmpty f && acceptsEmpty g

But the corresponding analysis for monadic parsers is either harder or hopelessly inefficient because we don't know the structure of the parser until we run it on some input.
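To make the acceptsEmpty equations above concrete, here is a hedged sketch (the type P and its field names are illustrative, not from any library): each parser carries its nullability as a field, computed when the parser is *built*, so the analysis never has to run the parser on input.

```haskell
-- Pair every parser with a statically known answer to
-- "does it accept the empty string?".
data P a = P
  { acceptsEmpty :: Bool
  , runP         :: String -> Maybe (a, String)
  }

instance Functor P where
  fmap f (P e p) = P e $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative P where
  -- pure consumes no input, so it accepts the empty string.
  pure x = P True $ \s -> Just (x, s)
  -- A sequence accepts the empty string only if both halves do.
  P ef pf <*> P ea pa = P (ef && ea) $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

-- Matching a character always consumes input.
char :: Char -> P Char
char c = P False $ \s -> case s of
  (x:xs) | x == c -> Just (x, xs)
  _               -> Nothing

main :: IO ()
main = do
  print (acceptsEmpty (pure ()))                    -- True
  print (acceptsEmpty (char 'a'))                   -- False
  print (acceptsEmpty ((,) <$> pure 1 <*> pure 2))  -- True
```

The same bookkeeping extends to the "first characters" analysis by adding another field; the key point is that both are computed from the parser's structure alone.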

See also this answer on StackOverflow:

  http://stackoverflow.com/a/7863380/403805



Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com


_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe