Simon Marlow wrote:
> Ok.  It's going to be hard to get a fair comparison here, but I've just done
> a rough measurement on GHC:
Obviously I can't run MLj on Simon Marlow's computer, so I have rerun both tests
on this (much slower) Sparc box, with GHC 4.04 and SML/NJ 110.7, using the
happy/ML-yacc output in both cases.  This still isn't tremendously fair; for one
thing, yacc output is not exactly typical.  I must also admit that the part of
the MLj compiler I am testing does not include the fixity analysis, which is
arguably part of parsing.  (I think this could have been bundled into the rest
of the parser without slowing things down much.)  On the other hand, I should
repeat that the MLj parser is doing more, since it includes much fuller location
information and is also capable of error recovery.  Anyway, here are my results:

              GHC       MLj
wc -l       10146      4856
wc -c      312962    212902
real (s)   46.654     7.679
user (s)   12.870     2.540
sys (s)     1.170     0.230

Notes:
1) Unlike Simon, I didn't try to adjust the heap size to make the MLj parsing go
   faster.
2) For the Haskell parser, I've counted the number of lines and characters after
   preprocessing.  This is slightly kinder to GHC than using Simon's figures.
3) Since I suspect a significant part of the compilation time is taken up with
   loading the heap image and so on, I think the results would be even more
   favourable to MLj if I had tested a program of the same length.
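For what it's worth, the obvious way to factor out that start-up cost would be
to time the parse from inside the process rather than with an external "time".
A rough sketch in Haskell (parseModule here is a made-up stand-in, not GHC's or
MLj's real entry point):

module Main where

import Control.Exception (evaluate)
import Control.Monad (forM_)
import System.CPUTime (getCPUTime)
import System.Environment (getArgs)

-- Stand-in for a real parse: just count the lines so there is work to time.
parseModule :: String -> Int
parseModule = length . lines

main :: IO ()
main = do
  files <- getArgs
  forM_ files $ \f -> do
    src <- readFile f
    _   <- evaluate (length src)     -- read the whole file in before timing
    start <- getCPUTime
    n     <- evaluate (parseModule src)
    stop  <- getCPUTime
    -- getCPUTime is in picoseconds
    let secs = fromIntegral (stop - start) / 1e12 :: Double
    putStrLn (f ++ ": " ++ show n ++ " lines, " ++ show secs ++ "s of CPU")

That would give CPU time for just the parse, with the heap image, file reading
and so on kept out of the measurement.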
[snip]
> So what we're talking about is just parser recovery?  Not even passing the
> corrected output through the rest of the front end?  Then I'm even less
> convinced.  Given that the longest you have to wait for a parse error on any
> human-written Haskell file is at the most 1 second, what's the point?
Yes, that is precisely what I want.  I think it is already clear that some of
us have rather slower computers!  We also have to fight against NFS much of the
time.
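To make concrete the kind of recovery I mean, here is a toy sketch in Haskell
(nothing to do with Happy's or ML-yacc's real machinery; the token type and
everything else is made up): on hitting a bogus token, record the error, skip
to the next "semicolon", and carry on, so the rest of the file can still be
parsed and, eventually, typechecked.

module ParserRecovery where

-- Made-up token and result types, purely for illustration.
data Token = TSemi | TIdent String | TError String
  deriving (Show, Eq)

data Decl = Decl [Token] | BadDecl String
  deriving Show

-- Split the token stream at semicolons; if a chunk contains a bad token,
-- record the error and keep going with the next chunk instead of giving up.
parseDecls :: [Token] -> ([Decl], [String])
parseDecls [] = ([], [])
parseDecls ts =
  let (chunk, rest) = break (== TSemi) ts
      (ds, errs)    = parseDecls (drop 1 rest)   -- drop the semicolon itself
  in case [ msg | TError msg <- chunk ] of
       []        -> (Decl chunk : ds, errs)
       (msg : _) -> (BadDecl msg : ds, msg : errs)

-- The bad second declaration is reported, but the third still parses.
example :: ([Decl], [String])
example = parseDecls
  [ TIdent "f", TSemi
  , TIdent "g", TError "unexpected '}'", TSemi
  , TIdent "h" ]

A real implementation would obviously synchronise on layout or keywords rather
than literal semicolons, but that is the general shape.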
> 
> Actually, I think the best solution is to have a parser that runs in the
> background while you're editing, so you'll know if the program is at least
> syntactically correct before even running the compiler.
It occurred to me ages ago that ideally there'd be an Emacs mode or something
which dynamically lexed, parsed and typechecked code as it was typed in.
I can't see any reason why this should be impossible, though it would be more
than a PhD project to actually implement it.  Why can't you get Microsoft to
spend a few million doing it?
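The "parser in the background" idea could even be mocked up without proper
editor support.  Here is a very rough sketch (the syntaxCheck stub and the
half-second polling are just placeholders for a real front end and a real
editor hook):

module Main where

import Control.Concurrent (threadDelay)
import Control.Monad (when)
import System.Directory (getModificationTime)
import System.Environment (getArgs)

-- Stand-in for a real front end: report the first line whose parentheses
-- don't balance, just so the loop has something to complain about.
syntaxCheck :: String -> Maybe String
syntaxCheck src =
  case [ n | (n, l) <- zip [1 :: Int ..] (lines src)
           , count '(' l /= count ')' l ] of
    []      -> Nothing
    (n : _) -> Just ("possible syntax problem at line " ++ show n)
  where count c = length . filter (== c)

-- Re-run the check whenever the file's modification time changes.
watch :: FilePath -> IO ()
watch f = go Nothing
  where
    go lastStamp = do
      stamp <- getModificationTime f
      when (Just stamp /= lastStamp) $ do
        src <- readFile f
        putStrLn (f ++ ": " ++ maybe "looks fine" id (syntaxCheck src))
      threadDelay 500000               -- half a second between polls
      go (Just stamp)

main :: IO ()
main = do
  args <- getArgs
  case args of
    [f] -> watch f
    _   -> putStrLn "usage: watch <file>"

Doing the lexing, parsing and typechecking incrementally as the code is typed
is the hard part; the plumbing above is the easy bit.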
> It was you who suggested the parser be rewritten.  So what's the
> alternative?
Rewrite Happy.

Best wishes

George Russell
