Re: [Haskell-cafe] Re[2]: Haskell performance in heavy numerical computations

2006-07-10 Thread Brent Fulgham


On Jul 10, 2006, at 4:23 AM, Donald Bruce Stewart wrote:

Ah! In this case, on the debian machine, the benchmark was compiled
_without_ -fexcess-precision; that's what's causing the big slowdown.
We had to turn it on for the gp4, but it seems the flag wasn't
propagated to the debian/sempron builds for some reason.

Looks like the ghc/mandelbrot benchmark just needs to be rerun with
-fexcess-precision in this case.


Ah!  Thanks for noticing this -- I did not catch it.

I've rerun, and the new value is more like 0.428 secs for 600.
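
For anyone trying to reproduce this, here is a minimal sketch of the
kind of inner loop involved (my own illustration, not the actual
shootout entry; the 50-iteration limit and the 600x600 grid are
assumptions).  The hot path is pure Double arithmetic, which is exactly
where -fexcess-precision (allowing intermediate floating-point values
to keep more precision than Double, so they need not be rounded at
every step) makes the difference:

-- Illustrative escape-time loop; compiled with something like:
--   ghc -O -fexcess-precision Mandel.hs
escapes :: Double -> Double -> Bool
escapes cr ci = go 0 0 (50 :: Int)
  where
    go zr zi 0 = zr * zr + zi * zi <= 4
    go zr zi k
      | zr * zr + zi * zi > 4 = False
      | otherwise = go (zr * zr - zi * zi + cr) (2 * zr * zi + ci) (k - 1)

main :: IO ()
main = print (length [ ()
                     | x <- [0 .. 599 :: Int], y <- [0 .. 599 :: Int]
                     , escapes (2 * fromIntegral x / 600 - 1.5)
                               (2 * fromIntegral y / 600 - 1) ])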

Thanks,

-Brent


Re: [Haskell-cafe] New Benchmark Under Review: Magic Squares

2006-07-04 Thread Brent Fulgham


Daniel,

I have now tuned Josh Goldfoot's code without changing the order in
which the magic squares are produced.  For a 5x5 magic square, my
machine took about 1 1/2 hours and used 2 MB of memory.  (Considering
that the original code did not finish within 4 1/2 hours here, that
should push the time on the benchmarking machine under 3000 s and put
us in the lead, I hope.)


Thanks for your efforts on this project.  I'm actually more interested
in using your earlier solution, since it is so much faster.  Right now,
the magic square code's runtime rises from 1.5 seconds to 4 hours with
an increase of 1 in the square's dimension.  I would much rather use a
technique that had a more linear (or even exponential) increase!


I would propose modifying the other entries (since there are only a
handful) to match the output of your original solution.


What do you think?

-Brent


Re: [Haskell-cafe] New Benchmark Under Review: Magic Squares

2006-07-04 Thread Brent Fulgham



On Jul 4, 2006, at 5:20 PM, Daniel Fischer wrote:

I would propose modifying the other entries (since there are only a
handful) to match the output of your original solution.

What do you think?


Cool, though the problem of exploding runtime remains; it's only pushed
a little further.  Now I get a 5x5 magic square in 1 s, a 6x6 in 5.4 s,
but 7x7 segfaulted after about 2 1/2 hours - out of memory, I believe.


Hrm.  Well, I still prefer the growth of search space in your version
over the original, since it *was* going from 0.01 s (3x3) to 0.10 s
(4x4) to 4 hours (5x5).


Going from 1 s to 5.4 s to x hours is at least a bit more controlled.

I wonder if anyone can propose a slightly smaller problem, or a better
algorithm?
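
For anyone who wants to experiment, here is a deliberately naive
backtracking sketch in Haskell (my own, not Josh's or Daniel's entry;
it assumes the standard formulation with the numbers 1..n^2, and all
names are illustrative).  It prunes on row sums while filling the
square and checks columns and diagonals only at the end, which makes it
easy to see why the search space explodes with the dimension:

import Data.List (delete, transpose)

-- Magic constant for an n x n square filled with 1..n^2.
magicConstant :: Int -> Int
magicConstant n = n * (n * n + 1) `div` 2

-- Naive backtracking: fill cells row by row, pruning only when a
-- completed row misses the magic constant; columns and diagonals are
-- checked at the very end.
magicSquares :: Int -> [[[Int]]]
magicSquares n = go [] [] [1 .. n * n]
  where
    m = magicConstant n
    go rows row pool
      | length row == n =
          if sum row /= m
            then []
            else
              let rows' = reverse row : rows
              in if length rows' == n
                   then let sq = reverse rows' in [sq | ok sq]
                   else go rows' [] pool
      | otherwise = concat [go rows (x : row) (delete x pool) | x <- pool]
    ok sq =
      all ((== m) . sum) (transpose sq)
        && sum [sq !! i !! i | i <- [0 .. n - 1]] == m
        && sum [sq !! i !! (n - 1 - i) | i <- [0 .. n - 1]] == m

main :: IO ()
main = print (head (magicSquares 3))

Even with per-row pruning, the number of candidate fillings grows
factorially in n^2, which is the explosion we keep running into.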


Thanks,

-Brent


Re: [Haskell-cafe] Shootout favouring C

2006-01-17 Thread Brent Fulgham
Spurred out of my typical lazy state by the recent activity on
Haskell-cafe, I went ahead and bumped Ackermann to 9, 10, and 11 to see
what would happen.

http://shootout.alioth.debian.org/debian/benchmark.php?test=ackermann&lang=all

As expected, GHC makes quite a good showing, moving to 4th position
behind Ada, ML, and Clean (though with a solution massively shorter
than any of the better-performing ones).

Looks like I fouled something up such that Ruby and others do not
register.  I may have to play with stack limits again.
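
For reference, the function at issue is the classic doubly recursive
Ackermann definition; a minimal Haskell sketch follows (mine, not the
shootout entry).  The RTS option in the trailing comment is the sort of
"stack limits" knob mentioned above:

-- ack 3 n evaluates to 2^(n+3) - 3; the recursion depth grows with the
-- result, which is what stresses call stacks rather than arithmetic.
ack :: Int -> Int -> Int
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

main :: IO ()
main = print (ack 3 9)

-- If the default stack proves too small, something like
--   ./ackermann +RTS -K64m -RTS
-- raises GHC's maximum stack size (illustrative invocation).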

-Brent



Re: [Haskell-cafe] Shootout favouring C

2006-01-16 Thread Brent Fulgham
Daniel Fischer wrote:
===
  Is it only my machine, or can you confirm that for the Ackermann
  benchmark, it's very good for C that they chose 9 and not a larger
  value?  For 10, we are significantly faster, and for 11, 12, 13, we
  can run rings around the C programme:
 
Sebastian Sylvan wrote:
===
 This is interesting.  Hopefully it's not intentional, but it's quite
 obvious that for benchmarks where the fastest time is only a few
 fractions of a second, languages with more complex runtime systems
 will be unfairly slow due to the startup cost.
[...]
 In other words, I'd prefer it if all benchmarks were reconfigured to
 target an execution time of at least a few seconds for the fastest
 benchmarks.

I can confirm that it was not intentional, though we have been aware of
the problem.  The original shootout used even smaller values of N.
About a year ago, we increased the values to the levels you see now.

As hardware (and implementations) have improved, it is probably time to
bump the values yet again.

Part of the problem was that some languages (*cough*-ruby-*cough*) have
extremely poor support for recursive calls, and will encounter stack
overflows or other problems when N is above 7 or 8.  We've changed
things a bit to supply greater stack depths to avoid this, but at some
point we just have to bow to reality and mark Python and Ruby as
failures in the Ackermann test (let the hate-mail begin, yet again!).

We've increased the timeouts a bit to help, as well as the stack depth,
so I'll rerun the Ackermann benchmarks with 9 as the lowest level and
extend to 10 and 11 at the higher end.

Thanks,

-Brent


Re: [Haskell-cafe] Shootout favouring C

2006-01-16 Thread Brent Fulgham
   Shootout favouring C
   Daniel Fischer wrote:
   ===
   Is it only my machine, or can you confirm that for the Ackermann
   benchmark, it's very good for C that they chose 9 and not a larger
   value?

  Sebastian Sylvan wrote:
  ===
  This is interesting.  Hopefully it's not intentional,

 --- Isaac Gouy wrote:
 Pardon my rudeness, but this really is getting a bit much!

 Please keep to the true spirit of fictional crime writing and provide
 a motive for these evil characters who will stop at nothing to make
 Haskell seem somehow worse than C.

Isaac, did you get your check from the C compiler consortium yet?  Mine
has not shown up as it usually does on the thirteenth of every month.
Since our filthy lucre is delayed, and those meddling Haskellers are
pointing out our flaws, I think the heat is on.  Perhaps it's time to
approach Google about getting Python's rankings a bit higher and stop
pandering to the C crew?

Oops!  Did this go to the Haskell list?  We are undone!

;-P

-Brent



Re: [Haskell-cafe] Re: Shootout favouring C

2006-01-16 Thread Brent Fulgham
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
 I don't think his reaction is reasonable.  Especially not if he indeed
 is claiming that the cases where C is favoured are unintentional.  The
 fact that he seems to take suggestions for improvements as a personal
 insult hardly helps his case (that the benchmarks aren't intentionally
 geared to favour C).

Let's not descend into needless ad-hominem attacks.  I certainly cannot
claim that the shootout is perfect, or that we do everything right.
But Isaac has been a long-time champion of correcting many of the
problems in the shootout that are largely the result of my ineptitude,
and I think comments that imply a lack of rigor or desire for accuracy
are more rightly pointed at me.
 
 Daniel's benchmarks are interesting, and instead of a paranoid and
 rude response from the people responsible, I would rather have liked
 to see if the results are the same on the systems used in the
 shootout.

I think I answered this in another e-mail.  I do see similar results on
the shootout machine.  Out of a desire for inclusiveness, we
arbitrarily kept lower values of N so that certain scripting languages
(mainly Ruby and Python) would not self-destruct when attempting to
process larger values of N.

Furthermore, I had arbitrarily restricted the timeout to 600 seconds to
avoid running the shootout for weeks due to poor performance in some
implementations.  However, now that the main shootout tests have
stabilized, it's not such a big deal to extend the timeout (as we have
done for spectralnorm and others), and I think it would be good to do
so for the Ackermann test.

Thanks,

-Brent


Re: [Haskell-cafe] Shootout favoring imperative code

2006-01-04 Thread Brent Fulgham
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
 Some of the problems seem to be heavily geared towards an imperative
 *implementation*, meaning that a Haskell version is hardly idiomatic
 Haskell (and as such I, and I suspect others, really have no
 inclination to work on it).

I agree that several benchmarks suffer from this problem, but we have
been trying to change this where possible.

 Take the fannkuch benchmark, for instance.  Part of it is to generate
 input data (all permutations of a sequence).  It would be better not
 to require the program to print out the first 30 lists from the input
 data, since that places additional (completely unneeded) restrictions
 on how to implement the program (and leads to an unnecessarily ugly
 implementation if you generate the input in a non-imperative fashion).

I must disagree.  We based this benchmark on a very standard benchmark
studied by Lisp implementors (and others; see, e.g.,
http://citeseer.ist.psu.edu/315141.html) in an effort to address the
problems of the original array-access benchmark (which was extremely
imperative in nature).  I don't think asking Haskell to handle this
simple program is unfair; Ken Anderson and others dealt with this for
Lisp many years ago.

 I assume it's no coincidence that this sequence exactly matches the
 straightforward way to generate permutations in C.

But I think Haskell may face real-world cases where data must be
produced in some known order.  For Haskell to be a contender in
real-world use, it sometimes has to confront ugly requirements.
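
As an aside (a sketch of my own, not anything from the benchmark
suite), producing the output itself is easy enough in Haskell; the
wrinkle Sebastian points at is that Data.List.permutations does not
yield permutations in the order the benchmark specifies, so a
conforming entry has to generate them in the required order itself:

import Data.List (permutations)

-- Number of "pancake flips": repeatedly reverse the first k elements,
-- where k is the current head, until 1 reaches the front.  This is the
-- quantity the fannkuch benchmark maximises over all permutations.
flips :: [Int] -> Int
flips (1 : _) = 0
flips xs@(x : _) = 1 + flips (reverse front ++ back)
  where
    (front, back) = splitAt x xs
flips [] = 0

main :: IO ()
main = do
  let n = 7 :: Int
      perms = permutations [1 .. n]  -- NOT the order the shootout requires
  mapM_ (putStrLn . concatMap show) (take 30 perms)
  print (maximum (map flips perms))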

 I should also note that I don't think these benchmarks mean anything
 at all.  It would be better to test, say, the best possible solutions
 for some of the ICFP programming contests, for example.  They say a
 lot more about real-world speed.

Agreed.  However, it's a lot easier to get volunteers to implement
small benchmarks (thereby allowing comparison of many languages) than
large ICFP entries.

-Brent


Re: [Haskell-cafe] Shootout favoring imperative code

2006-01-04 Thread Brent Fulgham
--- Sebastian Sylvan [EMAIL PROTECTED] wrote:
 My point here was that even though you _can_ generate this data in
 Haskell, there's no point in requiring it (because the order doesn't
 matter for the benchmark itself).

We do need to agree on which 30 permutations should be used in the
validation of the benchmark (just to make sure that the algorithms are
producing correct output).

Perhaps we could specify the 30 (or perhaps 'N') permutations as an
input file, or perhaps require that they be hard-coded into the
program?

The problem with using an input file is that now we are involving file
I/O in the benchmark, which introduces questions about where time is
being spent (i.e., file access instead of pancake-flipping).

-Brent


[Haskell-cafe] Re: Haskell Speed

2006-01-02 Thread Brent Fulgham


I recently stumbled across this thread, related to the Shootout (which
I run), and wanted to add a few comments.


First, I agree that I am a Haskell novice; however, I rely on people
like yourselves to show me where I am doing things wrong.  If our
solutions are horribly inefficient, please tell us what we can do to
resolve the problem.  But note that in some cases, we are purposely
being inefficient so that we can measure the cost of certain basic
operations.


Doug Bagley started the Shootout as a comparison of scripting
languages.  After I revived it, I started focusing more on functional
and logic languages, since I was curious how these languages would fare
in comparison with the more typically used languages like C/C++.
Consequently, many of the tests show this heritage and are highly
imperative.  We have been working to relax some of these rules, and
have added many new tests that try to be more amenable to comparison of
lazy and 'strict' languages.


In our defense, I would like to plead that it is not easy to devise
short problems (e.g., ones that can be expressed in a page or so of
code -- we try to limit it to 100 lines of C++ code as a rough
benchmark) that also take enough run time for a meaningful measurement.
So, many of the tests initially suffered from foolish looping and
repetition as a means of extending the run times sufficiently to allow
measurement.  We have been changing tests so that fewer of them rely on
these kinds of activities to generate results.  We are always
interested in new ideas, and would consider any benchmark ideas you
have.


Finally, I don't agree at all that Haskell shows poorly in the scores.
The GHC compiler is one of the better performers, and regularly
trounces other languages when we let it do its job.  For example, we
had to jump through many hoops to convince the compiler to actually
invoke all the loops required to do the "same way" tests on methodcall
and similar benchmarks, because the compiler was so sharp at
identifying wasted effort and optimizing it away.  Perhaps we don't say
this strongly enough on the site, but I was truly impressed by its
ability to toss aside wasted calculations and focus on the core
problem.  I'll admit it was frustrating, because we wanted to see how
long it would take for Haskell to make N recursive calls, or what have
you, but the fact that the compiler saw that there was no need to do
the recursion to get the result is quite amazing!  I'm sure this would
be extremely useful when compiling real-world programs.
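
To make that concrete, here is a hedged sketch (the names are mine, not
an actual shootout entry) of the shape such tests ended up taking: the
trip count comes from the command line and the accumulator is printed,
so the optimizer cannot simply discard the recursion as unused work.

import System.Environment (getArgs)

-- A do-almost-nothing loop of the kind some of the older tests relied
-- on.  Reading the bound at run time and printing the result means the
-- work cannot be thrown away as unused.
loop :: Int -> Int -> Int
loop acc 0 = acc
loop acc n = loop (acc + 1) (n - 1)

main :: IO ()
main = do
  [arg] <- getArgs
  print (loop 0 (read arg))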


There have been many cases where the obvious solution in Haskell is a
great performer, but we discover that it's so fast because it has
tossed aside wasted cycles and done only the work necessary to arrive
at a solution.  It's unfortunate, since we then have to dirty the
program with wasted steps to force it to exercise the tasks required.
But in a real-world test I think Haskell would show quite well.


As we evolve into the newer tests, I think you will see Haskell make a
better showing.  Simon Marlow (and others) have been a huge help in
optimizing the existing tests in a fair fashion, and as we provide more
"same thing" type tests, I think Haskell will pull further ahead.


Thanks,

-Brent

P.S.  I must say that I find Haskell to be the most aesthetically
pleasing language -- it's quite beautiful when printed!

