Dear Dave,

| My question to you is: if you were asked to make a recommendation on
| which programming language to use for a program which would run on a
| multi-core processor, but also required adequate memory and running-time
| performance, would you lean towards or away from Haskell? …
[EMAIL PROTECTED] wrote:
G'day all.
Quoting Jon Harrop <[EMAIL PROTECTED]>:
I would recommend adding:
1. FFT.
2. Graph traversal, e.g. "n"th-nearest neighbor.
I'd like to put in a request for Pseudoknot. Does anyone still have it?
This is it, I think:
http://darcs.haskell.org/nofib/spe
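For the graph-traversal suggestion, a minimal sketch of what an nth-nearest-neighbour kernel could look like (my own illustration, not code from nofib; `Graph`, `nthNearest`, and `ring` are invented names):

```haskell
import qualified Data.Map.Strict as M
import qualified Data.Set as S

type Graph = M.Map Int [Int]

-- Vertices at exactly n hops from the start vertex, computed as
-- successive BFS frontiers with a visited set.
nthNearest :: Graph -> Int -> Int -> [Int]
nthNearest g start n = S.toList (go n (S.singleton start) (S.singleton start))
  where
    go 0 frontier _    = frontier
    go k frontier seen =
      let next = S.fromList [ w | v <- S.toList frontier
                                , w <- M.findWithDefault [] v g ]
                 `S.difference` seen
      in go (k - 1) next (seen `S.union` next)

-- A 4-cycle 0-1-2-3-0 as a tiny test graph.
ring :: Graph
ring = M.fromList [(0,[1,3]), (1,[0,2]), (2,[1,3]), (3,[2,0])]

main :: IO ()
main = print (nthNearest ring 0 2)
```

On the ring, the vertex two hops from 0 is 2, reached via either neighbour.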
G'day all.
Quoting Jon Harrop <[EMAIL PROTECTED]>:
I would recommend adding:
1. FFT.
2. Graph traversal, e.g. "n"th-nearest neighbor.
I'd like to put in a request for Pseudoknot. Does anyone still have it?
Cheers,
Andrew Bromage
Jon Harrop wrote:

> On Thursday 20 December 2007 19:02, Don Stewart wrote:
> > Ok, so I should revive nobench then, I suspect.
> >
> > http://www.cse.unsw.edu.au/~dons/nobench/x86_64/results.html
> >
> > that kind of thing?
>
> Many of those benchmarks look good.
>
> However, I suggest avoiding trivially reducible problems like computing
> constants (e, pi, primes, fib) and redundant operations (binary trees). Make
> sure programs accept a non-trivial input (even if it is just an int over a
> wide range). Avoid unnecessary repeats (e.g. atom.hs). This will …
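A sketch of the shape being asked for, with a hypothetical kernel: the problem size arrives at run time, so the compiler cannot fold the whole benchmark to a constant.

```haskell
-- `kernel` stands in for the real benchmark body; the only point
-- here is that n is not known at compile time.
kernel :: Int -> Int
kernel n = sum [1 .. n]

main :: IO ()
main = do
  n <- readLn            -- problem size supplied as input
  print (kernel n)
```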
On Thursday 20 December 2007 19:02, Don Stewart wrote:
> Ok, so I should revive nobench then, I suspect.
>
> http://www.cse.unsw.edu.au/~dons/nobench/x86_64/results.html
>
> that kind of thing?
Many of those benchmarks look good.
However, I suggest avoiding trivially reducible problems like computing
constants (e, pi, primes, fib) and redundant operations (binary trees). …
simonpj:
> Don, and others,
>
> This thread triggered something I've had at the back of my mind for some time.
>
> The traffic on Haskell Cafe suggests that there is a lot of interest in the
> performance of Haskell programs. However, at the moment we don't have any
> good *performance* regression tests. …
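A minimal sketch of the timing core such a performance suite needs (the names are mine; GHC's real benchmark suite along these lines is nofib):

```haskell
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

-- Run an action and report CPU seconds alongside its result. The
-- caller must force the result (e.g. with evaluate), or laziness
-- will make the timing meaningless.
time :: IO a -> IO (a, Double)
time act = do
  t0 <- getCPUTime
  x  <- act
  t1 <- getCPUTime
  return (x, fromIntegral (t1 - t0) / 1e12)  -- getCPUTime is in picoseconds

main :: IO ()
main = do
  (r, secs) <- time (evaluate (sum [1 .. 1000000 :: Int]))
  print r
  print (secs >= 0)
```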
Simon Marlow wrote:
Nobench does already collect code size, but does not yet display it in
the results table. I specifically want to collect compile time as well.
Not sure what the best way to measure allocation and peak memory use
are?
With GHC you need to use "+RTS -s" and then slurp in the output.
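A sketch of how a harness might do that (file names are placeholders; the two summary lines below are abbreviated examples of what `+RTS -s` prints, fabricated here so the parsing step can run standalone):

```shell
# Compile and run with runtime statistics on stderr (commented out;
# requires a GHC installation):
#   ghc -O2 Main.hs -o bench
#   ./bench +RTS -s 2> stats.txt
# Then slurp the numbers out of the summary:
printf '%s\n' \
  '   1,234,567 bytes allocated in the heap' \
  '      70,000 bytes maximum residency (1 sample(s))' > stats.txt
alloc=$(awk '/bytes allocated/ { gsub(",", "", $1); print $1 }' stats.txt)
peak=$(awk '/maximum residency/ { gsub(",", "", $1); print $1 }' stats.txt)
echo "alloc=$alloc peak=$peak"
```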
Malcolm Wallace wrote:
Simon Peyton-Jones <[EMAIL PROTECTED]> wrote:
What would be v helpful would be a regression suite aimed at
performance, that benchmarked GHC (and perhaps other Haskell
compilers) against a set of programs, regularly, and published the
results on a web page, highlighting regressions.
> Assuming your machine architecture supports something like condition codes.
> On, e.g., the MIPS you would need to test for < and == separately.
And even if your machine supports condition codes, you'll need one test plus
two conditional jumps. Not much better than MIPS's two independent tests.
Hello Jon,
Tuesday, October 10, 2006, 1:18:52 PM, you wrote:
> Surely all but one of the comparisons is unnecessary? If you
> use `compare` instead of (==) and friends, won't one do (I'm
> assuming that the compiler can convert cases on LT, EQ and
> GT into something sensible -- after all, wasn't …
"Brian Hulley" <[EMAIL PROTECTED]> writes:
> Lennart Augustsson wrote:
> > I think your first try looks good.
> [snip]
> > ...
> > addPoly1 p1@(p1h@(Nom p1c p1d):p1t) p2@(p2h@(Nom p2c p2d):p2t)
> >| p1d == p2d = Nom (p1c + p2c) p1d : addPoly1 p1t p2t
> >| p1d < p2d = p1h : addPoly1 p1t p2
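The `compare` suggestion from the earlier messages, applied to this function as a sketch. I have filled in the missing base cases and the `GT` branch myself, so treat those as a reconstruction, not the original code:

```haskell
data Nom = Nom Int Int          -- coefficient, degree
  deriving (Show, Eq)

-- Lists are in ascending degree order. One three-way `compare` per
-- step replaces the separate (==), (<), (>) guard tests.
addPoly :: [Nom] -> [Nom] -> [Nom]
addPoly [] p2 = p2
addPoly p1 [] = p1
addPoly p1@(Nom c1 d1 : t1) p2@(Nom c2 d2 : t2) =
  case compare d1 d2 of
    EQ -> Nom (c1 + c2) d1 : addPoly t1 t2
    LT -> Nom c1 d1       : addPoly t1 p2
    GT -> Nom c2 d2       : addPoly p1 t2
```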
On 09/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
Cale Gibbard wrote:
> I might also like to point out that by "small" and "large", we're
> actually referring to the number of ways in which components of the
> datastructure can be computed separately, which tends to correspond
> nicely to …
Yang wrote:
>> > But laziness will cause this to occupy Theta(n)-space of cons-ing
>> > thunks.
>>
>> No, it doesn't. Insisting on accumulator recursion does. Actually,
>> using reverse does. Think about it, a strict reverse cannot use less
>> than O(n) space, either.
>
> Well, in general, the …
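The accumulator point, as the standard illustration (mine, not the thread's original program): it is the left fold with a lazy accumulator, not laziness per se, that builds the Theta(n) chain of thunks.

```haskell
import Data.List (foldl')

-- foldl defers every (+), building n nested thunks before anything
-- is forced; foldl' forces the accumulator at each step and runs in
-- constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 100000])
```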
On 08/10/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

    large -> large    depends on large input like above,
                      but roughly same otherwise
    small -> large    roughly same

Actually, it's an important point that laziness is generally
preferable in these cases as well, since the lar…
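A small illustration (mine, not from the thread) of why the lazy "small -> large" case costs roughly the same: the consumer only ever demands the cells it uses, even from a conceptually infinite result.

```haskell
-- map over an infinite list: only the five cells that `take`
-- demands are ever constructed.
firstSquares :: Int -> [Int]
firstSquares n = take n (map (^ 2) [1 ..])

main :: IO ()
main = print (firstSquares 5)
```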
Hello,

admittedly, there is a lack of material on lazy evaluation and
performance. IMHO, the current wiki(book) and other articles are
somewhat inadequate, which stems from the fact that the current rumor is
"strictness is fast" and "arrays must be unboxed" or so. I don't agree
with this, so I post some …
Bulat Ziganshin wrote:

> numerical speed is poor in GHC 6.4, according to my tests: it's 10-20
> times worse than that of gcc. AFAIR, the mandelbrot benchmark of the Great
> Language Shootout proves this -- despite all optimization attempts, the
> GHC entry is still 5-10 times slower than the gcc/ocaml/clean ones.

We av…
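For reference, the inner loop such a shootout entry revolves around, as a sketch of my own (not the actual entry): strict accumulators in this kind of numeric loop are exactly the sort of detail those optimization attempts tune, since they let GHC keep the state unboxed.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Escape-time count for the point c = cr + ci*i: iterate z <- z^2 + c
-- until |z| > 2 or the iteration limit is reached.
mandel :: Int -> Double -> Double -> Int
mandel maxIter cr ci = go 0 0 0
  where
    go !i !zr !zi
      | i >= maxIter        = i
      | zr*zr + zi*zi > 4.0 = i
      | otherwise           = go (i + 1) (zr*zr - zi*zi + cr) (2*zr*zi + ci)

main :: IO ()
main = print (mandel 50 0 0)
```

The origin never escapes, so it hits the iteration limit; a point like (2, 2) escapes after one step.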