Re: [Haskell-cafe] Re: Hashtable woes

2006-02-22 Thread Ketil Malde
"Sebastian Sylvan" <[EMAIL PROTECTED]> writes:

> I think you need to run the Fasta benchmark with N=25 to
> generate the input file for this benchmark...

I made the file available at http://www.ii.uib.no/~ketil/knuc.input

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Hashtable woes

2006-02-22 Thread Sebastian Sylvan
On 2/22/06, Simon Marlow <[EMAIL PROTECTED]> wrote:
> On 21 February 2006 17:21, Chris Kuklewicz wrote:
>
> > From the shootout itself:
> >
> > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=3
> >
> > and
> >
> > http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=2
> >
> > (I forget the exact difference between them)
> >
> > From the wiki (the Current Entry):
> >
> > http://haskell.org/hawiki/KnucleotideEntry#head-dfcdad61d34153143175bb9f8237d87fe0813092
>
> Thanks... sorry for being a bit dim, but how do I make this test run for
> longer?  I downloaded the example input.  The prog doesn't seem to take
> an argument, although the shootout suggests it should be given a
> parameter of 25.
>

I think you need to run the Fasta benchmark with N=25 to generate
the input file for this benchmark...

/S
--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862


RE: [Haskell-cafe] Re: Hashtable woes

2006-02-22 Thread Simon Marlow
On 21 February 2006 17:21, Chris Kuklewicz wrote:

> From the shootout itself:
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=3
>
> and
>
> http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=2
>
> (I forget the exact difference between them)
>
> From the wiki (the Current Entry):
>
> http://haskell.org/hawiki/KnucleotideEntry#head-dfcdad61d34153143175bb9f8237d87fe0813092

Thanks... sorry for being a bit dim, but how do I make this test run for
longer?  I downloaded the example input.  The prog doesn't seem to take
an argument, although the shootout suggests it should be given a
parameter of 25.

Cheers,
Simon


Re: [Haskell-cafe] Re: Hashtable woes

2006-02-21 Thread Chris Kuklewicz
Simon Marlow wrote:
> Brian Sniffen wrote:
>> On 2/10/06, Ketil Malde <[EMAIL PROTECTED]> wrote:
>>
>>
>>> Hmm...perhaps it is worth it, then?  The benchmark may specify "hash
>>> table", but I think it is fair to interpret it as "associative data
>>> structure" - after all, people are using "associative arrays" that
>>> (presumably) don't guarantee a hash table underneath, and it can be
>>> argued that Data.Map is the canonical way to achieve that in Haskell.
>>
>>
>> Based on this advice, I wrote a k-nucleotide entry using the rough
>> structure of the OCaml entry, but with the manual IO from Chris and
>> Don's "Haskell #2" entry.  It runs in under 4 seconds on my machine,
>> more than ten times the speed of the fastest Data.HashTable entry.
> 
> I haven't been following this too closely, but could someone provide me
> with (or point me to) the badly performing Data.HashTable example, so we
> can measure our improvements?
> 
> Cheers,
> Simon

From the shootout itself:

http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=3

and

http://shootout.alioth.debian.org/gp4/benchmark.php?test=knucleotide&lang=ghc&id=2

(I forget the exact difference between them)

From the wiki (the Current Entry):

http://haskell.org/hawiki/KnucleotideEntry#head-dfcdad61d34153143175bb9f8237d87fe0813092


[Haskell-cafe] Re: Hashtable woes

2006-02-21 Thread Simon Marlow

Brian Sniffen wrote:

On 2/10/06, Ketil Malde <[EMAIL PROTECTED]> wrote:



Hmm...perhaps it is worth it, then?  The benchmark may specify "hash
table", but I think it is fair to interpret it as "associative data
structure" - after all, people are using "associative arrays" that
(presumably) don't guarantee a hash table underneath, and it can be
argued that Data.Map is the canonical way to achieve that in Haskell.



Based on this advice, I wrote a k-nucleotide entry using the rough
structure of the OCaml entry, but with the manual IO from Chris and
Don's "Haskell #2" entry.  It runs in under 4 seconds on my machine,
more than ten times the speed of the fastest Data.HashTable entry.


I haven't been following this too closely, but could someone provide me 
with (or point me to) the badly performing Data.HashTable example, so we 
can measure our improvements?


Cheers,
Simon


Re: [Haskell-cafe] Re: Hashtable woes

2006-02-15 Thread Jan-Willem Maessen


On Feb 15, 2006, at 3:42 AM, Ketil Malde wrote:



Not sure how relevant this is, but I see there is a recently released
hash library here that might be a candidate for FFIing?

https://sourceforge.net/projects/goog-sparsehash/


The real issue isn't the algorithms involved; I saw the best performance
from the stupidest hash algorithm (well, and switching to multiplicative
hashing rather than mod-k).  The problem is GC of hash table elements.
FFI-ing this library would give us really good algorithms, but the GC of
table elements would all have to indirect through the FFI, and I'd
expect that to make things *worse*, not better.
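Multiplicative hashing, for reference, replaces mod-by-a-prime with a multiply and a shift. This is a minimal sketch in Knuth's style (the constant 2654435761 and the name `multHash` are assumptions for illustration, not necessarily what Jan-Willem's module used):

```haskell
import Data.Word (Word32)
import Data.Bits (shiftR)

-- Knuth-style multiplicative hash: multiply by 2654435761 (an odd
-- constant close to 2^32 / golden ratio) and keep the top bits.
-- For a table of 2^k buckets, the bucket index is the high k bits
-- of the 32-bit product; no mod-by-prime is needed.
multHash :: Int      -- ^ k: table has 2^k buckets
         -> Word32   -- ^ key
         -> Word32   -- ^ bucket index in [0, 2^k)
multHash k key = (key * 2654435761) `shiftR` (32 - k)

main :: IO ()
main = print (map (multHash 4) [1 .. 8])
```

Word32 multiplication wraps modulo 2^32, which is exactly what the scheme relies on; the shift is much cheaper than the division a mod-k table performs on every probe.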


-Jan



| An extremely memory-efficient hash_map implementation. 2 bits/entry
| overhead! The SparseHash library contains several hash-map
| implementations, including implementations that optimize for space
| or speed.

-k
--
If I haven't seen further, it is by standing in the footprints of giants




Re: [Haskell-cafe] Re: Hashtable woes

2006-02-15 Thread John Meacham
On Wed, Feb 15, 2006 at 09:42:10AM +0100, Ketil Malde wrote:
> 
> Not sure how relevant this is, but I see there is a recently released
> hash library here that might be a candidate for FFIing?
> 
> https://sourceforge.net/projects/goog-sparsehash/
> 
> | An extremely memory-efficient hash_map implementation. 2 bits/entry
> | overhead! The SparseHash library contains several hash-map
> | implementations, including implementations that optimize for space
> | or speed.

If we want really fast maps, we should be using this. it beats the
competition by far:

 http://judy.sourceforge.net/

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: [Haskell-cafe] Re: Hashtable woes

2006-02-15 Thread Ketil Malde

Not sure how relevant this is, but I see there is a recently released
hash library here that might be a candidate for FFIing?

https://sourceforge.net/projects/goog-sparsehash/

| An extremely memory-efficient hash_map implementation. 2 bits/entry
| overhead! The SparseHash library contains several hash-map
| implementations, including implementations that optimize for space
| or speed.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: [Haskell-cafe] Re: Hashtable woes

2006-02-10 Thread Brian Sniffen
On 2/10/06, Ketil Malde <[EMAIL PROTECTED]> wrote:

> Hmm...perhaps it is worth it, then?  The benchmark may specify "hash
> table", but I think it is fair to interpret it as "associative data
> structure" - after all, people are using "associative arrays" that
> (presumably) don't guarantee a hash table underneath, and it can be
> argued that Data.Map is the canonical way to achieve that in Haskell.

Based on this advice, I wrote a k-nucleotide entry using the rough
structure of the OCaml entry, but with the manual IO from Chris and
Don's "Haskell #2" entry.  It runs in under 4 seconds on my machine,
more than ten times the speed of the fastest Data.HashTable entry.
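The core of that approach can be sketched like this (this is not Brian's actual entry, which also used hand-rolled IO; `kmerCounts` is a hypothetical name, and only the associative-structure idea is shown):

```haskell
import qualified Data.Map as M
import Data.List (tails)

-- Count every length-k substring of a sequence -- the heart of the
-- k-nucleotide benchmark -- using Data.Map as the associative
-- structure instead of a hash table.
kmerCounts :: Int -> String -> M.Map String Int
kmerCounts k s =
  M.fromListWith (+) [ (take k t, 1) | t <- tails s, length t >= k ]

main :: IO ()
main = print (M.toList (kmerCounts 2 "GGTATT"))
```

`fromListWith (+)` folds duplicate keys into counts in one pass, which is the "make a histogram" usage the thread keeps returning to.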

--
Brian T. Sniffen
[EMAIL PROTECTED]or[EMAIL PROTECTED]
http://www.evenmere.org/~bts


Re: [Haskell-cafe] Re: Hashtable woes

2006-02-10 Thread Ketil Malde

> indicates that it triggers a bug in 6.4.1

Ah, I missed that.

For my word counting indexes, I've settled on Data.Map, calculating an
Int or Integer hash for each word (depending on word length, which is
fixed).  I haven't given it nearly the effort the shootout programs
have seen, though, so I'm not sure how optimal it is.
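One way to get such an Int hash for fixed-length nucleotide words (a sketch of the idea only, not Ketil's code; `packWord` is a hypothetical name) is to pack each base into two bits:

```haskell
import Data.Bits (shiftL, (.|.))
import Data.Char (toUpper)

-- Pack a fixed-length word over {A,C,G,T} into an Int, two bits per
-- base.  Words of up to ~30 characters fit in a 64-bit Int, so the
-- packed value can serve directly as a Data.Map key or as a hash.
packWord :: String -> Int
packWord = foldl step 0
  where
    step acc c = (acc `shiftL` 2) .|. base (toUpper c)
    base 'A' = 0
    base 'C' = 1
    base 'G' = 2
    base 'T' = 3
    base _   = error "packWord: not a nucleotide"

main :: IO ()
main = print (packWord "ACGT")
```

Comparing packed Ints is far cheaper than comparing Strings character by character, which matters a lot for the O(log n) comparisons a Data.Map lookup performs.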

Other experiences with FiniteMap/Data.Map etc seem to indicate that
they are in the same ball park as Python's hashes.

> We never pounded on Data.Map, but I suspect it cannot be as bad as
> Data.Hashtable. 

Hmm...perhaps it is worth it, then?  The benchmark may specify "hash
table", but I think it is fair to interpret it as "associative data
structure" - after all, people are using "associative arrays" that
(presumably) don't guarantee a hash table underneath, and it can be
argued that Data.Map is the canonical way to achieve that in Haskell. 

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants



Re: [Haskell-cafe] Re: Hashtable woes

2006-02-10 Thread Chris Kuklewicz
Ketil Malde wrote:
> Chris Kuklewicz <[EMAIL PROTECTED]> writes:
> 
>> Is Jan-Willem Maessen's Hash available anywhere?  I could benchmark it.
> 
> Did you ever get around to run the benchmark?  I browsed around a bit,
> and found that the knucleotide is probably the worst GHC benchmark in
> the shootout (even TCL beats GHC by a factor of two!) - which is
> disheartening, because I rely a lot on associative data structures
> (usually Data.Map) in my programs.
> 
> Or have Adrian Hey's AVL-trees been tried?
> 
> -k

No, I did not try it.  This message from Simon Marlow

> Jan-Willem's HashTable attached.  It uses unsafeThaw/unsafeFreeze tricks
> to avoid the GC overheads, for this you need an up to date GHC due to a
> bug in the garbage collector: grab a STABLE snapshot (6.4.1 won't work).
> Or remove the unsafeThaw/unsafeFreeze to use it with 6.4.1, and be
> prepared to bump the heap size.
> 
> In GHC 6.6 the unsafeThaw/unsafeFreeze tricks aren't required, because
> the GC is essentially doing it for you - we put a write barrier in the
> IOArray implementation.

indicates that it triggers a bug in 6.4.1, which is what the shootout is using.
And I suspect bumping the heap size just won't cut it for the amount of data
we are processing.

But I did not test that suspicion.

We never pounded on Data.Map, but I suspect it cannot be as bad as 
Data.Hashtable.

-- 
Chris


Re: [Haskell-cafe] Re: Hashtable woes

2006-01-25 Thread Jan-Willem Maessen
Because Data.HashTable is tied rather specially into the internals of
Data.Typeable (I think I'm getting that right), it's hard to just drop
in a new module with the same name.


But for those eager to tinker, here are two modules for simple hash
tables, one using table doubling and one using multiplicative hashing.
The interface is identical to Data.HashTable.


When I get time (not for a couple of weeks I fear) I'm planning to  
learn cabal and cabalize these and a few other related hash table  
modules all under Data.HashTable.* (including a module of tests and a  
unifying type class).
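Such a unifying class might look roughly like this (purely hypothetical, not Jan-Willem's actual interface; `HashTableLike` and the method names are invented for illustration), with a Data.Map behind an IORef as a trivial reference instance:

```haskell
import Data.IORef (IORef, newIORef, readIORef, modifyIORef)
import qualified Data.Map as M

-- A hypothetical unifying interface for a Data.HashTable.* family:
-- any mutable associative container with create/insert/lookup in IO.
class HashTableLike t where
  newHT    :: Ord k => IO (t k v)
  insertHT :: Ord k => t k v -> k -> v -> IO ()
  lookupHT :: Ord k => t k v -> k -> IO (Maybe v)

-- Reference instance: an immutable Data.Map stored in an IORef.
newtype MapTable k v = MapTable (IORef (M.Map k v))

instance HashTableLike MapTable where
  newHT                     = MapTable `fmap` newIORef M.empty
  insertHT (MapTable r) k v = modifyIORef r (M.insert k v)
  lookupHT (MapTable r) k   = fmap (M.lookup k) (readIORef r)

main :: IO ()
main = do
  t <- newHT :: IO (MapTable String Int)
  insertHT t "GGTA" 1
  insertHT t "GGTA" 2        -- second insert overwrites the first
  lookupHT t "GGTA" >>= print
```

A real version would carry the hash and equality functions in the class rather than an Ord constraint, but the shape of the interface is the same.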


-Jan


...
Is Jan-Willem Maessen's Hash available anywhere?  I could benchmark  
it.




Doubling.hs
Description: Binary data


Multiplicative.hs
Description: Binary data




Re: [Haskell-cafe] Re: Hashtable woes

2006-01-24 Thread Chris Kuklewicz
Bulat Ziganshin wrote:
> Hello Chris,
>
> Monday, January 23, 2006, 6:09:15 PM, you wrote:
>
> CK> Using -A400m I get 39s down from 55s.  That is the best Data.HashTable
> CK> time I have seen. (Using -A10m and -A100m were a little slower).
>
> 1) "-A400m" is a bit unusual. "-H400m" for a 500 MB machine, "-H800m"
> for a 1 GB machine, and so on, will be fastest. The current GHC docs lack
> explanation in this area, but basically -H just allocates that much
> space up front and then dynamically adjusts -A after each GC, giving all
> available space to the generation-0 memory pool.
>
> 2) It would be better to say which of those were the MUT and GC times in
> your program, and better still just to post its output with "+RTS -sstderr".
>
> Please post improved results here; that's really interesting for me,
> and for my programs too ;)
>

Here is all the data you wanted:

Running "./cekp +RTS -sstderr -RTS < kfile"

  6,292,511,740 bytes allocated in the heap
  1,641,755,092 bytes copied during GC
     38,233,484 bytes maximum residency (215 sample(s))

      24003 collections in generation 0 ( 27.23s)
        215 collections in generation 1 (  7.80s)

         82 Mb total memory in use

  INIT  time    0.00s  (  0.01s elapsed)
  MUT   time   71.32s  ( 78.99s elapsed)
  GC    time   35.03s  ( 39.19s elapsed)
  RP    time    0.00s  (  0.00s elapsed)
  PROF  time    0.00s  (  0.00s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time  106.35s  (118.19s elapsed)

  %GC time      32.9%  (33.2% elapsed)

  Alloc rate    88,229,272 bytes per MUT second

  Productivity  67.1% of total user, 60.3% of total elapsed
Is that 6 billion bytes allocated?  Yes it is.  So use -H400m:
" ./cekp +RTS -H400m -sstderr -RTS < kfile "

  6,293,156,400 bytes allocated in the heap
     99,679,428 bytes copied during GC
      8,742,464 bytes maximum residency (2 sample(s))

         18 collections in generation 0 (  2.84s)
          2 collections in generation 1 (  0.38s)

        392 Mb total memory in use

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time   82.42s  (100.62s elapsed)
  GC    time    3.22s  (  3.93s elapsed)
  RP    time    0.00s  (  0.00s elapsed)
  PROF  time    0.00s  (  0.00s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time   85.65s  (104.55s elapsed)

  %GC time       3.8%  (3.8% elapsed)

  Alloc rate    76,345,461 bytes per MUT second

  Productivity  96.2% of total user, 78.8% of total elapsed

So this is the small improvement, at the cost of turning off the GC.  The
Data.HashTable performance is the bottleneck, but it is not the GC's fault.

Using the way-over-optimized C hashtable that I adapted:
" ./useit-tree +RTS -s -RTS < ../kfile "

  1,096,743,184 bytes allocated in the heap
    190,832,852 bytes copied during GC
     38,233,484 bytes maximum residency (9 sample(s))

       4183 collections in generation 0 (  1.39s)
          9 collections in generation 1 (  0.88s)

         82 Mb total memory in use

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time   14.55s  ( 16.06s elapsed)
  GC    time    2.27s  (  2.87s elapsed)
  RP    time    0.00s  (  0.00s elapsed)
  PROF  time    0.00s  (  0.00s elapsed)
  EXIT  time    0.00s  (  0.00s elapsed)
  Total time   16.82s  ( 18.93s elapsed)

  %GC time      13.5%  (15.2% elapsed)

  Alloc rate    75,377,538 bytes per MUT second

  Productivity  86.5% of total user, 76.9% of total elapsed

Here I see only 1 billion bytes allocated, but I think that hides the 100 to 200
million that are being allocated in the C code.

Note that the total time has dropped by a factor of five over the hashtable
without GC, and a factor of six if I had let the GC run.  I can reduce the total
time from 16.82s to 16.20s by adding -H here, but that is not called for.
(The time without profiling is 12.5s.)

This benchmark stresses hash tables by adding 1,250,000 string keys, with the
values being the count of the times each key was added.  This is the common
"make a histogram" usage, and never needs to delete from the hashtable.
(The keys are all fixed length, and their number is known.)

Looking at it as a black box, Data.HashTable is doing a tremendous amount of MUT
work that dominates the run-time, taking many times longer than the Hashtbl that
OCaml uses.  And D's associative array is scary fast.

I have not yet tested the proposed Data.Hash for GHC 6.6.  Apparently I need to
get a darcs checkout to avoid a bug in GHC 6.4.1 first.

I don't expect a Haskell data structure to be as fast as the over-optimized
C code.  But I am very surprised that there is no mutable data structure that
can compete with OCaml.

-- 
Chris


Re[2]: [Haskell-cafe] Re: Hashtable woes

2006-01-23 Thread Bulat Ziganshin
Hello Chris,

Monday, January 23, 2006, 6:09:15 PM, you wrote:

CK> Using -A400m I get 39s down from 55s.  That is the best Data.HashTable
CK> time I have seen. (Using -A10m and -A100m were a little slower).

1) "-A400m" is a bit unusual. "-H400m" for a 500 MB machine, "-H800m"
for a 1 GB machine, and so on, will be fastest. The current GHC docs lack
explanation in this area, but basically -H just allocates that much
space up front and then dynamically adjusts -A after each GC, giving all
available space to the generation-0 memory pool.

2) It would be better to say which of those were the MUT and GC times in
your program, and better still just to post its output with "+RTS -sstderr".

Please post improved results here; that's really interesting for me,
and for my programs too ;)

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]





RE: [Haskell-cafe] Re: Hashtable woes

2006-01-23 Thread Simon Marlow
On 23 January 2006 15:09, Chris Kuklewicz wrote:

> That is good to hear.  The benchmark's tests take 1,250,000 pre-generated
> strings as the keys.  At the end, the string keys are 18 characters long,
> drawn randomly from a set of 4 characters.  So the hash computations are a
> nontrivial hit.
>
> Using -A400m I get 39s down from 55s.  That is the best Data.HashTable
> time I have seen. (Using -A10m and -A100m were a little slower).
>
> Using my over-optimized C hashtable I get 12.3 seconds.  The associative
> arrays in OCaml and D are still faster.  So you see why I long for GHC 6.6.
>
> Is Jan-Willem Maessen's Hash available anywhere?  I could benchmark it.

Jan-Willem's HashTable attached.  It uses unsafeThaw/unsafeFreeze tricks
to avoid the GC overheads; for this you need an up-to-date GHC due to a
bug in the garbage collector: grab a STABLE snapshot (6.4.1 won't work).
Or remove the unsafeThaw/unsafeFreeze to use it with 6.4.1, and be
prepared to bump the heap size.

In GHC 6.6 the unsafeThaw/unsafeFreeze tricks aren't required, because
the GC is essentially doing it for you - we put a write barrier in the
IOArray implementation.
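The mechanics of the trick look like this (a sketch only; it shows the zero-copy freeze/thaw calls from Data.Array.Base, not the hash table itself, and the GC subtlety described above is exactly why it needed a post-6.4.1 snapshot):

```haskell
import Data.Array (Array, (!))
import Data.Array.IO (IOArray, newArray, writeArray)
import Data.Array.Base (unsafeFreeze, unsafeThaw)

main :: IO ()
main = do
  arr <- newArray (0, 9) 0 :: IO (IOArray Int Int)
  writeArray arr 3 42
  -- Freeze without copying: the GC now sees an immutable array and
  -- will not rescan it on every minor collection.
  frozen <- unsafeFreeze arr :: IO (Array Int Int)
  print (frozen ! 3)
  -- Thaw without copying when the table must be mutated again.
  -- Only safe if the frozen view is never used afterwards.
  arr' <- unsafeThaw frozen :: IO (IOArray Int Int)
  writeArray arr' 3 7
  frozen' <- unsafeFreeze arr' :: IO (Array Int Int)
  print (frozen' ! 3)
```

Because neither call copies the array, a hash table can keep its bucket array "frozen" between update phases, which is what sidesteps the per-GC scanning cost of large mutable arrays.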

Cheers,
Simon


HashTable.hs
Description: HashTable.hs


Re: [Haskell-cafe] Re: Hashtable woes

2006-01-23 Thread Chris Kuklewicz
Simon Marlow wrote:
> Bulat Ziganshin wrote:
>> Hello Chris,
>>
>> Monday, January 23, 2006, 12:27:53 PM, you wrote:
>>
>> CK> The only mutable data structure that comes with GHC besides arrays is
>> CK> Data.Hashtable, which is not competitive with OCaml's Hashtbl or DMD's
>> CK> associative arrays (unless there is something great hidden under
>> CK> Data.Graph). Is there any hope for GHC 6.6? Does anyone have pointers to
>> CK> an existing library at all? Perl and Python and Lua also have excellent
>> CK> built-in hashtable capabilities. Where is a good library for Haskell?
>>
>> 1) Have you used "+RTS -A10m" / "+RTS -H100m"?
>>
>> 2) Simon Marlow optimized something in the IOArray handling, but I
>> don't understand what was changed. See
>> http://cvs.haskell.org/trac/ghc/ticket/650
> 
> Much of the GC overhead of Data.Hash should be gone in GHC 6.6.  I also
> have an improved implementation of Data.Hash from Jan-Willem Maessen to
> import, but that will be 6.6 rather than 6.4.2.
> 
> Cheers,
> Simon

That is good to hear.  The benchmark's tests take 1,250,000 pre-generated
strings as the keys.  At the end, the string keys are 18 characters long, drawn
randomly from a set of 4 characters.  So the hash computations are a nontrivial 
hit.

Using -A400m I get 39s down from 55s.  That is the best Data.HashTable time I
have seen. (Using -A10m and -A100m were a little slower).

Using my over-optimized C hashtable I get 12.3 seconds.  The associative
arrays in OCaml and D are still faster.  So you see why I long for GHC 6.6.

Is Jan-Willem Maessen's Hash available anywhere?  I could benchmark it.


[Haskell-cafe] Re: Hashtable woes

2006-01-23 Thread Simon Marlow

Bulat Ziganshin wrote:

Hello Chris,

Monday, January 23, 2006, 12:27:53 PM, you wrote:

CK> The only mutable data structure that comes with GHC besides arrays is
CK> Data.Hashtable, which is not competitive with OCaml's Hashtbl or DMD's
CK> associative arrays (unless there is something great hidden under
CK> Data.Graph). Is there any hope for GHC 6.6? Does anyone have pointers to
CK> an existing library at all? Perl and Python and Lua also have excellent
CK> built-in hashtable capabilities. Where is a good library for Haskell?

1) Have you used "+RTS -A10m" / "+RTS -H100m"?

2) Simon Marlow optimized something in the IOArray handling, but I
don't understand what was changed. See http://cvs.haskell.org/trac/ghc/ticket/650


Much of the GC overhead of Data.Hash should be gone in GHC 6.6.  I also 
have an improved implementation of Data.Hash from Jan-Willem Maessen to 
import, but that will be 6.6 rather than 6.4.2.


Cheers,
Simon