Re: [PATCHES] hash index improving v3

2008-09-12 Thread Kenneth Marshall
On Thu, Sep 11, 2008 at 08:51:53PM -0600, Alex Hunsaker wrote:
 On Thu, Sep 11, 2008 at 9:24 AM, Kenneth Marshall [EMAIL PROTECTED] wrote:
  Alex,
 
  I meant to check the performance with increasing numbers of collisions,
  not increasing size of the hashed item. In other words, something like
  this:
 
  # $max_coll is the largest collision count to test
  for ($coll = 500; $coll <= $max_coll; $coll = $coll * 2) {
    for ($i = 0; $i <= $max_coll; $i++) {
      hash(int8 $i);
    }
    # add the appropriate number of collisions, distributed evenly to
    # minimize the packing overrun problem
    for ($dup = 0; $dup <= $coll; $dup++) {
      hash(int8 MAX_INT + $dup * $max_coll/$coll);
    }
  }
 
  Ken
 
 *doh* right something like this...
 
 create or replace function create_test_hash() returns bool as $$
 declare
     coll integer default 500;
     -- tweak this to where create index gets really slow
     max_coll integer default 100;
 begin
     loop
         execute 'create table test_hash_'|| coll ||'(num int8);';
         execute 'insert into test_hash_'|| coll ||' (num) select n
             from generate_series(0, '|| max_coll ||') as n;';
         execute 'insert into test_hash_'|| coll ||' (num) select
             (n+4294967296) * '|| max_coll ||'/'|| coll ||'::int from
             generate_series(0, '|| coll ||') as n;';
 
         coll := coll * 2;
 
         exit when coll >= max_coll;
     end loop;
     return true;
 end;
 $$ language 'plpgsql';
 
 And then benchmark each table, and for extra credit cluster the table
 on the index and benchmark that.
 
 Also obviously with the hashint8 which just ignores the top 32 bits.
 
 Right?
 
Yes, that is exactly right.

Ken

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] hash index improving v3

2008-09-11 Thread Kenneth Marshall
On Wed, Sep 10, 2008 at 10:17:31PM -0600, Alex Hunsaker wrote:
 On Wed, Sep 10, 2008 at 9:49 PM, Alex Hunsaker [EMAIL PROTECTED] wrote:
  On Wed, Sep 10, 2008 at 7:04 AM, Kenneth Marshall [EMAIL PROTECTED] wrote:
  On Tue, Sep 09, 2008 at 07:23:03PM -0600, Alex Hunsaker wrote:
  On Tue, Sep 9, 2008 at 7:48 AM, Kenneth Marshall [EMAIL PROTECTED] 
  wrote:
   I think that the glacial speed for generating a big hash index is
   the same problem that the original code faced.
 
  Yeah, sorry, I was not saying it was a new problem with the patch.  Err,
  at least not trying to :) *Both* of them had been running for 18+ hours (I
  finally killed them sometime Sunday, at around +32 hours...)
 
   It would be useful to have an equivalent test for the hash-only
   index without the modified int8 hash function, since that would
   be more representative of its performance. The collision rates
   that I was observing in my tests of the old and new mix() functions
   were about 2 * (1/1) of what your test generated. You could just
   test against the integers between 1 and 200.
 
  Sure but then its pretty much just a general test of patch vs no
  patch.  i.e. How do we measure how much longer collisions take when
  the new patch makes things faster?  That's what I was trying to
  measure... Though I apologize I don't think that was clearly stated
  anywhere...
 
  Right, I agree that we need to benchmark the collision processing
  time difference. I am not certain that two data points are useful
  information. There are 469 collisions with our current hash function
  on the integers from 1 to 200. What about testing the performance
  at power-of-2 multiples of 500, i.e. 500, 1000, 2000, 4000, 8000,...
  Unless you adjust the fill calculation for the CREATE INDEX, I would
  stop once the time to create the index spikes. It might also be useful
  to see if a CLUSTER affects the performance as well. What do you think
  of that strategy?
 
  Not sure it will be a good benchmark of collision processing.  Then
  again you seem to have studied the hash algo more closely than I have.  I'll go
  see about doing this.  Stay tuned.
 
 Assuming I understood you correctly (and I probably didn't), this does
 not work very well because you max out at 27,006 values before you get
 this error:
 ERROR:  index row size 8152 exceeds hash maximum 8144
 HINT:  Values larger than a buffer page cannot be indexed.
 
 So is a power-of-2 multiple of 500 not simply:
 x = 500;
 while(1)
 {
 print x;
 x *= 2;
 }
 
 ?
 
Alex,

I meant to check the performance with increasing numbers of collisions,
not increasing size of the hashed item. In other words, something like
this:

# $max_coll is the largest collision count to test
for ($coll = 500; $coll <= $max_coll; $coll = $coll * 2) {
  for ($i = 0; $i <= $max_coll; $i++) {
    hash(int8 $i);
  }
  # add the appropriate number of collisions, distributed evenly to
  # minimize the packing overrun problem
  for ($dup = 0; $dup <= $coll; $dup++) {
    hash(int8 MAX_INT + $dup * $max_coll/$coll);
  }
}

Ken

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] hash index improving v3

2008-09-09 Thread Kenneth Marshall
On Sat, Sep 06, 2008 at 08:23:05PM -0600, Alex Hunsaker wrote:
 On Sat, Sep 6, 2008 at 1:09 PM, Tom Lane [EMAIL PROTECTED] wrote:
 For the convenience of anyone intending to test, here is an updated
 patch against CVS HEAD that incorporates Alex's fix.
 
 Here are the results for a table containing 1 million entries that
 will generate hash collisions.  It paints a bad picture for the patch
 but then again I'm not sure how relevant the issue is.  For example,
 yesterday I imported a table with 10 million collisions and the create
 index is still running (now at ~18 hours).  Maybe we should warn
 if there are lots of collisions when creating the index and suggest
 you use a btree? Anyway here are the results.
 
I think that the glacial speed for generating a big hash index is
the same problem that the original code faced. Because of the collisions
you are unable to achieve the correct packing that the code assumes.
This results in the splitting/copying of every page in the hash index,
a very slow proposition. I had suggested adding some additional parameters
like fillfactor to accommodate these sorts of situations. Since your test
cuts the effective fill in half because of the many collisions, you would
need to adjust that calculation to avoid the tremendous amount of random
I/O generated by that incorrect assumption.

 ./pgbench -c1 -n -t10 -f bench_createindex.sql
 cvs head: tps = 0.002169
 v5  : tps = 0.002196
 
 pgbench -c1 -n -t1000 -f bench_bitmap.sql
 cvs head: tps = 24.011871
 v5:   tps = 2.543123
 
 pgbench -c1 -n -t1000 -f bench_index.sql
 cvs head: tps = 51.614502
 v5:   tps = 3.205542
 
 pgbench -c1 -n -t1000 -f bench_seqscan.sql
 cvs head: tps = 8.553318
 v5:   tps = 9.836091
 
 Table created via:
 create table test_hash (num int8);
 ./hash | psql -c 'copy test_hash from stdin;'

It would be useful to have an equivalent test for the hash-only
index without the modified int8 hash function, since that would
be more representative of its performance. The collision rates
that I was observing in my tests of the old and new mix() functions
were about 2 * (1/1) of what your test generated. You could just
test against the integers between 1 and 200.
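
Something along these lines is the kind of collision count I have in mind --
a rough standalone sketch, not the actual driver I have been using; hash32()
is just a placeholder to swap for the hash function under test (e.g.
hash_any() over the int8 key), and the range size is only an example:

/*
 * Hash every integer in a range as an 8-byte key, sort the 32-bit hash
 * values, and count the duplicates.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t hash32(const void *key, size_t len)
{
    const unsigned char *p = key;
    uint32_t h = 2166136261u;        /* FNV-1a, purely as a stand-in */

    for (size_t i = 0; i < len; i++)
    {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

static int cmp_u32(const void *a, const void *b)
{
    uint32_t x = *(const uint32_t *) a;
    uint32_t y = *(const uint32_t *) b;

    return (x > y) - (x < y);
}

int main(void)
{
    const size_t n = 2000000;        /* pick the range size to match the test */
    uint32_t *h = malloc(n * sizeof(uint32_t));
    size_t collisions = 0;

    if (h == NULL)
        return 1;
    for (size_t i = 0; i < n; i++)
    {
        int64_t key = (int64_t) (i + 1);   /* the integers 1..n as int8 keys */

        h[i] = hash32(&key, sizeof(key));
    }
    qsort(h, n, sizeof(uint32_t), cmp_u32);
    for (size_t i = 1; i < n; i++)
        if (h[i] == h[i - 1])
            collisions++;            /* each extra occupant of a hash value */
    printf("%zu collisions over %zu keys\n", collisions, n);
    free(h);
    return 0;
}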

Ken

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] updated hash functions for postgresql v1

2008-08-29 Thread Kenneth Marshall
On Sun, Oct 28, 2007 at 08:06:58PM +, Simon Riggs wrote:
 On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
  On Sun, Oct 28, 2007 at 05:27:38PM +, Simon Riggs wrote:
   On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
Its features include a better and faster hash function.
   
   Looks very promising. Do you have any performance test results to show
   it really is faster, when compiled into Postgres? Better probably needs
   some definition also; in what way are the hash functions better?

   -- 
 Simon Riggs
 2ndQuadrant  http://www.2ndQuadrant.com
   
  The new hash function is roughly twice as fast as the old function in
  terms of straight CPU time. It uses the same design as the current
  hash but provides code paths for aligned and unaligned access as well
  as separate mixing functions for different blocks in the hash run
  instead of having one general purpose block. I think the speed will
  not be an obvious win with smaller items, but will be very important
  when hashing larger items (up to 32kb).
  
  Better in this case means that the new hash mixes more thoroughly
  which results in fewer collisions and more even bucket distribution.
  There is also a 64-bit variant which is still faster since it can
  take advantage of the 64-bit processor instruction set.
 
 Ken, I was really looking for some tests that show both of the above
 were true. We've had some trouble proving the claims of other algorithms
 before, so I'm less inclined to take those things at face value.
 
 I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
 Others may have different concerns.
 
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com
 
Hi,

I have finally had a chance to do some investigation on
the performance of the old hash mix() function versus
the updated mix()/final() in the new hash function. Here
is a table of my current results for both the old and the
new hash function. In this case cracklib refers to the
cracklib-dict containing 1648379 unique words massaged
in various ways to generate input strings for the hash
functions. The result is the number of collisions in the
hash values generated.

hash input                             old    new
----------                             ---    ---
cracklib                               338    316
cracklib x 2 (i.e. clibclib)           305    319
cracklib x 3 (clibclibclib)            323    329
cracklib x 10                          302    310
cracklib x 100                         350    335
cracklib x 1000                        314    309
cracklib x 100 truncated to char(100)  311    327

uint32 from 1-1648379                  309    319
(uint32 1-1948379)*256                 309    314
(uint32 1-1948379)*16                  310    314
auint32 (i.e. a1,a0002...)             320    321

uint32uint32 (i.e. uint64)             321    287

In addition to these tests, the new mixing functions allow
the hash to pass the frog2.c test by Bob Jenkins. Here is
his comment from http://burtleburtle.net/bob/hash/doobs.html:

  lookup3.c does a much more thorough job of mixing than any
  of my previous hashes (lookup2.c, lookup.c, One-at-a-time).
  All my previous hashes did a more thorough job of mixing
  than Paul Hsieh's hash. Paul's hash does a good enough job
  of mixing for most practical purposes.

  The most evil set of keys I know of are sets of keys that are
  all the same length, with all bytes zero, except with a few
  bits set. This is tested by frog.c.. To be even more evil, I
  had my hashes return b and c instead of just c, yielding a
  64-bit hash value. Both lookup.c and lookup2.c start seeing
  collisions after 2^53 frog.c keypairs. Paul Hsieh's hash sees
  collisions after 2^17 keypairs, even if we take two hashes with
  different seeds. lookup3.c is the only one of the batch that
  passes this test. It gets its first collision somewhere beyond
  2^63 keypairs, which is exactly what you'd expect from a completely
  random mapping to 64-bit values.

If anyone has any other data for me to test with, please let me
know. I think this is a reasonable justification for including the
new mixing process (mix() and final()) as well as the word-at-a-time
processing in our hash function. I will be putting a small patch
together to add the new mixing process back in to the updated hash
function this weekend in time for the September commit-fest unless
there are objections. Both the old and the new hash functions meet
the strict avalanche conditions as well.
(http://home.comcast.net/~bretm/hash/3.html)
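
For anyone unfamiliar with the criterion, this is roughly the check
involved -- a minimal sketch, with a placeholder hash32() standing in for
the function under test: flip each input bit of a sampled key and tally
how often each output bit changes; every tally should come out near 50%.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint32_t hash32(const void *key, size_t len)
{
    const unsigned char *p = key;
    uint32_t h = 2166136261u;            /* FNV-1a, purely as a stand-in */

    for (size_t i = 0; i < len; i++)
    {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    enum { KEYLEN = 8, TRIALS = 100000 };
    long flips[KEYLEN * 8][32] = {{0}};

    srand(12345);
    for (int t = 0; t < TRIALS; t++)
    {
        unsigned char key[KEYLEN];

        for (int i = 0; i < KEYLEN; i++)
            key[i] = (unsigned char) (rand() & 0xff);

        uint32_t base = hash32(key, KEYLEN);

        for (int bit = 0; bit < KEYLEN * 8; bit++)
        {
            unsigned char k2[KEYLEN];

            memcpy(k2, key, KEYLEN);
            k2[bit / 8] ^= (unsigned char) (1 << (bit % 8));

            uint32_t diff = base ^ hash32(k2, KEYLEN);

            for (int out = 0; out < 32; out++)
                if (diff & (1u << out))
                    flips[bit][out]++;
        }
    }

    /* Report the worst deviation from the ideal 50% flip rate. */
    double worst = 0.0;

    for (int bit = 0; bit < KEYLEN * 8; bit++)
        for (int out = 0; out < 32; out++)
        {
            double p = (double) flips[bit][out] / TRIALS;
            double dev = (p > 0.5) ? p - 0.5 : 0.5 - p;

            if (dev > worst)
                worst = dev;
        }
    printf("worst |P(flip) - 0.5| = %.4f\n", worst);
    return 0;
}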

I have used an Inline::C perl driver for these tests and can post
it if others would like to use it as a testbed.

Regards,
Ken

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] updated hash functions for postgresql v1

2008-04-07 Thread Kenneth Marshall
On Sun, Apr 06, 2008 at 12:02:25PM -0400, Tom Lane wrote:
 Kenneth Marshall [EMAIL PROTECTED] writes:
  Okay, I will strip the VALGRIND paths. I did not see a real need for them
  either.
 
 I have a patch ready to commit (as soon as I fix the regression test
 issues) that incorporates all the word-wide-ness stuff.  All you really
 need to look at is the question of hash quality.
 
 I did confirm that the mixing changes account for a noticeable chunk
 of the runtime improvement.  For instance on a Xeon
 
 hash_any_old(32K): 4.386922 s (CVS HEAD)
 hash_any(32K): 3.853754 s (CVS + word-wide calcs)
 hashword(32K): 3.041500 s (from patch)
 hashlittle(32K): 3.092297 s   (from patch)
 
 hash_any_old(32K unaligned): 4.390311 s
 hash_any(32K unaligned): 4.380700 s
 hashlittle(32K unaligned): 3.464802 s
 
 hash_any_old(8 bytes): 1.580008 s
 hash_any(8 bytes): 1.293331 s
 hashword(8 bytes): 1.137054 s
 hashlittle(8 bytes): 1.112997 s
 
 So adopting the mixing changes would make it faster yet.  What we need
 to be certain of is that this doesn't expose us to poorer hashing.
 We know that it is critical that all bits of the input affect all bits
 of the hash fairly uniformly --- otherwise we are subject to very
 serious performance hits at higher levels in hash join, for instance.
 The comments in the new code led me to worry that Jenkins had
 compromised on that property in search of speed.  I looked at his
 website but couldn't find any real discussion of the design principles
 for the new mixing code ...
 
   regards, tom lane
 
Here is a section from http://burtleburtle.net/bob/hash/doobs.html
describing some testing that Bob Jenkins did concerning the mixing
properties of both his original hash function (our current hash_any)
and the new version (the patch)--the last paragraph in particular.

lookup3.c

 A hash I wrote nine years later designed along the same lines as
 My Hash, see http://burtleburtle.net/bob/c/lookup3.c. It takes
 2n instructions per byte for mixing instead of 3n. When fitting
 bytes into registers (the other 3n instructions), it takes advantage
 of alignment when it can (a trick learned from Paul Hsieh's hash).
 It doesn't bother to reserve a byte for the length. That allows
 zero-length strings to require no mixing. More generally, the
 length that requires additional mixes is now 13-25-37 instead of
 12-24-36.

 One theoretical insight was that the last mix doesn't need to do
 well in reverse (though it has to affect all output bits). And the
 middle mixing steps don't have to affect all output bits (affecting
 some 32 bits is enough), though it does have to do well in reverse.
 So it uses different mixes for those two cases. My Hash (lookup2.c)
 had a single mixing operation that had to satisfy both sets of
 requirements, which is why it was slower.

 On a Pentium 4 with gcc 3.4.?, Paul's hash was usually faster than
 lookup3.c. On a Pentium 4 with gcc 3.2.?, they were about the same
 speed. On a Pentium 4 with icc -O2, lookup3.c was a little faster
 than Paul's hash. I don't know how it would play out on other chips
 and other compilers. lookup3.c is slower than the additive hash
 pretty much forever, but it's faster than the rotating hash for
 keys longer than 5 bytes.

 lookup3.c does a much more thorough job of mixing than any of my
 previous hashes (lookup2.c, lookup.c, One-at-a-time). All my
 previous hashes did a more thorough job of mixing than Paul Hsieh's
 hash. Paul's hash does a good enough job of mixing for most
 practical purposes.

 The most evil set of keys I know of are sets of keys that are all
 the same length, with all bytes zero, except with a few bits set.
 This is tested by frog.c.. To be even more evil, I had my hashes
 return b and c instead of just c, yielding a 64-bit hash value.
 Both lookup.c and lookup2.c start seeing collisions after 2^53
 frog.c keypairs. Paul Hsieh's hash sees collisions after 2^17
 keypairs, even if we take two hashes with different seeds.
 lookup3.c is the only one of the batch that passes this test. It
 gets its first collision somewhere beyond 2^63 keypairs, which is
 exactly what you'd expect from a completely random mapping to
 64-bit values. 

I am ready to do some comparison runs between the old hash function
and the new hash function to validate its mixing ability versus our
current function, although the results will seem almost anecdotal.
Do you happen to have particular hashing problems in mind that I
could use for testing? Depending upon the problem sizes you are
interested in, gathering empirical data may take many hours of
CPU time. If I need more than a single CPU to perform the
testing in a timely fashion, I will need to gain access to our
local cluster resources.
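
For example, one simple thing I can run is a bucket-distribution check of
the kind that matters for hash joins -- a rough sketch, with hash32() as a
placeholder for the function under test:

/* Bucket the hashes of n keys into 2^LOG2_NBUCKETS buckets, as a hash
 * join does with the low-order bits, and report how uneven it is. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t hash32(const void *key, size_t len)
{
    const unsigned char *p = key;
    uint32_t h = 2166136261u;        /* FNV-1a, purely as a stand-in */

    for (size_t i = 0; i < len; i++)
    {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

int main(void)
{
    enum { LOG2_NBUCKETS = 16 };
    const uint32_t nbuckets = 1u << LOG2_NBUCKETS;
    const size_t n = 1000000;
    long *count = calloc(nbuckets, sizeof(long));
    long max = 0;

    if (count == NULL)
        return 1;
    for (size_t i = 0; i < n; i++)
    {
        int64_t key = (int64_t) i;
        uint32_t b = hash32(&key, sizeof(key)) & (nbuckets - 1);

        if (++count[b] > max)
            max = count[b];
    }
    printf("avg bucket = %.2f, worst bucket = %ld\n",
           (double) n / nbuckets, max);
    free(count);
    return 0;
}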

Cheers,
Ken Marshall

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches

Re: [PATCHES] updated hash functions for postgresql v1

2008-04-05 Thread Kenneth Marshall
On Sat, Apr 05, 2008 at 03:40:35PM -0400, Tom Lane wrote:
 Simon Riggs [EMAIL PROTECTED] writes:
  On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
  The new hash function is roughly twice as fast as the old function in
  terms of straight CPU time. It uses the same design as the current
  hash but provides code paths for aligned and unaligned access as well
  as separate mixing functions for different blocks in the hash run
  instead of having one general purpose block. I think the speed will
  not be an obvious win with smaller items, but will be very important
  when hashing larger items (up to 32kb).
  
  Better in this case means that the new hash mixes more thoroughly
  which results in fewer collisions and more even bucket distribution.
  There is also a 64-bit variant which is still faster since it can
  take advantage of the 64-bit processor instruction set.
 
  Ken, I was really looking for some tests that show both of the above
  were true. We've had some trouble proving the claims of other algorithms
  before, so I'm less inclined to take those things at face value.
 
 I spent some time today looking at this code more closely and running
 some simple speed tests.  It is faster than what we have, although 2X
 is the upper limit of the speedups I saw on four different machines.
 There are several things going on in comparison to our existing
 hash_any:
 
 * If the source data is word-aligned, the new code fetches it a word at
 a time instead of a byte at a time; that is
 
  a += (k[0] + ((uint32) k[1] << 8) + ((uint32) k[2] << 16) + ((uint32) k[3] << 24));
  b += (k[4] + ((uint32) k[5] << 8) + ((uint32) k[6] << 16) + ((uint32) k[7] << 24));
  c += (k[8] + ((uint32) k[9] << 8) + ((uint32) k[10] << 16) + ((uint32) k[11] << 24));
 
 becomes
 
 a += k[0];
 b += k[1];
 c += k[2];
 
 where k is now pointer to uint32 instead of uchar.  This accounts for
 most of the speed improvement.  However, the results now vary between
 big-endian and little-endian machines.  That's fine for PG's purposes.
 But it means that we need two sets of code for the unaligned-input code
 path, since it clearly won't do for the same bytestring to get two
 different hashes depending on whether it happens to be presented aligned
 or not.  The presented patch actually offers *four* code paths, so that
 you can compute either little-endian-ish or big-endian-ish hashes on
 either type of machine.  That's nothing but bloat for our purposes, and
 should be reduced to the minimum.
 

I agree that a good portion of the speed up is due to the full word
processing. The original code from Bob Jenkins had all of these code
paths and I just dropped them in with a minimum of changes.

 * Given a word-aligned source pointer and a length that isn't a multiple
 of 4, the new code fetches the last partial word as a full word fetch
 and masks it off, as per the code comment:
 
  * k[2]&0xffffff actually reads beyond the end of the string, but
  * then masks off the part it's not allowed to read.  Because the
  * string is aligned, the masked-off tail is in the same word as the
  * rest of the string.  Every machine with memory protection I've seen
  * does it on word boundaries, so is OK with this.  But VALGRIND will
  * still catch it and complain.  The masking trick does make the hash
  * noticably faster for short strings (like English words).
 
 This I think is well beyond the bounds of sanity, especially since we
 have no configure support for setting #ifdef VALGRIND.  I'd lose the
  non-valgrind-clean paths (which again are contributing to the patch's
 impression of bloat/redundancy).
 

Okay, I will strip the VALGRIND paths. I did not see a real need for them
either.

 * Independently of the above changes, the actual hash calculation
 (the mix() and final() macros) has been changed.  Ken claims that
 this made the hash better, but I'm deeply suspicious of that.
 The comments in the code make it look like Jenkins actually sacrificed
 hash quality in order to get a little more speed.  I don't think we
 should adopt those changes unless some actual evidence is presented
 that the hash is better and not merely faster.
 

I was repeating the claims made by the function's author after his own
testing. His analysis and tests were reasonable, but I do agree that
we need some testing of our own. I will start pulling some test cases
together like what was discussed earlier with Simon.

 
 In short: I think we should adopt the changes to use aligned word
 fetches where possible, but not adopt the mix/final changes unless
 more evidence is presented.
 
Okay, I agree and will work on producing evidence either way.

 Lastly, the patch adds yet more code to provide the option of computing
 a 64-bit hash rather than 32.  (AFAICS, the claim that this part is
 optimized for 64-bit machines is mere fantasy.  It's simply Yet Another
 duplicate of the identical code, but it gives you back two

Re: [HACKERS] [PATCHES] Fix for large file support (nonsegment mode support)

2008-03-19 Thread Kenneth Marshall
On Wed, Mar 19, 2008 at 10:51:12AM +0100, Martijn van Oosterhout wrote:
 On Wed, Mar 19, 2008 at 09:38:12AM +0100, Peter Eisentraut wrote:
  Another factor I just thought of is that tar, commonly used as part of a 
  backup procedure, can on some systems only handle files up to 8 GB in size. 
   
  There are supposed to be newer formats that can avoid that restriction, but 
  it's not clear how widely available these are and what the incantation is
  to get at them.  Of course we don't use tar directly, but if we ever make
  large segments the default, we ought to provide some clear advice for the
  user on how to make their backups.
 
 By my reading, GNU tar handles larger files and no-one else (not even a
 POSIX standard tar) can...
 
 Have a nice day,
 -- 
 Martijn van Oosterhout   [EMAIL PROTECTED]   http://svana.org/kleptog/
  Please line up in a tree and maintain the heap invariant while 
  boarding. Thank you for flying nlogn airlines.

The star program written by Joerg Schilling is a very well-written,
POSIX-compatible tar program that can easily handle files larger than
8GB. It is another backup option.

Cheers,
Ken

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] updated hash functions for postgresql v1

2008-03-17 Thread Kenneth Marshall
On Sun, Mar 16, 2008 at 10:53:02PM -0400, Tom Lane wrote:
 Kenneth Marshall [EMAIL PROTECTED] writes:
  Dear PostgreSQL Developers,
  This patch is a diff -c against the hashfunc.c from postgresql-8.3beta1.
 
 It's pretty obvious that this patch hasn't even been tested on a
 big-endian machine:
 
  + #ifndef WORS_BIGENDIAN
 
 However, why do we need two code paths anyway?  I don't think there's
 any requirement for the hash values to come out the same on little-
 and big-endian machines.  In common cases the byte-array data being
 presented to the hash function would be different to start with, so
 you could hardly expect identical hash results even if you had separate
 code paths.
 
 I don't find anything very compelling about 64-bit hashing, either.
 We couldn't move to that without breaking API for hash functions
 of user-defined types.  Given all the other problems with hash
 indexes, the issue of whether it's useful to have more than 2^32
 hash buckets seems very far off indeed.
 
   regards, tom lane
 

Yes, there is that typo, but it has, in fact, been tested on big- and
little-endian machines, since it was a simple update to replace the
current hash function used by PostgreSQL with the new version from
Bob Jenkins. The test for the endian-ness of the system allows the
code paths to be optimized for the particular CPU. The 64-bit
hashing was included for use during my work on the hash index.
Part of that will entail testing the performance of various
permutations of previously submitted suggestions. 

Regards,
Ken Marshall

-- 
Sent via pgsql-patches mailing list (pgsql-patches@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-patches


Re: [PATCHES] hashlittle(), hashbig(), hashword() and endianness

2007-11-16 Thread Kenneth Marshall
On Fri, Nov 16, 2007 at 01:19:13AM -0800, Alex Vinokur wrote:
 On Nov 15, 1:23 pm, [EMAIL PROTECTED] (Heikki Linnakangas)
 wrote:
  Alex Vinokur wrote:
   On Nov 15, 10:40 am, Alex Vinokur [EMAIL PROTECTED]
   wrote:
   [snip]
   I have some question concerning Bob Jenkins' functions
   hashword(uint32_t*, size_t), hashlittle(uint8_t*, size_t) and
   hashbig(uint8_t*, size_t) in lookup3.c.
 
    Let k1 be a key: uint8_t* k1; strlen(k1)%sizeof(uint32_t) == 0.
 
   1. hashlittle(k1) produces the same value on Little-Endian and Big-
   Endian machines.
  Let hashlittle(k1) be == L1.
 
   2. hashbig(k1) produces the same value on Little-Endian and Big-Endian
   machines.
  Let hashbig(k1) be == B1.
 
 L1 != B1
 
   3. hashword((uint32_t*)k1) produces
   * L1 on LittleEndian machine and
   * B1 on BigEndian machine.
 
   ===
   -
   The question is: is it possible to change hashword() to get
   * L1 on Little-Endian machine and
   * B1 on Big-Endian machine
  ?
 
   Sorry, it should be as follows:
 
   Is it possible to create two new hash functions on basis of
   hashword():
  i)  hashword_little () that produces L1 on Little-Endian and Big-
   Endian machines;
   ii) hashword_big () that produces B1 on Little-Endian and Big-
   Endian machines
  ?
 
  Why?
 
 [snip]
 
 Suppose:
  uint8_t chBuf[SIZE32 * 4];  // ((size_t)&chBuf[0] & 3) == 0
 
 Function
 hashlittle(chBuf, SIZE32 * 4, 0)
 produces the same hashValue (let this value be L1) on little-endian
  and big-endian machines. So, hashlittle() is endianness-independent.
 
 On other hand, function
  hashword ((uint32_t*)chBuf, SIZE32, 0)
 produces hashValue == L1 on little-endian machine and hashValue != L1
 on big-endian machine. So, hashword() is endianness-dependent.
 
 I would like to use both hashlittle() and hashword() (or
  hashword_little) on little-endian and big-endian machines and to get
 identical hashValues.
 
 
 Alex Vinokur
  email: alex DOT vinokur AT gmail DOT com
  http://mathforum.org/library/view/10978.html
  http://sourceforge.net/users/alexvn
 
 
Alex,

As I suspected, you want a hash function that is independent of the
machine endian-ness. You will need to design, develop, and test such
a function yourself. As you start to look at how overflows, rotates, and
shifts are handled at the boundaries, you may find it difficult to
get a fast hash function with those properties. Good luck.
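
For what it's worth, here is a minimal sketch of the difference you are
describing, assuming Bob Jenkins' lookup3.c is compiled and linked in
(illustration only, untested):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* prototypes from lookup3.c */
uint32_t hashlittle(const void *key, size_t length, uint32_t initval);
uint32_t hashword(const uint32_t *k, size_t length, uint32_t initval);

int main(void)
{
    uint8_t  buf[8] = {1, 2, 3, 4, 5, 6, 7, 8};   /* length % 4 == 0 */
    uint32_t w[2];

    memcpy(w, buf, sizeof(buf));    /* reinterpret the bytes as native words */

    uint32_t hl = hashlittle(buf, sizeof(buf), 0); /* endian-independent */
    uint32_t hw = hashword(w, 2, 0);               /* follows native byte order */

    /* On a little-endian machine hl == hw; on a big-endian machine they differ. */
    printf("hashlittle = %08x  hashword = %08x\n", (unsigned) hl, (unsigned) hw);
    return 0;
}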

Regards,
Ken

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PATCHES] updated hash functions for postgresql v1

2007-10-28 Thread Kenneth Marshall
On Sun, Oct 28, 2007 at 05:27:38PM +, Simon Riggs wrote:
 On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
  Its features include a better and faster hash function.
 
 Looks very promising. Do you have any performance test results to show
 it really is faster, when compiled into Postgres? Better probably needs
 some definition also; in what way are the hash functions better?
  
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com
 
The new hash function is roughly twice as fast as the old function in
terms of straight CPU time. It uses the same design as the current
hash but provides code paths for aligned and unaligned access as well
as separate mixing functions for different blocks in the hash run
instead of having one general purpose block. I think the speed will
not be an obvious win with smaller items, but will be very important
when hashing larger items (up to 32kb).

Better in this case means that the new hash mixes more thoroughly
which results in fewer collisions and more even bucket distribution.
There is also a 64-bit variant which is still faster since it can
take advantage of the 64-bit processor instruction set.
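
For reference, these are the two mixing steps from Bob Jenkins' lookup3.c
that replace the single general-purpose mixing block, copied from his
public-domain source for illustration:

/* 32-bit rotate, as in lookup3.c */
#define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))

/* mix -- mix 3 32-bit values reversibly; used between 12-byte blocks. */
#define mix(a,b,c) \
{ \
  a -= c;  a ^= rot(c, 4);  c += b; \
  b -= a;  b ^= rot(a, 6);  a += c; \
  c -= b;  c ^= rot(b, 8);  b += a; \
  a -= c;  a ^= rot(c,16);  c += b; \
  b -= a;  b ^= rot(a,19);  a += c; \
  c -= b;  c ^= rot(b, 4);  b += a; \
}

/* final -- final mixing of 3 32-bit values (a,b,c) into c; it does not
 * need to be reversible, which is what lets it be cheaper than mix(). */
#define final(a,b,c) \
{ \
  c ^= b; c -= rot(b,14); \
  a ^= c; a -= rot(c,11); \
  b ^= a; b -= rot(a,25); \
  c ^= b; c -= rot(b,16); \
  a ^= c; a -= rot(c,4);  \
  b ^= a; b -= rot(a,14); \
  c ^= b; c -= rot(b,24); \
}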

Ken

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [PATCHES] updated hash functions for postgresql v1

2007-10-28 Thread Kenneth Marshall
On Sun, Oct 28, 2007 at 08:06:58PM +, Simon Riggs wrote:
 On Sun, 2007-10-28 at 13:05 -0500, Kenneth Marshall wrote:
  On Sun, Oct 28, 2007 at 05:27:38PM +, Simon Riggs wrote:
   On Sat, 2007-10-27 at 15:15 -0500, Kenneth Marshall wrote:
Its features include a better and faster hash function.
   
   Looks very promising. Do you have any performance test results to show
   it really is faster, when compiled into Postgres? Better probably needs
   some definition also; in what way are the hash functions better?

   -- 
 Simon Riggs
 2ndQuadrant  http://www.2ndQuadrant.com
   
  The new hash function is roughly twice as fast as the old function in
  terms of straight CPU time. It uses the same design as the current
  hash but provides code paths for aligned and unaligned access as well
  as separate mixing functions for different blocks in the hash run
  instead of having one general purpose block. I think the speed will
  not be an obvious win with smaller items, but will be very important
  when hashing larger items (up to 32kb).
  
  Better in this case means that the new hash mixes more thoroughly
  which results in fewer collisions and more even bucket distribution.
  There is also a 64-bit variant which is still faster since it can
  take advantage of the 64-bit processor instruction set.
 
 Ken, I was really looking for some tests that show both of the above
 were true. We've had some trouble proving the claims of other algorithms
 before, so I'm less inclined to take those things at face value.
 
 I'd suggest tests with Integers, BigInts, UUID, CHAR(20) and CHAR(100).
 Others may have different concerns.
 

Simon,

I agree that we should not take claims without testing them ourselves.
My main motivation for posting the patch was to get feedback on how to
add support for 64-bit hashes that will work with all of our supported
platforms. I am trying to avoid working on a feature in isolation
and then submitting a giant patch with many problems. I intend to do
more extensive testing, but I am trying to reach a basic implementation
level before I start the testing. I am pretty good with theory, but my
coding skills are out of practice. Generating the tests now would take
longer and offer no clear benefit to the hash index implementation.
I am willing to test further, but I would like to have my testing benefit
the hash index implementation and not just the effectiveness and efficiency
of the hashing algorithm.

Regards,
Ken
 -- 
   Simon Riggs
   2ndQuadrant  http://www.2ndQuadrant.com
 
 

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[PATCHES] updated hash functions for postgresql v1

2007-10-27 Thread Kenneth Marshall
Dear PostgreSQL Developers,

This patch is a diff -c against the hashfunc.c from postgresql-8.3beta1.
It implements the 2006 version of the hash function by Bob Jenkins. Its
features include a better and faster hash function. I have included the
versions supporting big-endian and little-endian machines that will be
selected based on the machine configuration. Currently, I have hash_any()
as just a stub calling hashlittle() and hashbig(). In order to allow the hash
index to support large indexes (10^9 entries), the hash function needs
to be able to provide 64-bit hashes.

The functions hashbig2/hashlittle2 produce 2 32-bit hashes that can be
used as a 64-bit hash value. I would like some feedback as to how best
to include 64-bit hashes within our current 32-bit hash infrastructure.
The hash-merge can simply use one of the two 32-bit pieces to provide
the current 32-bit hash values needed. Then they could be pulled directly
from the hash index and not need to be recalculated at run time. What
would be the best way to implement this in a way that will work on 
machines without support for 64-bit integers?
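
As a strawman, this is the sort of wrapper I have in mind -- hash_key() is
a hypothetical helper, not part of the patch; hashlittle2() is the two-word
variant from lookup3.c:

#include <stddef.h>
#include <stdint.h>

/* from lookup3.c: *pc is the primary hash, *pb a secondary 32-bit word;
 * both also act as seeds on input */
void hashlittle2(const void *key, size_t length, uint32_t *pc, uint32_t *pb);

/*
 * Sketch only: existing callers keep using the primary 32-bit word, while
 * the (pc, pb) pair is available where a wider hash is wanted.  On platforms
 * with a 64-bit integer type the pair can be folded into a single value,
 * e.g. ((uint64) pb << 32) | pc; elsewhere the two words can simply be
 * stored side by side.
 */
uint32_t
hash_key(const void *key, size_t len, uint32_t *secondary)
{
    uint32_t pc = 0;
    uint32_t pb = 0;

    hashlittle2(key, len, &pc, &pb);
    if (secondary != NULL)
        *secondary = pb;
    return pc;                  /* drop-in for the current 32-bit hash value */
}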

The current patch passes all the regression tests, but has a few warnings
for the different variations of the new hash function. Until the design
has crystalized, I am not going to worry about them and I want testers to
have access to the different functions. I am doing the initial patches
to the hash index code based on a 32-bit hash, but I would like to add the
64-bit hash support pretty early in the development cycle in order to
allow for better testing. Any thoughts would be welcome.

Regards,
Ken
*** hashfunc.c  pgsql/src/backend/access/hash/hashfunc.c,v 1.53 2007/09/21 
22:52:52 tgl
--- hashfunc.c_NEW32    Wed Oct 17 13:58:10 2007
***************
*** 197,230 
   * This hash function was written by Bob Jenkins
   * ([EMAIL PROTECTED]), and superficially adapted
   * for PostgreSQL by Neil Conway. For more information on this
!  * hash function, see http://burtleburtle.net/bob/hash/doobs.html,
!  * or Bob's article in Dr. Dobb's Journal, Sept. 1997.
   */
  
  /*--
   * mix -- mix 3 32-bit values reversibly.
!  * For every delta with one or two bits set, and the deltas of all three
!  * high bits or all three low bits, whether the original value of a,b,c
!  * is almost all zero or is uniformly distributed,
!  * - If mix() is run forward or backward, at least 32 bits in a,b,c
!  * have at least 1/4 probability of changing.
!  * - If mix() is run forward, every bit of c will change between 1/3 and
!  * 2/3 of the time.  (Well, 22/100 and 78/100 for some 2-bit deltas.)
   *--
   */
  #define mix(a,b,c) \
  { \
!   a -= b; a -= c; a ^= ((c)>>13); \
!   b -= c; b -= a; b ^= ((a)<<8); \
!   c -= a; c -= b; c ^= ((b)>>13); \
!   a -= b; a -= c; a ^= ((c)>>12);  \
!   b -= c; b -= a; b ^= ((a)<<16); \
!   c -= a; c -= b; c ^= ((b)>>5); \
!   a -= b; a -= c; a ^= ((c)>>3);  \
!   b -= c; b -= a; b ^= ((a)<<10); \
!   c -= a; c -= b; c ^= ((b)>>15); \
  }
  
  /*
   * hash_any() -- hash a variable-length key into a 32-bit value
   *k   : the key (the unaligned variable-length array 
of bytes)
--- 197,1060 
   * This hash function was written by Bob Jenkins
   * ([EMAIL PROTECTED]), and superficially adapted
   * for PostgreSQL by Neil Conway. For more information on this
!  * hash function, see http://burtleburtle.net/bob/hash/#lookup
!  * and http://burtleburtle.net/bob/hash/lookup3.txt. Further
!  * information on the original version of the hash function can
!  * be found in Bob's article in Dr. Dobb's Journal, Sept. 1997.
   */
  
+ #define rot(x,k) (((x)<<(k)) | ((x)>>(32-(k))))
+ 
+ #ifndef WORS_BIGENDIAN
+ #define HASH_LITTLE_ENDIAN 1
+ #define HASH_BIG_ENDIAN 0
+ #else
+ #define HASH_LITTLE_ENDIAN 0
+ #define HASH_BIG_ENDIAN 1
+ #endif
+ 
  /*--
   * mix -- mix 3 32-bit values reversibly.
!  * 
!  * This is reversible, so any information in (a,b,c) before mix() is
!  * still in (a,b,c) after mix().
!  * 
!  * If four pairs of (a,b,c) inputs are run through mix(), or through
!  * mix() in reverse, there are at least 32 bits of the output that
!  * are sometimes the same for one pair and different for another pair.
!  * This was tested for:
!  * * pairs that differed by one bit, by two bits, in any combination
!  *   of top bits of (a,b,c), or in any combination of bottom bits of
!  *   (a,b,c).
!  * * differ is defined as +, -, ^, or ~^.  For + and -, I transformed
!  *   the output delta to a Gray code (a^(a>>1)) so a string of 1's (as
!  *   is commonly produced by subtraction) look like a single 1-bit
!  *   difference.
!  * * the base values were pseudorandom, all zero but one bit set, or
!  *   all zero plus a counter that starts at zero.
!  * 
!  * Some k values for my a-=c; a^=rot(c,k); c+=b; arrangement that
!  * satisfy this are
!  * 4  6  8 16 19  4
!  * 9 15  3 18 27 15
!  *14  9  3  7 17  3
!  * Well, 9 15 3 18 27 15 didn't quite get 32 bits 

Re: [PATCHES] [HACKERS] Full page writes improvement, code update

2007-04-25 Thread Kenneth Marshall
On Wed, Apr 25, 2007 at 10:00:16AM +0200, Zeugswetter Andreas ADI SD wrote:
 
   1) To deal with partial/inconsisitent write to the data file at
 crash 
   recovery, we need full page writes at the first modification to
 pages
   after each checkpoint.   It consumes much of WAL space.
  
  We need to find a way around this someday.  Other DBs don't 
  do this; it may be becuase they're less durable, or because 
  they fixed the problem.
 
 They either can only detect a failure later (this may be a very long
 time depending on access and verify runs) or they also write page
 images. Those that write page images usually write before-images to a
 different area that is cleared periodically (e.g. during checkpoint).
 
 Writing to a different area was considered in pg, but there were more
 negative issues than positive.
 So imho pg_compresslog is the correct path forward. The current
 discussion is only about whether we want a more complex pg_compresslog
 and no change to current WAL, or an increased WAL size for a less
 complex implementation.
 Both would be able to compress the WAL to the same archive log size.
 
 Andreas
 
I definitely am in the camp of not increasing WAL size at all. If we
need a bit more complexity to ensure that, so be it. Any approach that
increases WAL volume would need to have an amazing benefit to make it
warranted. This certainly does not meet that criterion.

Ken


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly