Kouhei Kaigai kai...@ak.jp.nec.com wrote:
we may need a couple of overhaul around HashJoin to support large
size of data, not only nbuckets around 0x8000.
Perhaps, but this is a clear bug, introduced to the 9.5 code, with
an obvious fix; so I've pushed the change from 1 to 1L on that left shift.
Kevin Grittner kgri...@ymail.com writes:
Kouhei Kaigai kai...@ak.jp.nec.com wrote:
we may need a couple of overhaul around HashJoin to support large
size of data, not only nbuckets around 0x8000.
Perhaps, but this is a clear bug, introduced to the 9.5 code, with
an obvious fix; so I've
I wrote:
I don't think it's anywhere near as clear as you think.
Ah, scratch that --- I was looking at the wrong my_log2() call.
-ENOCAFFEINE.
I'm still doubtful that this is the only overflow risk in that new
ExecChooseHashTableSize code, though. For instance, the only reason the
line
Tom Lane t...@sss.pgh.pa.us wrote:
I'm still doubtful that this is the only overflow risk in that
new ExecChooseHashTableSize code, though.
KaiGai already pointed that out on this thread and I completely
agree; but I figured that I might as well fix the clear bug with an
obvious fix that was
Tom Lane t...@sss.pgh.pa.us wrote:
Kevin Grittner kgri...@ymail.com writes:
Kouhei Kaigai kai...@ak.jp.nec.com wrote:
we may need a couple of overhaul around HashJoin to support large
size of data, not only nbuckets around 0x8000.
Perhaps, but this is a clear bug, introduced to the 9.5
Kouhei Kaigai kai...@ak.jp.nec.com wrote:
long    lbuckets;
lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
Assert(nbuckets > 0);
In my case, the hash_table_bytes was 101017630802, and bucket_size was 48.
So, my_log2(hash_table_bytes / bucket_size) = 31, then the assertion
failed with a crazy number of rows.
On 19 August 2015 at 12:38, Tom Lane t...@sss.pgh.pa.us wrote:
David Rowley david.row...@2ndquadrant.com writes:
david=# set work_mem = '94GB';
ERROR: 98566144 is outside the valid range for parameter work_mem (64 ..
2097151)
Kouhei Kaigai kai...@ak.jp.nec.com wrote:
long    lbuckets;
lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
Assert(nbuckets > 0);
In my case, the hash_table_bytes was 101017630802, and bucket_size was 48.
So, my_log2(hash_table_bytes / bucket_size)
On 19 August 2015 at 08:54, Kevin Grittner kgri...@ymail.com wrote:
Kouhei Kaigai kai...@ak.jp.nec.com wrote:
long    lbuckets;
lbuckets = 1 << my_log2(hash_table_bytes / bucket_size);
Assert(nbuckets > 0);
In my case, the hash_table_bytes was 101017630802, and
On 19 August 2015 at 12:23, Kouhei Kaigai kai...@ak.jp.nec.com wrote:
-----Original Message-----
From: David Rowley [mailto:david.row...@2ndquadrant.com]
Sent: Wednesday, August 19, 2015 9:00 AM
The size of your hash table is 101017630802 bytes, which is:
david=# select
-----Original Message-----
From: David Rowley [mailto:david.row...@2ndquadrant.com]
Sent: Wednesday, August 19, 2015 9:00 AM
To: Kevin Grittner
Cc: Kaigai Kouhei(海外 浩平); pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Bug? ExecChooseHashTableSize() got assertion failed
with
crazy
David Rowley david.row...@2ndquadrant.com writes:
david=# set work_mem = '94GB';
ERROR: 98566144 is outside the valid range for parameter work_mem (64 ..
2097151)
Apparently you're testing on a 32-bit server. 64-bit servers allow
work_mem to go up to INT_MAX kilobytes.
On 19 August 2015 at 12:38, Tom Lane t...@sss.pgh.pa.us wrote:
David Rowley david.row...@2ndquadrant.com writes:
david=# set work_mem = '94GB';
ERROR: 98566144 is outside the valid range for parameter work_mem (64 ..
2097151)
Apparently you're testing on a 32-bit server. 64-bit
Hello,
I noticed ExecChooseHashTableSize() in nodeHash.c got failed by
Assert(nbuckets > 0), when a crazy number of rows is expected.
BACKTRACE:
#0 0x003f79432625 in raise () from /lib64/libc.so.6
#1 0x003f79433e05 in abort () from /lib64/libc.so.6
#2 0x0092600a in