> "Jim" == Jim Nasby writes:
Jim> Anything happen with this, or the patch Andrew posted?
No.
And my attention has just been drawn to this, which looks like the same
issue:
http://www.postgresql.org/message-id/52b47b47-0926-4e15-b25e-212df52fe...@oseberg.io
--
Andrew (irc:RhodiumToad)
On 2/15/15 7:16 PM, Tomas Vondra wrote:
On 16.2.2015 03:38, Andrew Gierth wrote:
> "Tomas" == Tomas Vondra writes:
Tomas> Improving the estimates is always good, but it's not going to
Tomas> fix the case of non-NULL values (it shouldn't be all that
Tomas> difficult to create such examples with a value whose hash starts
Tomas> with a bunch of zeroes).
Right now, I can
Hi,
On 16.2.2015 00:50, Andrew Gierth wrote:
>> "Tom" == Tom Lane writes:
>
> I've now tried the attached patch to correct the bucketsize
> estimates, and it does indeed stop the planner from considering the
> offending path (in this case it just does the join the other way
> round).
>
> One
Andrew Gierth writes:
> A quick test suggests that initializing the hash value to ~0 rather than
> 0 has a curious effect: the number of batches still explodes, but the
> performance does not suffer the same way. (I think because almost all
> the batches end up empty.) I think this is worth doing
This came up today on IRC, though I suspect the general problem has been
seen before:
create table m3 (id uuid, status integer);
create table q3 (id uuid);
insert into m3
select uuid_generate_v4(), floor(random() * 4)::integer
from generate_series(1,100);
insert into q3
select id
f