> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> PostgreSQL promptly uses all available memory for the query and
>> subsequently crashes.
>
> I'll bet a nickel this is on a Linux machine with OOM kill enabled.
> What does the postmaster log show --- or look in the kernel log to
> see if it mentions anything about an out-of-memory kill.

That's a no-brainer. Maybe I should have phrased it differently; "crash" may be
the wrong word, as the backend was definitely killed by an out-of-memory
condition. Sorry.

>
>> freedb=# create table ucode as select distinct ucode from cdtitles group
>> by ucode having count(ucode)>1 ;
>> server closed the connection unexpectedly
>
> What does EXPLAIN show as the plan for that?  If it's a hash aggregate,
> try with "enable_hashagg" turned off.  How many distinct ucode values
> are there in the table?

There are over 1.7M distinct rows, and about 200K non-distinct ones that I want
to somehow remove.
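
For what it's worth, the cleanup I have in mind afterwards is to build a
deduplicated copy and swap it in. Just a sketch (the cdtitles_dedup name is
made up, and it assumes keeping an arbitrary one of each set of duplicates is
acceptable):

    CREATE TABLE cdtitles_dedup AS
        SELECT DISTINCT ON (ucode) * FROM cdtitles;
    -- then verify the row counts, recreate any indexes, and rename the
    -- tables to swap the deduplicated one into place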

It does use a hash aggregate:
freedb=# explain select distinct ucode from cdtitles group by ucode having
count(ucode)>1 ;
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Unique  (cost=106536.32..106537.32 rows=200 width=32)
   ->  Sort  (cost=106536.32..106536.82 rows=200 width=32)
         Sort Key: ucode
         ->  HashAggregate  (cost=106527.68..106528.68 rows=200 width=32)
               Filter: (count(ucode) > 1)
               ->  Seq Scan on cdtitles  (cost=0.00..96888.12 rows=1927912 width=32)
(6 rows)
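
I'll give the enable_hashagg suggestion a try along these lines (a sketch of
what I plan to run, not output I already have):

    SET enable_hashagg = off;   -- session-only; push the planner toward sort/group
    EXPLAIN select distinct ucode from cdtitles group by ucode having count(ucode)>1 ;
    -- expecting a GroupAggregate over a Sort here, which can spill to disk
    -- instead of building the whole hash table in memory

The rows=200 estimate on the HashAggregate also looks suspiciously like a
default; the real number of groups is more like 1.7M, so the planner
presumably thought the hash table would fit comfortably in memory.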


Well, shouldn't hash aggregate respect the work_mem limit?
