> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>>          ->  HashAggregate  (cost=106527.68..106528.68 rows=200 width=32)
>>                Filter: (count(ucode) > 1)
>>                ->  Seq Scan on cdtitles  (cost=0.00..96888.12 rows=1927912 width=32)
>
>> Well, shouldn't hash aggregate respect work memory limits?
>
> If the planner thought there were 1.7M distinct values, it wouldn't have
> used hash aggregate ... but it only thinks there are 200, which IIRC is
> the default assumption.  Have you ANALYZEd this table lately?

I thought that I had, but I did CLUSTER at some point. Or maybe I didn't;
I'm not sure. I have been working on a file reader/parser/importer
program, and I created and dropped the DB so many times it is hard to keep
track. Still, I would say that is extremely bad behavior just for not
having stats, wouldn't you think?
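
(For what it's worth, a quick way to confirm whether the planner actually
has statistics to work with -- assuming the relevant column is ucode on
cdtitles, as in the plan above -- is something like:

    ANALYZE cdtitles;
    SELECT attname, n_distinct, null_frac
      FROM pg_stats
     WHERE tablename = 'cdtitles' AND attname = 'ucode';

If n_distinct comes back anywhere near the real number of distinct ucodes
instead of the default 200 guess, the planner should size the aggregate
sensibly and stop picking a hash aggregate for this query.)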

>
> Meanwhile, I'd strongly recommend turning off OOM kill.  That's got to
> be the single worst design decision in the entire Linux kernel.

How is this any different from FreeBSD having a default 512MB process
size limit? On FreeBSD, the process would simply have been killed earlier.
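
(For anyone wanting to follow Tom's advice: on Linux the usual way to keep
the OOM killer away from the backend is to disable memory overcommit --
roughly, on a 2.6-era kernel:

    # strict overcommit accounting; run as root
    sysctl -w vm.overcommit_memory=2
    # make it stick across reboots
    echo 'vm.overcommit_memory = 2' >> /etc/sysctl.conf

With strict accounting, a backend that runs out of memory gets an ordinary
malloc() failure and reports "out of memory" instead of being killed
outright by the kernel.)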

