On Fri, Feb 10, 2006 at 09:57:12AM -0500, Mark Woodward wrote:
> > In most practical situations, I think exceeding work_mem is really
> > the best solution, as long as it's not by more than 10x or 100x.
> > It's when the estimate is off by many orders of magnitude that
> > you've got a problem. Running out of memory is not necessarily the
> > worst response ... as long as the system doesn't kill the process
> > in response to that.
>
> I don't agree with you here. Many PostgreSQL installations use
> PostgreSQL as part of a larger whole. Adjusting "work_mem" should
> give the admin some control over the memory footprint of the system.
> It is documented as the limit a specific function path will use
> before spilling to disk.
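For concreteness, a minimal sketch of the knob being discussed; the
value here is arbitrary, and the bare-integer form (interpreted as
kilobytes) is just one way to set it:

    -- per-session cap on memory for a single sort/hash step, beyond
    -- which the executor is supposed to spill to disk
    SET work_mem = 65536;    -- 64MB, expressed in kB

    -- or system-wide in postgresql.conf:
    --   work_mem = 65536

Note that a single query can run several sorts/hashes at once, so even
when the limit is honored the total footprint can be a multiple of
work_mem.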
And even when PostgreSQL has the server all to itself, having a
hashagg spill to disk is *way* better than pushing the machine into a
swap storm. At least if you spill the hashagg you only have one
backend running at a snail's pace; a swap storm means next to nothing
gets done.

> This was/is an example of where the behavior of PostgreSQL is
> clearly unacceptable. OK, yes, this problem goes away with an
> ANALYZE, but it isn't clear how anyone could have known this, and
> unexpected behavior is bad in any product.

Care to submit a documentation patch before the releases are bundled
(on Sunday, I think)? At least then people would be aware that
work_mem is just a suggestion to hash_aggs. I'd do a patch myself, but
I doubt I'll have time before the release. :(
-- 
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
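For the archives, a rough sketch of the ANALYZE workaround Mark
alludes to; the table and column names are invented for illustration:

    -- refresh planner statistics so the estimated number of groups
    -- is closer to reality
    ANALYZE orders;

    -- check what the planner now expects; if the estimated row count
    -- for the aggregate is still wildly low, a hash aggregate can
    -- grow far past work_mem, which is the behavior complained about
    -- above
    EXPLAIN SELECT customer_id, count(*)
    FROM orders
    GROUP BY customer_id;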