"Josh Berkus" <[EMAIL PROTECTED]> writes:

> sort_mem: My tests with 8.2 and DBT3 seemed to show that, due to 
> limitations of our tape sort algorithm, allocating over 2GB for a single 
> sort had no benefit.  However, Magnus and others have claimed otherwise.  
> Has this improved in 8.3?

Simon previously pointed out that we have some problems in our tape sort
algorithm with large values of work_mem. Once the heap is "large enough" to
keep the number of output tapes down, increasing the heap size further doesn't
buy us any reduction in the later merge passes. And managing a very large heap
costs a fair amount of CPU time in itself.
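
To see why, here's a standalone back-of-the-envelope sketch (not PostgreSQL
code; the run counts and merge fan-in are made-up numbers, and it uses a
simplified balanced k-way merge model rather than the actual polyphase merge).
The point is that halving the number of initial runs often leaves the pass
count unchanged:

    /* Back-of-the-envelope sketch, not PostgreSQL source: in a simplified
     * balanced k-way merge, the number of merge passes is roughly
     * ceil(log_F(runs)) for merge fan-in F.  Numbers below are made up. */
    #include <math.h>
    #include <stdio.h>

    static int
    merge_passes(double initial_runs, double fan_in)
    {
        if (initial_runs <= 1.0)
            return 0;           /* a single run needs no merging */
        return (int) ceil(log(initial_runs) / log(fan_in));
    }

    int
    main(void)
    {
        /* 200 initial runs, merging 6 tapes at a time -> 3 passes */
        printf("%d passes\n", merge_passes(200.0, 6.0));
        /* doubling the heap halves the runs to 100 -> still 3 passes */
        printf("%d passes\n", merge_passes(100.0, 6.0));
        return 0;
    }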

The problem, of course, is that we never know whether it's "large enough". We
talked at one point about having a heuristic where we start the heap relatively
small and double it (rather than growing it one row at a time) whenever we find
we're starting a new tape. Not sure how that would work out in practice, though.
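
Just to make the idea concrete, here's a toy sketch of what that growth rule
could look like (purely hypothetical; the SortState struct and its field names
are invented for illustration and bear no relation to tuplesort.c):

    /* Toy, self-contained sketch of that heuristic -- not PostgreSQL code;
     * SortState and every name in it are hypothetical.  The heap starts
     * small and doubles, up to the work_mem budget, each time replacement
     * selection is forced to start a new run. */
    #include <stdio.h>
    #include <stddef.h>

    typedef struct SortState
    {
        size_t heap_size;       /* current heap allocation, in bytes */
        size_t work_mem_limit;  /* hard ceiling, i.e. work_mem */
    } SortState;

    /* Called whenever the next tuple cannot be added to the current run. */
    static void
    start_new_run(SortState *state)
    {
        if (state->heap_size * 2 <= state->work_mem_limit)
            state->heap_size *= 2;  /* a real version would repalloc here */
        /* ... close off the current run and begin the next one ... */
    }

    int
    main(void)
    {
        SortState state = {64 * 1024, 8 * 1024 * 1024};  /* 64kB -> 8MB cap */
        int run;

        for (run = 1; run <= 10; run++)
        {
            start_new_run(&state);
            printf("run %d: heap is now %zu bytes\n", run, state.heap_size);
        }
        return 0;
    }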

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning
