"Simon Riggs" <[EMAIL PROTECTED]> writes:

> The buffer size at max tapes is an optimum - a trade-off between
> avoiding intermediate merging and merging efficiently. Freeing more
> memory is definitely going to help in the case of low work_mem and lots
> of runs.

I can't follow these abstract arguments. That's why I tried to spell out a
concrete example.

> I think you're not understanding me.
>
> You only need to record the lowest value when a run starts and the
> highest when it completes. When all runs have been written we then have
> a table of the highest and lowest values for each run. We then scan
> that to see whether we can perform the merging in one pass, or if not,
> what kind of intermediate merging is required. We keep the merge plan
> in memory and then follow it. That is probably a very small % of the
> total sort cost, though it might save you doing intermediate merges
> with huge costs.

Ok, that's a very different concept than what I was thinking.
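For anyone following along, here is a rough, idealized sketch (in Python,
not the actual tuplesort code) of what that planning step could look like
as I read it: record the lowest and highest key of each run, then scan
those bounds to decide whether a given merge fan-in can finish in one
pass. The function names and the event-sweep are my own illustrative
assumptions, and it glosses over how the runs actually sit on tapes.

    def max_overlapping_runs(bounds):
        """bounds: one (lowest key, highest key) pair per sorted run."""
        events = []
        for lo, hi in bounds:
            events.append((lo, 1))    # run's key range begins
            events.append((hi, -1))   # run's key range ends
        # At equal keys, process starts before ends, so runs sharing a
        # boundary key are conservatively treated as overlapping.
        events.sort(key=lambda e: (e[0], -e[1]))
        depth = peak = 0
        for _, delta in events:
            depth += delta
            peak = max(peak, depth)
        return peak

    def one_pass_possible(bounds, fan_in):
        # Runs whose key ranges never overlap can simply be concatenated
        # onto the output, so only the peak number of simultaneously
        # overlapping runs has to fit within the merge fan-in.
        return max_overlapping_runs(bounds) <= fan_in

    # Four runs: the first two overlap, the last two can be concatenated.
    runs = [(1, 50), (20, 80), (81, 120), (121, 200)]
    print(one_pass_possible(runs, fan_in=2))   # True

The reason the bounds table buys anything is that runs whose key ranges
never overlap don't need to be merged at all, only concatenated, so the
plan only has to worry about the runs that actually collide.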

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning
