On 21/06/2010 04:25, Tom Lane wrote:
No.  You could do that if the rate at which you need to write data to
the file is <= the rate at which you extract it.  But for what we
are doing, namely merging runs from several tapes into one output run,
it's pretty much guaranteed that you need new space faster than you are
consuming data from any one input tape.  It balances out as long as you
keep *all* the tapes in one operating-system file; otherwise not.

                        regards, tom lane

Tom, I hope you can clarify the issue of the rates.

During the initialisation phase (loading blocks into the heap) we can of course mark more space as garbage than we are consuming, since we have not yet begun merging blocks; the time to do that is right after prereading as many tuples as possible. And even during the merge itself we cannot output more tuples than we have preread, so there is no problem with the total number of tuples read and output: at any moment, tuples read >= tuples output.
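
To make the rates issue concrete, here is a toy model of how I picture it (my own simplification, not the real tuplesort code: I assume exactly one input block is fully consumed for every output block written, and I ignore the extra blocks held for prereading):

/*
 * Toy model of an NTAPES-way merge.
 *
 * Case A: the output run is appended to its own file and blocks freed
 * on the input files are never reused, so the output file grows by the
 * whole size of the merged run.
 *
 * Case B: freed blocks are reusable for output no matter where they
 * sit (one shared file, or separate files plus a map of the free
 * space), so in this model no new space is ever needed.
 */
#include <stdio.h>

#define NTAPES          6
#define BLOCKS_PER_TAPE 1000

int
main(void)
{
    int     remaining[NTAPES];
    long    caseA_growth = 0;   /* blocks appended to a separate output file */
    long    caseB_growth = 0;   /* new blocks the shared space ever needed */
    long    shared_free = 0;    /* currently free blocks in the shared space */
    long    left = (long) NTAPES * BLOCKS_PER_TAPE;
    int     t;

    for (t = 0; t < NTAPES; t++)
        remaining[t] = BLOCKS_PER_TAPE;

    /* round-robin consumption stands in for "whichever tape happens to
     * supply the next tuples" */
    for (t = 0; left > 0; t = (t + 1) % NTAPES)
    {
        if (remaining[t] == 0)
            continue;

        remaining[t]--;         /* one input block fully consumed ... */
        left--;
        shared_free++;          /* ... so its space becomes free */

        /* one output block has to be written for it */
        caseA_growth++;         /* Case A: always appended, nothing reused */
        if (shared_free > 0)    /* Case B: a freed block is recycled */
            shared_free--;
        else
            caseB_growth++;
    }

    printf("Case A (no reuse across files): output needed %ld new blocks\n",
           caseA_growth);
    printf("Case B (freed blocks reusable): output needed %ld new blocks\n",
           caseB_growth);
    return 0;
}

The totals balance in both cases; the difference is only whether the freed blocks are visible to the writer, which is where the placement tracking below comes in.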

Of course, in this case the output blocks would have to be placed in the free space spread across the various files, and we would have to keep track of that placement.
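
The bookkeeping I have in mind looks roughly like this (the names and structures are mine, just for illustration, not existing PostgreSQL code): a list of freed (file, block) slots, plus a per-run block map recording where each output block actually landed, so the run can be read back in order later:

#include <stdlib.h>

typedef struct BlockAddr
{
    int     fileno;         /* which OS file the block lives in */
    long    blocknum;       /* block offset within that file */
} BlockAddr;

typedef struct FreeSpaceList
{
    BlockAddr  *slots;      /* stack of freed (file, block) slots */
    int         nslots;
    int         capacity;
} FreeSpaceList;

/* remember that a fully-consumed input block may be reused
 * (error handling omitted for brevity) */
void
free_block(FreeSpaceList *fsl, int fileno, long blocknum)
{
    if (fsl->nslots == fsl->capacity)
    {
        fsl->capacity = fsl->capacity ? fsl->capacity * 2 : 64;
        fsl->slots = realloc(fsl->slots, fsl->capacity * sizeof(BlockAddr));
    }
    fsl->slots[fsl->nslots].fileno = fileno;
    fsl->slots[fsl->nslots].blocknum = blocknum;
    fsl->nslots++;
}

/*
 * Choose where the next output block goes: reuse a freed slot if there
 * is one, otherwise extend the designated output file.  The caller has
 * to append the returned address to the output run's block map, since
 * consecutive output blocks may end up in different files.
 */
BlockAddr
place_output_block(FreeSpaceList *fsl, int output_file, long *next_new_block)
{
    BlockAddr   addr;

    if (fsl->nslots > 0)
        return fsl->slots[--fsl->nslots];

    addr.fileno = output_file;
    addr.blocknum = (*next_new_block)++;
    return addr;
}

The extra cost of the multi-file approach is exactly this block map, since consecutive output blocks may land in different files.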

But recall that even if we use a LogicalTapeSet we have to keep track of the output blocks anyway, as Robert said in his example.
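
As I understand it (this is only my simplified picture, not the actual logtape.c data structures), the single-file scheme keeps the same kind of information internally: each logical tape is a chain of block numbers inside the one shared file, and the free list is common to all tapes:

#define INVALID_BLOCK   (-1L)   /* terminates a tape's chain */

typedef struct LogicalTapeSketch
{
    long    firstBlock;     /* head of this tape's chain of blocks */
    long    curBlock;       /* block currently being read or written */
} LogicalTapeSketch;

typedef struct TapeSetSketch
{
    long    nFileBlocks;    /* current size of the single OS file, in blocks */
    long   *freeBlocks;     /* blocks released by tapes that were read */
    int     nFreeBlocks;
} TapeSetSketch;

/*
 * Get a block for whichever tape is being written: reuse a freed block
 * if one is available, otherwise extend the single underlying file.
 * The caller links the block into its tape's chain, which is the
 * tracking Robert's example was about.
 */
long
get_block(TapeSetSketch *ts)
{
    if (ts->nFreeBlocks > 0)
        return ts->freeBlocks[--ts->nFreeBlocks];
    return ts->nFileBlocks++;
}

So the tracking itself does not seem to be the extra cost of the multi-file idea; the difference is just whether the free space is visible to every tape without crossing file boundaries.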

What's wrong with my picture?

Thank you.
Manolo.
