On Oct 1, 2009, at 4:18 PM, Robert Haas wrote:
> On Thu, Oct 1, 2009 at 5:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> Robert Haas <robertmh...@gmail.com> writes:
>>> The elephant in the room here is that if the relation is a million
>>> pages of which 1-100,000 and 1,000,000 are in use, no amount of bias
>>> is going to help us truncate the relation unless every tuple on page
>>> 1,000,000 gets updated or deleted.
>> Well, there is no way to move a tuple across pages in a
>> user-invisible, non-blocking fashion, so our ability to do something
>> automatic about the above scenario is limited. The discussion at the
>> moment is about ways of reducing the probability of getting into
>> that situation in the first place. That doesn't preclude also
>> providing some more-invasive tools that people can use when they do
>> get into that situation; but let's not let I-want-a-magic-pony
>> syndrome prevent us from doing anything at all.
> That's fair enough, but it's our usual practice to consider, before
> implementing a feature or code change, what fraction of the people it
> will actually help and by how much. If there's a way that we can
> improve the behavior of the system in this area, I am all in favor of
> it, but I have pretty modest expectations for how much real-world
> benefit will ensue. I suspect that it's pretty common for large
> [...]
Speaking of helping other cases...
Something else that's been talked about is biasing FSM searches in
order to try and keep a table clustered. If it doesn't add a lot of
overhead, it would be nice to keep that in mind. I don't know where
something like randomly resetting the search would go in the code, but
I suspect it wouldn't be very expandable in the future.
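To make the idea concrete, here's a minimal sketch (not PostgreSQL
code; the toy array, NPAGES, and find_free_page are all hypothetical)
of the low-page bias being discussed: always scanning for free space
from page 0 steers new tuples toward the front of the relation, so
the tail tends to empty out and become truncatable:

```c
#define NPAGES 8

/*
 * Toy free-space map: free_space[i] = free bytes on page i.
 *
 * Low-page-biased search: always scan from page 0 and return the
 * lowest-numbered page with enough room, or -1 if none has it.
 * Filling low pages first leaves the relation's tail empty, which is
 * what lets VACUUM truncate it.  The "randomly resetting the search"
 * idea from the thread amounts to periodically forcing a cached
 * search position back to 0 so it behaves like this scan.
 */
static int
find_free_page(const int free_space[NPAGES], int needed)
{
    int page;

    for (page = 0; page < NPAGES; page++)
        if (free_space[page] >= needed)
            return page;
    return -1;                  /* no page has 'needed' bytes free */
}
```

The trade-off, as noted above, is overhead: always scanning from the
front costs more than resuming from a cached position, which is why
the real FSM keeps a search hint and any bias would have to be cheap.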
But like Tom said, the top goal here is to help deal with bloat, not
other fanciness.
--
Decibel!, aka Jim C. Nasby, Database Architect deci...@decibel.org
Give your computer some brain candy! www.distributed.net Team #1828
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers