Gregory Stark wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
There is one wacky idea I haven't dared to propose yet:
We could lift the limitation that you can't defragment a page that's
pinned, if we play some smoke and mirrors in the buffer manager. When
you prune a page, make
On Tue, 2007-09-18 at 12:10 -0400, Tom Lane wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
No one seems to have picked up on this point, but
Decibel! wrote:
On Tue, Sep 18, 2007 at 11:32:52AM -0400, Tom Lane wrote:
Another option would be to prune whenever the free space goes
below table fillfactor and hope that users would set fillfactor so that
at least one updated tuple can fit in the block. I know it's not best to
rely on the
Tom Lane wrote:
Heikki Linnakangas [EMAIL PROTECTED] writes:
We could lift the limitation that you can't defragment a page that's
pinned, if we play some smoke and mirrors in the buffer manager. When
you prune a page, make a *copy* of the page you're pruning, and keep
both versions in the
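A minimal sketch of what that copy-on-prune trick could look like in the
buffer manager; BufferDesc here and all helper names are hypothetical
stand-ins, not the actual PostgreSQL API:

    #include <stdlib.h>
    #include <string.h>

    #define BLCKSZ 8192

    /* Toy buffer header: "page" is what new pins see, while "old_image"
     * stays alive for pins that were taken before the prune. */
    typedef struct BufferDesc
    {
        char *page;
        char *old_image;
        int   old_pin_count;    /* pins still pointing at old_image */
    } BufferDesc;

    /* Stand-in for the real prune/defragment step. */
    static void
    prune_and_defrag(const char *src, char *dst)
    {
        memcpy(dst, src, BLCKSZ);   /* real code would compact line pointers */
    }

    static void
    prune_pinned_page(BufferDesc *buf)
    {
        char *copy = malloc(BLCKSZ);

        if (copy == NULL)
            return;                 /* just skip pruning on failure */
        prune_and_defrag(buf->page, copy);
        buf->old_image = buf->page; /* existing pins keep the old layout */
        buf->page = copy;           /* new pins see the defragmented copy */
    }

    /* When the last pre-prune pin goes away, the old image can be freed. */
    static void
    release_old_pin(BufferDesc *buf)
    {
        if (--buf->old_pin_count == 0 && buf->old_image != NULL)
        {
            free(buf->old_image);
            buf->old_image = NULL;
        }
    }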
Heikki Linnakangas [EMAIL PROTECTED] writes:
There is one wacky idea I haven't dared to propose yet:
We could lift the limitation that you can't defragment a page that's
pinned, if we play some smoke and mirrors in the buffer manager. When
you prune a page, make a *copy* of the page you're
Decibel! [EMAIL PROTECTED] writes:
3 isn't that important to me, but 4 is:
4. Doesn't hammer the database to measure
And pgstattuple fails #4 miserably. Want to know the average dead space
in a 500GB database? Yeah, right
So we could put a vacuum_cost_delay() in it ...
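That throttling would presumably look like vacuum's cost accounting. A
sketch with made-up constants, mirroring the shape of the
vacuum_cost_limit/vacuum_cost_delay GUCs rather than any actual
pgstattuple code:

    #include <unistd.h>

    static int cost_balance = 0;
    static const int cost_limit = 200;   /* cf. vacuum_cost_limit */
    static const int cost_page_hit = 1;  /* charged per page examined */
    static const int delay_ms = 20;      /* cf. vacuum_cost_delay */

    /* Call once per page while scanning the relation: when accumulated
     * cost passes the limit, nap so the scan yields I/O bandwidth. */
    static void
    charge_page_and_maybe_sleep(void)
    {
        cost_balance += cost_page_hit;
        if (cost_balance >= cost_limit)
        {
            usleep((useconds_t) delay_ms * 1000);
            cost_balance = 0;
        }
    }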
On Sep 19, 2007, at 8:08 AM, Tom Lane wrote:
Decibel! [EMAIL PROTECTED] writes:
3 isn't that important to me, but 4 is:
4. Doesn't hammer the database to measure
And pgstattuple fails #4 miserably. Want to know the average dead space
in a 500GB database? Yeah, right.
So we could put a vacuum_cost_delay() in it ...
Bruce Momjian wrote:
If we only prune on an update (or insert) why not just do prune every
time? I figure the prune/defrag has to be lighter than the
update/insert itself.
Pruning is quite a costly operation. You need to check the visibility of
each tuple on the page, following tuple chains
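To make that cost concrete: pruning visits every line pointer and chases
each update chain, paying a visibility check at every hop. Schematically,
with toy types and the real visibility test stubbed out:

    #include <stdbool.h>

    #define MAX_ITEMS 256

    /* Toy page: each slot may chain to a newer version of the tuple. */
    typedef struct Slot { int next; /* -1 = end of chain */ } Slot;
    typedef struct Page { int nslots; Slot slots[MAX_ITEMS]; } Page;

    /* Stand-in for the real (and not free) tuple visibility test. */
    static bool
    tuple_is_dead(const Page *page, int slot)
    {
        (void) page;
        (void) slot;
        return false;
    }

    /* One visibility check per tuple, chains followed from each root.
     * (Real code remembers visited chain members instead of rescanning.) */
    static int
    count_prunable(const Page *page)
    {
        int ndead = 0;

        for (int i = 0; i < page->nslots; i++)
            for (int s = i; s != -1; s = page->slots[s].next)
                if (tuple_is_dead(page, s))
                    ndead++;

        return ndead;
    }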
Tom Lane wrote:
* We also need to think harder about when to invoke the page pruning
code. As the patch stands, if you set a breakpoint at
heap_page_prune_opt it'll seem to be hit constantly (eg, once for every
system catalog probe), which seems uselessly often. And yet it also
seems not
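Presumably the answer is to make the no-op path nearly free, so that being
called once per catalog probe costs nothing. Something shaped like the
following, where both helper functions are hypothetical:

    #include <stdbool.h>
    #include <stddef.h>

    #define BLCKSZ 8192

    /* Hypothetical stand-ins for a page hint bit and free-space lookup. */
    static bool page_is_prunable(const void *page) { (void) page; return false; }
    static size_t page_free_space(const void *page) { (void) page; return BLCKSZ; }

    /* Shape of an opportunistic prune entry point: two cheap tests up
     * front mean the frequent calls usually return immediately. */
    static void
    page_prune_opt(void *page, size_t minfree)
    {
        if (!page_is_prunable(page))            /* nothing ever deleted here */
            return;
        if (page_free_space(page) >= minfree)   /* still enough room */
            return;
        /* ... acquire cleanup lock and actually prune ... */
    }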
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
But then what happens when you want to update a second tuple on the same
page? None of our existing plan types release and reacquire pin if they
don't have to, and I really doubt that we want to give up that
optimization.
You will
Heikki Linnakangas [EMAIL PROTECTED] writes:
There is one wacky idea I haven't dared to propose yet:
We could lift the limitation that you can't defragment a page that's
pinned, if we play some smoke and mirrors in the buffer manager. When
you prune a page, make a *copy* of the page you're
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
But then what happens when you want to update a second tuple on the same
page? None of our existing plan types release and reacquire pin if they
don't have to, and I really doubt that we want to give up that
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Perhaps we could
replace that heuristic with something that is page-local; seems like
dividing the total used space by the number of item pointers would give
at least a rough approximation of the page's
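That is, nothing fancier than the following (names invented for
illustration):

    #include <stddef.h>

    /* Rough per-page average tuple size: used space over item pointers. */
    static size_t
    avg_tuple_size(size_t used_space, size_t nitems)
    {
        return (nitems > 0) ? used_space / nitems : 0;
    }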
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Another option would be to prune whenever the free space goes
below table fillfactor
If default fillfactor weren't 100% then this might be good ;-). But
we could use max(1-fillfactor, BLCKSZ/8) or some such.
Yet another option would be to
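Reading max(1-fillfactor, BLCKSZ/8) as bytes of reserved free space, the
test would be roughly as below; this is a sketch of the heuristic under
discussion, not committed logic:

    #include <stdbool.h>
    #include <stddef.h>

    #define BLCKSZ 8192

    /* Prune once free space drops below the larger of the fillfactor
     * reservation and a BLCKSZ/8 floor (the floor matters because the
     * default fillfactor of 100% would otherwise reserve nothing). */
    static bool
    should_prune(size_t free_space, double fillfactor)
    {
        size_t minfree = (size_t) ((1.0 - fillfactor) * BLCKSZ);

        if (minfree < BLCKSZ / 8)
            minfree = BLCKSZ / 8;

        return free_space < minfree;
    }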
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
No one seems to have picked up on this point, but after reflection
I think there's actually a pretty big
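For concreteness, the change amounts to swapping the quantity in
autovacuum's trigger test. The first form below mirrors the stock
threshold-plus-scale-factor shape (defaults 50 and 0.2); the dead-space
variant and its 0.2 factor are only a guess at what the patch's
undocumented behavior amounts to:

    #include <stdbool.h>

    /* Stock-style trigger: fires on the count of dead tuples. */
    static bool
    vacuum_by_dead_tuples(double dead_tuples, double reltuples)
    {
        return dead_tuples > 50.0 + 0.2 * reltuples;
    }

    /* The variant at issue: fires on estimated dead space instead. */
    static bool
    vacuum_by_dead_space(double dead_space_bytes, double relsize_bytes)
    {
        return dead_space_bytes > 0.2 * relsize_bytes;
    }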
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
No one seems to have picked up on this point, but after
Tom Lane wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead tuples. Do we like this?
If we do this, then it's not clear that
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
In a system with
HOT running well, the reasons to vacuum a table will be:
1. Remove dead index entries.
2. Remove LP_DEAD line pointers.
3. Truncate off no-longer-used end pages.
4. Transfer knowledge
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
* I'm still pretty unhappy about the patch's use of a relcache copy of
GetAvgFSMRequestSize()'s result. The fact that there's no provision for
ever updating the value while the relcache entry lives is part of it,
but the bigger part is that I'd
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
Pavan Deolasee [EMAIL PROTECTED] writes:
On 9/18/07, Tom Lane [EMAIL PROTECTED] wrote:
In a system with
HOT running well, the reasons to vacuum a table will be:
1. Remove dead index entries.
2. Remove LP_DEAD line pointers.
3. Truncate
On Tue, Sep 18, 2007 at 11:32:52AM -0400, Tom Lane wrote:
Another option would be to prune whenever the free space goes
below table fillfactor and hope that users would set fillfactor so that
at least one updated tuple can fit in the block. I know it's not best to
rely on the users though.
On Tue, Sep 18, 2007 at 09:31:03AM -0700, Joshua D. Drake wrote:
Tom Lane wrote:
I wrote:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated dead space rather than total number of
dead
Decibel! wrote:
On Tue, Sep 18, 2007 at 09:31:03AM -0700, Joshua D. Drake wrote:
If we do this, then it's not clear that having pgstats track dead space
is worth the trouble at all. It might possibly be of value for testing
purposes to see how
I have finished a first review pass over all of the HOT patch
(updated code is posted on -patches). I haven't found any showstoppers,
but there still seem to be several areas that need discussion:
* The patch makes undocumented changes that cause autovacuum's decisions
to be driven by total estimated
Tom Lane wrote:
* We also need to think harder about when to invoke the page pruning
code. As the patch stands, if you set a breakpoint at
heap_page_prune_opt it'll seem to be hit constantly (eg, once for every
system catalog probe), which seems uselessly often. And yet it also
seems not
Bruce Momjian [EMAIL PROTECTED] writes:
If we only prune on an update (or insert) why not just do prune every
time?
The problem is you can't prune anymore once you have existing pin on the
target page. I'd really like to get around that, but so far it seems
unacceptably fragile --- the
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
If we only prune on an update (or insert) why not just do prune every
time?
The problem is you can't prune anymore once you have existing pin on the
target page. I'd really like to get around that, but so far it seems
unacceptably
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
The problem is you can't prune anymore once you have existing pin on the
target page. I'd really like to get around that, but so far it seems
unacceptably fragile --- the executor really doesn't expect tuples to
get moved around
Tom Lane wrote:
Bruce Momjian [EMAIL PROTECTED] writes:
Tom Lane wrote:
The problem is you can't prune anymore once you have existing pin on the
target page. I'd really like to get around that, but so far it seems
unacceptably fragile --- the executor really doesn't expect tuples to