Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > I'm not having much luck really. I think the problem is that ANALYZE
> > stores reltuples as the number of live tuples, so if you delete a big
> > portion of a big table, then ANALYZE and then VACUUM, there's a huge
> > misestimation and extra index cleanup passes happen, which is a bad
> > thing.
>
> Yeah ... so just go with a constant estimate of say 200 deletable tuples
> per page?
This seems the most reasonable approach for now, so here is the patch.
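
For anyone who wants to play with the numbers, here is a small standalone
sketch of the sizing arithmetic the patch applies in lazy_space_alloc().
Everything in it is an illustrative stand-in: the struct exists only so that
sizeof() matches the 6-byte ItemPointerData, the function and parameter names
(dead_tuple_slots, rel_blocks, maint_work_mem_kb) are made up for the example,
and the figures in main() are example values rather than backend constants.

#include <limits.h>
#include <stddef.h>
#include <stdio.h>

/* guessed number of deletable tuples per heap page, as in the patch */
#define LAZY_ALLOC_TUPLES 200

/* simplified stand-in with the same 6-byte size as the real ItemPointerData */
typedef struct
{
    unsigned short bi_hi;
    unsigned short bi_lo;
    unsigned short offset;
} ItemPointerData;

static size_t
dead_tuple_slots(size_t rel_blocks, size_t maint_work_mem_kb,
                 size_t max_alloc_size, size_t max_heap_tuples_per_page)
{
    size_t maxtuples;

    /* cap by table size: assume at most LAZY_ALLOC_TUPLES dead tuples per page */
    maxtuples = LAZY_ALLOC_TUPLES * rel_blocks;

    /* ... but never more than maintenance_work_mem can hold */
    if (maxtuples > maint_work_mem_kb * 1024 / sizeof(ItemPointerData))
        maxtuples = maint_work_mem_kb * 1024 / sizeof(ItemPointerData);

    /* the real code also clamps to INT_MAX, since the count is used as an int */
    if (maxtuples > INT_MAX)
        maxtuples = INT_MAX;

    /* ... nor more than a single allocation may span */
    if (maxtuples > max_alloc_size / sizeof(ItemPointerData))
        maxtuples = max_alloc_size / sizeof(ItemPointerData);

    /* stay sane if maintenance_work_mem is very small */
    if (maxtuples < max_heap_tuples_per_page)
        maxtuples = max_heap_tuples_per_page;

    return maxtuples;
}

int
main(void)
{
    /* example only: 1 GB table = 131072 8kB pages, 128 MB maintenance_work_mem */
    size_t slots = dead_tuple_slots(131072, 128 * 1024,
                                    0x3fffffff /* ~MaxAllocSize */,
                                    291 /* ~MaxHeapTuplesPerPage, 8kB pages */);

    printf("dead-tuple array slots: %zu\n", slots);
    return 0;
}

With those example numbers the table-size cap (200 * 131072 slots) is still
larger than what 128 MB of maintenance_work_mem allows, so the memory limit
wins as before; the new cap only kicks in for small tables, which is exactly
the point of the patch.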
--
Alvaro Herrera http://www.PlanetPostgreSQL.org/
"I call it GNU/Linux. Except the GNU/ is silent." (Ben Reiter)
Index: src/backend/commands/vacuumlazy.c
===================================================================
RCS file: /home/alvherre/Code/cvs/pgsql/src/backend/commands/vacuumlazy.c,v
retrieving revision 1.90
diff -c -p -r1.90 vacuumlazy.c
*** src/backend/commands/vacuumlazy.c 30 May 2007 20:11:57 -0000 1.90
--- src/backend/commands/vacuumlazy.c 30 Aug 2007 17:23:34 -0000
***************
*** 11,20 ****
   * on the number of tuples and pages we will keep track of at once.
   *
   * We are willing to use at most maintenance_work_mem memory space to keep
!  * track of dead tuples. We initially allocate an array of TIDs of that size.
!  * If the array threatens to overflow, we suspend the heap scan phase and
!  * perform a pass of index cleanup and page compaction, then resume the heap
!  * scan with an empty TID array.
   *
   * We can limit the storage for page free space to MaxFSMPages entries,
   * since that's the most the free space map will be willing to remember
--- 11,22 ----
   * on the number of tuples and pages we will keep track of at once.
   *
   * We are willing to use at most maintenance_work_mem memory space to keep
!  * track of dead tuples. We initially allocate an array of TIDs of that size,
!  * with an upper limit that depends on table size (this limit ensures we don't
!  * allocate a huge area uselessly for vacuuming small tables). If the array
!  * threatens to overflow, we suspend the heap scan phase and perform a pass of
!  * index cleanup and page compaction, then resume the heap scan with an empty
!  * TID array.
   *
   * We can limit the storage for page free space to MaxFSMPages entries,
   * since that's the most the free space map will be willing to remember
***************
*** 68,73 ****
--- 70,81 ----
  #define REL_TRUNCATE_MINIMUM 1000
  #define REL_TRUNCATE_FRACTION 16
  
+ /*
+  * Guesstimation of number of dead tuples per page. This is used to
+  * provide an upper limit to memory allocated when vacuuming small
+  * tables.
+  */
+ #define LAZY_ALLOC_TUPLES 200
  
  typedef struct LVRelStats
  {
*************** lazy_space_alloc(LVRelStats *vacrelstats
*** 971,979 ****
  
      if (vacrelstats->hasindex)
      {
!         maxtuples = (maintenance_work_mem * 1024L) / sizeof(ItemPointerData);
          maxtuples = Min(maxtuples, INT_MAX);
          maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
          /* stay sane if small maintenance_work_mem */
          maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
      }
--- 979,990 ----
  
      if (vacrelstats->hasindex)
      {
!         /* no need to allocate more space than we have pages */
!         maxtuples = LAZY_ALLOC_TUPLES * relblocks;
!         maxtuples = Min(maxtuples, (maintenance_work_mem * 1024L) / sizeof(ItemPointerData));
          maxtuples = Min(maxtuples, INT_MAX);
          maxtuples = Min(maxtuples, MaxAllocSize / sizeof(ItemPointerData));
+ 
          /* stay sane if small maintenance_work_mem */
          maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
      }