Fwd: [HACKERS] Avoid endless futile table locks in vacuuming.

2015-12-30 Thread Jeff Janes
Forgetting to CC the list is starting to become a bad habit, sorry.
Forwarding to the list:

-- Forwarded message --
From: Jeff Janes 
Date: Wed, Dec 30, 2015 at 10:03 AM
Subject: Re: [HACKERS] Avoid endless futile table locks in vacuuming.
To: Tom Lane 


On Dec 29, 2015 4:47 PM, "Tom Lane"  wrote:
>
> Jeff Janes  writes:
> > If we are not doing a scan_all and we fail to acquire a clean-up lock on
> > the last block, and the last block reports that it needs freezing, then we
> > continue on to wait for the clean-up lock.  But there is no need: we don't
> > really need to freeze the block, and we already know whether it has tuples
> > or not without the clean-up lock.  Couldn't we just set the flag based on
> > hastup, then 'continue'?
>
> Uh, isn't that what my patch is doing?

My reading was that it does that only if there are no tuples that could be
frozen.  If there are tuples that could be frozen, it actually does
the freezing, even though that is not necessary unless scan_all is
true.
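
In other words, the non-scan_all branch of the cleanup-lock-miss path in
lazy_scan_heap would have roughly this shape (just a sketch of the idea,
using the vacuumlazy.c identifiers; not the attached patch verbatim):

	if (!ConditionalLockBufferForCleanup(buf))
	{
		bool		hastup;

		if (!scan_all)
		{
			/*
			 * Not a wraparound vacuum, so freezing this page is not
			 * required.  Take only a share lock to learn whether the page
			 * has tuples, remember that for the truncation decision, and
			 * move on without waiting for the cleanup lock.
			 */
			LockBuffer(buf, BUFFER_LOCK_SHARE);
			(void) lazy_check_needs_freeze(buf, &hastup);
			UnlockReleaseBuffer(buf);	/* drop the lock and the pin */
			if (hastup)
				vacrelstats->nonempty_pages = blkno + 1;
			continue;
		}

		/* scan_all vacuum: wait for the cleanup lock and freeze as before */
		LockBufferForCleanup(buf);
	}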

So like the attached, although it is a bit weird to call
lazy_check_needs_freeze when, under !scan_all, we don't actually care
whether the page needs freezing, only whether it has tuples (hastup).

Cheers,

Jeff
diff --git a/src/backend/commands/vacuumlazy.c b/src/backend/commands/vacuumlazy.c
new file mode 100644
index 2429889..e3cfa59
*** a/src/backend/commands/vacuumlazy.c
--- b/src/backend/commands/vacuumlazy.c
*** static BufferAccessStrategy vac_strategy
*** 138,144 
  static void lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
  			   Relation *Irel, int nindexes, bool scan_all);
  static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
! static bool lazy_check_needs_freeze(Buffer buf);
  static void lazy_vacuum_index(Relation indrel,
    IndexBulkDeleteResult **stats,
    LVRelStats *vacrelstats);
--- 138,144 
  static void lazy_scan_heap(Relation onerel, LVRelStats *vacrelstats,
  			   Relation *Irel, int nindexes, bool scan_all);
  static void lazy_vacuum_heap(Relation onerel, LVRelStats *vacrelstats);
! static bool lazy_check_needs_freeze(Buffer buf, bool *hastup);
  static void lazy_vacuum_index(Relation indrel,
    IndexBulkDeleteResult **stats,
    LVRelStats *vacrelstats);
*** static void lazy_cleanup_index(Relation
*** 147,152 
--- 147,153 
     LVRelStats *vacrelstats);
  static int lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
   int tupindex, LVRelStats *vacrelstats, Buffer *vmbuffer);
+ static bool should_attempt_truncation(LVRelStats *vacrelstats);
  static void lazy_truncate_heap(Relation onerel, LVRelStats *vacrelstats);
  static BlockNumber count_nondeletable_pages(Relation onerel,
  		 LVRelStats *vacrelstats);
*** lazy_vacuum_rel(Relation onerel, int opt
*** 175,181 
  	LVRelStats *vacrelstats;
  	Relation   *Irel;
  	int			nindexes;
- 	BlockNumber possibly_freeable;
  	PGRUsage	ru0;
  	TimestampTz starttime = 0;
  	long		secs;
--- 176,181 
*** lazy_vacuum_rel(Relation onerel, int opt
*** 263,276 
  
  	/*
  	 * Optionally truncate the relation.
- 	 *
- 	 * Don't even think about it unless we have a shot at releasing a goodly
- 	 * number of pages.  Otherwise, the time taken isn't worth it.
  	 */
! 	possibly_freeable = vacrelstats->rel_pages - vacrelstats->nonempty_pages;
! 	if (possibly_freeable > 0 &&
! 		(possibly_freeable >= REL_TRUNCATE_MINIMUM ||
! 		 possibly_freeable >= vacrelstats->rel_pages / REL_TRUNCATE_FRACTION))
  		lazy_truncate_heap(onerel, vacrelstats);
  
  	/* Vacuum the Free Space Map */
--- 263,270 
  
  	/*
  	 * Optionally truncate the relation.
  	 */
! 	if (should_attempt_truncation(vacrelstats))
  		lazy_truncate_heap(onerel, vacrelstats);
  
  	/* Vacuum the Free Space Map */
*** lazy_scan_heap(Relation onerel, LVRelSta
*** 498,503 
--- 492,504 
  	 * started skipping blocks, we may as well skip everything up to the next
  	 * not-all-visible block.
  	 *
+ 	 * We don't skip the last page, unless we've already determined that it's
+ 	 * not worth trying to truncate the table.  This avoids having every
+ 	 * vacuum take access exclusive lock on the table to attempt a truncation
+ 	 * that just fails immediately (if there are tuples in the last page).
+ 	 * This is worth avoiding mainly because such a lock must be replayed on
+ 	 * any hot standby, where it can be disruptive.
+ 	 *
  	 * Note: if scan_all is true, we won't actually skip any pages; but we
  	 * maintain next_not_all_visible_block anyway, so as to set up the
  	 * all_visible_according_to_vm flag correctly for each page.
*** lazy_scan_heap(Relation onerel, LVRelSta
*** 530,536 
  		Page		page;
  		OffsetNumber offnum,
  	maxoff;
! 		bool		tupgone,
  	hastup;
  		int			prev_dead_count;
  		int			nfrozen;
--- 531,538 
  		Page		page;
 

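For reference, the new should_attempt_truncation() declared above presumably
just wraps the threshold test that this patch removes from lazy_vacuum_rel;
reconstructed from the deleted lines, it would look about like this (the
committed helper may differ in detail):

	static bool
	should_attempt_truncation(LVRelStats *vacrelstats)
	{
		BlockNumber	possibly_freeable;

		/*
		 * Don't even think about it unless we have a shot at releasing a
		 * goodly number of pages.  Otherwise, the time taken isn't worth it.
		 */
		possibly_freeable = vacrelstats->rel_pages - vacrelstats->nonempty_pages;
		return possibly_freeable > 0 &&
			(possibly_freeable >= REL_TRUNCATE_MINIMUM ||
			 possibly_freeable >= vacrelstats->rel_pages / REL_TRUNCATE_FRACTION);
	}

Pulling the test into a helper lets lazy_scan_heap make the same judgement
when deciding whether the last page may be skipped, which is what the new
comment in the lazy_scan_heap hunk above is getting at.
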
Re: Fwd: [HACKERS] Avoid endless futile table locks in vacuuming.

2015-12-30 Thread Tom Lane
Jeff Janes  writes:
> On Dec 29, 2015 4:47 PM, "Tom Lane"  wrote:
>> Uh, isn't that what my patch is doing?

> My reading was that it does that only if there are no tuples that could be
> frozen.  If there are tuples that could be frozen, it actually does
> the freezing, even though that is not necessary unless scan_all is
> true.

Ah, now I see.

> So like the attached, although it is a bit weird to call
> lazy_check_needs_freeze when, under !scan_all, we don't actually care
> whether the page needs freezing, only whether it has tuples (hastup).

True, but this is such a corner case that it doesn't seem worth expending
additional code to have a special-purpose page scan for it.

regards, tom lane




Re: Fwd: [HACKERS] Avoid endless futile table locks in vacuuming.

2015-12-30 Thread Tom Lane
Jeff Janes  writes:
> So like the attached, although it is a bit weird to call
> lazy_check_needs_freeze when, under !scan_all, we don't actually care
> whether the page needs freezing, only whether it has tuples (hastup).

I think this misses unpinning the buffer in the added code path.
I rearranged to avoid that, did some other cosmetic work, and committed.
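
(Concretely: the page was pinned by ReadBufferExtended earlier in the scan
loop, so the new skip path needs to give that pin back before continuing,
along these lines, otherwise the buffer stays pinned for the rest of the
scan.  Illustrative only; the committed rearrangement may differ.

	UnlockReleaseBuffer(buf);	/* drop the share lock and the pin */
	continue;
)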

regards, tom lane

