On Thu, Apr 18, 2019 at 3:12 PM Masahiko Sawada <sawada.m...@gmail.com> wrote:
>
> On Thu, Apr 18, 2019 at 4:20 AM Peter Geoghegan <p...@bowt.ie> wrote:
> >
> > On Wed, Apr 17, 2019 at 10:46 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
> > > Yeah, if we wanted to expose these complications more directly, we
> > > could think about adding or changing the main counters.  I was wondering
> > > about whether we should have it report "x bytes and y line pointers
> > > freed", rather than counting tuples per se.
> >
>
> That seems like a good idea to me.
>
> > I like that idea, but I'm pretty sure that there are very few users
> > that are aware of these distinctions at all. It would be a good idea
> > to clearly document them.
>
> I completely agree. I suspect that only a few users would know to
> enable index cleanup when the report says there are many dead line
> pointers in the table.
>
> That gave me another idea: report something like "x bytes freed, y
> bytes can be freed after index cleanup". That is, we report how many
> bytes of tuples and line pointers we freed, and how many bytes of
> line pointers could be freed after index cleanup. While index cleanup
> is enabled, the latter value should always be 0. If the latter value
> gets large, the user can see the need for an index cleanup.
>

Attached is a draft patch based on the discussion so far. It fixes
two issues: the assertion failure that topminnow reported and the
contents of the vacuum logs.

For the first issue, I've changed lazy_scan_heap so that, when index
cleanup is disabled, tuples that became dead after HOT pruning are
counted in nkeep. As discussed, it should be harmless to try to
freeze tuples that ought to have been deleted.
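
For reference, here is the relevant change in lazy_scan_heap (lightly
excerpted from the attached patch):

    case HEAPTUPLE_DEAD:
        ...
        if (HeapTupleIsHotUpdated(&tuple) ||
            HeapTupleIsHeapOnly(&tuple) ||
            params->index_cleanup == VACOPT_TERNARY_DISABLED)
            nkeep += 1;         /* keep it, as if RECENTLY_DEAD */
        else
            tupgone = true;     /* we can delete the tuple */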

For the second issue, I've changed lazy vacuum so that it reports
both how much space we freed and how much space can still be freed
after index cleanup. The former value includes the size of line
pointers as well as heap tuples: when a normal tuple is marked either
dead or redirected we count only the size of the heap tuple, and when
it is marked unused we count the size of both the heap tuple and its
line pointer. Similarly, when a redirect line pointer or a dead line
pointer becomes unused we count only the size of the line pointer.
The latter value can be non-zero only when index cleanup is disabled;
it is the total size of the dead line pointers we left behind. The
advantage of this change is that users can see both how much space we
freed and how much we left behind by skipping index cleanup, and the
latter value tells them when an index cleanup is needed.
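
To make the accounting concrete with the sample below: sizeof(ItemIdData)
is 4 bytes, so the 1999 dead line pointers left behind by the first
vacuum account for 1999 * 4 = 7996 bytes, which is the "7996 bytes can
be freed after index cleanup" figure. The "55 kB freed" value is the
summed length of the 1999 pruned heap tuples (roughly 28 bytes each for
these rows on a typical 64-bit build). The second vacuum, run with
index cleanup enabled, then reports exactly those 7996 bytes of line
pointers as freed.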

Also, with this patch we count as deleted only the tuples that have
actually been marked unused; replacing a dead root tuple with a dead
line pointer no longer counts for anything. That is closer to the
meaning of "deleted tuples" and less confusing. We count at the point
where items are actually marked rather than when they are recorded,
because we could record dead tuples and then forget them.
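
With the patch, the counting therefore happens at the two places where
items are actually set to unused (excerpts from the attached patch):

    /* in lazy_scan_heap(), via HOT pruning */
    vacrelstats->tuples_deleted +=
        heap_page_prune(onerel, buf, OldestXmin, false,
                        &vacrelstats->latestRemovedXid,
                        &vacrelstats->freed_bytes);

    /* in lazy_vacuum_page(), after the collected items are marked unused */
    vacrelstats->tuples_deleted += uncnt;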

Here is a sample of the VACUUM VERBOSE output. I used the following
script; the patch changes the reports as shown below.

* Script
DROP TABLE IF EXISTS test;
CREATE TABLE test (c int primary key, d int);
INSERT INTO test SELECT * FROM  generate_series(1,10000);
DELETE FROM test WHERE c < 2000;
VACUUM (INDEX_CLEANUP FALSE, VERBOSE) test;
VACUUM (INDEX_CLEANUP TRUE, VERBOSE) test;

* HEAD (when index cleanup is disabled)
INFO:  vacuuming "public.test"
INFO:  "test": found 1999 removable, 8001 nonremovable row versions in
45 out of 45 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 504
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
0 tuples and 1999 item identifiers are left as dead.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

* Patched (when index cleanup is disabled)
INFO:  vacuuming "public.test"
INFO:  "test": 55 kB freed, 7996 bytes can be freed after index
cleanup, table size: 360 kB
DETAIL:  found 8001 nonremovable row versions in 45 out of 45 pages.
0 dead row versions cannot be removed yet, oldest xmin: 1660
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

* HEAD (when index cleanup is enabled)
INFO:  vacuuming "public.test"
INFO:  scanned index "test_pkey" to remove 1999 row versions
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  "test": removed 1999 row versions in 9 pages
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  index "test_pkey" now contains 8001 row versions in 30 pages
DETAIL:  1999 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "test": found 0 removable, 91 nonremovable row versions in 10
out of 45 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 504
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
0 tuples and 0 item identifiers are left as dead.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

* Patched (when index cleanup is enabled)
INFO:  vacuuming "public.test"
INFO:  scanned index "test_pkey" to remove 1999 row versions
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  "test": removed 1999 row versions in 9 pages
DETAIL:  CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
INFO:  index "test_pkey" now contains 8001 row versions in 30 pages
DETAIL:  1999 index row versions were removed.
5 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "test": 7996 bytes freed, table size: 360 kB
DETAIL:  found 91 nonremovable row versions in 10 out of 45 pages.
0 dead row versions cannot be removed yet, oldest xmin: 1660
There were 0 unused item pointers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

The patch doesn't address Tom's concern that it might not be safe in
production to accumulate dead line pointers without limit when the
reloption is set to false. I think that is a separate issue from the
two above, so I'd like to discuss it once these are resolved.


Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
diff --git a/src/backend/access/heap/pruneheap.c b/src/backend/access/heap/pruneheap.c
index a3e5192..4f39c76 100644
--- a/src/backend/access/heap/pruneheap.c
+++ b/src/backend/access/heap/pruneheap.c
@@ -147,11 +147,13 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
 		 */
 		if (PageIsFull(page) || PageGetHeapFreeSpace(page) < minfree)
 		{
-			TransactionId ignore = InvalidTransactionId;	/* return value not
-															 * needed */
+			/* return values not needed */
+			TransactionId ignore1 = InvalidTransactionId;
+			Size ignore2 = 0;
 
 			/* OK to prune */
-			(void) heap_page_prune(relation, buffer, OldestXmin, true, &ignore);
+			(void) heap_page_prune(relation, buffer, OldestXmin, true, &ignore1,
+								   &ignore2);
 		}
 
 		/* And release buffer lock */
@@ -173,12 +175,13 @@ heap_page_prune_opt(Relation relation, Buffer buffer)
  * send its own new total to pgstats, and we don't want this delta applied
  * on top of that.)
  *
- * Returns the number of tuples deleted from the page and sets
- * latestRemovedXid.
+ * Returns the number of tuples deleted from the page. This function sets
+ * latestRemovedXid and increments freed_bytes.
  */
 int
 heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
-				bool report_stats, TransactionId *latestRemovedXid)
+				bool report_stats, TransactionId *latestRemovedXid,
+				Size *freed_bytes)
 {
 	int			ndeleted = 0;
 	Page		page = BufferGetPage(buffer);
@@ -235,10 +238,11 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
 		 * Apply the planned item changes, then repair page fragmentation, and
 		 * update the page's hint bit about whether it has free line pointers.
 		 */
-		heap_page_prune_execute(buffer,
-								prstate.redirected, prstate.nredirected,
-								prstate.nowdead, prstate.ndead,
-								prstate.nowunused, prstate.nunused);
+		*freed_bytes +=
+			heap_page_prune_execute(buffer,
+									prstate.redirected, prstate.nredirected,
+									prstate.nowdead, prstate.ndead,
+									prstate.nowunused, prstate.nunused);
 
 		/*
 		 * Update the page's pd_prune_xid field to either zero, or the lowest
@@ -346,7 +350,7 @@ heap_page_prune(Relation relation, Buffer buffer, TransactionId OldestXmin,
  * LP_DEAD state are added to nowdead[]; and items to be set to LP_UNUSED
  * state are added to nowunused[].
  *
- * Returns the number of tuples (to be) deleted from the page.
+ * Returns the number of tuples (to be) marked as unused on the page.
  */
 static int
 heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
@@ -584,14 +588,6 @@ heap_prune_chain(Relation relation, Buffer buffer, OffsetNumber rootoffnum,
 		}
 
 		/*
-		 * If the root entry had been a normal tuple, we are deleting it, so
-		 * count it in the result.  But changing a redirect (even to DEAD
-		 * state) doesn't count.
-		 */
-		if (ItemIdIsNormal(rootlp))
-			ndeleted++;
-
-		/*
 		 * If the DEAD tuple is at the end of the chain, the entire chain is
 		 * dead and the root line pointer can be marked dead.  Otherwise just
 		 * redirect the root to the correct chain member.
@@ -676,8 +672,10 @@ heap_prune_record_unused(PruneState *prstate, OffsetNumber offnum)
  * This is split out because it is also used by heap_xlog_clean()
  * to replay the WAL record when needed after a crash.  Note that the
  * arguments are identical to those of log_heap_clean().
+ *
+ * Return the total number of bytes we freed.
  */
-void
+Size
 heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
@@ -686,6 +684,7 @@ heap_page_prune_execute(Buffer buffer,
 	Page		page = (Page) BufferGetPage(buffer);
 	OffsetNumber *offnum;
 	int			i;
+	Size		freed_bytes = 0;
 
 	/* Update all redirected line pointers */
 	offnum = redirected;
@@ -695,6 +694,7 @@ heap_page_prune_execute(Buffer buffer,
 		OffsetNumber tooff = *offnum++;
 		ItemId		fromlp = PageGetItemId(page, fromoff);
 
+		freed_bytes += ItemIdGetLength(fromlp);
 		ItemIdSetRedirect(fromlp, tooff);
 	}
 
@@ -705,6 +705,7 @@ heap_page_prune_execute(Buffer buffer,
 		OffsetNumber off = *offnum++;
 		ItemId		lp = PageGetItemId(page, off);
 
+		freed_bytes += ItemIdGetLength(lp);
 		ItemIdSetDead(lp);
 	}
 
@@ -715,6 +716,8 @@ heap_page_prune_execute(Buffer buffer,
 		OffsetNumber off = *offnum++;
 		ItemId		lp = PageGetItemId(page, off);
 
+		/* Setting unused frees both heap tuple and line pointer */
+		freed_bytes += ItemIdGetLength(lp) + sizeof(ItemIdData);
 		ItemIdSetUnused(lp);
 	}
 
@@ -723,6 +726,8 @@ heap_page_prune_execute(Buffer buffer,
 	 * whether it has free pointers.
 	 */
 	PageRepairFragmentation(page);
+
+	return freed_bytes;
 }
 
 
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 8dc76fa..cc8b036 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -44,6 +44,7 @@
 #include "access/transam.h"
 #include "access/visibilitymap.h"
 #include "access/xlog.h"
+#include "utils/builtins.h"
 #include "catalog/storage.h"
 #include "commands/dbcommands.h"
 #include "commands/progress.h"
@@ -125,10 +126,12 @@ typedef struct LVRelStats
 	double		new_rel_tuples; /* new estimated total # of tuples */
 	double		new_live_tuples;	/* new estimated total # of live tuples */
 	double		new_dead_tuples;	/* new estimated total # of dead tuples */
-	double		nleft_dead_tuples;	/* # of dead tuples we left */
-	double		nleft_dead_itemids;	/* # of dead item pointers we left */
+	/* bytes of heap tuples and line pointers we freed */
+	Size		freed_bytes;
+	/* bytes of dead line pointers we left; can be freed after index cleanup */
+	Size		left_lp_bytes;
 	BlockNumber pages_removed;
-	double		tuples_deleted;
+	double		tuples_deleted;	/* # of tuples marked as unused */
 	BlockNumber nonempty_pages; /* actually, last nonempty page + 1 */
 	/* List of TIDs of tuples we intend to delete */
 	/* NB: this list is ordered by TID address */
@@ -176,6 +179,7 @@ static bool lazy_tid_reaped(ItemPointer itemptr, void *state);
 static int	vac_cmp_itemptr(const void *left, const void *right);
 static bool heap_page_is_all_visible(Relation rel, Buffer buf,
 						 TransactionId *visibility_cutoff_xid, bool *all_frozen);
+static char *get_pretty_size(Size size);
 
 
 /*
@@ -412,23 +416,27 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,
 							 get_namespace_name(RelationGetNamespace(onerel)),
 							 RelationGetRelationName(onerel),
 							 vacrelstats->num_index_scans);
+			if (vacrelstats->left_lp_bytes > 0)
+				appendStringInfo(&buf,
+								 _("stats: %s freed, %s can be freed after index cleanup, table size: %s\n"),
+								 get_pretty_size(vacrelstats->freed_bytes),
+								 get_pretty_size(vacrelstats->left_lp_bytes),
+								 get_pretty_size(vacrelstats->rel_pages * BLCKSZ));
+			else
+				appendStringInfo(&buf,
+								 _("stats: %s freed, table size: %s\n"),
+								 get_pretty_size(vacrelstats->freed_bytes),
+								 get_pretty_size(vacrelstats->rel_pages * BLCKSZ));
 			appendStringInfo(&buf, _("pages: %u removed, %u remain, %u skipped due to pins, %u skipped frozen\n"),
 							 vacrelstats->pages_removed,
 							 vacrelstats->rel_pages,
 							 vacrelstats->pinskipped_pages,
 							 vacrelstats->frozenskipped_pages);
 			appendStringInfo(&buf,
-							 _("tuples: %.0f removed, %.0f remain, %.0f are dead but not yet removable, oldest xmin: %u\n"),
-							 vacrelstats->tuples_deleted,
+							 _("tuples: %.0f remain, %.0f are dead but not yet removable, oldest xmin: %u\n"),
 							 vacrelstats->new_rel_tuples,
 							 vacrelstats->new_dead_tuples,
 							 OldestXmin);
-			if (vacrelstats->nleft_dead_tuples > 0 ||
-				vacrelstats->nleft_dead_itemids > 0)
-				appendStringInfo(&buf,
-								 _("%.0f tuples and %.0f item identifiers are left as dead.\n"),
-								 vacrelstats->nleft_dead_tuples,
-								 vacrelstats->nleft_dead_itemids);
 			appendStringInfo(&buf,
 							 _("buffer usage: %d hits, %d misses, %d dirtied\n"),
 							 VacuumPageHit,
@@ -509,12 +517,8 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 				next_fsm_block_to_vacuum;
 	double		num_tuples,		/* total number of nonremovable tuples */
 				live_tuples,	/* live tuples (reltuples estimate) */
-				tups_vacuumed,	/* tuples cleaned up by vacuum */
 				nkeep,			/* dead-but-not-removable tuples */
-				nunused,		/* unused item pointers */
-				nleft_dead_tuples,		/* tuples we left as dead */
-				nleft_dead_itemids;		/* item pointers we left as dead,
-										 * includes nleft_dead_tuples. */
+				nunused;		/* unused item pointers */
 	IndexBulkDeleteResult **indstats;
 	int			i;
 	PGRUsage	ru0;
@@ -546,8 +550,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 
 	empty_pages = vacuumed_pages = 0;
 	next_fsm_block_to_vacuum = (BlockNumber) 0;
-	num_tuples = live_tuples = tups_vacuumed = nkeep = nunused = 0;
-	nleft_dead_itemids = nleft_dead_tuples = 0;
+	num_tuples = live_tuples = nkeep = nunused = 0;
 
 	indstats = (IndexBulkDeleteResult **)
 		palloc0(nindexes * sizeof(IndexBulkDeleteResult *));
@@ -980,8 +983,10 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 		 *
 		 * We count tuples removed by the pruning step as removed by VACUUM.
 		 */
-		tups_vacuumed += heap_page_prune(onerel, buf, OldestXmin, false,
-										 &vacrelstats->latestRemovedXid);
+		vacrelstats->tuples_deleted +=
+			heap_page_prune(onerel, buf, OldestXmin, false,
+							&vacrelstats->latestRemovedXid,
+							&vacrelstats->freed_bytes);
 
 		/*
 		 * Now scan the page to collect vacuumable items and check for tuples
@@ -1022,12 +1027,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 
 			ItemPointerSet(&(tuple.t_self), blkno, offnum);
 
-			/*
-			 * DEAD item pointers are to be vacuumed normally; but we don't
-			 * count them in tups_vacuumed, else we'd be double-counting (at
-			 * least in the common case where heap_page_prune() just freed up
-			 * a non-HOT tuple).
-			 */
+			/* DEAD item pointers are to be vacuumed normally */
 			if (ItemIdIsDead(itemid))
 			{
 				lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
@@ -1068,10 +1068,10 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 					 *
 					 * If the tuple is HOT-updated then it must only be
 					 * removed by a prune operation; so we keep it just as if
-					 * it were RECENTLY_DEAD.  Also, if it's a heap-only
-					 * tuple, we choose to keep it, because it'll be a lot
-					 * cheaper to get rid of it in the next pruning pass than
-					 * to treat it like an indexed tuple.
+					 * it were RECENTLY_DEAD.  Also, if it's a heap-only tuple
+					 * or index cleanup is disabled, we choose to keep it,
+					 * because it'll be a lot cheaper to get rid of it in the
+					 * next pruning pass than to treat it like an indexed tuple.
 					 *
 					 * If this were to happen for a tuple that actually needed
 					 * to be deleted, we'd be in trouble, because it'd
@@ -1081,20 +1081,11 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 					 * preventing corruption.
 					 */
 					if (HeapTupleIsHotUpdated(&tuple) ||
-						HeapTupleIsHeapOnly(&tuple))
+						HeapTupleIsHeapOnly(&tuple) ||
+						params->index_cleanup == VACOPT_TERNARY_DISABLED)
 						nkeep += 1;
 					else
-					{
 						tupgone = true; /* we can delete the tuple */
-
-						/*
-						 * Since this dead tuple will not be vacuumed and
-						 * ignored when index cleanup is disabled we count
-						 * count it for reporting.
-						 */
-						if (params->index_cleanup == VACOPT_TERNARY_ENABLED)
-							nleft_dead_tuples++;
-					}
 					all_visible = false;
 					break;
 				case HEAPTUPLE_LIVE:
@@ -1183,7 +1174,6 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 				lazy_record_dead_tuple(vacrelstats, &(tuple.t_self));
 				HeapTupleHeaderAdvanceLatestRemovedXid(tuple.t_data,
 													   &vacrelstats->latestRemovedXid);
-				tups_vacuumed += 1;
 				has_dead_tuples = true;
 			}
 			else
@@ -1262,16 +1252,33 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 			else
 			{
 				/*
-				 * Here, we have indexes but index cleanup is disabled. Instead of
-				 * vacuuming the dead tuples on the heap, we just forget them.
-				 *
-				 * Note that vacrelstats->dead_tuples could have tuples which
-				 * became dead after HOT-pruning but are not marked dead yet.
-				 * We do not process them because it's a very rare condition, and
-				 * the next vacuum will process them anyway.
+				 * Here, we have indexes but index cleanup is disabled and
+				 * vacrelstats->dead_tuples holds only items marked as DEAD.
+				 * Instead of vacuuming the dead tuples on the heap, we forget
+				 * them and count the total bytes of dead line pointers.
 				 */
+#ifdef USE_ASSERT_CHECKING
 				Assert(params->index_cleanup == VACOPT_TERNARY_DISABLED);
-				nleft_dead_itemids += vacrelstats->num_dead_tuples;
+
+				/*
+				 * Check whether vacrelstats->dead_tuples has only tuples
+				 * marked as dead.
+				 */
+				{
+					Page	page = BufferGetPage(buf);
+					int i;
+					for (i = 0; i < vacrelstats->num_dead_tuples; i++)
+					{
+						OffsetNumber offset =
+							ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[i]);
+						ItemId lp = PageGetItemId(page, offset);
+
+						Assert(ItemIdIsDead(lp));
+					}
+				}
+#endif
+				vacrelstats->left_lp_bytes +=
+					(Size) (vacrelstats->num_dead_tuples * sizeof(ItemIdData));
 			}
 
 			/*
@@ -1398,9 +1405,9 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 			RecordPageWithFreeSpace(onerel, blkno, freespace, nblocks);
 	}
 
-	/* No dead tuples should be left if index cleanup is enabled */
+	/* No dead line pointers should be left if index cleanup is enabled */
 	Assert((params->index_cleanup == VACOPT_TERNARY_ENABLED &&
-			nleft_dead_tuples == 0 && nleft_dead_itemids == 0) ||
+			vacrelstats->left_lp_bytes == 0) ||
 		   params->index_cleanup == VACOPT_TERNARY_DISABLED);
 
 	/* report that everything is scanned and vacuumed */
@@ -1409,10 +1416,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	pfree(frozen);
 
 	/* save stats for use later */
-	vacrelstats->tuples_deleted = tups_vacuumed;
-	vacrelstats->new_dead_tuples = nkeep + nleft_dead_tuples;
-	vacrelstats->nleft_dead_tuples = nleft_dead_tuples;
-	vacrelstats->nleft_dead_itemids = nleft_dead_itemids;
+	vacrelstats->new_dead_tuples = nkeep;
 
 	/* now we can compute the new value for pg_class.reltuples */
 	vacrelstats->new_live_tuples = vac_estimate_reltuples(onerel,
@@ -1492,7 +1496,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 		ereport(elevel,
 				(errmsg("\"%s\": removed %.0f row versions in %u pages",
 						RelationGetRelationName(onerel),
-						tups_vacuumed, vacuumed_pages)));
+						vacrelstats->tuples_deleted, vacuumed_pages)));
 
 	/*
 	 * This is pretty messy, but we split it up so that we can skip emitting
@@ -1500,6 +1504,9 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 	 */
 	initStringInfo(&buf);
 	appendStringInfo(&buf,
+					 _("found %.0f nonremovable row versions in %u out of %u pages.\n"),
+					   num_tuples, vacrelstats->scanned_pages, nblocks);
+	appendStringInfo(&buf,
 					 _("%.0f dead row versions cannot be removed yet, oldest xmin: %u\n"),
 					 nkeep, OldestXmin);
 	appendStringInfo(&buf, _("There were %.0f unused item pointers.\n"),
@@ -1516,16 +1523,24 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,
 									"%u pages are entirely empty.\n",
 									empty_pages),
 					 empty_pages);
-	appendStringInfo(&buf, "%.0f tuples and %.0f item identifiers are left as dead.\n",
-					 nleft_dead_tuples, nleft_dead_itemids);
 	appendStringInfo(&buf, _("%s."), pg_rusage_show(&ru0));
 
-	ereport(elevel,
-			(errmsg("\"%s\": found %.0f removable, %.0f nonremovable row versions in %u out of %u pages",
-					RelationGetRelationName(onerel),
-					tups_vacuumed, num_tuples,
-					vacrelstats->scanned_pages, nblocks),
-			 errdetail_internal("%s", buf.data)));
+	if (vacrelstats->left_lp_bytes > 0)
+		ereport(elevel,
+				(errmsg("\"%s\": %s freed, %s can be freed after index cleanup, table size: %s",
+						RelationGetRelationName(onerel),
+						get_pretty_size(vacrelstats->freed_bytes),
+						get_pretty_size(vacrelstats->left_lp_bytes),
+						get_pretty_size(nblocks * BLCKSZ)),
+				 errdetail_internal("%s", buf.data)));
+	else
+		ereport(elevel,
+				(errmsg("\"%s\": %s freed, table size: %s",
+						RelationGetRelationName(onerel),
+						get_pretty_size(vacrelstats->freed_bytes),
+						get_pretty_size(nblocks * BLCKSZ)),
+				 errdetail_internal("%s", buf.data)));
+
 	pfree(buf.data);
 }
 
@@ -1632,6 +1647,11 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 			break;				/* past end of tuples for this block */
 		toff = ItemPointerGetOffsetNumber(&vacrelstats->dead_tuples[tupindex]);
 		itemid = PageGetItemId(page, toff);
+
+		/* Count the number of bytes we freed */
+		vacrelstats->freed_bytes +=
+				ItemIdGetLength(itemid) + sizeof(ItemIdData);
+
 		ItemIdSetUnused(itemid);
 		unused[uncnt++] = toff;
 	}
@@ -1663,6 +1683,9 @@ lazy_vacuum_page(Relation onerel, BlockNumber blkno, Buffer buffer,
 	 */
 	END_CRIT_SECTION();
 
+	/* Count tuples marked as unused */
+	vacrelstats->tuples_deleted += uncnt;
+
 	/*
 	 * Now that we have removed the dead tuples from the page, once again
 	 * check if the page has become all-visible.  The page is already marked
@@ -2375,3 +2398,15 @@ heap_page_is_all_visible(Relation rel, Buffer buf,
 
 	return all_visible;
 }
+
+/* Return the given size formatted as a human-readable string with units */
+static char *
+get_pretty_size(Size size)
+{
+	Datum size_text;
+
+	size_text = DirectFunctionCall1(pg_size_pretty,
+									Int64GetDatum(size));
+
+	return text_to_cstring(DatumGetTextP(size_text));
+}
diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 77e5e60..2c9e15e 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -184,8 +184,9 @@ extern TransactionId heap_compute_xid_horizon_for_tuples(Relation rel,
 extern void heap_page_prune_opt(Relation relation, Buffer buffer);
 extern int heap_page_prune(Relation relation, Buffer buffer,
 				TransactionId OldestXmin,
-				bool report_stats, TransactionId *latestRemovedXid);
-extern void heap_page_prune_execute(Buffer buffer,
+				bool report_stats, TransactionId *latestRemovedXid,
+				Size *freed_bytes);
+extern Size heap_page_prune_execute(Buffer buffer,
 						OffsetNumber *redirected, int nredirected,
 						OffsetNumber *nowdead, int ndead,
 						OffsetNumber *nowunused, int nunused);