Re: [PATCHES] WIP: pl/pgsql cleanup

2005-02-08 Thread Tom Lane
Neil Conway <[EMAIL PROTECTED]> writes:
> BTW, both of our fixes suffer from the deficiency that they will
> actually reject input one variable too early: we disallow a SQL
> statement with 1023 variables that we strictly speaking could store.

Right.  I thought about putting the overflow checks inside the switches
so that they wouldn't trigger on the case where we don't need another
variable ... but it doesn't seem worth the extra code to me either.
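To illustrate, here is a standalone sketch of the two placements (the
names -- MAX_SQL_PARAMS, param_dnos, assign_param_* -- are made up for
the example and are not the actual pl/pgsql grammar code):

/*
 * Hypothetical sketch of where the capacity check can live when
 * assigning $n parameter numbers to variable references.
 */
#include <stdio.h>

#define MAX_SQL_PARAMS 8		/* stand-in for the real limit */

static int	param_dnos[MAX_SQL_PARAMS];
static int	nparams = 0;

/*
 * Check done before we know whether a new slot is needed: simpler, but
 * it rejects a statement whose next reference reuses a variable we have
 * already assigned, even though no new slot would be consumed.
 */
static int
assign_param_check_first(int dno)
{
	int		i;

	if (nparams >= MAX_SQL_PARAMS)
		return -1;				/* "too many variables" */

	for (i = 0; i < nparams; i++)
		if (param_dnos[i] == dno)
			return i + 1;		/* reuse existing $n */

	param_dnos[nparams] = dno;
	return ++nparams;			/* assign a new $n */
}

/*
 * Check done only where a slot is actually consumed: exact, at the cost
 * of repeating the check in every place that can add a slot.
 */
static int
assign_param_check_on_add(int dno)
{
	int		i;

	for (i = 0; i < nparams; i++)
		if (param_dnos[i] == dno)
			return i + 1;

	if (nparams >= MAX_SQL_PARAMS)
		return -1;

	param_dnos[nparams] = dno;
	return ++nparams;
}

int
main(void)
{
	int		i;

	/* fill every slot with a distinct variable ... */
	for (i = 0; i < MAX_SQL_PARAMS; i++)
		assign_param_check_on_add(i);

	/* ... then reference the first variable again */
	printf("check first:  %d\n", assign_param_check_first(0));	/* -1, rejected */
	printf("check on add: %d\n", assign_param_check_on_add(0));	/* 1, slot reused */
	return 0;
}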

regards, tom lane



Re: [PATCHES] WIP: pl/pgsql cleanup

2005-02-08 Thread Neil Conway
On Tue, 2005-02-08 at 13:25 -0500, Tom Lane wrote:
> When you apply this, please put the remaining array-overflow checks
> before the overflow occurs, not after.

Actually, my original fix _does_ check for the overflow before it
occurs. ISTM both fixes are essentially identical, although yours may be
preferable as it is slightly less confusing.

BTW, both of our fixes suffer from the deficiency that they will
actually reject input one variable too early: we disallow a SQL
statement with 1023 variables that we strictly speaking could store. But
I don't see that as a major problem.

-Neil





[PATCHES] memory leak in ALTER TABLE

2005-02-08 Thread Neil Conway
ALTER TABLE ADD COLUMN exhibits a significant memory leak when adding a
column with a default expression. In that situation, we need to rewrite
the heap relation. To evaluate the new default expression, we use
ExecEvalExpr(); however, this can allocate memory in the current memory
context, and ATRewriteTable() does not switch out of the active portal's
heap memory context. The end result is a rather large memory leak (on
the order of gigabytes for a reasonably sized table). To repro, just
create a large table (a few hundred megabytes), and add a serial column
to it.

This patch changes ATRewriteTable() to switch to the per-tuple memory
context before beginning the per-tuple loop. It also removes an explicit
heap_freetuple() in the loop, since that is no longer needed.

In an unrelated change, I noticed the code was scanning through the
attributes of the new tuple descriptor once per tuple of the old table,
looking for dropped columns. I changed this to precompute the list of
dropped attributes once, before the per-tuple loop.
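
In outline, the rewrite loop looks like this after the patch (a
distilled sketch, with declarations, the default-expression evaluation,
and the constraint checks elided; the actual change is in the diff
below):

	oldCxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));

	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
	{
		if (newrel)
		{
			heap_deformtuple(tuple, oldTupDesc, values, nulls);

			/* dropped columns were collected into dropped_attrs before the loop */
			foreach(lc, dropped_attrs)
				nulls[lfirst_int(lc)] = 'n';

			/*
			 * ... evaluate the new default expressions into values/nulls;
			 * anything they allocate goes into the per-tuple context ...
			 */

			/* not explicitly freed; the context reset below handles it */
			tuple = heap_formtuple(newTupDesc, values, nulls);
		}

		/* ... per-tuple constraint checks ... */

		if (newrel)
			simple_heap_insert(newrel, tuple);

		/* frees everything allocated while processing this tuple */
		ResetExprContext(econtext);

		CHECK_FOR_INTERRUPTS();
	}

	MemoryContextSwitchTo(oldCxt);
	heap_endscan(scan);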

Barring any objections, I will apply this to HEAD and REL8_0_STABLE
tomorrow.

-Neil

Index: src/backend/commands/tablecmds.c
===================================================================
RCS file: /var/lib/cvs/pgsql/src/backend/commands/tablecmds.c,v
retrieving revision 1.145
diff -c -r1.145 tablecmds.c
*** src/backend/commands/tablecmds.c	27 Jan 2005 23:23:55 -	1.145
--- src/backend/commands/tablecmds.c	9 Feb 2005 04:34:18 -
***************
*** 2460,2465 ****
--- 2460,2468 ----
  		TupleTableSlot *newslot;
  		HeapScanDesc scan;
  		HeapTuple	tuple;
+ 		MemoryContext oldCxt;
+ 		List *dropped_attrs = NIL;
+ 		ListCell *lc;
  
  		econtext = GetPerTupleExprContext(estate);
  
***************
*** 2481,2508 ****
  		memset(nulls, 'n', i * sizeof(char));
  
  		/*
  		 * Scan through the rows, generating a new row if needed and then
  		 * checking all the constraints.
  		 */
  		scan = heap_beginscan(oldrel, SnapshotNow, 0, NULL);
  
  		while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
  		{
  			if (newrel)
  			{
! /*
!  * Extract data from old tuple.  We can force to null any
!  * columns that are deleted according to the new tuple.
!  */
! int			natts = newTupDesc->natts;
! 
  heap_deformtuple(tuple, oldTupDesc, values, nulls);
  
! for (i = 0; i < natts; i++)
! {
! 	if (newTupDesc->attrs[i]->attisdropped)
! 		nulls[i] = 'n';
! }
  
  /*
   * Process supplied expressions to replace selected
--- 2484,2522 ----
  		memset(nulls, 'n', i * sizeof(char));
  
  		/*
+ 		 * Any attributes that are dropped according to the new tuple
+ 		 * descriptor can be set to NULL. We precompute the list of
+ 		 * dropped attributes to avoid needing to do so in the
+ 		 * per-tuple loop.
+ 		 */
+ 		for (i = 0; i < newTupDesc->natts; i++)
+ 		{
+ 			if (newTupDesc->attrs[i]->attisdropped)
+ 				dropped_attrs = lappend_int(dropped_attrs, i);
+ 		}
+ 
+ 		/*
  		 * Scan through the rows, generating a new row if needed and then
  		 * checking all the constraints.
  		 */
  		scan = heap_beginscan(oldrel, SnapshotNow, 0, NULL);
  
+ 		/*
+ 		 * Switch to per-tuple memory context and reset it for each
+ 		 * tuple produced, so we don't leak memory.
+ 		 */
+ 		oldCxt = MemoryContextSwitchTo(GetPerTupleMemoryContext(estate));
+ 
  		while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
  		{
  			if (newrel)
  			{
! /* Extract data from old tuple */
  heap_deformtuple(tuple, oldTupDesc, values, nulls);
  
! /* Set dropped attributes to null in new tuple */
! foreach (lc, dropped_attrs)
! 	nulls[lfirst_int(lc)] = 'n';
  
  /*
   * Process supplied expressions to replace selected
***************
*** 2526,2531 ****
--- 2540,2550 ----
  		nulls[ex->attnum - 1] = ' ';
  }
  
+ /*
+  * Form the new tuple. Note that we don't explicitly
+  * pfree it, since the per-tuple memory context will
+  * be reset shortly.
+  */
  tuple = heap_formtuple(newTupDesc, values, nulls);
  			}
  
***************
*** 2572,2588 ****
  
  			/* Write the tuple out to the new relation */
  			if (newrel)
- 			{
  simple_heap_insert(newrel, tuple);
  
- heap_freetuple(tuple);
- 			}
- 
  			ResetExprContext(econtext);
  
  			CHECK_FOR_INTERRUPTS();
  		}
  
  		heap_endscan(scan);
  	}
  
--- 2591,2604 ----
  
  			/* Write the tuple out to the new relation */
  			if (newrel)
  simple_heap_insert(newrel, tuple);
  
  			ResetExprContext(econtext);
  
  			CHECK_FOR_INTERRUPTS();
  		}
  
+ 		MemoryContextSwitchTo(oldCxt);
  		heap_endscan(scan);
  	}
  
