The following is the first phase of a checkin to address DERBY-132, an
improvement request to add an in-place compress.


This checkin addresses the issue in four parts:

1) Code has been added to purge committed deleted rows from heap tables.
   This code uses the same basic algorithm that currently exists to
   identify and purge committed deleted heap rows.  It scans the entire
   heap table in the current thread, processing all rows.
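For illustration, the purge pass can be sketched with a toy model.  Names
like Slot and purgePage are invented for this sketch and are not the actual
store classes; the real code works against latched pages and row locks, as
in the HeapController.purgeCommittedDeletes code in the diff below:

```java
import java.util.List;

class PurgeSketch {
    // Toy model of a page slot: a row payload plus a committed-deleted flag.
    static class Slot {
        final String row;
        final boolean committedDeleted;
        Slot(String row, boolean committedDeleted) {
            this.row = row;
            this.committedDeleted = committedDeleted;
        }
    }

    // Walk the slot table backward so that each purge only shifts slots
    // that have already been visited, mirroring the loop direction used
    // in the real purge code.
    static boolean purgePage(List<Slot> page) {
        boolean purgingDone = false;
        for (int slot = page.size() - 1; slot >= 0; slot--) {
            if (page.get(slot).committedDeleted) {
                page.remove(slot);   // stands in for page.purgeAtSlot(...)
                purgingDone = true;
            }
        }
        return purgingDone;
    }
}
```

The backward walk is the key detail: purging a slot compacts the slot
table, so a forward walk would skip rows.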

2) Code has been added to defragment the remaining rows in the table,
   freeing pages at the end of the table.
   This code scans the table and moves rows from the end of the heap table
   toward the front.  For every row moved, all index entries must be
   updated after the move.  The allocation system is updated to place new
   rows toward the front of the table.  After it is finished there will be
   a chunk of empty pages at the end of the file.
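A rough sketch of the defragment pass, with pages modeled as simple lists.
This is illustrative only: defragment, the page model, and the
"row: old -> new" move records are invented for the sketch, whereas the
real code returns old and new RowLocations so callers can fix up indexes:

```java
import java.util.ArrayList;
import java.util.List;

class DefragSketch {
    // Moves rows off the tail pages into free slots near the front, and
    // records "row: oldPage -> newPage" so the caller can update indexes.
    static List<String> defragment(List<List<String>> pages, int pageCapacity) {
        List<String> moves = new ArrayList<>();
        int front = 0;
        for (int tail = pages.size() - 1; tail > 0; tail--) {
            List<String> src = pages.get(tail);
            while (!src.isEmpty()) {
                // advance past pages that are already full
                while (front < tail && pages.get(front).size() >= pageCapacity) {
                    front++;
                }
                if (front >= tail) {
                    return moves; // no free slot ahead of the tail page
                }
                String row = src.remove(src.size() - 1);
                pages.get(front).add(row);
                moves.add(row + ": " + tail + " -> " + front);
            }
        }
        return moves;
    }
}
```

When the loop finishes, the emptied pages form a contiguous chunk at the
end of the file, which is exactly what phase 3 needs.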

3) Code has been added to return the free space at the end of the table
   back to the OS.
   This code finds the chunk of empty pages at the end of the file,
   updates the allocation data structures to remove that chunk, and calls
   the JVM to return the space at the end of the file back to the OS.
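The truncation step reduces to finding the trailing run of empty pages; a
minimal sketch (truncatedPageCount is a made-up helper, not store code):

```java
import java.util.List;

class TruncateSketch {
    // Given per-page record counts, return how many leading pages must be
    // kept; everything after that index is an empty trailing page that can
    // be dropped from the allocation maps and returned to the OS.
    static int truncatedPageCount(List<Integer> recordCounts) {
        int keep = recordCounts.size();
        while (keep > 0 && recordCounts.get(keep - 1) == 0) {
            keep--;
        }
        return keep;
    }
}
```

Note that only the trailing run counts: an empty page in the middle of the
file cannot be returned to the OS until defragmentation moves it to the end.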

4) In order to test all of these paths, a new system procedure,
   SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE, has been added.  Eventually,
   to support a zero-admin database, the system should call this routine
   internally.  It allows each of the 3 phases described above to be
   called either individually or in sequence.
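The new procedure can be invoked over JDBC; a hedged sketch (the database
URL, schema, and table names are illustrative, and buildCall is a helper
invented for this example):

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

class CompressExample {
    // Builds the CALL text for the new system procedure; the three
    // SMALLINT flags select which of the phases to run.
    static String buildCall(boolean purgeRows, boolean defragmentRows,
                            boolean truncateEnd) {
        return "CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(?, ?, "
            + (purgeRows ? 1 : 0) + ", "
            + (defragmentRows ? 1 : 0) + ", "
            + (truncateEnd ? 1 : 0) + ")";
    }

    public static void main(String[] args) throws Exception {
        String sql = buildCall(true, true, true);
        System.out.println(sql);
        if (args.length > 0) {
            // With a live embedded database (URL illustrative):
            Connection conn =
                DriverManager.getConnection("jdbc:derby:" + args[0]);
            CallableStatement cs = conn.prepareCall(sql);
            cs.setString(1, "APP");
            cs.setString(2, "MYTABLE");
            cs.execute();
            cs.close();
            conn.close();
        }
    }
}
```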

This checkin has passed all existing derby tests, and passes some new tests
that have been added.  There is more work to do, which I will be doing
right away.

Short term TODO (next month or so):
1) Much more extensive testing.
2) Add btree purge support.
3) Add btree free space compress support.
4) Improve concurrency of the reclaim paths; allow for background
   processing with locks held for a very short time - on the order of the
   time it takes to process a page's worth of data.


Longer term (not sure when):
1) Add btree defragment support.
2) Automatically call the compressor.
Index: java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java
===================================================================
--- java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java       (revision 160236)
+++ java/engine/org/apache/derby/impl/sql/catalog/DataDictionaryImpl.java       (working copy)
@@ -8430,7 +8430,7 @@
                 tc);
         }
 
-        // SMALLINT SYSCS_UTIL.SYSCS_CHECK_TABLE(varchar(128))
+        // SMALLINT SYSCS_UTIL.SYSCS_CHECK_TABLE(varchar(128), varchar(128))
         {
             // procedure argument names
             String[] arg_names = {"SCHEMANAME", "TABLENAME"};
@@ -8479,6 +8479,48 @@
                 tc);
         }
 
+        // void SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(
+        //     IN SCHEMANAME        VARCHAR(128), 
+        //     IN TABLENAME         VARCHAR(128),
+        //     IN PURGE_ROWS        SMALLINT,
+        //     IN DEFRAGMENT_ROWS   SMALLINT,
+        //     IN TRUNCATE_END      SMALLINT
+        //     )
+        {
+            // procedure argument names
+            String[] arg_names = {
+                "SCHEMANAME", 
+                "TABLENAME", 
+                "PURGE_ROWS", 
+                "DEFRAGMENT_ROWS", 
+                "TRUNCATE_END"};
+
+            // procedure argument types
+            TypeDescriptor[] arg_types = {
+                DataTypeDescriptor.getBuiltInDataTypeDescriptor(
+                    Types.VARCHAR, 128),
+                DataTypeDescriptor.getBuiltInDataTypeDescriptor(
+                    Types.VARCHAR, 128),
+                DataTypeDescriptor.getBuiltInDataTypeDescriptor(
+                    Types.SMALLINT),
+                DataTypeDescriptor.getBuiltInDataTypeDescriptor(
+                    Types.SMALLINT),
+                DataTypeDescriptor.getBuiltInDataTypeDescriptor(
+                    Types.SMALLINT)
+            };
+
+            createSystemProcedureOrFunction(
+                "SYSCS_INPLACE_COMPRESS_TABLE",
+                sysUtilUUID,
+                arg_names,
+                arg_types,
+                0,
+                0,
+                RoutineAliasInfo.MODIFIES_SQL_DATA,
+                (TypeDescriptor) null,
+                tc);
+        }
+
         /*
                ** SQLJ routine.
                */
Index: java/engine/org/apache/derby/impl/store/access/conglomerate/OpenConglomerate.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/conglomerate/OpenConglomerate.java   (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/conglomerate/OpenConglomerate.java   (working copy)
@@ -141,7 +141,6 @@
      *     unlockPositionAfterRead(RowPosition)
      **************************************************************************
      */
-
     /**
      * Latch the page containing the current RowPosition, and reposition scan.
      * <p>
@@ -166,7 +165,7 @@
      *
         * @exception  StandardException  Standard exception policy.
      **/
-    protected boolean latchPageAndRepositionScan(RowPosition pos)
+    public boolean latchPageAndRepositionScan(RowPosition pos)
                throws StandardException
     {
         boolean scan_repositioned = false;
@@ -176,8 +175,11 @@
 
         try
         {
-            pos.current_page = 
-                container.getPage(pos.current_rh.getPageNumber());
+            if (pos.current_rh != null)
+            {
+                pos.current_page = 
+                    container.getPage(pos.current_rh.getPageNumber());
+            }
 
         }
         catch (Throwable t)
@@ -243,11 +245,29 @@
         if (pos.current_page == null)
         {
             // position on the next page.
-            pos.current_page = 
-                container.getNextPage(pos.current_rh.getPageNumber());
+            long current_pageno;
 
-            pos.current_slot = Page.FIRST_SLOT_NUMBER - 1;
+            if (pos.current_rh != null)
+            {
+                current_pageno = pos.current_rh.getPageNumber();
+            }
+            else if (pos.current_pageno != ContainerHandle.INVALID_PAGE_NUMBER)
+            {
+                current_pageno = pos.current_pageno;
+            }
+            else
+            {
+                // no valid position, return a null page
+                return(false);
+            }
 
+            pos.current_page = container.getNextPage(current_pageno);
+
+            pos.current_slot   = Page.FIRST_SLOT_NUMBER - 1;
+
+            // now position is tracked by active page
+            pos.current_pageno = ContainerHandle.INVALID_PAGE_NUMBER;
+
             scan_repositioned = true;
         }
 
Index: java/engine/org/apache/derby/impl/store/access/conglomerate/GenericScanController.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/conglomerate/GenericScanController.java       (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/conglomerate/GenericScanController.java       (working copy)
@@ -833,7 +833,6 @@
                return(ret_row_count);
     }
 
-
     /**
     Reposition the current scan.  This call is semantically the same as if
     the current scan had been closed and a openScan() had been called instead.
@@ -888,7 +887,6 @@
                 SQLState.HEAP_UNIMPLEMENTED_FEATURE));
     }
 
-
     /**************************************************************************
      * abstract protected Methods of This class:
      **************************************************************************
@@ -1017,6 +1015,11 @@
     boolean closeHeldScan)
         throws StandardException
        {
+        if (SanityManager.DEBUG)
+        {
+            SanityManager.DEBUG_PRINT(
+                "GenericScanController.closeForEndTransaction",
+                "closeHeldScan = " + closeHeldScan +
+                "; open_conglom.getHold() = " + open_conglom.getHold());
+        }
+
         if ((!open_conglom.getHold()) || closeHeldScan) 
         {
             // close the scan as part of the commit/abort
Index: java/engine/org/apache/derby/impl/store/access/conglomerate/RowPosition.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/conglomerate/RowPosition.java     (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/conglomerate/RowPosition.java     (working copy)
@@ -22,6 +22,7 @@
 
 import org.apache.derby.iapi.services.sanity.SanityManager;
 
+import org.apache.derby.iapi.store.raw.ContainerHandle;
 import org.apache.derby.iapi.store.raw.Page;
 import org.apache.derby.iapi.store.raw.RecordHandle;
 
@@ -42,6 +43,7 @@
     public RecordHandle    current_rh;
     public int             current_slot;
     public boolean         current_rh_qualified;
+    public long            current_pageno;
 
     /**************************************************************************
      * Constructors for This class:
@@ -67,6 +69,7 @@
         current_rh              = null;
         current_slot            = Page.INVALID_SLOT_NUMBER;
         current_rh_qualified    = false;
+        current_pageno          = ContainerHandle.INVALID_PAGE_NUMBER;
     }
 
     public final void positionAtNextSlot()
@@ -75,6 +78,12 @@
         current_rh   = null;
     }
 
+    public final void positionAtPrevSlot()
+    {
+        current_slot--;
+        current_rh   = null;
+    }
+
     public void unlatch()
     {
         if (current_page != null)
@@ -94,7 +103,12 @@
             ret_string = 
                 ";current_slot=" + current_slot +
                 ";current_rh=" + current_rh +
-                ";current_page=" + current_page;
+                ";current_pageno=" + current_pageno +
+                ";current_page=" + 
+                    (current_page == null ? 
+                         "null" : 
+                         String.valueOf(current_page.getPageNumber()));
         }
 
         return(ret_string);
Index: java/engine/org/apache/derby/impl/store/access/sort/Scan.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/sort/Scan.java       (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/sort/Scan.java       (working copy)
@@ -98,6 +98,17 @@
                 SQLState.SORT_IMPROPER_SCAN_METHOD);
     }
 
+    public int fetchNextGroup(
+    DataValueDescriptor[][]     row_array,
+    RowLocation[]               old_rowloc_array,
+    RowLocation[]               new_rowloc_array)
+        throws StandardException
+    {
+        throw StandardException.newException(
+                SQLState.SORT_IMPROPER_SCAN_METHOD);
+    }
+
+
     /**
      * Insert all rows that qualify for the current scan into the input
      * Hash table.  
Index: java/engine/org/apache/derby/impl/store/access/btree/BTreeController.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/btree/BTreeController.java   (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/btree/BTreeController.java   (working copy)
@@ -1018,11 +1018,18 @@
          throws StandardException
     {
 
-               if (this.container == null)       
-               {
-            throw StandardException.newException(
-                        SQLState.BTREE_IS_CLOSED,
-                        new Long(err_containerid));
+               if (isClosed())
+        {
+            if (getHold())
+            {
+                reopen();
+            }
+            else
+            {
+                throw StandardException.newException(
+                            SQLState.BTREE_IS_CLOSED,
+                            new Long(err_containerid));
+            } 
         }
 
         if (SanityManager.DEBUG)
Index: java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/btree/BTreeScan.java (working copy)
@@ -1838,6 +1838,19 @@
                 (int[]) null));
     }
 
+    public int fetchNextGroup(
+    DataValueDescriptor[][] row_array,
+    RowLocation[]           old_rowloc_array,
+    RowLocation[]           new_rowloc_array)
+        throws StandardException
+       {
+        // This interface is currently only used to move rows around in
+        // a heap table; it is unused in btrees, so it is not implemented.
+
+        throw StandardException.newException(
+                SQLState.BTREE_UNIMPLEMENTED_FEATURE);
+    }
+
     /**
      * Insert all rows that qualify for the current scan into the input
      * Hash table.  
Index: java/engine/org/apache/derby/impl/store/access/btree/index/B2I.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/btree/index/B2I.java (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/btree/index/B2I.java (working copy)
@@ -755,6 +755,48 @@
        }
 
     /**
+     * Open a b-tree defragment scan.
+     * <p>
+     * B2I does not support a defragment scan.
+     * <p>
+     * @see Conglomerate#defragmentConglomerate
+     *
+        * @exception  StandardException  Standard exception policy.
+     **/
+       public ScanManager defragmentConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran,
+    boolean                         hold,
+    int                             open_mode,
+    int                             lock_level,
+    LockingPolicy                   locking_policy,
+    int                             isolation_level)
+                       throws StandardException
+       {
+        throw StandardException.newException(
+            SQLState.BTREE_UNIMPLEMENTED_FEATURE);
+       }
+
+       public void purgeConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException
+    {
+        // currently no work to do in btrees for purging rows; purging
+        // happens best when a split is about to happen.
+        return;
+    }
+
+       public void compressConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException
+    {
+        // TODO - need to implement for btree
+        return;
+    }
+
+    /**
      * Return an open StoreCostController for the conglomerate.
      * <p>
      * Return an open StoreCostController which can be used to ask about 
Index: java/engine/org/apache/derby/impl/store/access/RAMTransaction.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/RAMTransaction.java  (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/RAMTransaction.java  (working copy)
@@ -1437,6 +1437,134 @@
        }
 
 
+
+    /**
+     * Purge all committed deleted rows from the conglomerate.
+     * <p>
+     * This call will purge committed deleted rows from the conglomerate,
+     * that space will be available for future inserts into the conglomerate.
+     * <p>
+     *
+     * @param conglomId Id of the conglomerate to purge.
+     *
+        * @exception  StandardException  Standard exception policy.
+     **/
+    public void purgeConglomerate(
+    long    conglomId)
+        throws StandardException
+    {
+        findExistingConglomerate(conglomId).purgeConglomerate(
+            this, 
+            rawtran);
+
+               return;
+    }
+
+    /**
+     * Return free space from the conglomerate back to the OS.
+     * <p>
+     * Returns free space from the conglomerate back to the OS.  Currently
+     * only the sequential free pages at the "end" of the conglomerate can
+     * be returned to the OS.
+     * <p>
+     *
+     * @param conglomId Id of the conglomerate to compress.
+     *
+        * @exception  StandardException  Standard exception policy.
+     **/
+    public void compressConglomerate(
+    long    conglomId)
+        throws StandardException
+    {
+        findExistingConglomerate(conglomId).compressConglomerate(
+            this, 
+            rawtran); 
+
+               return;
+    }
+
+    /**
+     * Compress table in place
+     * <p>
+     * Returns a GroupFetchScanController which can be used to move rows
+     * around in a table, creating a block of free pages at the end of the
+     * table.  The process will move rows from the end of the table toward
+     * the beginning.  The GroupFetchScanController will return the 
+     * old row location, the new row location, and the actual data of any
+     * row moved.  Note that this scan only returns moved rows, not an
+     * entire set of rows; the scan is designed specifically to be
+     * used either by an explicit user call of the
+     * SYSCS_INPLACE_COMPRESS_TABLE() procedure, or by internal background
+     * calls to compress the table.
+     *
+     * The old and new row locations are returned so that the caller can
+     * update any indexes necessary.
+     *
+     * This scan always returns all columns of the row.
+     * 
+     * All inputs work exactly as in openScan().  The return is 
+     * a GroupFetchScanController, which only allows fetches of groups
+     * of rows from the conglomerate.
+     * <p>
+     *
+        * @return The GroupFetchScanController to be used to fetch the rows.
+     *
+        * @param conglomId             see openScan()
+     * @param hold                  see openScan()
+     * @param open_mode             see openScan()
+     * @param lock_level            see openScan()
+     * @param isolation_level       see openScan()
+     *
+        * @exception  StandardException  Standard exception policy.
+     *
+     * @see ScanController
+     * @see GroupFetchScanController
+     **/
+       public GroupFetchScanController defragmentConglomerate(
+    long                            conglomId,
+    boolean                         online,
+    boolean                         hold,
+    int                             open_mode,
+    int                             lock_level,
+    int                             isolation_level)
+        throws StandardException
+       {
+        if (SanityManager.DEBUG)
+        {
+                       if ((open_mode & 
+                ~(TransactionController.OPENMODE_FORUPDATE | 
+                  TransactionController.OPENMODE_FOR_LOCK_ONLY |
+                  TransactionController.OPENMODE_SECONDARY_LOCKED)) != 0)
+                               SanityManager.THROWASSERT(
+                                       "Bad open mode to openScan:" + 
+                    Integer.toHexString(open_mode));
+
+                       if (!(lock_level == MODE_RECORD |
+                 lock_level == MODE_TABLE))
+                               SanityManager.THROWASSERT(
+                "Bad lock level to openScan:" + lock_level);
+        }
+
+               // Find the conglomerate.
+               Conglomerate conglom = findExistingConglomerate(conglomId);
+
+               // Get a scan controller.
+               ScanManager sm = 
+            conglom.defragmentConglomerate(
+                this, 
+                rawtran, 
+                hold, 
+                open_mode, 
+                determine_lock_level(lock_level),
+                determine_locking_policy(lock_level, isolation_level),
+                isolation_level);
+
+               // Keep track of it so we can release on close.
+               scanControllers.addElement(sm);
+
+               return(sm);
+       }
+
+
        public ScanController openScan(
     long                            conglomId,
     boolean                         hold,
@@ -1468,9 +1596,6 @@
                 (DynamicCompiledOpenConglomInfo) null));
     }
 
-
-
-
        public ScanController openCompiledScan(
     boolean                         hold,
     int                             open_mode,
Index: java/engine/org/apache/derby/impl/store/access/heap/HeapController.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/heap/HeapController.java     (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/heap/HeapController.java     (working copy)
@@ -70,7 +70,7 @@
 
     /**************************************************************************
      * Protected concrete impl of abstract methods of 
-     *     GenericCongloemrateController class:
+     *     GenericConglomerateController class:
      **************************************************************************
      */
     protected final void getRowPositionFromRowLocation(
@@ -107,6 +107,93 @@
      */
 
     /**
+     * Check and purge committed deleted rows on a page.
+     * <p>
+     * 
+     * @return true if purging has been done on the page, in which case the
+     *         latch can not be released before end transaction.  Otherwise
+     *         no purging was done and the latch can be released immediately.
+     *
+     * @param page   A non-null, latched page must be passed in.  If all
+     *               rows on page are purged, then page will be removed and
+     *               latch released.
+     *
+        * @exception  StandardException  Standard exception policy.
+     **/
+    protected final boolean purgeCommittedDeletes(
+    Page                page)
+        throws StandardException
+    {
+        boolean purgingDone = false;
+
+        // The number of records that can be reclaimed is:
+        // total recs - recs_not_deleted
+        int num_possible_commit_delete = 
+            page.recordCount() - page.nonDeletedRecordCount();
+
+        if (num_possible_commit_delete > 0)
+        {
+            // loop backward so that purges which affect the slot table 
+            // don't affect the loop (ie. they only move records we 
+            // have already looked at).
+            for (int slot_no = page.recordCount() - 1; 
+                 slot_no >= 0; 
+                 slot_no--) 
+            {
+                boolean row_is_committed_delete = 
+                    page.isDeletedAtSlot(slot_no);
+
+                if (row_is_committed_delete)
+                {
+                    // At this point we only know that the row is
+                    // deleted, not whether it is committed.
+
+                    // see if we can purge the row, by getting an
+                    // exclusive lock on the row.  If it is marked
+                    // deleted and we can get this lock, then it
+                    // must be a committed delete and we can purge 
+                    // it.
+
+                    RecordHandle rh =
+                        page.fetchFromSlot(
+                            (RecordHandle) null,
+                            slot_no,
+                            RowUtil.EMPTY_ROW,
+                            RowUtil.EMPTY_ROW_FETCH_DESCRIPTOR,
+                            true);
+
+                    row_is_committed_delete =
+                        this.lockRowAtSlotNoWaitExclusive(rh);
+
+                    if (row_is_committed_delete)
+                    {
+                        purgingDone = true;
+
+                        page.purgeAtSlot(slot_no, 1, false);
+                    }
+                }
+            }
+        }
+        if (page.recordCount() == 0)
+        {
+
+            // Deallocate the current page with 0 rows on it.
+            this.removePage(page);
+
+            // removePage guarantees to unlatch the page even if an
+            // exception is thrown. The page is protected against reuse
+            // because removePage locks it with a dealloc lock, so it
+            // is OK to release the latch even after a purgeAtSlot is
+            // called.
+            // @see ContainerHandle#removePage
+
+            purgingDone = true;
+        }
+
+        return(purgingDone);
+    }
+
+    /**
      * Insert a new row into the heap.
      * <p>
      * Overflow policy:
Index: java/engine/org/apache/derby/impl/store/access/heap/HeapScan.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/heap/HeapScan.java   (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/heap/HeapScan.java   (working copy)
@@ -147,6 +147,28 @@
         }
     }
 
+    protected void setRowLocationArray(
+    RowLocation[]   rowloc_array,
+    int             index,
+    RecordHandle    rh)
+        throws StandardException
+    {
+        if (rowloc_array[index] == null)
+        {
+            rowloc_array[index] = new HeapRowLocation(rh);
+        }
+        else
+        {
+            if (SanityManager.DEBUG)
+            {
+                SanityManager.ASSERT(
+                    rowloc_array[index] instanceof HeapRowLocation);
+            }
+
+            ((HeapRowLocation)rowloc_array[index]).setFrom(rh);
+        }
+    }
+
     /**
     Fetch the row at the next position of the Scan.
 
@@ -254,7 +276,17 @@
                 (int[]) null));
     }
 
+    public int fetchNextGroup(
+    DataValueDescriptor[][] row_array,
+    RowLocation[]           old_rowloc_array,
+    RowLocation[]           new_rowloc_array)
+        throws StandardException
+       {
+        throw(StandardException.newException(
+                SQLState.HEAP_UNIMPLEMENTED_FEATURE));
+    }
 
+
     /**
      * Return ScanInfo object which describes performance of scan.
      * <p>
Index: java/engine/org/apache/derby/impl/store/access/heap/Heap.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/heap/Heap.java       (revision 160236)
+++ java/engine/org/apache/derby/impl/store/access/heap/Heap.java       (working copy)
@@ -62,6 +62,7 @@
 import org.apache.derby.iapi.store.raw.Transaction;
 import org.apache.derby.iapi.store.raw.Page;
 import org.apache.derby.iapi.store.raw.RawStoreFactory;
+import org.apache.derby.iapi.store.raw.RecordHandle;
 
 import org.apache.derby.iapi.types.DataValueDescriptor;
 
@@ -650,7 +651,7 @@
     int                             lock_level,
     LockingPolicy                   locking_policy,
     int                             isolation_level,
-       FormatableBitSet                                            scanColumnList,
+       FormatableBitSet                                scanColumnList,
     DataValueDescriptor[]              startKeyValue,
     int                             startSearchOperator,
     Qualifier                       qualifier[][],
@@ -702,8 +703,222 @@
                return(heapscan);
        }
 
+       public void purgeConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException
+    {
+        OpenConglomerate        open_for_ddl_lock   = null;
+        HeapController          heapcontroller      = null;
+        TransactionManager      nested_xact         = null;
 
+        try
+        {
+            open_for_ddl_lock = new OpenHeap();
+
+            // Open table in intended exclusive mode in the top level 
+            // transaction, this will stop any ddl from happening until 
+            // purge of whole table is finished.
+
+            if (open_for_ddl_lock.init(
+                    (ContainerHandle) null,
+                    this,
+                    this.format_ids,
+                    xact_manager,
+                    rawtran,
+                    false,
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_RECORD,
+                    null,
+                    null) == null)
+            {
+                throw StandardException.newException(
+                        SQLState.HEAP_CONTAINER_NOT_FOUND, 
+                        new Long(id.getContainerId()));
+            }
+
+            // perform all the "real" work in a non-readonly nested user 
+            // transaction, so that as work is completed on each page resources
+            // can be released.  Must be careful as all locks obtained in nested
+            // transaction will conflict with parent transaction - so this call
+            // must be made only if parent transaction can have no conflicting
+            // locks on the table, otherwise the purge will fail with a self
+            // deadlock.
+            nested_xact = (TransactionManager) 
+                xact_manager.startNestedUserTransaction(false);
+
+            // now open the table in a nested user transaction so that each
+            // page worth of work can be committed after it is done.
+
+            OpenConglomerate open_conglom = new OpenHeap();
+
+            if (open_conglom.init(
+                (ContainerHandle) null,
+                this,
+                this.format_ids,
+                nested_xact,
+                rawtran,
+                true,
+                TransactionController.OPENMODE_FORUPDATE,
+                TransactionController.MODE_RECORD,
+                null,
+                null) == null)
+            {
+                throw StandardException.newException(
+                        SQLState.HEAP_CONTAINER_NOT_FOUND, 
+                        new Long(id.getContainerId()).toString());
+            }
+
+            heapcontroller = new HeapController();
+
+            heapcontroller.init(open_conglom);
+
+            Page page   = open_conglom.getContainer().getFirstPage();
+
+            boolean purgingDone = false;
+
+            while (page != null)
+            {
+                long pageno = page.getPageNumber();
+                purgingDone = heapcontroller.purgeCommittedDeletes(page);
+
+                if (purgingDone)
+                {
+                    page = null;
+
+                    // commit xact to free resources ASAP; commit will
+                    // unlatch the page if it has not already been unlatched
+                    // by a remove.
+                    open_conglom.getXactMgr().commitNoSync(
+                                TransactionController.RELEASE_LOCKS);
+                    open_conglom.reopen();
+                }
+                else
+                {
+                    page.unlatch();
+                    page = null;
+                }
+
+                page = open_conglom.getContainer().getNextPage(pageno);
+            }
+        }
+        finally
+        {
+            if (open_for_ddl_lock != null)
+                open_for_ddl_lock.close();
+            if (heapcontroller != null)
+                heapcontroller.close();
+            if (nested_xact != null)
+            {
+                nested_xact.commitNoSync(TransactionController.RELEASE_LOCKS);
+                nested_xact.destroy();
+            }
+        }
+
+        return;
+    }
+
+       public void compressConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException
+    {
+        OpenConglomerate        open_conglom    = null;
+        HeapController          heapcontroller  = null;
+
+        try
+        {
+            open_conglom = new OpenHeap();
+
+            // Open table in intended exclusive mode in the top level 
+            // transaction, this will stop any ddl from happening until 
+            // compress of the whole table is finished.
+
+            if (open_conglom.init(
+                    (ContainerHandle) null,
+                    this,
+                    this.format_ids,
+                    xact_manager,
+                    rawtran,
+                    false,
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_RECORD,
+                    null,
+                    null) == null)
+            {
+                throw StandardException.newException(
+                        SQLState.HEAP_CONTAINER_NOT_FOUND, 
+                        new Long(id.getContainerId()));
+            }
+
+            heapcontroller = new HeapController();
+
+            heapcontroller.init(open_conglom);
+
+            open_conglom.getContainer().compressContainer();
+        }
+        finally
+        {
+            if (open_conglom != null)
+                open_conglom.close();
+        }
+
+        return;
+    }
+
     /**
+     * Open a heap compress scan.
+     * <p>
+     *
+     * @see Conglomerate#openCompressScan
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+       public ScanManager defragmentConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran,
+    boolean                         hold,
+    int                             open_mode,
+    int                             lock_level,
+    LockingPolicy                   locking_policy,
+    int                             isolation_level)
+               throws StandardException
+       {
+        OpenConglomerate open_conglom = new OpenHeap();
+
+        if (open_conglom.init(
+                (ContainerHandle) null,
+                this,
+                this.format_ids,
+                xact_manager,
+                rawtran,
+                hold,
+                open_mode,
+                lock_level,
+                null,
+                null) == null)
+        {
+            throw StandardException.newException(
+                    SQLState.HEAP_CONTAINER_NOT_FOUND, 
+                    new Long(id.getContainerId()));
+        }
+
+               HeapCompressScan heap_compress_scan = new HeapCompressScan();
+
+        heap_compress_scan.init(
+            open_conglom,
+            null,
+            null,
+            0,
+            null,
+            null,
+            0);
+
+               return(heap_compress_scan);
+       }
+
+
+    /**
      * Return an open StoreCostController for the conglomerate.
      * <p>
      * Return an open StoreCostController which can be used to ask about 
Index: java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java
===================================================================
--- java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java   (revision 0)
+++ java/engine/org/apache/derby/impl/store/access/heap/HeapCompressScan.java   (revision 0)
@@ -0,0 +1,439 @@
+/*
+
+   Derby - Class org.apache.derby.impl.store.access.heap.HeapCompressScan
+
+   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+*/
+
+package org.apache.derby.impl.store.access.heap;
+
+
+/**
+
+  A HeapCompressScan object represents an instance of a scan on a heap
+  conglomerate, used by in-place compress to purge committed deleted rows
+  and move rows toward the front of the table.
+
+**/
+
+import org.apache.derby.iapi.reference.SQLState;
+
+import org.apache.derby.iapi.services.sanity.SanityManager;
+
+import org.apache.derby.iapi.services.io.Storable;
+
+import org.apache.derby.iapi.error.StandardException;
+
+import org.apache.derby.iapi.store.access.conglomerate.Conglomerate;
+import org.apache.derby.iapi.store.access.conglomerate.LogicalUndo;
+import org.apache.derby.iapi.store.access.conglomerate.ScanManager;
+import org.apache.derby.iapi.store.access.conglomerate.TransactionManager;
+
+import org.apache.derby.iapi.store.access.ConglomerateController;
+import org.apache.derby.iapi.store.access.DynamicCompiledOpenConglomInfo;
+import org.apache.derby.iapi.store.access.Qualifier;
+import org.apache.derby.iapi.store.access.RowUtil;
+import org.apache.derby.iapi.store.access.ScanInfo;
+import org.apache.derby.iapi.store.access.ScanController;
+import org.apache.derby.iapi.store.access.SpaceInfo;
+import org.apache.derby.iapi.store.access.TransactionController;
+
+import org.apache.derby.iapi.types.RowLocation;
+
+import org.apache.derby.iapi.store.raw.ContainerHandle;
+import org.apache.derby.iapi.store.raw.LockingPolicy;
+import org.apache.derby.iapi.store.raw.Transaction;
+import org.apache.derby.iapi.store.raw.Page;
+import org.apache.derby.iapi.store.raw.RecordHandle;
+
+import org.apache.derby.iapi.types.DataValueDescriptor;
+
+import org.apache.derby.impl.store.access.conglomerate.ConglomerateUtil;
+import org.apache.derby.impl.store.access.conglomerate.GenericScanController;
+import org.apache.derby.impl.store.access.conglomerate.RowPosition;
+
+import org.apache.derby.iapi.store.access.BackingStoreHashtable;
+import org.apache.derby.iapi.services.io.FormatableBitSet;
+
+import java.util.Hashtable;
+import java.util.Vector;
+
+class HeapCompressScan 
+    extends HeapScan
+{
+
+    /**************************************************************************
+     * Constants of HeapCompressScan
+     **************************************************************************
+     */
+
+    /**************************************************************************
+     * Fields of HeapCompressScan
+     **************************************************************************
+     */
+    private long pagenum_to_start_moving_rows = -1;
+
+
+
+    /**************************************************************************
+     * Constructors for This class:
+     **************************************************************************
+     */
+
+       /**
+        ** The only constructor for a HeapCompressScan returns a scan in the
+        ** closed state; the caller must call open.
+        **/
+       
+       public HeapCompressScan()
+       {
+       }
+
+    /**************************************************************************
+     * Protected override implementation of routines in
+     *     GenericController class:
+     **************************************************************************
+     */
+
+    public int fetchNextGroup(
+    DataValueDescriptor[][] row_array,
+    RowLocation[]           old_rowloc_array,
+    RowLocation[]           new_rowloc_array)
+        throws StandardException
+       {
+        return(fetchRowsForCompress(
+                    row_array, old_rowloc_array, new_rowloc_array));
+    }
+
+    /**
+     * Fetch the next N rows from the table.
+     * <p>
+     * Utility routine which does the real work for fetchNextGroup().
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    private int fetchRowsForCompress(
+    DataValueDescriptor[][] row_array,
+    RowLocation[]           oldrowloc_array,
+    RowLocation[]           newrowloc_array)
+        throws StandardException
+       {
+        int                     ret_row_count           = 0;
+        DataValueDescriptor[]   fetch_row               = null;
+
+        if (SanityManager.DEBUG)
+        {
+            SanityManager.ASSERT(row_array != null);
+            SanityManager.ASSERT(row_array[0] != null,
+                    "first array slot in fetchNextGroup() must be non-null.");
+        }
+
+        if (getScanState() == SCAN_INPROGRESS)
+        {
+            positionAtResumeScan(scan_position);
+        }
+        else if (getScanState() == SCAN_INIT)
+        {
+            // For first implementation of defragment use a conservative
+            // approach, only move rows from the last "number of free pages"
+            // of the container.  Should always at least be able to empty
+            // that number of pages.
+            SpaceInfo info = 
+                open_conglom.getContainer().getSpaceInfo();
+
+            pagenum_to_start_moving_rows = info.getNumAllocatedPages();
+
+            positionAtStartForForwardScan(scan_position);
+        }
+        else if (getScanState() == SCAN_HOLD_INPROGRESS)
+        {
+            open_conglom.reopen();
+
+            if (SanityManager.DEBUG)
+            {
+                SanityManager.ASSERT(
+                    scan_position.current_rh != null, this.toString()); 
+            }
+
+            // reposition the scan at the row just before the next one to 
+            // return.
+            // This routine handles the mess of repositioning if the row or 
+            // the page has disappeared. This can happen if a lock was not 
+            // held on the row while not holding the latch.
+            open_conglom.latchPageAndRepositionScan(scan_position);
+
+            setScanState(SCAN_INPROGRESS);
+        }
+        else if (getScanState() == SCAN_HOLD_INIT)
+        {
+            open_conglom.reopen();
+
+            positionAtStartForForwardScan(scan_position);
+
+        }
+        else
+        {
+            if (SanityManager.DEBUG)
+                SanityManager.ASSERT(getScanState() == SCAN_DONE);
+
+            return(0);
+        }
+
+        // At this point:
+        // scan_position.current_page is latched.  
+        // scan_position.current_slot is the slot on scan_position.current_page
+        // just before the "next" record this routine should process.
+
+        // loop through successive pages and successive slots on those
+        // pages.  Stop when the last page is reached
+        // (scan_position.current_page will be null).
+        // Along the way apply qualifiers to skip rows which don't qualify.
+
+               while (scan_position.current_page != null)
+               {
+                       while ((scan_position.current_slot + 1) < 
+                    scan_position.current_page.recordCount())
+                       {
+                // Allocate a new row to read the row into.
+                if (fetch_row == null)
+                {
+                     // point at allocated row in array if one exists.
+                    if (row_array[ret_row_count] == null)
+                    {
+                        row_array[ret_row_count] = 
+                          open_conglom.getRuntimeMem().get_row_for_export();
+                    }
+
+                    fetch_row = row_array[ret_row_count];
+                }
+
+                // move scan current position forward.
+                scan_position.positionAtNextSlot();
+
+                this.stat_numrows_visited++;
+
+                if (scan_position.current_page.isDeletedAtSlot(
+                        scan_position.current_slot))
+                {
+                    // At this point assume table level lock, and that this
+                    // transaction did not delete the row, so any
+                    // deleted row must be a committed deleted row which can
+                    // be purged.
+                    scan_position.current_page.purgeAtSlot(
+                        scan_position.current_slot, 1, false);
+
+                    // raw store shuffles following rows down, so
+                    // position the scan at the previous slot, so the next
+                    // trip through the loop will pick up the correct row.
+                    scan_position.positionAtPrevSlot();
+                    continue;
+                }
+
+                if (scan_position.current_page.getPageNumber() > 
+                        pagenum_to_start_moving_rows)
+                {
+                    // Give raw store a chance to move the row for compression
+                    RecordHandle[] old_handle = new RecordHandle[1];
+                    RecordHandle[] new_handle = new RecordHandle[1];
+                    long[]         new_pageno = new long[1];
+
+                    if (scan_position.current_page.moveRecordForCompressAtSlot(
+                            scan_position.current_slot,
+                            fetch_row,
+                            old_handle,
+                            new_handle) == 1)
+                    {
+                        // raw store moved the row, so bump the row count but
+                        // position the scan at the previous slot, so the next
+                        // trip through the loop will pick up the correct row.
+                        // The subsequent rows will have been moved forward
+                        // to take the place of the moved row.
+                        scan_position.positionAtPrevSlot();
+
+                        ret_row_count++;
+                        stat_numrows_qualified++;
+
+
+                        setRowLocationArray(
+                            oldrowloc_array, ret_row_count - 1, old_handle[0]);
+                        setRowLocationArray(
+                            newrowloc_array, ret_row_count - 1, new_handle[0]);
+
+                        fetch_row = null;
+
+                    }
+                }
+                       }
+
+            this.stat_numpages_visited++;
+
+            if (scan_position.current_page.recordCount() == 0)
+            {
+                // need to set the scan position before removing page
+                scan_position.current_pageno = 
+                    scan_position.current_page.getPageNumber();
+
+                open_conglom.getContainer().removePage(
+                    scan_position.current_page);
+
+                // removePage unlatches the page, and page not available
+                // again until after commit.
+                scan_position.current_page = null;
+            }
+            else
+            {
+                positionAfterThisPage(scan_position);
+                scan_position.unlatch();
+            }
+
+
+            if (ret_row_count > 0)
+            {
+                // rows were moved on this page, give caller a chance to
+                // process those and free up access to the table.
+                return(ret_row_count);
+            }
+            else
+            {
+                // no rows were moved so go ahead and commit the transaction
+                // to allow other threads a chance at the table.  Compress does
+                // not need to sync as long as the transaction either completely
+                // commits or backs out; either is fine.
+                /*
+                open_conglom.getXactMgr().commitNoSync(
+                    TransactionController.RELEASE_LOCKS);
+                open_conglom.reopen();
+                */
+                positionAtResumeScan(scan_position);
+
+            }
+               }
+
+        // Reached last page of scan.
+        positionAtDoneScan(scan_position);
+
+        // decrement the page count when the scan stops at the end of the table.
+        this.stat_numpages_visited--;
+
+               return(ret_row_count);
+    }
+
+    /**
+     * Reposition the scan upon entering the fetchRows loop.
+     * <p>
+     * Called upon entering fetchRows() while in the SCAN_INPROGRESS state.
+     * Do work necessary to look at rows in the current page of the scan.
+     * <p>
+     * The default implementation uses a record handle to maintain a scan
+     * position.  It will get the latch again on the current
+     * scan position and set the slot to the current record handle.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    protected void positionAtResumeScan(
+    RowPosition pos)
+               throws StandardException
+    {
+        // reposition the scan at the row just before the next one to return.
+        // This routine handles the mess of repositioning if the row or the
+        // page has disappeared. This can happen if a lock was not held on the
+        // row while not holding the latch.
+        open_conglom.latchPageAndRepositionScan(scan_position);
+    }
+
+    /**
+     * Move the scan from SCAN_INIT to SCAN_INPROGRESS.
+     * <p>
+     * This routine is called to move the scan from SCAN_INIT to 
+     * SCAN_INPROGRESS.  Upon return from this routine it is expected
+     * that scan_position is set such that calling the generic 
+     * scan loop will reach the first row of the scan.  Note that this
+     * usually means setting the scan_position to one before the 1st
+     * row to be returned.
+     * <p>
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    protected void positionAtStartForForwardScan(
+    RowPosition pos)
+        throws StandardException
+    {
+        if (pos.current_rh == null)
+        {
+            // 1st positioning of scan (delayed from openScan).  Do not
+            // compress the first page, there is no previous page to move
+            // rows to, and moving the special Heap metadata row from the
+            // first page would cause problems.  Starting at the next page
+            // is why this scan overrides the generic implementation.
+            pos.current_page = 
+                open_conglom.getContainer().getNextPage(
+                    ContainerHandle.FIRST_PAGE_NUMBER);
+
+            // set up for scan to continue at beginning of page following
+            // the first page of the container.
+            pos.current_slot = Page.FIRST_SLOT_NUMBER - 1;
+        }
+        else
+        {
+            // 1st positioning of scan following a reopenScanByRowLocation
+
+            // reposition the scan at the row just before the next one to 
+            // return.  This routine handles the mess of repositioning if the 
+            // row or the page has disappeared. This can happen if a lock was 
+            // not held on the row while not holding the latch.
+            open_conglom.latchPageAndRepositionScan(pos);
+
+            // set up the scan at the specified record handle (position one
+            // slot before it so that the loop increment will find it).
+            pos.current_slot -= 1;
+        }
+
+        pos.current_rh              = null;
+        this.stat_numpages_visited  = 1;
+        this.setScanState(SCAN_INPROGRESS);
+    }
+
+
+    /**************************************************************************
+     * Private/Protected methods of This class:
+     **************************************************************************
+     */
+
+    /**
+     * Set scan position to just after current page.
+     * <p>
+     * Used to set the position of the scan if a record handle is not
+     * available.  In this case current_rh will be set to null, and
+     * current_pageno will be set to the current page number.
+     * On resume of the scan, the scan will be set to just before the first
+     * row returned from a getNextPage(current_pageno) call.
+     * <p>
+     * A positionAtResumeScan(scan_position) is necessary to continue the
+     * scan after this call.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    private void positionAfterThisPage(
+    RowPosition pos)
+        throws StandardException
+    {
+        pos.current_rh = null;
+        pos.current_pageno = pos.current_page.getPageNumber();
+    }
+
+       /*
+       ** Methods of ScanManager
+       */
+
+}
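The scan above makes a per-row decision: committed deleted rows are purged in place, and rows living on pages past `pagenum_to_start_moving_rows` are offered to raw store for a move toward the front. A tiny sketch of just that decision, with hypothetical inputs standing in for the page/slot state:

```java
// Hypothetical model of the compress scan's per-row decision.  Page numbers
// and a deleted flag stand in for Derby's latched page/slot state; the
// threshold corresponds to pagenum_to_start_moving_rows in the patch.
final class CompressScanSketch {
    // Classify one row: purge it, offer it for a move, or leave it alone.
    static String classify(long pageNumber, boolean deleted, long moveThreshold) {
        if (deleted) {
            // table-level lock is assumed, so any deleted row is a
            // committed delete and safe to purge
            return "PURGE";
        } else if (pageNumber > moveThreshold) {
            // candidate for moveRecordForCompressAtSlot()
            return "MOVE";
        } else {
            return "KEEP";
        }
    }
}
```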
Index: java/engine/org/apache/derby/impl/store/raw/data/BasePage.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/BasePage.java      (revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/BasePage.java      (working copy)
@@ -412,10 +412,10 @@
        */
 
        public RecordHandle fetch(
-    RecordHandle            handle, 
-    Object[]   row, 
-    FormatableBitSet                 validColumns, 
-    boolean                 forUpdate)
+    RecordHandle        handle, 
+    Object[]            row, 
+    FormatableBitSet    validColumns, 
+    boolean             forUpdate)
        throws StandardException {
 
                if (SanityManager.DEBUG) {
@@ -450,7 +450,7 @@
        public RecordHandle fetchFromSlot(
     RecordHandle            rh, 
     int                     slot, 
-    Object[]   row,
+    Object[]                row,
     FetchDescriptor         fetchDesc,
     boolean                 ignoreDelete)
                 throws StandardException
@@ -1395,12 +1395,11 @@
               // page does not copy over the remaining pieces, i.e., the new head page
               // still points to those pieces.
 
-               owner.getActionSet().actionPurge (t, this, src_slot, num_rows,
-                                                 recordIds, true);
+               owner.getActionSet().actionPurge(
+            t, this, src_slot, num_rows, recordIds, true);
        }
 
 
-
        /**
                Unlatch the page.
                @see Page#unlatch
@@ -2489,7 +2488,6 @@
        public abstract boolean spaceForCopy(int num_rows, int[] spaceNeeded)
                 throws StandardException;
 
-
        /**
                Return the total number of bytes used, reserved, or wasted by the
                record at this slot.
Index: java/engine/org/apache/derby/impl/store/raw/data/InputStreamContainer.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/InputStreamContainer.java  (revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/InputStreamContainer.java  (working copy)
@@ -132,6 +132,13 @@
                return 0;
        }
 
+       protected void truncatePages(long lastValidPagenum)
+    {
+               // Nothing to do since we are inherently read-only.
+               return;
+    }
+    
+
        /*
        ** Container creation, opening, and closing
        */
Index: java/engine/org/apache/derby/impl/store/raw/data/AllocExtent.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/AllocExtent.java   (revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/AllocExtent.java   (working copy)
@@ -423,7 +423,35 @@
                setExtentFreePageStatus(true);
        }
 
+    protected long compressPages()
+    {
+        int compress_bitnum = -1;
 
+        for (int i = extentLength - 1; i >= 0; i--)
+        {
+            if (freePages.isSet(i))
+            {
+                freePages.clear(i);
+                compress_bitnum = i;
+            }
+            else
+            {
+                break;
+            }
+        }
+
+        if (compress_bitnum >= 0)
+        {
+            extentLength = compress_bitnum;
+            return(extentStart + extentLength - 1);
+        }
+        else
+        {
+            return(-1);
+        }
+
+    }
+
        protected long getExtentEnd()
        {
                return extentEnd;
@@ -764,14 +792,20 @@
                                for (int j = 0; j < 8; j++)
                                {
                                        if (((1 << j) & free[i]) != 0)
+                    {
                                                allocatedPageCount--;
+                    }
                                }
                        }
                }
 
                if (SanityManager.DEBUG)
-                       SanityManager.ASSERT(allocatedPageCount >= 0,
-                                                                "number of allocated page < 0");
+        {
+                       SanityManager.ASSERT(
+                allocatedPageCount >= 0,
+                "number of allocated page < 0, val =" + allocatedPageCount +
+                "\nextent = " + toDebugString());
+        }
 
                return allocatedPageCount;
        }
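`compressPages()` above scans the extent's free-page bitmap backwards, clearing trailing free bits until it hits an allocated page, and shrinks the extent accordingly. A stand-alone sketch of the same backward scan over a plain boolean array (hypothetical; Derby's extent uses a `FormatableBitSet`):

```java
// Hypothetical sketch of the trailing-free-page scan in compressPages():
// walk the free-page bitmap backwards, clearing trailing free bits, and
// return the new extent length (or -1 if no trailing pages were free).
final class ExtentTrimSketch {
    static int trimTrailingFreePages(boolean[] freePages, int extentLength) {
        int compressBitnum = -1;
        for (int i = extentLength - 1; i >= 0; i--) {
            if (freePages[i]) {
                freePages[i] = false;   // page no longer tracked by the extent
                compressBitnum = i;
            } else {
                break;                  // first allocated page from the end
            }
        }
        return compressBitnum;          // new extent length, or -1
    }
}
```

In the patch the return value feeds the truncate step: the extent's new length determines the last valid page number handed to `truncatePages()`.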
Index: java/engine/org/apache/derby/impl/store/raw/data/FileContainer.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/FileContainer.java (revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/FileContainer.java (working copy)
@@ -1281,7 +1281,125 @@
 
        }
 
+       /**
+         Compress free space from container.
+
+         <BR> MT - thread aware - It is assumed that our caller (our super class)
+         has already arranged a logical lock on page allocation to only allow a
+         single thread through here.
+
+      Compressing free space is done in allocation page units, working
+      its way from the end of the container to the beginning.  Each
+      loop operates on the last allocation page in the container.
+
+      Freeing space in the container involves 2 transactions and updates
+      to an allocation page, N data pages, and possibly the delete of the
+      allocation page.
+         The User Transaction (UT) initiated the compress call.
+         The Nested Top Transaction (NTT) is the transaction started by RawStore
+         inside the compress call.  This NTT is committed before compress
+         returns.  The NTT is used to access high traffic data structures such
+         as the AllocPage.
+
+         This is an outline of the algorithm used in compressing the container.
+
+      Loop until a non-free page is found; in each iteration return to the OS
+         all space at the end of the container occupied by free pages, including
+         the allocation page itself if all of its pages are free.
+
+         1) Find the last 2 allocation pages in the container (the last one if
+         there is only one).
+         2) Invalidate the allocation information cached by the container.
+                Without the cache no page can be gotten from the container.  Pages
+                already in the page cache are not affected.  Thus by latching the
+                allocPage and invalidating the allocation cache, this NTT blocks
+                out all page gets from this container until it commits.
+         3) The allocPage determines which pages can be released to the OS,
+         and marks that in its data structure (the alloc extent): mark the
+         contiguous block of unallocated/free pages at the end of the file
+         as unallocated.  This change is associated with the NTT.
+      4) The NTT calls the OS to deallocate the space from the file.  Note
+         that the system can handle being booted and asked to get an allocated
+         page which is past end of file, it just extends the file automatically.
+         5) If freeing all space on the alloc page, and there is more than one
+         alloc page, then free the alloc page - this requires an update to the
+         previous alloc page which the loop has kept latched also.
+      6) If the last alloc page was deleted, restart the loop at #1.
+
+      All NTT latches are released before this routine returns.
+         If we use an NTT, the caller has to commit the NTT to release the
+         allocPage latch.  If we don't use an NTT, the allocPage latch is
+         released as this routine returns.
+
+         @param ntt - the nested top transaction for the purpose of freeing
+                      space.  If ntt is null, use the user transaction for
+                      allocation.
+         @param allocHandle - the container handle opened by the ntt,
+                      use this to latch the alloc page
+
+         @exception StandardException Standard Cloudscape error policy
+       */
+       protected void compressContainer(
+    RawTransaction      ntt,
+    BaseContainerHandle allocHandle)
+                throws StandardException 
+       {
+               AllocPage alloc_page      = null;
+               AllocPage prev_alloc_page = null;
+
+               if (firstAllocPageNumber == ContainerHandle.INVALID_PAGE_NUMBER)
+        {
+            // no allocation pages in container, no work to do!
+                       return;
+        }
+
+               try
+               {
+            synchronized(allocCache)
+            {
+                // loop until last 2 alloc pages are reached.
+                alloc_page = (AllocPage) 
+                    allocHandle.getAllocPage(firstAllocPageNumber);
+
+                while (!alloc_page.isLast())
+                {
+                    if (prev_alloc_page != null)
+                    {
+                        // there are more than 2 alloc pages, unlatch the 
+                        // earliest one.
+                        prev_alloc_page.unlatch();
+                    }
+                    prev_alloc_page = alloc_page;
+                    alloc_page      = null;
+
+                    long nextAllocPageNumber = 
+                        prev_alloc_page.getNextAllocPageNumber();
+                    long nextAllocPageOffset = 
+                        prev_alloc_page.getNextAllocPageOffset();
+
+                    alloc_page = (AllocPage) 
+                        allocHandle.getAllocPage(nextAllocPageNumber);
+                }
+
+                alloc_page.compress(this);
+
+                               allocCache.invalidate(); 
+            }
+               }
+               catch (StandardException se)
+               {
+                       if (alloc_page != null)
+            {
+                               alloc_page.unlatch();
+                alloc_page = null;
+            }
+                       if (prev_alloc_page != null)
+            {
+                               prev_alloc_page.unlatch();
+                               prev_alloc_page = null;
+            }
+
+            // cleanup done; rethrow so the caller sees the error.
+            throw se;
+        }
+       }
+
        /**
          Create a new page in the container.
 
@@ -2488,6 +2606,15 @@
                return p;
        }
 
+       protected BasePage getPageForCompress(
+    BaseContainerHandle handle,
+    int                 flag,
+    long                pageno)
+                throws StandardException
+       {
+        return(getPageForInsert(handle, flag));
+    }
+
        /**
                Get a potentially suitable page for insert and latch it.
                @exception StandardException Standard Cloudscape error policy
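`compressContainer()` above walks the chain of allocation pages while keeping the previous page latched, so that a fully-freed last alloc page can be unlinked from its predecessor. The traversal can be modeled on a plain linked structure (hypothetical indices standing in for latched `AllocPage`s):

```java
// Hypothetical model of the alloc-page walk in compressContainer(): follow
// the chain of allocation pages, always remembering the predecessor (kept
// latched in the patch) so the last page can be unlinked if it is freed.
final class AllocWalkSketch {
    // next[i] is the index of the next alloc page, or -1 for the last one.
    // Returns {previousOfLast, last}; previousOfLast is -1 with one page.
    static int[] findLastTwo(int[] next, int first) {
        int prev = -1;
        int cur = first;
        while (next[cur] != -1) {   // loop until the last alloc page
            prev = cur;             // keep predecessor ("latched" here)
            cur = next[cur];
        }
        return new int[] { prev, cur };
    }
}
```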
Index: java/engine/org/apache/derby/impl/store/raw/data/BaseContainer.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/BaseContainer.java (revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/BaseContainer.java (working copy)
@@ -165,6 +165,68 @@
 
 
        /**
+               Release free space to the OS.
+               <P>
+        Where possible, release any free space to the operating system.  This
+        will usually mean releasing any free pages located at the end of the
+        file using the java truncate() interface.
+
+               @exception StandardException    Standard Cloudscape error policy
+       */
+       public void compressContainer(BaseContainerHandle handle)
+        throws StandardException
+    {
+               RawTransaction ntt =
+            handle.getTransaction().startNestedTopTransaction();
+
+               int mode = handle.getMode(); 
+
+               if (SanityManager.DEBUG)
+               {
+                       SanityManager.ASSERT(
+                (mode & ContainerHandle.MODE_FORUPDATE) ==
+                    ContainerHandle.MODE_FORUPDATE,
+                "compressContainer handle not for update");
+               }
+
+               // if we are not in the same transaction as the one which created
+               // the container and the container may have logged some operation
+               // already, then we need to log allocation regardless of whether
+               // user changes are logged.  Otherwise, the database will be
+               // corrupted if it crashed.
+               if ((mode & ContainerHandle.MODE_CREATE_UNLOGGED) == 0 &&
+                       (mode & ContainerHandle.MODE_UNLOGGED) ==
+                                               ContainerHandle.MODE_UNLOGGED)
+                       mode &= ~ContainerHandle.MODE_UNLOGGED;
+
+               // make a handle which is tied to the ntt, not to the user 
transaction 
+        // this handle is tied to.  The container is already locked by the 
+        // user transaction, open it nolock
+               BaseContainerHandle allocHandle = (BaseContainerHandle)
+            ntt.openContainer(identity, (LockingPolicy)null, mode);
+
+               if (allocHandle == null)
+        {
+                       throw StandardException.newException(
+                    SQLState.DATA_ALLOC_NTT_CANT_OPEN, 
+                    new Long(getSegmentId()), 
+                    new Long(getContainerId()));
+        }
+
+               // Latch this container, the commit will release the latch
+               ntt.getLockFactory().lockObject(
+                ntt, ntt, this, null, C_LockFactory.WAIT_FOREVER);
+
+               try
+               {
+            compressContainer(ntt, allocHandle);
+               }
+               finally
+               {
+            ntt.commitNoSync(Transaction.RELEASE_LOCKS);
+                       ntt.close();
+               }
+    }
+
+       /**
                Add a page to this container.
 
                <BR> MT - thread aware - 
@@ -650,7 +712,15 @@
 												int flag)
                 throws StandardException;
 
+       protected abstract BasePage getPageForCompress(
+    BaseContainerHandle handle,
+    int                 flag,
+    long                pageno)
+                throws StandardException;
 
+       protected abstract void truncatePages(long lastValidPagenum);
+
+
        /**
                Create a new page in the container.
 
@@ -661,7 +731,12 @@
 												BaseContainerHandle allocHandle,
 												boolean isOverflow) throws StandardException;
 
+       protected abstract void compressContainer(
+    RawTransaction      t,
+    BaseContainerHandle allocHandle)
+        throws StandardException;
 
+
        /**
                Deallocate a page from the container.
 
Index: java/engine/org/apache/derby/impl/store/raw/data/BaseContainerHandle.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/BaseContainerHandle.java   
(revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/BaseContainerHandle.java   
(working copy)
@@ -181,6 +181,22 @@
        }
 
        /**
+               Release free space to the OS.
+               <P>
+        Where possible, release any free space back to the operating system.
+        This will usually mean truncating any free pages located at the end
+        of the file (via the java.io.RandomAccessFile.setLength() interface).
+
+               @exception StandardException    Standard Cloudscape error policy
+       */
+       public void compressContainer() throws StandardException 
+    {
+               checkUpdateOpen();
+
+               container.compressContainer(this);
+       }
+
+       /**
 		Add a page to the container, if flag == ContainerHandle.ADD_PAGE_BULK,
 		tell the container about it.
 
@@ -338,6 +354,14 @@
                return container.getPageForInsert(this, flag);
        }
 
+       public Page getPageForCompress(int flag, long pageno) 
+                throws StandardException
+       {
+               checkUpdateOpen();
+
+               return container.getPageForCompress(this, flag, pageno);
+       }
+
        /**
                @see ContainerHandle#isReadOnly()
        */
Index: java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java  
(revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/RAFContainer.java  
(working copy)
@@ -537,7 +537,55 @@
                return n;
        }
 
+    /**
+     * Truncate pages at the end of the container.
+     * <p>
+     * Truncate the underlying file so that lastValidPagenum is the last
+     * page in the file, returning the space used by any later pages to
+     * the operating system.
+     * <p>
+     *
+     * @param lastValidPagenum the page number of the last valid page in 
+     *                         the container; all pages following it are 
+     *                         returned to the OS.
+     **/
+	protected void truncatePages(
+    long lastValidPagenum)
+	{  
+        synchronized(this)
+        {
+            boolean inwrite = false;
+            try
+            {
+                dataFactory.writeInProgress();
+                inwrite = true;
+
+                fileData.setLength((lastValidPagenum + 1) * pageSize);
+            }
+            catch (IOException ioe)
+            {
+                // Don't error out if the truncate fails; the application 
+                // can still function even if the free space is not 
+                // returned to the OS.
+            }
+            catch (StandardException se)
+            {
+                // some problem calling writeInProgress
+            }
+            finally
+            {
+                if (inwrite)
+                    dataFactory.writeFinished();
+            }
+        }
+
+               return;
+       }
+
+
        /*
                Write the header of a random access file and sync it
                @param create if true, the container is being created
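The truncatePages() change above shrinks the container file with setLength(), keeping lastValidPagenum + 1 pages. A standalone sketch of the same arithmetic against a scratch file (the 4K page size is illustrative, not Derby's constant):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class TruncateSketch {
    public static void main(String[] args) throws IOException {
        final int pageSize = 4096;           // illustrative page size

        File f = File.createTempFile("truncate", ".dat");
        f.deleteOnExit();

        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            // simulate a container holding 10 pages (page numbers 0..9)
            raf.setLength(10L * pageSize);

            // pages 7..9 are free: keep page 6 as the last valid page
            long lastValidPagenum = 6;
            raf.setLength((lastValidPagenum + 1) * pageSize);

            // the file now holds exactly 7 pages
            System.out.println(raf.length() / pageSize);  // prints 7
        }
    }
}
```

As in the patch, truncation is purely a file-length operation; no page contents are rewritten.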
Index: java/engine/org/apache/derby/impl/store/raw/data/AllocPage.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/AllocPage.java     
(revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/AllocPage.java     
(working copy)
@@ -958,8 +958,37 @@
 
        }
 
+	/**
+		Compress free pages at the end of this extent.
+
+		Ask the extent to compress its pages; a non-negative return is the
+		page number of the last valid page, and all pages after it can be
+		returned to the OS by truncating the container file.
+
+		@param myContainer the container being compressed
+	*/
+	protected boolean compress(
+    FileContainer myContainer)
+        throws StandardException
+	{
+        boolean all_pages_compressed = false;
+
+		if (SanityManager.DEBUG)
+			SanityManager.ASSERT(isLatched(), "page is not latched");
+
+        long last_valid_page = extent.compressPages();
+        if (last_valid_page >= 0)
+        {
+            // a non-negative return means that pages can be returned to
+            // the operating system.
+            myContainer.truncatePages(last_valid_page);
+
+            if (last_valid_page == this.getPageNumber())
+            {
+                // all pages of the extent have been returned to OS.
+                all_pages_compressed = true;
+            }
+        }
+
+        return(all_pages_compressed);
+       }
+
        /*********************************************************************
         * Extent Testing
         *
@@ -968,6 +997,4 @@
         *
         *********************************************************************/
 	public static final String TEST_MULTIPLE_ALLOC_PAGE = SanityManager.DEBUG ? "TEST_MULTI_ALLOC_PAGE" : null;
-
 }
-
Index: java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java
===================================================================
--- java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java    
(revision 160236)
+++ java/engine/org/apache/derby/impl/store/raw/data/StoredPage.java    
(working copy)
@@ -1374,6 +1374,17 @@
                return((freeSpace - bytesNeeded) >= 0);
        }
 
+       protected boolean spaceForCopy(int spaceNeeded)
+       {
+        // Add up the space needed by the row; charge minimumRecordSize
+        // if the length of the actual row is less than minimumRecordSize.
+        int bytesNeeded = slotEntrySize + 
+            (spaceNeeded >= minimumRecordSize ? 
+                 spaceNeeded : minimumRecordSize);
+
+               return((freeSpace - bytesNeeded) >= 0);
+       }
+
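spaceForCopy() charges a copied row one slot-table entry plus the larger of its actual length and minimumRecordSize. The check can be sketched in isolation (the constants below are illustrative stand-ins for StoredPage's fields, not Derby's actual values):

```java
public class SpaceForCopySketch {
    // illustrative page-layout constants, not Derby's actual values
    static final int slotEntrySize     = 4;
    static final int minimumRecordSize = 12;

    static boolean spaceForCopy(int freeSpace, int spaceNeeded) {
        // a short row is still charged minimumRecordSize bytes
        int bytesNeeded = slotEntrySize +
            (spaceNeeded >= minimumRecordSize ? spaceNeeded : minimumRecordSize);
        return (freeSpace - bytesNeeded) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(spaceForCopy(100, 50)); // 50 + 4 <= 100 -> true
        System.out.println(spaceForCopy(15, 5));   // charged 12 + 4 = 16 > 15 -> false
    }
}
```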
     /**
      * Read the record at the given slot into the given row.
      * <P>
@@ -6808,6 +6819,149 @@
                return count;
        }
 
+    /**
+     * Move a record to a page toward the beginning of the file.
+     * <p>
+     * As part of compressing the table, records need to be moved from the
+     * end of the file toward the beginning of the file.  Only the 
+     * contiguous set of free pages at the very end of the file can
+     * be given back to the OS.  This call purges the row from the current
+     * page, inserts it into a page earlier in the file, and returns the
+     * old and new row locations.
+     * <p>
+     * <B>Locking Policy</B>
+     * <P>
+     * MUST be called with the table locked; no locks are requested.  Because
+     * it is called with table locks the call will go ahead and purge any
+     * row which is marked deleted.  It will also use purge rather than
+     * delete to remove the old row after it moves it to a new page.  This
+     * is ok since the table lock ensures that no other transaction will
+     * use space in the table before this transaction commits.
+     *
+     * <BR>
+     * A page latch on the new page will be requested and released.
+     *
+     * @param slot           The slot of the record to move.
+     * @param row            A template the moved row is read into.
+     * @param old_handle     An array to be filled in by the call with the 
+     *                       old handle of the row moved.
+     * @param new_handle     An array to be filled in by the call with the 
+     *                       new handle of the row moved.
+     *
+     * @return the number of rows processed: 1 if the row was moved, 0 if
+     *         no suitable destination page could be found.
+     *
+     * @exception StandardException    Standard Cloudscape error policy
+     *
+     * @see LockingPolicy
+     **/
+       public int moveRecordForCompressAtSlot(
+    int             slot,
+    Object[]        row,
+    RecordHandle[]  old_handle,
+    RecordHandle[]  new_handle)
+               throws StandardException
+    {
+        long src_pageno = getPageNumber();
+
+        try
+        {
+            fetchFromSlot(
+                null,
+                slot,
+                row,
+                (FetchDescriptor) null, // all columns retrieved
+                false);
+
+            int row_size = getRecordPortionLength(slot);
+
+            // first see if row will fit on current page being used to insert
+            StoredPage dest_page = 
+                (StoredPage) owner.getPageForCompress(0, src_pageno);
+
+            if (dest_page != null)
+            {
+                if (SanityManager.DEBUG)
+                {
+                    SanityManager.DEBUG_PRINT("moveRecordForCompressAtSlot", 
+                        "last = " + dest_page.getPageNumber()); 
+                }
+
+                if ((dest_page.getPageNumber() >= getPageNumber()) ||
+                    (!dest_page.spaceForCopy(row_size)))
+                {
+                    // page won't work
+                    dest_page.unlatch();
+                    dest_page = null;
+                }
+            }
+
+            if (dest_page == null)
+            {
+                // last page did not work, try unfilled page
+                dest_page = (StoredPage)  
+                    owner.getPageForCompress(
+                        ContainerHandle.GET_PAGE_UNFILLED, src_pageno);
+
+                if (dest_page != null)
+                {
+                    if (SanityManager.DEBUG)
+                    {
+                        SanityManager.DEBUG_PRINT("moveRecordForCompressAtSlot", 
+                            "unfill = " + dest_page.getPageNumber()); 
+                    }
+
+                    if ((dest_page.getPageNumber() >= getPageNumber()) ||
+                        (!dest_page.spaceForCopy(row_size)))
+                    {
+                        // page won't work
+                        dest_page.unlatch();
+                        dest_page = null;
+                    }
+                }
+            }
+
+            if (dest_page == null)
+            {
+                // last and unfilled page did not work, try getting a free page
+                dest_page = (StoredPage) owner.addPage();
+
+                if (SanityManager.DEBUG)
+                {
+                    SanityManager.DEBUG_PRINT("moveRecordForCompressAtSlot", 
+                        "addPage = " + dest_page.getPageNumber()); 
+                }
+
+                if (dest_page.getPageNumber() >= getPageNumber())
+                {
+                    owner.removePage(dest_page);
+                    dest_page = null;
+                }
+            }
+
+            if (dest_page != null)
+            {
+                int dest_slot = dest_page.recordCount();
+
+                old_handle[0] = getRecordHandleAtSlot(slot);
+
+                copyAndPurge(dest_page, slot, 1, dest_slot);
+
+                new_handle[0] = dest_page.getRecordHandleAtSlot(dest_slot);
+
+                dest_page.unlatch();
+
+                return(1);
+            }
+            else
+            {
+                return(0);
+            }
+        }
+        catch (IOException ioe)
+        {
+            throw StandardException.newException(
+                SQLState.DATA_UNEXPECTED_EXCEPTION, ioe);
+        }
+    }
+
        /*
         * methods that is called underneath a page action
         */
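moveRecordForCompressAtSlot tries three destination sources in order (the last insert page, an unfilled page, then a newly added page), rejecting any candidate that is not strictly earlier in the file or lacks space for the row. Abstracted away from pages and latches, the selection logic looks like the sketch below (all classes and names here are illustrative, not Derby's):

```java
import java.util.ArrayList;
import java.util.List;

public class DestPageSketch {
    /** minimal stand-in for a latched page: just a page number and free bytes */
    static class Candidate {
        final long pageno; final int freeSpace;
        Candidate(long pageno, int freeSpace) {
            this.pageno = pageno; this.freeSpace = freeSpace;
        }
    }

    /**
     * Pick the first candidate that is earlier in the file than srcPageno
     * and has room for the row; null means the row stays where it is.
     */
    static Candidate pickDestination(List<Candidate> candidates,
                                     long srcPageno, int rowSize) {
        for (Candidate c : candidates) {
            if (c != null && c.pageno < srcPageno && c.freeSpace >= rowSize)
                return c;   // analogous to keeping this page latched and using it
            // otherwise: "unlatch" the candidate and fall through to the next
        }
        return null;
    }

    public static void main(String[] args) {
        List<Candidate> candidates = new ArrayList<>();
        candidates.add(new Candidate(9, 500)); // last-insert page: too late in file
        candidates.add(new Candidate(3, 10));  // unfilled page: too small
        candidates.add(new Candidate(5, 200)); // newly added page: works

        Candidate dest = pickDestination(candidates, 8, 100);
        System.out.println(dest.pageno);       // prints 5
    }
}
```

The real code additionally removes a newly added page again when it lands past the source page, since such a page cannot help shrink the file.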
Index: java/engine/org/apache/derby/iapi/db/OnlineCompress.java
===================================================================
--- java/engine/org/apache/derby/iapi/db/OnlineCompress.java    (revision 0)
+++ java/engine/org/apache/derby/iapi/db/OnlineCompress.java    (revision 0)
@@ -0,0 +1,532 @@
+/*
+
+   Derby - Class org.apache.derby.iapi.db.OnlineCompress
+
+   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+ */
+
+package org.apache.derby.iapi.db;
+
+import org.apache.derby.iapi.error.StandardException;
+import org.apache.derby.iapi.error.PublicAPI;
+
+import org.apache.derby.iapi.sql.dictionary.DataDictionaryContext;
+import org.apache.derby.iapi.sql.dictionary.DataDictionary;
+import org.apache.derby.iapi.sql.dictionary.SchemaDescriptor;
+import org.apache.derby.iapi.sql.dictionary.TableDescriptor;
+import org.apache.derby.iapi.sql.dictionary.ColumnDescriptor;
+import org.apache.derby.iapi.sql.dictionary.ColumnDescriptorList;
+import org.apache.derby.iapi.sql.dictionary.ConstraintDescriptor;
+import org.apache.derby.iapi.sql.dictionary.ConstraintDescriptorList;
+import org.apache.derby.iapi.sql.dictionary.ConglomerateDescriptor;
+
+import org.apache.derby.iapi.sql.depend.DependencyManager;
+
+import org.apache.derby.iapi.sql.execute.ExecRow;
+import org.apache.derby.iapi.sql.execute.ExecutionContext;
+
+import org.apache.derby.iapi.types.DataValueDescriptor;
+import org.apache.derby.iapi.types.DataValueFactory;
+
+
+import org.apache.derby.iapi.sql.conn.LanguageConnectionContext;
+import org.apache.derby.iapi.sql.conn.ConnectionUtil;
+
+import org.apache.derby.iapi.store.access.TransactionController;
+import org.apache.derby.iapi.types.RowLocation;
+import org.apache.derby.iapi.store.access.ScanController;
+import org.apache.derby.iapi.store.access.ConglomerateController;
+import org.apache.derby.iapi.store.access.GroupFetchScanController;
+import org.apache.derby.iapi.store.access.RowUtil;
+import org.apache.derby.iapi.store.access.Qualifier;
+
+import org.apache.derby.iapi.services.sanity.SanityManager;
+
+import org.apache.derby.iapi.reference.SQLState;
+
+import org.apache.derby.iapi.services.io.FormatableBitSet;
+
+import java.sql.SQLException;
+
+public class OnlineCompress
+{
+
+	/** no requirement for a constructor */
+	private OnlineCompress() {
+	}
+
+	/**
+	 * Top level implementation of the 
+	 * SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE system procedure.  Each of 
+	 * the three phases can be run independently or in sequence.
+	 *
+	 * @param schemaName     schema of the table to compress.
+	 * @param tableName      table to compress.
+	 * @param purgeRows      if true, purge committed deleted rows.
+	 * @param defragmentRows if true, move rows from the end of the table 
+	 *                       toward the front.
+	 * @param truncateEnd    if true, return free pages at the end of the 
+	 *                       file to the OS.
+	 **/
+	public static void compressTable(
+    String  schemaName, 
+    String  tableName,
+    boolean purgeRows,
+    boolean defragmentRows,
+    boolean truncateEnd)
+        throws SQLException
+       {
+		LanguageConnectionContext lcc = ConnectionUtil.getCurrentLCC();
+		TransactionController     tc  = lcc.getTransactionExecute();
+
+               try 
+        {
+            DataDictionary data_dictionary = lcc.getDataDictionary();
+
+            // Each of the following may give up locks allowing ddl on the
+            // table, so each phase needs to do the data dictionary lookup.
+
+            if (purgeRows)
+                purgeRows(schemaName, tableName, data_dictionary, tc);
+
+            if (defragmentRows)
+                defragmentRows(schemaName, tableName, data_dictionary, tc);
+
+            if (truncateEnd)
+                truncateEnd(schemaName, tableName, data_dictionary, tc);
+        }
+               catch (StandardException se)
+               {
+                       throw PublicAPI.wrapStandardException(se);
+               }
+
+       }
+
+       private static void defragmentRows(
+    String                  schemaName, 
+    String                  tableName,
+    DataDictionary          data_dictionary,
+    TransactionController   tc)
+        throws SQLException
+       {
+        GroupFetchScanController base_group_fetch_cc = null;
+        int                      num_indexes         = 0;
+
+        int[][]                  index_col_map       =  null;
+        ScanController[]         index_scan          =  null;
+        ConglomerateController[] index_cc            =  null;
+        DataValueDescriptor[][]  index_row           =  null;
+
+		LanguageConnectionContext lcc       = ConnectionUtil.getCurrentLCC();
+		TransactionController     nested_tc = null;
+
+               try {
+
+            SchemaDescriptor sd = 
+                data_dictionary.getSchemaDescriptor(
+                    schemaName, nested_tc, true);
+            TableDescriptor td = 
+                data_dictionary.getTableDescriptor(tableName, sd);
+            nested_tc = 
+                tc.startNestedUserTransaction(false);
+
+            if (td == null)
+            {
+                throw StandardException.newException(
+                    SQLState.LANG_TABLE_NOT_FOUND, 
+                    schemaName + "." + tableName);
+            }
+
+            /* Skip views */
+            if (td.getTableType() == TableDescriptor.VIEW_TYPE)
+            {
+                return;
+            }
+
+
+                       ConglomerateDescriptor heapCD = 
+                td.getConglomerateDescriptor(td.getHeapConglomerateId());
+
+                       /* Get a row template for the base table */
+                       ExecRow baseRow = 
+                lcc.getExecutionContext().getExecutionFactory().getValueRow(
+                    td.getNumberOfColumns());
+
+
+                       /* Fill the row with nulls of the correct type */
+                       ColumnDescriptorList cdl = td.getColumnDescriptorList();
+			int                  cdlSize = cdl.size();
+
+                       for (int index = 0; index < cdlSize; index++)
+                       {
+				ColumnDescriptor cd = (ColumnDescriptor) cdl.elementAt(index);
+				baseRow.setColumn(cd.getPosition(), cd.getType().getNull());
+                       }
+
+            DataValueDescriptor[][] row_array = new DataValueDescriptor[100][];
+            row_array[0] = baseRow.getRowArray();
+            RowLocation[] old_row_location_array = new RowLocation[100];
+            RowLocation[] new_row_location_array = new RowLocation[100];
+
+            // Create the following 3 arrays which will be used to update
+            // each index as the scan moves rows about the heap as part of
+            // the compress:
+            //     index_col_map - map location of index cols in the base row, 
+            //                     ie. index_col_map[0] is column offset of 1st
+            //                     key column in base row.  All offsets are 0 
+            //                     based.
+            //     index_scan - open ScanController used to delete old index row
+            //     index_cc   - open ConglomerateController used to insert new 
+            //                  row
+
+            ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            // conglom_descriptors has an entry for the heap conglomerate and 
+            // each one of its indexes.
+            num_indexes = conglom_descriptors.length - 1;
+
+            // if indexes exist, set up data structures to update them
+            if (num_indexes > 0)
+            {
+                // allocate arrays
+                index_col_map   = new int[num_indexes][];
+                index_scan      = new ScanController[num_indexes];
+                index_cc        = new ConglomerateController[num_indexes];
+                index_row       = new DataValueDescriptor[num_indexes][];
+
+                setup_indexes(
+                    nested_tc,
+                    td,
+                    index_col_map,
+                    index_scan,
+                    index_cc,
+                    index_row);
+
+                if (SanityManager.DEBUG)
+                    SanityManager.DEBUG_PRINT("OnlineCompress", 
+                        "index_col_map = " + index_col_map);
+            }
+
+                       /* Open the heap for reading */
+                       base_group_fetch_cc = 
+                nested_tc.defragmentConglomerate(
+                    td.getHeapConglomerateId(), 
+                    false,
+                    true, 
+                    TransactionController.OPENMODE_FORUPDATE, 
+                    TransactionController.MODE_TABLE,
+                    TransactionController.ISOLATION_SERIALIZABLE);
+
+            int num_rows_fetched = 0;
+            while ((num_rows_fetched = 
+                        base_group_fetch_cc.fetchNextGroup(
+                            row_array, 
+                            old_row_location_array, 
+                            new_row_location_array)) != 0)
+            {
+                if (num_indexes > 0)
+                {
+                    for (int row = 0; row < num_rows_fetched; row++)
+                    {
+                        for (int index = 0; index < num_indexes; index++)
+                        {
+                            if (SanityManager.DEBUG)
+                            {
+                                SanityManager.DEBUG_PRINT("OnlineCompress", 
+                                    "calling fixIndex, row = " + row + 
+                                    "; index = " + index);
+                                SanityManager.DEBUG_PRINT("OnlineCompress", 
+                                    "before fixIndex call index_col_map = " + 
+                                    index_col_map);
+                                SanityManager.DEBUG_PRINT("OnlineCompress", 
+                                    "before fixIndex call index_col_map[0] = " + 
+                                    index_col_map[0]);
+                            }
+                            fixIndex(
+                                row_array[row],
+                                index_row[index],
+                                old_row_location_array[row],
+                                new_row_location_array[row],
+                                index_cc[index],
+                                index_scan[index],
+                                index_col_map[index]);
+                        }
+                    }
+                }
+            }
+                       
+               }
+               catch (StandardException se)
+               {
+                       throw PublicAPI.wrapStandardException(se);
+               }
+               finally
+               {
+            try
+            {
+                /* Clean up before we leave */
+                if (base_group_fetch_cc != null)
+                {
+                    base_group_fetch_cc.close();
+                    base_group_fetch_cc = null;
+                }
+
+                if (num_indexes > 0)
+                {
+                    for (int i = 0; i < num_indexes; i++)
+                    {
+                        if (index_scan != null && index_scan[i] != null)
+                        {
+                            index_scan[i].close();
+                            index_scan[i] = null;
+                        }
+                        if (index_cc != null && index_cc[i] != null)
+                        {
+                            index_cc[i].close();
+                            index_cc[i] = null;
+                        }
+                    }
+                }
+
+                if (nested_tc != null)
+                {
+                    nested_tc.destroy();
+                }
+
+            }
+            catch (StandardException se)
+            {
+                throw PublicAPI.wrapStandardException(se);
+            }
+               }
+
+               return;
+       }
+
+       private static void purgeRows(
+    String                  schemaName, 
+    String                  tableName,
+    DataDictionary          data_dictionary,
+    TransactionController   tc)
+        throws StandardException
+       {
+        SchemaDescriptor sd = 
+            data_dictionary.getSchemaDescriptor(schemaName, tc, true);
+        TableDescriptor  td = 
+            data_dictionary.getTableDescriptor(tableName, sd);
+
+        if (td == null)
+        {
+            throw StandardException.newException(
+                SQLState.LANG_TABLE_NOT_FOUND, 
+                schemaName + "." + tableName);
+        }
+
+        /* Skip views */
+        if (td.getTableType() != TableDescriptor.VIEW_TYPE)
+        {
+
+            ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+            {
+                ConglomerateDescriptor cd = conglom_descriptors[cd_idx];
+
+                tc.purgeConglomerate(cd.getConglomerateNumber());
+            }
+        }
+
+        return;
+    }
+
+       private static void truncateEnd(
+    String                  schemaName, 
+    String                  tableName,
+    DataDictionary          data_dictionary,
+    TransactionController   tc)
+        throws StandardException
+       {
+        SchemaDescriptor sd = 
+            data_dictionary.getSchemaDescriptor(schemaName, tc, true);
+        TableDescriptor  td = 
+            data_dictionary.getTableDescriptor(tableName, sd);
+
+        if (td == null)
+        {
+            throw StandardException.newException(
+                SQLState.LANG_TABLE_NOT_FOUND, 
+                schemaName + "." + tableName);
+        }
+
+        /* Skip views */
+        if (td.getTableType() != TableDescriptor.VIEW_TYPE)
+        {
+            ConglomerateDescriptor[] conglom_descriptors = 
+                td.getConglomerateDescriptors();
+
+            for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+            {
+                ConglomerateDescriptor cd = conglom_descriptors[cd_idx];
+
+                tc.compressConglomerate(cd.getConglomerateNumber());
+            }
+        }
+
+        return;
+    }
+
+    private static void setup_indexes(
+    TransactionController       tc,
+    TableDescriptor             td,
+    int[][]                     index_col_map,
+    ScanController[]            index_scan,
+    ConglomerateController[]    index_cc,
+    DataValueDescriptor[][]     index_row)
+               throws StandardException
+    {
+
+        // Initialize the following 3 arrays which will be used to update
+        // each index as the scan moves rows about the heap as part of
+        // the compress:
+        //     index_col_map - map location of index cols in the base row, ie.
+        //                     index_col_map[0] is column offset of 1st key
+        //                     column in base row.  All offsets are 0 based.
+        //     index_scan - open ScanController used to delete old index row
+        //     index_cc   - open ConglomerateController used to insert new row
+
+        ConglomerateDescriptor[] conglom_descriptors =
+                td.getConglomerateDescriptors();
+
+
+        int index_idx = 0;
+        for (int cd_idx = 0; cd_idx < conglom_descriptors.length; cd_idx++)
+        {
+            if (SanityManager.DEBUG)
+                SanityManager.DEBUG_PRINT("OnlineCompress", 
+                    "setup loop: " + cd_idx);
+            ConglomerateDescriptor index_cd = conglom_descriptors[cd_idx];
+
+            if (!index_cd.isIndex())
+            {
+                // skip the heap descriptor entry
+                continue;
+            }
+            if (SanityManager.DEBUG)
+                SanityManager.DEBUG_PRINT("OnlineCompress", 
+                    "setup loop 1: " + cd_idx);
+
+            // ScanControllers are used to delete old index row
+            index_scan[index_idx] = 
+                tc.openScan(
+                    index_cd.getConglomerateNumber(),
+                    true,      // hold
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_TABLE,
+                    TransactionController.ISOLATION_SERIALIZABLE,
+                    null,      // full row is retrieved, so that full row
+                               // can be used for start/stop keys
+                    null,      // startKeyValue - will be reset with reopenScan()
+                    0,         // startSearchOperator - reset with reopenScan()
+                    null,      // qualifier
+                    null,      // stopKeyValue  - will be reset with reopenScan()
+                    0);        // stopSearchOperator - reset with reopenScan()
+
+            // ConglomerateControllers are used to insert new index row
+            index_cc[index_idx] = 
+                tc.openConglomerate(
+                    index_cd.getConglomerateNumber(),
+                    true,  // hold
+                    TransactionController.OPENMODE_FORUPDATE,
+                    TransactionController.MODE_TABLE,
+                    TransactionController.ISOLATION_SERIALIZABLE);
+
+            // build column map to allow index row to be built from base row
+            int[] baseColumnPositions   = 
+                index_cd.getIndexDescriptor().baseColumnPositions();
+            int[] zero_based_map        = 
+                new int[baseColumnPositions.length];
+
+            for (int i = 0; i < baseColumnPositions.length; i++)
+            {
+                zero_based_map[i] = baseColumnPositions[i] - 1; 
+            }
+
+            index_col_map[index_idx] = zero_based_map;
+
+            // build row array to delete from index and insert into index
+            //     length is length of column map + 1 for RowLocation.
+            index_row[index_idx] = 
+                new DataValueDescriptor[baseColumnPositions.length + 1];
+
+            index_idx++;
+        }
+
+        return;
+    }
+
+
+    /**
+     * Delete old index row and insert new index row in input index.
+     * <p>
+     * Builds the index row from the base row using index_col_map, positions
+     * on and deletes the entry with the old row location, then inserts an
+     * entry with the new row location.
+     *
+     * @param base_row      base table row that was moved.
+     * @param index_row     template used to build the index row.
+     * @param old_row_loc   location of the row before the move.
+     * @param new_row_loc   location of the row after the move.
+     * @param index_cc      open ConglomerateController on the index, used
+     *                      for the insert.
+     * @param index_scan    open ScanController on the index, used to find
+     *                      and delete the old entry.
+     * @param index_col_map zero based map from index columns to base columns.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    private static void fixIndex(
+    DataValueDescriptor[]   base_row,
+    DataValueDescriptor[]   index_row,
+    RowLocation             old_row_loc,
+    RowLocation             new_row_loc,
+    ConglomerateController  index_cc,
+    ScanController          index_scan,
+    int[]                   index_col_map)
+        throws StandardException
+    {
+        if (SanityManager.DEBUG)
+        {
+            // baseColumnPositions should describe all columns in index row
+            // except for the final column, which is the RowLocation.
+            SanityManager.ASSERT(index_col_map != null);
+            SanityManager.ASSERT(index_row != null);
+            SanityManager.ASSERT(
+                (index_col_map.length == (index_row.length - 1)));
+        }
+
+        // create the index row to delete from the base row, using the map
+        for (int index = 0; index < index_col_map.length; index++)
+        {
+            index_row[index] = base_row[index_col_map[index]];
+        }
+        // last column in the index is the RowLocation
+        index_row[index_row.length - 1] = old_row_loc;
+
+        if (SanityManager.DEBUG)
+        {
+            SanityManager.DEBUG_PRINT(
+                "OnlineCompress", 
+                "row before delete = " + RowUtil.toString(index_row));
+        }
+
+        // position the scan for the delete, the scan should already be open.
+        // This is done by setting start scan to full key, GE and stop scan
+        // to full key, GT.
+        index_scan.reopenScan(
+            index_row,
+            ScanController.GE,
+            (Qualifier[][]) null,
+            index_row,
+            ScanController.GT);
+
+        // position the scan, serious problem if scan does not find the row.
+        if (index_scan.next())
+        {
+            index_scan.delete();
+        }
+        else
+        {
+            // Didn't find the row we wanted to delete.
+            if (SanityManager.DEBUG)
+            {
+                SanityManager.THROWASSERT(
+                    "Did not find row to delete." +
+                    "base_row = " + RowUtil.toString(base_row) +
+                    "index_row = " + RowUtil.toString(index_row));
+            }
+        }
+
+        // insert the new index row into the conglomerate
+        index_row[index_row.length - 1] = new_row_loc;
+
+        if (SanityManager.DEBUG)
+        {
+            SanityManager.DEBUG_PRINT(
+                "OnlineCompress", 
+                "row before insert = " + RowUtil.toString(index_row));
+        }
+        index_cc.insert(index_row);
+
+        return;
+    }
+}
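The index-row construction at the heart of fixIndex() above can be sketched standalone. In the sketch below, plain Object arrays are illustrative stand-ins for Derby's DataValueDescriptor and RowLocation types; the class name is invented:

```java
// Standalone sketch of the column-map logic used by fixIndex(): copy the
// indexed base columns through the zero-based column map, then append the
// row's location as the last index column.
public class IndexRowSketch {
    public static Object[] buildIndexRow(
        Object[] baseRow, int[] zeroBasedMap, Object rowLoc) {
        Object[] indexRow = new Object[zeroBasedMap.length + 1];
        for (int i = 0; i < zeroBasedMap.length; i++) {
            // map entry i holds the 0-based base column for index column i
            indexRow[i] = baseRow[zeroBasedMap[i]];
        }
        // last column in the index row is the RowLocation
        indexRow[indexRow.length - 1] = rowLoc;
        return indexRow;
    }

    public static void main(String[] args) {
        Object[] base = {10, "bb", "cc"};
        // index on base columns 3 and 1 (one-based) -> zero-based map {2, 0}
        Object[] ixRow = buildIndexRow(base, new int[]{2, 0}, "page 4, slot 1");
        System.out.println(java.util.Arrays.toString(ixRow));
    }
}
```

To delete the old entry, the last column is set to the old location and the scan is repositioned; for the insert, it is overwritten with the new location, exactly as the method above does.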
Index: java/engine/org/apache/derby/iapi/store/access/conglomerate/Conglomerate.java
===================================================================
--- java/engine/org/apache/derby/iapi/store/access/conglomerate/Conglomerate.java	(revision 160236)
+++ java/engine/org/apache/derby/iapi/store/access/conglomerate/Conglomerate.java	(working copy)
@@ -312,7 +312,7 @@
     int                             lock_level,
     LockingPolicy                   locking_policy,
     int                             isolation_level,
-       FormatableBitSet                                            scanColumnList,
+    FormatableBitSet                                scanColumnList,
     DataValueDescriptor[]              startKeyValue,
     int                             startSearchOperator,
     Qualifier                       qualifier[][],
@@ -323,6 +323,61 @@
         throws StandardException;
 
     /**
+     * Online compress table.
+     *
+     * Returns a ScanManager which can be used to move rows
+     * around in a table, creating a block of free pages at the end of the
+     * table.  The process of executing the scan will move rows from the end 
+     * of the table toward the beginning.  The GroupFetchScanController will
+     * return the old row location, the new row location, and the actual data 
+     * of any row moved.  Note that this scan only returns moved rows, not an
+     * entire set of rows; the scan is designed specifically to be
+     * used by either an explicit user call of the
+     * SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE() procedure, or internal
+     * background calls to compress the table.
+     *
+     * The old and new row locations are returned so that the caller can
+     * update any indexes necessary.
+     *
+     * This scan always returns all columns of the row.
+     * 
+     * All inputs work exactly as in openScan().  The return is 
+     * a GroupFetchScanController, which only allows fetches of groups
+     * of rows from the conglomerate.
+     * <p>
+     * Note that not all Conglomerates implement defragmentConglomerate(); 
+     * currently only the Heap conglomerate implements this scan.
+     *
+     * @return The ScanManager to be used to fetch the rows.
+     *
+     * @param xact_manager          the TransactionManager for the operation.
+     * @param rawtran               the raw store transaction.
+     * @param hold                  see openScan()
+     * @param open_mode             see openScan()
+     * @param lock_level            see openScan()
+     * @param locking_policy        see openScan()
+     * @param isolation_level       see openScan()
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    ScanManager defragmentConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran,
+    boolean                         hold,
+    int                             open_mode,
+    int                             lock_level,
+    LockingPolicy                   locking_policy,
+    int                             isolation_level)
+        throws StandardException;
+
+    /**
+     * Purge committed deleted rows from the conglomerate.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    void purgeConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException;
+
+    /**
+     * Return free space at the end of the conglomerate back to the OS.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    void compressConglomerate(
+    TransactionManager              xact_manager,
+    Transaction                     rawtran)
+        throws StandardException;
+
+    /**
      * Return an open StoreCostController for the conglomerate.
      * <p>
      * Return an open StoreCostController which can be used to ask about 
Index: java/engine/org/apache/derby/iapi/store/access/GroupFetchScanController.java
===================================================================
--- java/engine/org/apache/derby/iapi/store/access/GroupFetchScanController.java	(revision 160236)
+++ java/engine/org/apache/derby/iapi/store/access/GroupFetchScanController.java	(working copy)
@@ -137,6 +137,12 @@
     RowLocation[]           rowloc_array)
         throws StandardException;
 
+    /**
+     * Fetch the next group of rows moved by a defragment scan, along with
+     * each row's old and new RowLocation, so that the caller can update
+     * any associated indexes.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    public int fetchNextGroup(
+    DataValueDescriptor[][] row_array,
+    RowLocation[]           oldrowloc_array,
+    RowLocation[]           newrowloc_array)
+        throws StandardException;
+
     /**
     Move to the next position in the scan.  If this is the first
     call to next(), the position is set to the first row.
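The new fetchNextGroup() variant is meant to be drained in batches, fixing index entries as moved rows come back. A hypothetical consumer is sketched below; the MovedRowScan interface and String locations are stand-ins for the real Derby types, not the actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of draining a defragment scan: fetch moved rows in
// groups and repair each index entry from the old location to the new one.
public class GroupFetchSketch {
    // stand-in for a GroupFetchScanController restricted to moved rows
    interface MovedRowScan {
        int fetchNextGroup(Object[][] rows, String[] oldLocs, String[] newLocs);
    }

    static List<String> drain(MovedRowScan scan, int batchSize) {
        Object[][] rows    = new Object[batchSize][];
        String[]   oldLocs = new String[batchSize];
        String[]   newLocs = new String[batchSize];
        List<String> fixes = new ArrayList<>();
        int n;
        while ((n = scan.fetchNextGroup(rows, oldLocs, newLocs)) > 0) {
            for (int i = 0; i < n; i++) {
                // a real caller would call fixIndex() once per index here
                fixes.add(oldLocs[i] + "->" + newLocs[i]);
            }
        }
        return fixes;
    }

    public static void main(String[] args) {
        // fake scan returning two moved rows, then end-of-scan
        MovedRowScan scan = new MovedRowScan() {
            int calls = 0;
            public int fetchNextGroup(Object[][] r, String[] o, String[] nw) {
                if (calls++ > 0) return 0;
                o[0] = "9,1"; nw[0] = "2,5"; r[0] = new Object[]{1};
                o[1] = "9,2"; nw[1] = "2,6"; r[1] = new Object[]{2};
                return 2;
            }
        };
        System.out.println(drain(scan, 16)); // [9,1->2,5, 9,2->2,6]
    }
}
```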
Index: java/engine/org/apache/derby/iapi/store/access/TransactionController.java
===================================================================
--- java/engine/org/apache/derby/iapi/store/access/TransactionController.java	(revision 160236)
+++ java/engine/org/apache/derby/iapi/store/access/TransactionController.java	(working copy)
@@ -1191,8 +1191,82 @@
                int                             stopSearchOperator)
                        throws StandardException;
 
+    /**
+     * Compress table in place.
+     * <p>
+     * Returns a GroupFetchScanController which can be used to move rows
+     * around in a table, creating a block of free pages at the end of the
+     * table.  The process will move rows from the end of the table toward
+     * the beginning.  The GroupFetchScanController will return the 
+     * old row location, the new row location, and the actual data of any
+     * row moved.  Note that this scan only returns moved rows, not an
+     * entire set of rows; the scan is designed specifically to be
+     * used by either an explicit user call of the
+     * SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE() procedure, or internal
+     * background calls to compress the table.
+     *
+     * The old and new row locations are returned so that the caller can
+     * update any indexes necessary.
+     *
+     * This scan always returns all columns of the row.
+     * 
+     * All inputs work exactly as in openScan().  The return is 
+     * a GroupFetchScanController, which only allows fetches of groups
+     * of rows from the conglomerate.
+     * <p>
+     *
+     * @return The GroupFetchScanController to be used to fetch the rows.
+     *
+     * @param conglomId             see openScan()
+     * @param online                whether to run the defragment online.
+     * @param hold                  see openScan()
+     * @param open_mode             see openScan()
+     * @param lock_level            see openScan()
+     * @param isolation_level       see openScan()
+     *
+     * @exception  StandardException  Standard exception policy.
+     *
+     * @see ScanController
+     * @see GroupFetchScanController
+     **/
+    GroupFetchScanController defragmentConglomerate(
+    long                            conglomId,
+    boolean                         online,
+    boolean                         hold,
+    int                             open_mode,
+    int                             lock_level,
+    int                             isolation_level)
+        throws StandardException;
 
     /**
+     * Purge all committed deleted rows from the conglomerate.
+     * <p>
+     * This call will purge committed deleted rows from the conglomerate,
+     * that space will be available for future inserts into the conglomerate.
+     * <p>
+     *
+     * @param conglomId Id of the conglomerate to purge.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    void purgeConglomerate(long conglomId)
+        throws StandardException;
+
+    /**
+     * Return free space from the conglomerate back to the OS.
+     * <p>
+     * Returns free space from the conglomerate back to the OS.  Currently
+     * only the sequential free pages at the "end" of the conglomerate can
+     * be returned to the OS.
+     * <p>
+     *
+     * @param conglomId Id of the conglomerate to compress.
+     *
+     * @exception  StandardException  Standard exception policy.
+     **/
+    void compressConglomerate(long conglomId)
+        throws StandardException;
+
+
+    /**
      * Retrieve the maximum value row in an ordered conglomerate.
      * <p>
      * Returns true and fetches the rightmost non-null row of an ordered 
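The compressConglomerate() step above can only return space to the OS from the contiguous run of free pages at the very end of the file; free pages in the middle are reusable by later inserts but not truncatable. That computation can be sketched in isolation (illustrative only, not Derby's allocation code):

```java
// Illustrative sketch: count the pages compressConglomerate() could truncate,
// i.e. the contiguous run of free pages at the end of the container.
public class TailTruncateSketch {
    static int truncatablePages(boolean[] pageIsFree) {
        int n = 0;
        // walk backward from the last page while pages are free
        for (int i = pageIsFree.length - 1; i >= 0 && pageIsFree[i]; i--) {
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // pages: used, free, used, free, free -> only the last 2 can go
        boolean[] map = {false, true, false, true, true};
        System.out.println(truncatablePages(map)); // prints 2
    }
}
```

This is why the defragment phase runs first: it moves rows off the tail pages so that the free run at the end is as long as possible before truncation.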
Index: java/engine/org/apache/derby/iapi/store/raw/Page.java
===================================================================
--- java/engine/org/apache/derby/iapi/store/raw/Page.java       (revision 160236)
+++ java/engine/org/apache/derby/iapi/store/raw/Page.java       (working copy)
@@ -227,10 +227,10 @@
      * @see LockingPolicy
      **/
        RecordHandle fetch(
-    RecordHandle            handle, 
-    Object[]   row, 
-    FormatableBitSet                 validColumns, 
-    boolean                 forUpdate)
+    RecordHandle        handle, 
+    Object[]            row, 
+    FormatableBitSet    validColumns, 
+    boolean             forUpdate)
                throws StandardException;
 
     /**
@@ -283,9 +283,9 @@
         * @exception  StandardException  Standard exception policy.
      **/
        boolean spaceForInsert(
-    Object[]   row, 
-    FormatableBitSet                 validColumns, 
-    int                     overflowThreshold) 
+    Object[]            row, 
+    FormatableBitSet    validColumns, 
+    int                 overflowThreshold) 
         throws StandardException;
 
     /**
@@ -311,10 +311,10 @@
      * @exception StandardException Row cannot fit on the page or row is null.
      **/
        RecordHandle insert(
-    Object[]   row, 
-    FormatableBitSet                 validColumns,
-    byte                    insertFlag, 
-    int                     overflowThreshold)
+    Object[]            row, 
+    FormatableBitSet    validColumns,
+    byte                insertFlag, 
+    int                 overflowThreshold)
                throws StandardException;
 
        /**
@@ -350,8 +350,8 @@
         * @exception  StandardException  Standard exception policy.
      **/
        boolean update(
-    RecordHandle            handle, 
-    Object[]   row, 
+    RecordHandle        handle, 
+    Object[]            row, 
     FormatableBitSet                 validColumns)
                throws StandardException;
 
@@ -393,6 +393,55 @@
                throws StandardException;
 
     /**
+     * Move record to a page toward the beginning of the file.
+     * <p>
+     * As part of compressing the table records need to be moved from the
+     * end of the file toward the beginning of the file.  Only the 
+     * contiguous set of free pages at the very end of the file can
+     * be given back to the OS.  This call is used to purge the row from
+     * the current page, insert it into a previous page, and return the
+     * new row location.
+     * <p>
+     * The interface is optimized to work on a number of rows at a time, 
+     * optimally processing all rows on the page at once.  The call will 
+     * process either all rows on the page, or the number of slots in the
+     * input arrays - whichever is smaller.
+     * <B>Locking Policy</B>
+     * <P>
+     * MUST be called with the table locked; no locks are requested.  Because
+     * it is called with table locks the call will go ahead and purge any
+     * row which is marked deleted.  It will also use purge rather than
+     * delete to remove the old row after it moves it to a new page.  This
+     * is ok since the table lock ensures that no other transaction will
+     * use space on the table before this transaction commits.
+     *
+     * <BR>
+     * A page latch on the new page will be requested and released.
+     *
+     * @param slot           Slot of the first row on this page to move.
+     * @param row            Template used to read the rows being moved.
+     * @param old_handle     An array to be filled in by the call with the 
+     *                       old handles of all rows moved.
+     * @param new_handle     An array to be filled in by the call with the 
+     *                       new handles of all rows moved.
+     *
+     * @return the number of rows processed.
+     *
+     * @exception StandardException    Standard Cloudscape error policy
+     *
+     * @see LockingPolicy
+     **/
+       public int moveRecordForCompressAtSlot(
+    int             slot,
+    Object[]        row,
+    RecordHandle[]  old_handle,
+    RecordHandle[]  new_handle)
+               throws StandardException;
+
+    /**
      * Fetch the number of fields in a record. 
      * <p>
      * <B>Locking Policy</B>
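moveRecordForCompressAtSlot() supports the defragment pass, which shifts rows from pages at the end of the file into earlier pages so that free pages collect at the tail. A toy in-memory model of that pass is below; the page capacity, row type, and class name are invented for illustration (the real store works on latched pages and purges the old copy under a table lock):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the defragment pass: repeatedly move rows off the last
// non-empty page into the earliest page with room.
public class DefragSketch {
    static final int PAGE_CAPACITY = 4; // invented; real pages are byte-sized

    static void defragment(List<List<String>> pages) {
        for (int last = pages.size() - 1; last > 0; last--) {
            List<String> src = pages.get(last);
            for (int t = 0; t < last && !src.isEmpty(); t++) {
                List<String> dst = pages.get(t);
                while (dst.size() < PAGE_CAPACITY && !src.isEmpty()) {
                    // move one row toward the front of the file
                    dst.add(src.remove(src.size() - 1));
                }
            }
            if (!src.isEmpty()) break; // no room up front; stop moving
        }
    }

    public static void main(String[] args) {
        List<List<String>> pages = new ArrayList<>();
        pages.add(new ArrayList<>(List.of("a", "b")));
        pages.add(new ArrayList<>(List.of("c")));
        pages.add(new ArrayList<>(List.of("d")));
        defragment(pages);
        System.out.println(pages); // [[a, b, d, c], [], []]
    }
}
```

In the real code, every move also returns the old and new handles so the caller can fix up each index, which is what fixIndex() in this patch does.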
Index: java/engine/org/apache/derby/iapi/store/raw/ContainerHandle.java
===================================================================
--- java/engine/org/apache/derby/iapi/store/raw/ContainerHandle.java	(revision 160236)
+++ java/engine/org/apache/derby/iapi/store/raw/ContainerHandle.java	(working copy)
@@ -194,6 +194,18 @@
        public Page addPage() throws StandardException;
 
 
+       /**
+               Release free space to the OS.
+               <P>
+        Where possible, release any free space to the operating system.  This
+        will usually mean releasing any free pages located at the end of the
+        file using the java truncate() interface.
+
+               @exception StandardException    Standard Cloudscape error policy
+       */
+       public void compressContainer() throws StandardException;
+
+
        /**     
                Add an empty page to the container and obtain exclusive access 
to it.
                <P>
@@ -416,6 +428,11 @@
        public Page getPageForInsert(int flag) 
                 throws StandardException;
 
+	/**
+		Get a page to move a row onto as part of compressing the container.
+
+		@exception StandardException	Standard Cloudscape error policy
+	*/
+	public Page getPageForCompress(
+    int     flag,
+    long    pageno) 
+                throws StandardException;
+
        // Try to get a page that is unfilled, 'unfill-ness' is defined by the
 	// page.  Since unfill-ness is defined by the page, the only thing RawStore
 	// guarentees about the page is that it has space for a a minimum sized
@@ -429,7 +446,6 @@
        public static final int GET_PAGE_UNFILLED = 0x1;
 
 
-
     /**
      * Request the system properties associated with a container. 
      * <p>
Index: java/engine/org/apache/derby/catalog/SystemProcedures.java
===================================================================
--- java/engine/org/apache/derby/catalog/SystemProcedures.java  (revision 160236)
+++ java/engine/org/apache/derby/catalog/SystemProcedures.java  (working copy)
@@ -728,6 +728,24 @@
         return(ret_val ? 1 : 0);
     }
 
+    public static void SYSCS_INPLACE_COMPRESS_TABLE(
+    String  schema,
+    String  tablename,
+    int     purgeRows,
+    int     defragmentRows,
+    int     truncateEnd)
+		throws SQLException
+    {
+        org.apache.derby.iapi.db.OnlineCompress.compressTable(
+            schema, 
+            tablename, 
+            (purgeRows == 1),
+            (defragmentRows == 1),
+            (truncateEnd == 1));
+
+        return;
+    }
+
     public static String SYSCS_GET_RUNTIMESTATISTICS()
                throws SQLException
     {
@@ -1044,14 +1062,3 @@
        }
        
 }
-
-
-
-
-
-
-
-
-
-
-
Index: java/testing/org/apache/derbyTesting/functionTests/tests/storetests/onlineCompressTable.sql
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/storetests/onlineCompressTable.sql	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/storetests/onlineCompressTable.sql	(revision 0)
@@ -0,0 +1,68 @@
+autocommit off;
+-- start with simple test, does the call work?
+create table test1 (a int);
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('APP', 'TEST1');
+
+-- expect failures schema/table does not exist
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE(null, 'test2');
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('APP', 'test2');
+
+-- non existent schema
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('doesnotexist', 'a');
+
+-- cleanup
+drop table test1;
+
+
+-- load up a table, delete most of its rows and then see what compress does.
+create table test1 (keycol int, a char(250), b char(250), c char(250), d char(250));
+insert into test1 values (1, 'a', 'b', 'c', 'd');
+insert into test1 (select keycol + 1, a, b, c, d from test1);
+insert into test1 (select keycol + 2, a, b, c, d from test1);
+insert into test1 (select keycol + 4, a, b, c, d from test1);
+insert into test1 (select keycol + 8, a, b, c, d from test1);
+insert into test1 (select keycol + 16, a, b, c, d from test1);
+insert into test1 (select keycol + 32, a, b, c, d from test1);
+insert into test1 (select keycol + 64, a, b, c, d from test1);
+insert into test1 (select keycol + 128, a, b, c, d from test1);
+insert into test1 (select keycol + 256, a, b, c, d from test1);
+
+create index test1_idx on test1(keycol);
+commit;
+
+select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+
+delete from test1 where keycol > 300;
+commit;
+delete from test1 where keycol < 100;
+commit;
+
+
+call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 1, 0, 0);
+
+select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+commit;
+
+-- call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 0, 1, 0);
+
+select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+
+call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 0, 0, 1);
+
+select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
Index: java/testing/org/apache/derbyTesting/functionTests/tests/storetests/BaseTest.java
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/storetests/BaseTest.java	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/storetests/BaseTest.java	(revision 0)
@@ -0,0 +1,80 @@
+/*
+
+   Derby - Class org.apache.derbyTesting.functionTests.tests.storetests.BaseTest
+
+   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.
+
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+*/
+
+package org.apache.derbyTesting.functionTests.tests.storetests;
+
+import org.apache.derby.tools.ij;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+
+
+public abstract class BaseTest
+{
+    abstract void testList(Connection conn) throws SQLException;
+
+    void runTests(String[] argv)
+        throws Throwable
+    {
+               ij.getPropertyArg(argv); 
+        Connection conn = ij.startJBMS();
+        System.out.println("conn from ij.startJBMS() = " + conn);
+        conn.setAutoCommit(false);
+
+        try
+        {
+            testList(conn);
+        }
+        catch (SQLException sqle)
+        {
+                       org.apache.derby.tools.JDBCDisplayUtil.ShowSQLException(
+                System.out, sqle);
+                       sqle.printStackTrace(System.out);
+               }
+    }
+
+    public BaseTest()
+    {
+    }
+
+    protected void beginTest(
+    Connection  conn,
+    String      str)
+        throws SQLException
+    {
+        log("Beginning test: " + str);
+        conn.commit();
+    }
+
+    protected void endTest(
+    Connection  conn,
+    String      str)
+        throws SQLException
+    {
+        conn.commit();
+        log("Ending test: " + str);
+    }
+    protected void log(String   str)
+    {
+        System.out.println(str);
+    }
+}
Index: java/testing/org/apache/derbyTesting/functionTests/tests/storetests/copyfiles.ant
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/storetests/copyfiles.ant	(revision 160236)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/storetests/copyfiles.ant	(working copy)
@@ -6,3 +6,4 @@
 st_b5772.sql
 derby94.sql
 derby94_derby.properties
+onlineCompressTable.sql
Index: java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_app.properties
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_app.properties	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_app.properties	(revision 0)
@@ -0,0 +1 @@
+usedefaults=true
Index: java/testing/org/apache/derbyTesting/functionTests/tests/store/copyfiles.ant
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/store/copyfiles.ant	(revision 160236)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/store/copyfiles.ant	(working copy)
@@ -123,3 +123,5 @@
 xaOffline1_sed.properties
 xab2354.sql
 xab2354_sed.properties
+OnlineCompressTest_app.properties
+OnlineCompressTest_derby.properties
Index: java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_derby.properties
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_derby.properties	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest_derby.properties	(revision 0)
@@ -0,0 +1 @@
+usedefaults=true
Index: java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/store/OnlineCompressTest.java	(revision 0)
@@ -0,0 +1,120 @@
+/*
+
+   Derby - Class org.apache.derbyTesting.functionTests.tests.store.OnlineCompressTest
+
+   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.
+
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+ */
+
+package org.apache.derbyTesting.functionTests.tests.store;
+
+import org.apache.derby.iapi.db.OnlineCompress;
+
+import java.sql.Connection;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+import org.apache.derby.tools.ij;
+
+
+public class OnlineCompressTest extends BaseTest
+{
+
+    OnlineCompressTest()
+    {
+    }
+
+
+    private void createAndLoadTable(
+    Connection  conn,
+    String      tblname)
+        throws SQLException
+    {
+        Statement s = conn.createStatement();
+
+        s.execute(
+            "create table " + tblname + 
+                "(keycol int, indcol1 int, indcol2 int, indcol3 int, " +
+                "data1 varchar(2000), data2 varchar(2000))");
+
+        PreparedStatement insert_stmt = 
+            conn.prepareStatement(
+                "insert into " + tblname + " values(?, ?, ?, ?, ?, ?)");
+
+        char[]  data1_data = new char[500];
+        char[]  data2_data = new char[500];
+
+        for (int i = 0; i < data1_data.length; i++)
+        {
+            data1_data[i] = 'a';
+            data2_data[i] = 'b';
+        }
+
+        String  data1_str = new String(data1_data);
+        String  data2_str = new String(data2_data);
+
+        for (int i = 0; i < 10000; i++)
+        {
+            insert_stmt.setInt(1, i);               // keycol
+            insert_stmt.setInt(2, i * 10);          // indcol1
+            insert_stmt.setInt(3, i * 100);         // indcol2
+            insert_stmt.setInt(4, -i);              // indcol3
+            insert_stmt.setString(5, data1_str);    // data1_data
+            insert_stmt.setString(6, data2_str);    // data2_data
+            insert_stmt.executeUpdate();
+        }
+
+        conn.commit();
+    }
+
+    private void test1(Connection conn) 
+        throws SQLException 
+    {
+        beginTest(conn, "test1");
+
+        createAndLoadTable(conn, "test1");
+
+        OnlineCompress.compressTable("APP", "TEST1", true, true, true);
+
+        endTest(conn, "test1");
+    }
+
+    public void testList(Connection conn)
+        throws SQLException
+    {
+        test1(conn);
+    }
+
+    public static void main(String[] argv) 
+        throws Throwable
+    {
+        OnlineCompressTest test = new OnlineCompressTest();
+
+               ij.getPropertyArg(argv); 
+        Connection conn = ij.startJBMS();
+        System.out.println("conn 2 from ij.startJBMS() = " + conn);
+        conn.setAutoCommit(false);
+
+        try
+        {
+            test.testList(conn);
+        }
+        catch (SQLException sqle)
+        {
+                       org.apache.derby.tools.JDBCDisplayUtil.ShowSQLException(
+                System.out, sqle);
+                       sqle.printStackTrace(System.out);
+               }
+    }
+}
Index: java/testing/org/apache/derbyTesting/functionTests/tests/store/BaseTest.java
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/tests/store/BaseTest.java	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/tests/store/BaseTest.java	(revision 0)
@@ -0,0 +1,78 @@
+/*
+
+   Derby - Class org.apache.derbyTesting.functionTests.tests.store.BaseTest
+
+   Copyright 2005 The Apache Software Foundation or its licensors, as applicable.
+
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+
+*/
+
+package org.apache.derbyTesting.functionTests.tests.store;
+
+import org.apache.derby.tools.ij;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+
+
+public abstract class BaseTest
+{
+    abstract void testList(Connection conn) throws SQLException;
+
+    void runTests(String[] argv)
+        throws Throwable
+    {
+               ij.getPropertyArg(argv); 
+        Connection conn = ij.startJBMS();
+        System.out.println("conn from ij.startJBMS() = " + conn);
+        conn.setAutoCommit(false);
+
+        try
+        {
+            testList(conn);
+        }
+        catch (SQLException sqle)
+        {
+                       org.apache.derby.tools.JDBCDisplayUtil.ShowSQLException(
+                System.out, sqle);
+                       sqle.printStackTrace(System.out);
+               }
+    }
+
+    public BaseTest()
+    {
+    }
+
+    protected void beginTest(
+    Connection  conn,
+    String      str)
+        throws SQLException
+    {
+        log("Beginning test: " + str);
+        conn.commit();
+    }
+
+    protected void endTest(
+    Connection  conn,
+    String      str)
+        throws SQLException
+    {
+        conn.commit();
+        log("Ending test: " + str);
+    }
+    protected void log(String   str)
+    {
+        System.out.println(str);
+    }
+}
Index: java/testing/org/apache/derbyTesting/functionTests/master/onlineCompressTable.out
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/master/onlineCompressTable.out	(revision 0)
+++ java/testing/org/apache/derbyTesting/functionTests/master/onlineCompressTable.out	(revision 0)
@@ -0,0 +1,88 @@
+ij> autocommit off;
+ij> -- start with simple test, does the call work?
+create table test1 (a int);
+0 rows inserted/updated/deleted
+ij> -- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('APP', 'TEST1');
+-- expect failures schema/table does not exist
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE(null, 'test2');
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('APP', 'test2');
+-- non existent schema
+-- call SYSCS_UTIL.SYSCS_ONLINE_COMPRESS_TABLE('doesnotexist', 'a');
+-- cleanup
+drop table test1;
+0 rows inserted/updated/deleted
+ij> -- load up a table, delete most of its rows and then see what compress does.
+create table test1 (keycol int, a char(250), b char(250), c char(250), d char(250));
+0 rows inserted/updated/deleted
+ij> insert into test1 values (1, 'a', 'b', 'c', 'd');
+1 row inserted/updated/deleted
+ij> insert into test1 (select keycol + 1, a, b, c, d from test1);
+1 row inserted/updated/deleted
+ij> insert into test1 (select keycol + 2, a, b, c, d from test1);
+2 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 4, a, b, c, d from test1);
+4 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 8, a, b, c, d from test1);
+8 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 16, a, b, c, d from test1);
+16 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 32, a, b, c, d from test1);
+32 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 64, a, b, c, d from test1);
+64 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 128, a, b, c, d from test1);
+128 rows inserted/updated/deleted
+ij> insert into test1 (select keycol + 256, a, b, c, d from test1);
+256 rows inserted/updated/deleted
+ij> create index test1_idx on test1(keycol);
+0 rows inserted/updated/deleted
+ij> commit;
+ij> select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+CONGLOMERATENAME                                                                                                                |ISIND&|NUMALLOCATEDPAGES   |NUMFREEPAGES        |PAGESIZE   |ESTIMSPACESAVING    
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+TEST1                                                                                                                           |0     |171                 |0                   |4096       |0                   
+TEST1_IDX                                                                                                                       |1     |4                   |0                   |4096       |0                   
+ij> delete from test1 where keycol > 300;
+212 rows inserted/updated/deleted
+ij> commit;
+ij> delete from test1 where keycol < 100;
+99 rows inserted/updated/deleted
+ij> commit;
+ij> call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 1, 0, 0);
+0 rows inserted/updated/deleted
+ij> select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+CONGLOMERATENAME                                                                                                                |ISIND&|NUMALLOCATEDPAGES   |NUMFREEPAGES        |PAGESIZE   |ESTIMSPACESAVING    
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+TEST1                                                                                                                           |0     |68                  |103                 |4096       |421888              
+TEST1_IDX                                                                                                                       |1     |4                   |0                   |4096       |0                   
+ij> commit;
+ij> -- call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 0, 1, 0);
+select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+CONGLOMERATENAME                                                                                                                |ISIND&|NUMALLOCATEDPAGES   |NUMFREEPAGES        |PAGESIZE   |ESTIMSPACESAVING    
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+TEST1                                                                                                                           |0     |68                  |103                 |4096       |421888              
+TEST1_IDX                                                                                                                       |1     |4                   |0                   |4096       |0                   
+ij> call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('APP', 'TEST1', 0, 0, 1);
+0 rows inserted/updated/deleted
+ij> select 
+    conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, 
+    estimspacesaving
+        from new org.apache.derby.diag.SpaceTable('TEST1') t
+                order by conglomeratename;
+CONGLOMERATENAME                                                                                                                |ISIND&|NUMALLOCATEDPAGES   |NUMFREEPAGES        |PAGESIZE   |ESTIMSPACESAVING    
+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+TEST1                                                                                                                           |0     |68                  |32                  |4096       |131072              
+TEST1_IDX                                                                                                                       |1     |4                   |0                   |4096       |0                   
+ij> 
Index: java/testing/org/apache/derbyTesting/functionTests/suites/storetests.runall
===================================================================
--- java/testing/org/apache/derbyTesting/functionTests/suites/storetests.runall	(revision 160236)
+++ java/testing/org/apache/derbyTesting/functionTests/suites/storetests.runall	(working copy)
@@ -1,4 +1,5 @@
 storetests/st_schema.sql
+storetests/onlineCompressTable.sql
 storetests/st_1.sql
 storetests/st_b5772.sql
 storetests/derby94.sql
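
The master output above exercises the new SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE procedure through its three SMALLINT flags, which select the purge, defragment, and truncate phases respectively. As an illustration only, here is a small sketch of building the CALL statement a client would prepare; the helper class and method names are hypothetical and not part of this patch:

```java
// Hypothetical helper (not part of this patch) illustrating the
// SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE signature exercised above:
// schema name, table name, then three SMALLINT flags selecting the
// purge, defragment, and truncate phases respectively.
public class CompressCallBuilder
{
    public static String buildCall(
    String  schema,
    String  table,
    boolean purgeRows,
    boolean defragmentRows,
    boolean truncateEnd)
    {
        return
            "call SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('" +
            schema + "', '" + table + "', " +
            (purgeRows      ? 1 : 0) + ", " +
            (defragmentRows ? 1 : 0) + ", " +
            (truncateEnd    ? 1 : 0) + ")";
    }

    public static void main(String[] argv)
    {
        // Matches the purge-only invocation in the .out file above.
        System.out.println(buildCall("APP", "TEST1", true, false, false));
    }
}
```

In the patch itself the procedure is invoked from ij, as in the master file; a JDBC client would pass the same string to Connection.prepareCall and execute it.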
