I have been experimenting a bit with in-place compression on a large
table, but I have not been successful in getting it to give back space
to the file system.  I have a table that I have done a lot of bulk
changes to over time.  It is now much smaller than it was at its peak
size, but I am not able to reduce the size of the corresponding files
by in-place compression.  My last attempt:

ij> select conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, estimspacesaving from new org.apache.derby.diag.SpaceTable('T') t order by conglomeratename;
CONGLOMERATENAME |ISIND&|NUMALLOCATEDPAGES |NUMFREEPAGES |PAGESIZE |ESTIMSPACESAVING
------------------------------------------------------------------------------------
T                |0     |8671              |411071       |4096     |1683746816
X_I              |1     |757               |19535        |4096     |80015360

2 rows selected
ij> CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('OYSTEIN', 'T', 1, 1, 1);
0 rows inserted/updated/deleted
ij> select conglomeratename, isindex, numallocatedpages, numfreepages, pagesize, estimspacesaving from new org.apache.derby.diag.SpaceTable('T') t order by conglomeratename;
CONGLOMERATENAME |ISIND&|NUMALLOCATEDPAGES |NUMFREEPAGES |PAGESIZE |ESTIMSPACESAVING
------------------------------------------------------------------------------------
T                |0     |7843              |411899       |4096     |1687138304
X_I              |1     |757               |19479        |4096     |79785984

2 rows selected

As you can see, there are a lot of free pages, both before and after
the compress, and the total number of pages (allocated plus free) has
not changed.
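(For reference, the three trailing arguments to the compress call are
the purge, defragment and truncate-end flags of
SYSCS_INPLACE_COMPRESS_TABLE, and I enabled all of them.  The
parameter names in this sketch are taken from the Derby
documentation, not from my session:

-- SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE(SCHEMANAME, TABLENAME,
--     PURGE_ROWS, DEFRAGMENT_ROWS, TRUNCATE_END)
CALL SYSCS_UTIL.SYSCS_INPLACE_COMPRESS_TABLE('OYSTEIN', 'T', 1, 1, 1);

With TRUNCATE_END set to 1, I would expect unused space at the end of
the files to be returned to the file system.)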

Some more details:

The table has two columns: an integer column and a varchar(1024)
column.  All values in the varchar column are the same 640-character
string.  (The varchar column was added by alter table.)  I have a
unique index on the integer column.  I have populated the table by
repeatedly doubling its size with 'insert into T select i+k, c from
T', where k is the current number of records in the table.  I have
also deleted rows and reinserted the values, usually the rows with
the highest integer values, but at one point I executed 'delete from
T where random() > 0.25'.  I have also reinserted some of the rows
that were deleted by that query.  After the random deletions I have
tried several times to compress the table, with no luck.  Finally, I
deleted all rows with an integer value higher than 65536, but
compress still does not release pages, as shown above.
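Here is a rough sketch of that workload.  Only T, X_I, i, c and the
statements quoted above are from my actual session; the column types,
the exact order of the steps, and the abbreviated varchar value are
filled in as assumptions:

create table T (i int not null);
create unique index X_I on T (i);
insert into T (i) values (1);
alter table T add column c varchar(1024);
update T set c = 'aaa';  -- in the real table, the same 640-character string
-- repeated doubling; k is replaced by the current row count (1, 2, 4, ...):
insert into T select i + 1, c from T;
-- later bulk deletions (and partial reinsertions):
delete from T where random() > 0.25;
delete from T where i > 65536;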

--
Øystein
