Chuck,

Are the columns in the table in question all fixed length, or are there some 
variable-length columns, say with a NOTE datatype?

Also, 9 indexes seem awfully high. What are you indexing?

Perhaps if you post your CREATE TABLE and CREATE INDEX statements, along with 
any appropriate ALTER TABLE statements, we can get a better idea of what is 
involved.

Under version 8, my impression is that you should not have the problems you are 
experiencing. If the table definition consists of all fixed-length columns, 
then an update to an existing row should not change the size of the RX2 file.

Your symptoms seem to suggest that you have one or more NOTE datatype columns 
in that table definition.
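
For what it's worth, the drop / update / re-create / pack cycle you describe 
would look roughly like this (hypothetical table, column, and index names, 
sketched in generic SQL; adjust to the exact R:BASE syntax you are using):

    DROP INDEX OrdCust
    UPDATE Orders SET TotalDue = TotalDue + Adjustment WHERE OrderDate < 01/01/2009
    CREATE INDEX OrdCust ON Orders (CustID)
    PACK

If the table really is all fixed-length columns, you should not have to pack 
after every such cycle just to keep the file size under control.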

 Jim Bentley
American Celiac Society
[email protected]
tel: 1-504-737-3293




________________________________
From: "[email protected]" <[email protected]>
To: RBASE-L Mailing List <[email protected]>
Sent: Mon, November 23, 2009 2:47:06 PM
Subject: [RBASE-L] - Large Tables & Indexes

 
Rbase 8.0.21.31001 

We have a very large table: three million
plus records and growing. We have indexes on 9 columns. The database is
about 8 Gig. We do update processing that can easily touch 100 K
rows with multiple columns. Via trial and error I learned that I must drop
the indexes on the updated columns or processing takes forever. After the
processing I re-create the indexes. I have found that I need to pack
after each Drop / Alter Table Add, or the database gets to about 17 Gig,
where 8.0 seems to self-destruct (I get disk errors and can't save the
table). Am I missing something? Is there a way to update large tables
without dropping the indexes? Has anyone else experienced 8.0 'blowing
up' at a little over 17 Gig? My indexes are all separate indexes. What
will happen to performance if I combine some of the indexes? Will it
save substantial space?



      
