Hi Chuck,

I have several databases with a heavily accessed table in excess of 2 million records.  The table has one column with a PK and three other columns with indexes.  The table is updated every time a transaction is processed, and it is searched frequently (all day long).  There are also times when updates are performed on 300k rows while normal processing is occurring.  No reports of slowdowns, although I am sure there is some degradation in performance.  The database files do not grow significantly during a week of production.

All are running V8, Build 8.0.22.31102.

 

Question: Are there any triggers defined for any of your tables?

 

John

 

From: [email protected] [mailto:[email protected]] On Behalf Of
[email protected]
Sent: Monday, November 23, 2009 5:39 PM
To: RBASE-L Mailing List
Subject: [RBASE-L] - Re: Large Tables & Indexes

 


Thanks for the reply James. 

There are NO NOTE datatype columns; they are all Text / Date / Integer.

I need to be able to search quickly by all 9 of the columns that I have
indexed. Searching 3 million rows without an index is not usable. Again,
all of my indexes are single-column indexes. Should I be combining them
into a multiple-column index?

I'm sure I could break some of the data out into other related tables, but my
users are used to doing quick and dirty command-line queries and being able
to look at all the info they need to see.
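For what it's worth, the single- vs multi-column question can be illustrated outside R:BASE. Below is a minimal sketch using SQLite as a stand-in (table and index names are made up, and R:BASE's planner may behave differently): a composite index on (acct, tdate) can satisfy a query that constrains both columns, but it generally does not help a query that constrains only the trailing column, which is why collapsing 9 single-column indexes into one composite index would hurt searches on the non-leading columns.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trans (id INTEGER PRIMARY KEY, acct TEXT, tdate TEXT, amt INTEGER)")
# Hypothetical composite index: leading column 'acct', then 'tdate'.
con.execute("CREATE INDEX ix_acct_tdate ON trans (acct, tdate)")

def plan(sql):
    # Return the plan-detail strings SQLite reports for this query.
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

# Constrains the leading column: the composite index is used (SEARCH ... USING INDEX).
p1 = plan("SELECT * FROM trans WHERE acct = 'A1' AND tdate = '2009-11-23'")

# Constrains only the trailing column: SQLite falls back to a full table scan.
p2 = plan("SELECT * FROM trans WHERE tdate = '2009-11-23'")

print(p1)
print(p2)
```

So a composite index only replaces a single-column index for queries that filter on its leading column; command-line queries on any of the 9 columns individually would still want their own indexes.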






From: James Bentley <[email protected]>
Sent by: [email protected]
Sent: 11/23/2009 04:28 PM
To: [email protected] (RBASE-L Mailing List)
Reply-To: [email protected]
Subject: [RBASE-L] - Re: Large Tables & Indexes

Chuck,

Are the columns in the table in question all of fixed length, or are there
some variable-length columns, say with a NOTE datatype?

Also, 9 indexes seem awfully high. What are you indexing?

Perhaps if you post your CREATE TABLE and CREATE INDEX statements, plus any
appropriate ALTER TABLE statements, we can get a better idea of what is involved.

Under version 8, my impression is that you should not be having the problems
you are experiencing. If the table definition is all fixed-length columns,
then an update to an existing row should not change the size of the RX2 file.

Your symptoms suggest that you have one or more NOTE datatype columns in that
table definition.
  
Jim Bentley
American Celiac Society
[email protected]
tel: 1-504-737-3293 



  _____  

From: "[email protected]" <[email protected]>
To: RBASE-L Mailing List <[email protected]>
Sent: Mon, November 23, 2009 2:47:06 PM
Subject: [RBASE-L] - Large Tables & Indexes


Rbase 8.0.21.31001 

We have a very large table, three million plus records and growing. We have
indexes on 9 columns. The database is about 8 Gig.  We do update processing
that can easily touch 100K rows across multiple columns. Via trial and error
I learned that I must drop the indexes on the updated columns or processing
takes forever. After the processing I re-create the indexes. I have found
that I need to pack after each DROP / ALTER TABLE ADD, or the database grows
to about 17 Gig, where 8.0 seems to self-destruct (I get disk errors and
can't save the table).  Am I missing something? Is there a way to update
large tables without dropping the indexes?  Has anyone else experienced 8.0
'blowing up' at a little over 17 Gig? My indexes are all separate indexes.
What will happen to performance if I combine some of the indexes? Will it
save substantial space?
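The drop-then-recreate pattern described above is common in other engines as well. A hedged sketch, again using SQLite as a stand-in (table, index names, and row counts are invented for illustration): the secondary indexes are dropped before the bulk UPDATE so each changed row does not also force nine incremental index maintains, then each index is rebuilt in a single pass afterwards.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE big (id INTEGER PRIMARY KEY, status TEXT, batch INTEGER)")
con.executemany("INSERT INTO big (status, batch) VALUES (?, ?)",
                [("open", i % 10) for i in range(100_000)])
con.execute("CREATE INDEX ix_status ON big (status)")
con.execute("CREATE INDEX ix_batch ON big (batch)")
con.commit()

# 1. Drop the secondary indexes so the bulk update maintains only the table.
con.execute("DROP INDEX ix_status")
con.execute("DROP INDEX ix_batch")

# 2. Run the bulk update inside one transaction.
cur = con.execute("UPDATE big SET status = 'closed' WHERE batch = 3")
updated = cur.rowcount  # rows touched by the bulk update

# 3. Rebuild each index in one pass, which is usually far cheaper than
#    maintaining it row-by-row during the update.
con.execute("CREATE INDEX ix_status ON big (status)")
con.execute("CREATE INDEX ix_batch ON big (batch)")
con.commit()
```

Whether this wins in R:BASE specifically depends on its index maintenance costs, but the trade-off is the same shape: pay one bulk rebuild instead of per-row index updates.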
