Mike,

I think this will work for you.  
Remember, whenever you access a huge table, you have to do it using a 
good, preferably unique, index.  (700 dups in a 1,000,000-row table is 
not bad.)  Another thing to keep in mind is that the IN () clause and 
the BETWEEN clause use indexes.
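
A minimal sketch, assuming a hypothetical BIG_TABLE with an index on 
its ID_NUM column:

    -- Both predicates can be satisfied through the ID_NUM index
    SELECT * FROM BIG_TABLE WHERE ID_NUM BETWEEN 1000 AND 1999;
    SELECT * FROM BIG_TABLE WHERE ID_NUM IN (1001, 1027, 1530);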

With that in mind, you may want to keep a few smaller tables that hold id 
number ranges for commonly accessed data sets.
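
For example (the table and column names here are just illustration):

    -- Small lookup table mapping a named data set to its id range
    CREATE TABLE ID_RANGES (
        SET_NAME  VARCHAR(30),
        LOW_ID    INTEGER,
        HIGH_ID   INTEGER
    );

    -- Pull one data set out of the big table via the indexed range
    SELECT B.*
      FROM BIG_TABLE B, ID_RANGES R
     WHERE R.SET_NAME = 'DAILY_RATES'
       AND B.ID_NUM BETWEEN R.LOW_ID AND R.HIGH_ID;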

Few updates and deletes on the big tables.  Good.

If you have to update information in the big table, ideally it will not be 
an indexed column that is changed.  Updating indexes on huge tables is a 
performance killer.
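
In other words, prefer the first kind of update below; the second kind 
forces the ID_NUM index to be maintained too (names hypothetical, and 
STATUS_FLAG is assumed to be unindexed):

    -- Cheap: STATUS_FLAG is not indexed, so only the data row changes
    UPDATE BIG_TABLE SET STATUS_FLAG = 'X' WHERE ID_NUM = 1027;

    -- Expensive: the row's entry in the ID_NUM index must move too
    UPDATE BIG_TABLE SET ID_NUM = 9999 WHERE ID_NUM = 1027;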

One thing I do on large tables, instead of deleting rows, is to change 
one of the primary linking columns to -1 and then delete the rows at a 
later time, in single user mode, just before a reload.  This way only 
one index has to be updated instead of all the indexes.
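
Roughly, with made-up table and column names:

    -- "Delete" by flagging: only the index on LINK_ID is touched
    UPDATE BIG_TABLE SET LINK_ID = -1 WHERE LINK_ID = 1027;

    -- Later, in single user mode just before the reload, do the real
    -- delete (the one that has to maintain every index) off-hours
    DELETE FROM BIG_TABLE WHERE LINK_ID = -1;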

Troy Sosamon

===== Original Message from [EMAIL PROTECTED] at 9/26/01 5:10 pm
>Thanks Bill,
>
>The whole thing is basically unwieldy but I need to figure it out. One row of
>data will be appended to each of the 700 tables every day. The data will
>almost never be edited. The calculations require many, many accesses to get
>previous data out of the table every day, hence the need for a good index.
>The "autonumber" column is a requirement as it helps cut down the number
>of accesses to the database by probably 70% - 90% during the calculations.
>This will be the primary index. I know how to do the autonumbering, but like
>I said, it will not/cannot be unique and in fact will have about 700 duplicates
>in the single table approach.
>
>I am also toying with the idea of perhaps keeping it all in one giant table and
>yanking out 1 "table" into a temporary table and creating the autonumber
>column there, run the calculations, and then update the master table. I may
>try this first and if it does not work out then I will start breaking the data out
>into smaller tables. I do not relish working with 10 databases or umpteen
>DBASE files.
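>
>Sketched out, the idea might look something like this (all names here
>are invented for illustration):
>
>    -- Copy one logical "table" out of the master into a work table
>    INSERT INTO WORK_TABLE (OBS_DATE, VALUE)
>    SELECT OBS_DATE, VALUE
>      FROM MASTER_TABLE
>     WHERE SET_ID = 42;
>
>    -- (run the calculations against WORK_TABLE, numbering the rows
>    -- there, then copy any changed rows back to MASTER_TABLE and
>    -- drop WORK_TABLE)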
>
>Best regards,
>Mike Young
>
>On Wed, 26 Sep 2001 10:46:43 -0500, Bill Downall wrote:
>
>>Hopefully, Troy and RJ and others will chip in here. They deal with
>>mega-row tables.
>>
>>Your 700 identical tables create a lot of "unwieldiness," too. It is quite
>>possible for you to create your very own autonumbering formula. In a
>>control table, containing a row of information about each of your 700
>>kinds of data, you can store an integer column representing a next
>>number. Steve Hartmann has a great technique for retrieving and
>>guaranteeing success at autonumbering that way.
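>>
>>In outline, that control-table approach might look like this (only a
>>sketch; this is not necessarily Steve's exact technique, and the
>>names are invented):
>>
>>    -- One row per kind of data, carrying its next free number
>>    CREATE TABLE CONTROL (
>>        KIND_ID   INTEGER NOT NULL,
>>        NEXT_NUM  INTEGER NOT NULL
>>    );
>>
>>    -- Claim a number: bump the counter first, then read it back.
>>    -- The update's row lock keeps a second session from handing
>>    -- out the same number until this transaction commits.
>>    UPDATE CONTROL SET NEXT_NUM = NEXT_NUM + 1 WHERE KIND_ID = 42;
>>    SELECT NEXT_NUM FROM CONTROL WHERE KIND_ID = 42;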
>>
>>As long as your retrieval is always based on indexed integer columns,
>>performance with a million rows need not be unwieldy. And you could
>>subdivide your data into 3 or 4 groups to get the size smaller if you
>>wanted to, and still have far far fewer programming and maintenance
>>headaches than you will have with 700 tables.
>>
>>On Wed, 26 Sep 2001 08:19:35 -0700, Michael Young wrote:
>>
>>>
>>>I guess at this point I could ask what is the maximum number of rows
>>>that a table can accept and at what point does it become too unwieldy?
>>
