Instead of 700 tables w/ 130 cols each, could you make 1 table with 131 
columns using the extra column to identify which table it would have been in 
if you actually had 700 tables?
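
For example, here is a rough sketch of what that consolidated table could
look like. The table and column names are made up, only two of the 130 data
columns are shown, and the syntax is generic SQL, so check it against your
R:base version before using it:

   -- GroupID records which of the 700 tables the row would have been in;
   -- Col001 and Col130 stand in for your original 130 columns
   CREATE TABLE AllData
     (GroupID  INTEGER NOT NULL,
      RecordID INTEGER NOT NULL,
      Col001   TEXT,
      Col130   TEXT)

   -- index the identifier so retrieving one group's rows stays fast
   CREATE INDEX GroupKey ON AllData (GroupID)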

When designing large tables of data, you need to know some basic information 
about your data set to figure out how best to deal with it.
1. How many rows do you expect to have?
2. How fast will new rows be added?
3. How often will the data be edited?
4. How often will rows be deleted?

If you are under 100,000 rows, there is no problem; don't waste any brain 
power on it, because R:base is going to handle everything just fine as long 
as you have decent indexes.

If you are between 100,000 and 2,000,000 rows, you need to use indexes and 
access data wisely.
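
"Accessing data wisely" here largely means restricting your queries to 
indexed integer columns. A quick illustration, using the made-up AllData 
table sketched above:

   -- good: the WHERE clause uses the indexed integer column
   SELECT RecordID, Col001 FROM AllData WHERE GroupID = 123

   -- slow: filtering on an unindexed text column forces a scan of every row
   SELECT RecordID, Col001 FROM AllData WHERE Col001 = 'something'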

Over 2,000,000, you need to manage the data programmatically: keep tables of 
index ranges, keep smaller tables of inserts, changes, and updates, and then 
update the big tables in batch.
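
One bare-bones way to sketch that batch approach (again with invented names 
and generic SQL, and leaving out the index-range bookkeeping):

   -- small staging table with the same layout as the big one
   CREATE TABLE NewRows
     (GroupID  INTEGER NOT NULL,
      RecordID INTEGER NOT NULL,
      Col001   TEXT,
      Col130   TEXT)

   -- day-to-day inserts go into the small table, which is cheap to maintain
   INSERT INTO NewRows (GroupID, RecordID, Col001, Col130)
     VALUES (12, 500001, 'abc', 'xyz')

   -- off hours, fold the batch into the big table and empty the staging table
   INSERT INTO AllData (GroupID, RecordID, Col001, Col130)
     SELECT GroupID, RecordID, Col001, Col130 FROM NewRows
   DELETE FROM NewRows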

Troy


===== Original Message from [EMAIL PROTECTED] at 9/26/01 9:46 am
>Hopefully, Troy and RJ and others will chip in here. They deal with
>mega-row tables.
>
>Your 700 identical tables create a lot of "unwieldiness," too. It is quite
>possible for you to create your very own autonumbering formula. In a
>control table, containing a row of information about each of your 700
>kinds of data, you can store an integer column representing a next
>number. Steve Hartmann has a great technique for retrieving and
>guaranteeing success at autonumbering that way.
>
>As long as your retrieval is always based on indexed integer columns,
>performance with a million rows need not be unwieldy. And you could
>subdivide your data into 3 or 4 groups to get the size smaller if you
>wanted to, and still have far far fewer programming and maintenance
>headaches than you will have with 700 tables.
>
>On Wed, 26 Sep 2001 08:19:35 -0700, Michael Young wrote:
>
>>
>>I guess at this point I could ask what is the maximum number of rows
>>that a table can accept and at what point does it become too unwieldy?
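
For what it's worth, the control-table autonumbering described above could 
be sketched like this. The names are invented, and this leaves out whatever 
Steve Hartmann's technique adds to guarantee success when several users 
grab numbers at once:

   -- one row per kind of data, holding the next available number
   CREATE TABLE NextNum
     (GroupID INTEGER NOT NULL,
      NextID  INTEGER NOT NULL)

   -- claim a number: bump the counter, then read back the value just claimed
   UPDATE NextNum SET NextID = NextID + 1 WHERE GroupID = 12
   SELECT NextID FROM NextNum WHERE GroupID = 12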
