Thanks Larry,

This is good advice and very nearly what I currently plan to try (see my 
response to Bill Downall's post). I really won't know until I try it.

Best regards,
Mike Young

On Wed, 26 Sep 2001 09:24:00 -0700 (PDT), Lawrence Lustig wrote:

>> I thought about this but the table would end up with millions of rows
>> of data, and the only useful index would have at least 700 duplicates,
>> and I am not sure how efficient that would be.
>
>Mike:
>
>There's a saying in programming: "Premature
>optimization is the root of all evil".  Make up a test
>database and see what the response is like.  Don't
>make a messy database design unless you _know_ that
>you can't write your project with a clean and easily
>maintainable design.  Also, don't let your design be
>too heavily influenced by what the client's data looks
>like on paper, or any limitations they may have run
>across trying to represent them in Excel or Access.
>
>I have a database with a 1.5-million-row table
>against which complex GROUP BY SELECTs and JOINs are
>done in Oterro, and the response is perfectly
>acceptable for on-line querying.
>
>As for your indexing described above, it seems to me
>that if each row belongs to only one "subgroup" and
>each row has a unique column value within that
>subgroup, then a key built on those two columns will
>uniquely identify the row.
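
A minimal sketch of that two-column key, using Python's sqlite3 (the `readings` table and its columns are hypothetical stand-ins, and Oterro's SQL dialect may differ):

```python
import sqlite3

# Hypothetical schema: each row belongs to one subgroup, and its sequence
# number is unique only within that subgroup.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        subgroup_id INTEGER NOT NULL,
        seq_no      INTEGER NOT NULL,
        value       REAL,
        PRIMARY KEY (subgroup_id, seq_no)  -- the two-column key Larry describes
    )
""")
conn.execute("INSERT INTO readings VALUES (1, 1, 10.5)")
conn.execute("INSERT INTO readings VALUES (2, 1, 11.0)")  # same seq_no, different subgroup: allowed
try:
    conn.execute("INSERT INTO readings VALUES (1, 1, 12.0)")  # duplicate within subgroup: rejected
except sqlite3.IntegrityError:
    print("duplicate within subgroup rejected")
```

Duplicate seq_no values across different subgroups are fine; the pair as a whole is what must be unique.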
>
>> Speed is going to be important. Every day there are around 120 complex
>> calculations made for each table, with the results being stored in the
>> table. The report analysis is a whole other very complex operation.
>
>Consider the possibility of holding all the data in one
>table and then, if the calculations in the one table
>are too slow, doing the following:
>
>1) Identify the subset of records for the calculation.
>2) Put them in a temporary table.
>3) Perform the calculations.
>4) Update the master table.
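
Those four steps might be sketched in Python's sqlite3 as follows (the table, the subgroup filter, and the doubling "calculation" are all illustrative assumptions, not from the original thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master (subgroup_id INTEGER, seq_no INTEGER, value REAL,
                         result REAL, PRIMARY KEY (subgroup_id, seq_no));
    INSERT INTO master VALUES (1, 1, 10.0, NULL), (1, 2, 20.0, NULL),
                              (2, 1, 5.0, NULL);
""")

# 1) Identify the subset of records, and 2) put them in a temporary table.
conn.execute("""
    CREATE TEMP TABLE work AS
    SELECT subgroup_id, seq_no, value FROM master WHERE subgroup_id = 1
""")

# 3) Perform the calculation on the small table (doubling is a stand-in
#    for the real computation), and 4) update the master table.
conn.execute("""
    UPDATE master
    SET result = (SELECT value * 2 FROM work
                  WHERE work.subgroup_id = master.subgroup_id
                    AND work.seq_no = master.seq_no)
    WHERE subgroup_id = 1
""")
```

The point of the temp table is that the expensive calculation only touches the small subset, while the master table is updated once at the end.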
> 
>> I guess at this point I could ask: what is the maximum number of rows
>> that a table can accept, and at what point does it become too unwieldy?
>
>Putting the rows in a single table will never be
>nearly as unwieldy as trying to maintain 700 identical
>tables.  It might be a little _slower_ in terms of
>retrieval, but almost certainly not enough slower to
>outweigh the ease of programming.
>--
>Larry
>