Is there a generally accepted rule of thumb for estimating the
performance cost of joins? I can't find one even in Date, but
intuitively it seems as though there must be one by now.

I'm thinking of something like: 'if not doing anything costs 0 and
reading one table costs 100, then joining a second table brings the
cost to 150, a third to 225, etc.' (or 110, or 500, or whatever the
numbers/functions really are).
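
(To spell out what I mean by a rule like that: in those made-up numbers
each additional table multiplies the running cost by 1.5, so roughly

    cost(n tables) = 100 * 1.5^(n-1)

giving 100, 150, 225, ...  I have no idea whether the real curve is
multiplicative, additive, or something else entirely; that's exactly
what I'm asking.)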

I'm working on a large information system (probably tens of millions
of records) and am guessing that reads will outnumber writes by a
factor of at least 10. High performance from the human user's point
of view is an absolute requirement for success.

Rather than store category and other 'flavoring' strings repetitively
in the information records themselves, I've stored keys into separate
lookup tables. But that means every lookup joins 3-7 tables. How much
am I paying for that?
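
For concreteness, a typical read ends up looking something like this
(table and column names are invented just to show the shape of the
query; the joined tables are the small 'flavoring' lookup tables):

    SELECT i.id,
           i.title,
           c.category_name,
           s.status_name,
           r.region_name
    FROM   information i
    JOIN   categories  c ON c.category_id = i.category_id
    JOIN   statuses    s ON s.status_id   = i.status_id
    JOIN   regions     r ON r.region_id   = i.region_id
    WHERE  i.id = 12345;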

Adding the flavoring strings themselves to each record would increase
the overall storage requirement by some large fraction, though less
than 50%. Since disk space is relatively cheap, that shouldn't be a
problem. But some of the flavoring is guaranteed to change at an
unpredictable rate, which would mean repeatedly sweeping through every
information record to update it.
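
In other words, with the lookup tables a change like renaming a
category is a single-row update, but with the strings copied into every
record it would look more like the second statement below (made-up
names again):

    -- normalized: one row changes
    UPDATE categories
    SET    category_name = 'New name'
    WHERE  category_id = 7;

    -- denormalized: potentially millions of rows change
    UPDATE information
    SET    category_name = 'New name'
    WHERE  category_name = 'Old name';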

I'm new to databases, but I'm sure this is a 'been there done that'
problem for the more experienced people on the list!

Thanks for any insights!
Margaret
