First of all, I'd try optimizing your app before writing a whole new
back-end, so I'd keep this discussion on the normal MySQL list.

For example, even if the indexes are big, try to index all the columns that
might be searched. Heck, start by indexing all of them. If the data is
read-only, try myisampack.
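
Roughly something like this (the table and column names below are only
placeholders for whatever your schema actually looks like):

    -- add indexes on the columns your queries filter or join on
    ALTER TABLE mytable
        ADD INDEX idx_user_id (user_id),
        ADD INDEX idx_created (created);

    # with the table not being written to, compress a read-only MyISAM
    # table and then rebuild its indexes
    shell> myisampack /path/to/datadir/mydb/mytable
    shell> myisamchk -rq /path/to/datadir/mydb/mytable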

Or, add the indexes above, switch this app to InnoDB, and be sure to
select only the columns you need. InnoDB does not read the whole record
if it does not need to, even in a table scan (which is the worst-case
scenario you are calculating).
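
For example (again with hypothetical names):

    -- TYPE=InnoDB on older servers, ENGINE=InnoDB on newer ones
    ALTER TABLE mytable ENGINE=InnoDB;

    -- pull only the columns you need instead of SELECT *
    SELECT user_id, created
    FROM mytable
    WHERE user_id = 12345;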

All your calculations assume a full table scan, which can be avoided by a
good choice of indexes and by using InnoDB to avoid whole-record retrieval.
Am I missing something? Pulling data from a small 14GB table should not be a
problem. My machine (<$10K) deals with 100GB of data and does 5,000 to
10,000 queries per second.
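
You can check whether a given query would really do a full table scan with
EXPLAIN (same placeholder names as above):

    -- type: ALL means a full table scan; ref/range/const mean an index is used
    EXPLAIN SELECT user_id, created FROM mytable WHERE user_id = 12345;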

Also, your reference to denormalization didn't make any sense to me. What
level of normal form are you expecting?

Sincerely,
Steven Roussey
http://Network54.com/ 



