Hello,

I have _a lot_ of HTTP log data to throw into a MySQL database (currently over
1.5 billion rows).  New data is coming in all the time, so I don't want to lock
myself into one set of big tables that are over 100 gigs each.  I'd rather
arrange the data into smaller chunks, then merge the tables together so it
looks like one big table when my users run SQL queries.  My biggest concern
right now is speed, and the most common search on these tables will be
'count' queries.  Currently a 'count' query takes over a minute to run, and
I'd love to get that number down.
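To make that concrete, here's a rough sketch of what I have in mind.  The
table and column names below are just placeholders, and I'm only showing a
few of the real fields:

  -- One MyISAM table per chunk of log data (e.g. per month); the column
  -- and key definitions have to be identical across all the chunks.
  CREATE TABLE http_log_2002_01 (
      hit_time DATETIME     NOT NULL,
      host     VARCHAR(64)  NOT NULL,
      url      VARCHAR(255) NOT NULL,
      status   SMALLINT     NOT NULL,
      KEY (hit_time),
      KEY (host),
      KEY (status)
  ) TYPE=MyISAM;

  CREATE TABLE http_log_2002_02 (
      hit_time DATETIME     NOT NULL,
      host     VARCHAR(64)  NOT NULL,
      url      VARCHAR(255) NOT NULL,
      status   SMALLINT     NOT NULL,
      KEY (hit_time),
      KEY (host),
      KEY (status)
  ) TYPE=MyISAM;

  -- A MERGE table over the chunks, so my users see one big table.
  CREATE TABLE http_log (
      hit_time DATETIME     NOT NULL,
      host     VARCHAR(64)  NOT NULL,
      url      VARCHAR(255) NOT NULL,
      status   SMALLINT     NOT NULL,
      KEY (hit_time),
      KEY (host),
      KEY (status)
  ) TYPE=MERGE UNION=(http_log_2002_01, http_log_2002_02);

The idea is that as new data comes in I can add another chunk table and
redefine the UNION list, instead of growing one 100+ gig table forever.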

Here are some questions:

1) I'll need multiple indexes on my tables (for instance, one table has 24
fields, 18 of which should have keys).  Should I index each table
separately, or do I index the big merged "virtual" table?
2) When creating the merged table, do I define keys, or do I not bother,
since the individual tables are already indexed?  Common sense says I should
be indexing once over ALL the tables, since the searches people perform will
always span multiple tables (see the example query after these questions).
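To make question 2 concrete, here's roughly the kind of 'count' query my
users run against the merged table (the column names and values are just
placeholders), along with the EXPLAIN I'd use to check which of the keys
declared on the underlying tables actually gets picked up:

  SELECT COUNT(*)
    FROM http_log
   WHERE status = 404
     AND hit_time >= '2002-01-01';

  -- Does the merged table use the keys on the chunk tables here?
  EXPLAIN
  SELECT COUNT(*)
    FROM http_log
   WHERE status = 404
     AND hit_time >= '2002-01-01';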

Any experiences you can throw my way regarding MERGE tables would be greatly
appreciated.

Thanks.
Brad


