On 8/9/11 11:26 PM, Trevyn Meyer wrote:
> I have a client who expects to have some large tables.  10-100 GB of data.
>
> Do you have any suggestions how to handle those?  I suspect even if
> indexed you would have speed issues?
>
> I can imagine a rolling system, where tables are rolled like log files
> or something?  Any suggestions on how to handle 20 tables with lots of
> rows?  Break them into smaller tables?

It really depends on your data characteristics and data access 
patterns.  I've worked with tables that are 50-100GB and had no problem 
with them.  I'd recommend using InnoDB rather than MyISAM, with the 
innodb_file_per_table option turned on.
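As a sketch, that setting goes in my.cnf.  The option names below are real; the values are illustrative placeholders, not tuned recommendations for your workload:

```ini
[mysqld]
default_storage_engine  = InnoDB
innodb_file_per_table   = 1    # one .ibd file per table, instead of one shared tablespace
innodb_buffer_pool_size = 8G   # illustrative; size to fit your working set in RAM
```

With file-per-table on, dropping or truncating an old table actually returns disk space to the OS, which matters once tables reach tens of GB.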

If you're storing large blobs, then using a more recent version of 
InnoDB with the Barracuda file format enabled will help.  If you have 
control of the server software, I'd probably recommend Percona Server, 
a performance-enhanced, drop-in-compatible build of MySQL.
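For illustration, here's roughly how you'd enable Barracuda and use its compressed row format on a 5.5-era server.  The variable and clauses are real; the table and column names are made up:

```sql
-- Barracuda requires innodb_file_per_table to be on.
SET GLOBAL innodb_file_format = 'Barracuda';

CREATE TABLE documents (
  id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  body LONGBLOB
) ENGINE=InnoDB
  ROW_FORMAT=COMPRESSED   -- Barracuda-only; compresses pages on disk
  KEY_BLOCK_SIZE=8;       -- compressed page size in KB
```

ROW_FORMAT=DYNAMIC (also Barracuda-only) is another option; it stores long blobs fully off-page instead of keeping a 768-byte prefix in the row.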

If you generally access the more recent data, and only rarely access the 
older data, then rolling tables can make sense.  Even so, I'd generally 
prefer optimizing your hardware over sharding your data.  10-100GB is 
really not all that much data for MySQL, and you should be able to 
handle it just fine with the proper hardware.
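If you do go the rolling route, one common pattern looks like this (table names are hypothetical).  RENAME TABLE swaps both tables in a single atomic step, so clients never see the log table missing:

```sql
-- Rotate a log table, much like rotating a log file.
CREATE TABLE access_log_new LIKE access_log;

RENAME TABLE access_log     TO access_log_2011_08,
             access_log_new TO access_log;

-- Later, archiving or dropping access_log_2011_08 frees its disk
-- space immediately (with innodb_file_per_table on).
```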

If you'd like, I do MySQL database consulting, and I could come take an 
hour or two to look at your schema and queries in more detail, and give 
a better recommendation.  Just contact me off-list.

Thanks!

Steve Meyers


_______________________________________________

UPHPU mailing list
[email protected]
http://uphpu.org/mailman/listinfo/uphpu
IRC: #uphpu on irc.freenode.net