----- Original Message -----
> From: "Antonio Fernández Pérez" <antoniofernan...@fabergroup.es>
> Subject: Advices for work with big tables
> 
> Hi,
> 
> I'm writing to the list because I need your advice.
> 
> I'm working with a database where some tables have a lot of rows; for
> example, I have a table with 8 GB of data.
> 
> How can I keep working smoothly with a table this large?

The two easiest points of optimisation are:
 * Make sure your queries can use indexes as much as possible
   * Percona Toolkit has good stuff for figuring that out. Have a look at 
pt-query-digest. (There's a small EXPLAIN example after this list.)
 * Provide enough memory to your MySQL server to fit the entire database (and 
don't forget to allocate it to your InnoDB buffer pool if that applies :-) )
   * failing that, at least enough memory to keep the most frequently used 
dataset in memory
     * failing *that* (though at this point you're already running into disk 
bottlenecks), enough memory to keep your indexes in memory
   * faster disks for data that doesn't fit in memory (SSD, FusionIO etc)
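
For example, a quick way to see whether a particular query can use an index (the 
table and column names here are purely illustrative):

    EXPLAIN
    SELECT order_id, total
    FROM   orders
    WHERE  customer_id = 42;

If the EXPLAIN output shows type = ALL and key = NULL, the query is doing a full 
table scan and would likely benefit from an index on customer_id. Running 
pt-query-digest against the slow query log tells you which queries are worth 
looking at first.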

Memory required for the full dataset:
    SELECT SUM(data_length + index_length) FROM information_schema.tables;
Memory required for indexes:
    SELECT SUM(index_length) FROM information_schema.tables;
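
As a rough sketch of how to put those numbers next to your InnoDB buffer pool 
(the 8 GB value at the end is just an example, and the online resize only works 
on MySQL 5.7+; on older versions set innodb_buffer_pool_size in my.cnf and 
restart):

    -- Current buffer pool size, in MB
    SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

    -- Full InnoDB dataset (data + indexes), in MB
    SELECT SUM(data_length + index_length) / 1024 / 1024 AS total_mb
    FROM information_schema.tables
    WHERE engine = 'InnoDB';

    -- Grow the buffer pool online (MySQL 5.7+ only)
    SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;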

There's no easy way to figure out your active data set size; that depends on 
which queries are performed most often.


Depending on what type of disk cabinet you have, it may be possible to replace 
some drives with full SSDs, or with disks that have a built-in SSD cache.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

