Advice for working with big tables

2014-05-16 Thread Antonio Fernández Pérez
Hi,

I'm writing to the list because I need your advice.

I'm working with a database in which some tables have a lot of rows; for
example, one table holds 8GB of data.

What can I do to work smoothly with this table?

My server uses a disk cabinet, and I think sharding and partitioning
are technologies that don't apply here. Working with this much data means
there are some slow queries, even with the correct indexes created.

So, one option is to delete data, but I use a RADIUS system to authenticate
and authorize users connecting to the Internet, and for this reason I need
to work with almost all of the data. Another solution is to increase the
server's resources.

Any ideas?

Thanks in advance.

Regards,

Antonio.


Re: Advice for working with big tables

2014-05-16 Thread Reindl Harald

On 16.05.2014 15:49, Antonio Fernández Pérez wrote:
 I'm writing to the list because I need your advice.
 
 I'm working with a database in which some tables have a lot of rows; for
 example, one table holds 8GB of data.
 
 What can I do to work smoothly with this table?
 
 My server uses a disk cabinet, and I think sharding and partitioning
 are technologies that don't apply here. Working with this much data means
 there are some slow queries, even with the correct indexes created.
 
 So, one option is to delete data, but I use a RADIUS system to authenticate
 and authorize users connecting to the Internet, and for this reason I need
 to work with almost all of the data. Another solution is to increase the
 server's resources.

why in the world do you start a new thread?

* you started a similar one
* you got a response and nothing came back from you

now you can start the same thread every day and
get ignored really fast, or keep the discussion in
one place - honestly, with the information you give
nobody can really help you

http://www.catb.org/esr/faqs/smart-questions.html

 Original Message 
Subject: Big InnoDB tables, how can I work with them?
Date: Thu, 15 May 2014 14:26:34 +0200
From: Antonio Fernández Pérez antoniofernan...@fabergroup.es
To: mysql mysql@lists.mysql.com

I have some tables in my server's database that are very big and produce
some slow queries, even with correct indexes created.

For my application, it's necessary to keep all the data, because we run an
authentication process for RADIUS users (AAA protocol) to determine whether
or not a user can navigate the Internet (depending on the total time of all
his sessions).

So, with 8GB of data in one table, what would you advise? Fragmentation and
sharding are discarded because we are working with disk arrays, so they
don't apply. Another option is to delete rows, but in this case I can't. On
the other hand, maybe the only possible solution is to increase the
resources (RAM).

Any ideas?

 Original Message 
Subject: Re: Big InnoDB tables, how can I work with them?
Date: Thu, 15 May 2014 14:45:36 +0200
From: Reindl Harald h.rei...@thelounge.net
To: mysql@lists.mysql.com

On 15.05.2014 14:26, Antonio Fernández Pérez wrote:
 I have some tables in my server's database that are very big and produce
 some slow queries, even with correct indexes created.

 For my application, it's necessary to keep all the data, because we run an
 authentication process for RADIUS users (AAA protocol) to determine whether
 or not a user can navigate the Internet (depending on the total time of all
 his sessions).

 So, with 8GB of data in one table, what would you advise? Fragmentation and
 sharding are discarded because we are working with disk arrays, so they
 don't apply. Another option is to delete rows, but in this case I can't. On
 the other hand, maybe the only possible solution is to increase the
 resources (RAM).

rule of thumb is innodb_buffer_pool_size = database size, or at least
enough RAM that frequently accessed data always stays in the pool
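
A quick sketch for comparing those two numbers from inside MySQL (the
conversion to gigabytes is only for readability):

  -- current buffer pool size, in bytes
  SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

  -- total data + index size across all schemas, in gigabytes
  SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024 AS total_gb
  FROM information_schema.tables;

If total_gb is well above the buffer pool size, the server is forced to
read from disk for part of the working set.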





Re: Advice for working with big tables

2014-05-16 Thread Johan De Meersman
- Original Message -
 From: Antonio Fernández Pérez antoniofernan...@fabergroup.es
 Subject: Advice for working with big tables
 
 Hi,
 
 I'm writing to the list because I need your advice.
 
 I'm working with a database in which some tables have a lot of rows; for
 example, one table holds 8GB of data.
 
 What can I do to work smoothly with this table?

The two easiest points of optimisation are:
 * Make sure your queries can use indexes as much as possible.
   * Percona Toolkit has good stuff for figuring that out; have a look at
     pt-query-digest (see the sketch after this list).
 * Provide enough memory to your MySQL server to fit the entire database (and
   don't forget to allocate it to your InnoDB buffer pool if that applies :-) ).
   * Failing that, at least enough memory to keep the most frequently used
     dataset in memory.
   * Failing *that*, but here you're running into disk bottlenecks already,
     enough memory to keep your indexes in memory.
 * Faster disks for data that doesn't fit in memory (SSD, FusionIO etc.).
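
The usual input for pt-query-digest is the slow query log; a minimal sketch
for enabling it at runtime (the threshold and file path here are illustrative
assumptions, not values from this thread):

  SET GLOBAL slow_query_log = ON;
  SET GLOBAL long_query_time = 1;  -- log statements slower than 1 second (illustrative)
  SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- path is an assumption

pt-query-digest can then be pointed at that file to rank queries by total
time consumed.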

Memory required for the full dataset:
  SELECT SUM(data_length + index_length) FROM information_schema.tables;

Memory required for indexes:
  SELECT SUM(index_length) FROM information_schema.tables;
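
A combined version of those two queries, scoped to one schema and reported
in gigabytes (the schema name 'radius' is a hypothetical placeholder):

  SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024 AS total_gb,
         SUM(index_length) / 1024 / 1024 / 1024 AS index_gb
  FROM information_schema.tables
  WHERE table_schema = 'radius';  -- hypothetical schema name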

There's no easy way to figure out your active data set size; that depends on 
which queries are performed most often.


Depending on what type of disk cabinet you have, it may be possible to replace 
some drives with full SSDs, or with disks that have a built-in SSD cache.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.




Re: Advice for working with big tables

2014-05-16 Thread shawn l.green

Hello Antonio,

On 5/16/2014 9:49 AM, Antonio Fernández Pérez wrote:

Hi,

I'm writing to the list because I need your advice.

I'm working with a database in which some tables have a lot of rows; for
example, one table holds 8GB of data.



How you design your tables can have a huge impact on performance. If you 
are frequently doing aggregation (GROUP BY ...) queries for reports, then 
you may need to have your data pre-aggregated at various levels. 
Perhaps, as an example, you want to run a weekly report of how often 
someone logs in. Every day, you have an average of 100 users each 
logging in 10 times a day. That is 1000 rows of connection information 
per day. Multiply that by a week and you have 7000 rows; multiply that 
by a year and you have 365,000 rows (approx.).


If you create a table or set of tables where you have already summarized 
your most frequently used data (for example: login, date, total minutes 
connected for that date, total number of connections for that day, ...), 
then you have reduced how much work your weekly report needs to do from 
7000 rows to just 7. How much faster would that be?


Each day, you add the previous day's totals to your summary tables.
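
A minimal sketch of what that could look like; the table and column names
here are hypothetical, not something from Antonio's actual schema:

  -- hypothetical summary table, one row per user per day
  CREATE TABLE daily_login_summary (
    login             VARCHAR(64)  NOT NULL,
    login_date        DATE         NOT NULL,
    total_minutes     INT UNSIGNED NOT NULL,
    total_connections INT UNSIGNED NOT NULL,
    PRIMARY KEY (login, login_date)
  );

  -- nightly roll-up of the previous day's raw session rows
  -- (radius_sessions and its columns are assumptions for illustration)
  INSERT INTO daily_login_summary (login, login_date, total_minutes, total_connections)
  SELECT login, DATE(session_start), SUM(duration_minutes), COUNT(*)
  FROM radius_sessions
  WHERE session_start >= CURRENT_DATE - INTERVAL 1 DAY
    AND session_start <  CURRENT_DATE
  GROUP BY login, DATE(session_start);

The weekly report then reads 7 summary rows per user instead of thousands
of raw session rows.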

For more information on how to do this kind of pre-computation analysis 
and optimization, do some research on the topic of OLAP (online 
analytical processing)

http://en.wikipedia.org/wiki/OLAP


What can I do to work smoothly with this table?



Stop trying to use just the one table for everything?



My server uses a disk cabinet, and I think sharding and partitioning
are technologies that don't apply here. Working with this much data means
there are some slow queries, even with the correct indexes created.



Partition pruning is a very good way of improving query performance. The 
trick is to design your partitions to match the majority of your query 
patterns.


http://dev.mysql.com/doc/refman/5.6/en/partitioning.html
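
As a sketch of the idea, here is a RANGE partitioning layout keyed on
session start time. The table, column, and partition names are hypothetical;
note that in MySQL the partitioning column must be part of every unique key,
hence the composite primary key:

  CREATE TABLE radius_sessions (
    id               BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    login            VARCHAR(64)  NOT NULL,
    session_start    DATETIME     NOT NULL,
    duration_minutes INT UNSIGNED NOT NULL,
    PRIMARY KEY (id, session_start)
  )
  PARTITION BY RANGE (TO_DAYS(session_start)) (
    PARTITION p201404 VALUES LESS THAN (TO_DAYS('2014-05-01')),
    PARTITION p201405 VALUES LESS THAN (TO_DAYS('2014-06-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
  );

  -- a query that filters on session_start only touches the matching partitions
  SELECT login, SUM(duration_minutes)
  FROM radius_sessions
  WHERE session_start >= '2014-05-01' AND session_start < '2014-05-08'
  GROUP BY login;

Old partitions can also be dropped almost instantly, which is far cheaper
than a bulk DELETE if you ever do decide to age out data.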



So, one option is to delete data, but I use a RADIUS system to authenticate
and authorize users connecting to the Internet, and for this reason I need
to work with almost all of the data. Another solution is to increase the
server's resources.

Any ideas?



See above.



Thanks in advance.

Regards,

Antonio.



--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN
