Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-15 14:26 GMT+02:00 Antonio Fernández Pérez 
antoniofernan...@fabergroup.es:

 Hi,

 I have some tables in my server's database that are very big and produce
 slow queries, even with the correct indexes created.

 For my application, it's necessary to keep all the data, because we run an
 authentication process against RADIUS users (AAA protocol) to determine
 whether a user may access the Internet (depending on the total time of all
 his sessions).

 So, with 8GB of data in one table, what would you advise? Fragmentation
 and sharding are discarded because we are working with disk arrays, so
 they don't apply. Another option is to delete rows, but in this case I
 can't. On the other hand, maybe the only possible solution is to increase
 resources (RAM).


Adding more RAM will only save you for a few weeks or months, until the
data no longer fits in memory. You will face the same problem again soon
(if your data is still growing).
There will be a point where you just can't buy more and better hardware
(actually you kind of can, but you will spend loads of money and might end
up with nice servers doing nothing, because supporting more memory means
upgrading the motherboard too).

You should give your application some thought and start considering
NoSQL/table sharding/partitioning/archiving.
Maybe it is too late, but before the next hardware upgrade is needed, you
should have thought about a solution that lets you keep growing without
spending everything on hardware (unless you have unlimited money).
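For the partitioning/archiving route, a minimal sketch of what that could look like in MySQL (RANGE partitioning, available since 5.1; the table and column names here are hypothetical, loosely modelled on a FreeRADIUS accounting table, not the poster's actual schema):

```sql
-- Hypothetical accounting table partitioned by month of session start.
-- Note: every unique key (including the PK) must contain the partition
-- column, hence the composite primary key.
CREATE TABLE radacct_part (
    radacctid       BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    username        VARCHAR(64) NOT NULL,
    acctstarttime   DATETIME NOT NULL,
    acctsessiontime INT UNSIGNED NOT NULL DEFAULT 0,
    PRIMARY KEY (radacctid, acctstarttime),
    KEY idx_user_start (username, acctstarttime)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(acctstarttime)) (
    PARTITION p201404 VALUES LESS THAN (TO_DAYS('2014-05-01')),
    PARTITION p201405 VALUES LESS THAN (TO_DAYS('2014-06-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);

-- Archiving a whole month then becomes a cheap metadata operation
-- instead of a huge DELETE:
-- ALTER TABLE radacct_part DROP PARTITION p201404;
```

Queries that filter on `acctstarttime` also benefit from partition pruning, since only the relevant partitions are scanned.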

Good luck!
Manuel.


Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Johan De Meersman

- Original Message -
 From: Manuel Arostegui man...@tuenti.com
 Subject: Re: Big innodb tables, how can I work with them?
 
 noSQL/table sharding/partitioning/archiving.

I keep wondering how people believe that NoSQL solutions magically don't need 
RAM to work. Nearly all of them slow down to a crawl, many even worse than an 
SQL database, as soon as the full or working set no longer fits in memory, too.

Don't get me wrong - they have certain benefits and definite usecases, but it's 
time people stop presenting them as a magic bullet. They require understanding 
and work, just like any other technology.


-- 
Unhappiness is discouraged and will be corrected with kitten pictures.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:    http://lists.mysql.com/mysql



Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Manuel Arostegui
2014-05-19 11:49 GMT+02:00 Johan De Meersman vegiv...@tuxera.be:


 - Original Message -
  From: Manuel Arostegui man...@tuenti.com
  Subject: Re: Big innodb tables, how can I work with them?
 
  noSQL/table sharding/partitioning/archiving.

 I keep wondering how people believe that NoSQL solutions magically don't
 need RAM to work. Nearly all of them slow down to a crawl, many even worse
 than an SQL database, as soon as the full or working set no longer fits in
 memory, too.

 Don't get me wrong - they have certain benefits and definite usecases, but
 it's time people stop presenting them as a magic bullet. They require
 understanding and work, just like any other technology.


I was thinking of its distributed nature, as that might speed up reads :-)
We do have a huge NoSQL cluster here at work and it certainly needs a lot
of RAM.

Manuel


Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Andrew Moore
What kind of queries is this table serving? 8GB is not a huge amount of
data at all and IMO it's not enough to warrant sharding.
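As a sanity check before deciding anything, the table's actual on-disk footprint can be read from `information_schema` (a standard query; the schema and table names below are placeholders, and InnoDB's row/size statistics are estimates):

```sql
-- Approximate data + index size of one InnoDB table, in GB.
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS total_gb,
       table_rows
FROM information_schema.tables
WHERE table_schema = 'radius'    -- placeholder schema name
  AND table_name   = 'radacct';  -- placeholder table name
```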


On Thu, May 15, 2014 at 1:26 PM, Antonio Fernández Pérez 
antoniofernan...@fabergroup.es wrote:

 Hi,

 I have some tables in my server's database that are very big and produce
 slow queries, even with the correct indexes created.

 For my application, it's necessary to keep all the data, because we run an
 authentication process against RADIUS users (AAA protocol) to determine
 whether a user may access the Internet (depending on the total time of all
 his sessions).

 So, with 8GB of data in one table, what would you advise? Fragmentation
 and sharding are discarded because we are working with disk arrays, so
 they don't apply. Another option is to delete rows, but in this case I
 can't. On the other hand, maybe the only possible solution is to increase
 resources (RAM).

 Any ideas?

 Thanks in advance.

 Regards,

 Antonio.​



Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Antonio Fernández Pérez
​Hi,

Thanks for your replies.

In our case, we can't implement a NoSQL solution. That would require
modifying/checking our whole application and all its services (including
FreeRADIUS, which I'm not sure is even compatible).

Andrew, I have heard of people who have a lot more data than I do. I know
that MySQL supports this amount, but in this case, and thinking about the
future, I have this problem with my architecture: how can I grow my
database servers without deleting rows from the tables?
I have checked for slow queries and right now there aren't any.
These tables serve queries from the FreeRADIUS service. For example, SUMs,
COUNTs, normal SELECTs ... always with a WHERE condition.
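[For aggregates of that shape, a covering index can let InnoDB answer entirely from the index without touching table rows. A sketch, again with hypothetical FreeRADIUS-style names rather than the poster's real schema:]

```sql
-- Index covers both the WHERE column and the summed column, so the
-- query below can be answered from the index alone.
ALTER TABLE radacct
    ADD INDEX idx_user_sessiontime (username, acctsessiontime);

-- Total connected time for one user:
SELECT SUM(acctsessiontime) AS total_seconds
FROM radacct
WHERE username = 'someuser';

-- EXPLAIN should report "Using index" once the covering index exists.
```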

Excuse me, what is the meaning of IMO?

Thanks.

Regards,

Antonio.​


Re: Big innodb tables, how can I work with them?

2014-05-15 Thread Reindl Harald


On 15.05.2014 14:26, Antonio Fernández Pérez wrote:
 I have some tables in my server's database that are very big and produce
 slow queries, even with the correct indexes created.
 
 For my application, it's necessary to keep all the data, because we run an
 authentication process against RADIUS users (AAA protocol) to determine
 whether a user may access the Internet (depending on the total time of all
 his sessions).
 
 So, with 8GB of data in one table, what would you advise? Fragmentation
 and sharding are discarded because we are working with disk arrays, so
 they don't apply. Another option is to delete rows, but in this case I
 can't. On the other hand, maybe the only possible solution is to increase
 resources (RAM)

the rule of thumb is innodb_buffer_pool_size = database size, or at least
enough RAM that the frequently accessed data always stays in the pool
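A quick way to sanity-check that sizing is to compare the pool's logical reads against its disk reads (standard status counters; the setting itself lives in my.cnf and, on the MySQL versions of that era, changing it requires a restart):

```sql
-- Current pool size (bytes):
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

-- Logical read requests vs. reads that had to go to disk:
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';

-- If Innodb_buffer_pool_reads keeps climbing relative to
-- read_requests, the working set no longer fits in the pool.
```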


