On 16.05.2014 15:49, Antonio Fernández Pérez wrote:
> I am writing to the list because I need your advice.
> 
> I'm working with a database that has some tables with a lot of rows; for
> example, one table holds 8GB of data.
> 
> What can I do to work smoothly with this table?
> 
> My server uses a disk array, and I think sharding and partitioning are
> technologies that don't apply here. Working with this much data produces
> some slow queries, even with the correct indexes created.
> 
> So one option is to delete data, but I use a RADIUS system to authenticate
> and authorize users connecting to the Internet, so I need to work with
> almost all of the data. Another solution is to increase the server resources.

why in the world do you start a new thread?

* you started a similar one
* you got a response and nothing came back from you

now you can start the same thread every day and
get ignored really fast, or keep the discussion in
one thread - honestly, with the information you give
nobody can really help you

http://www.catb.org/esr/faqs/smart-questions.html

-------- Original Message --------
Subject: Big innodb tables, how can I work with them?
Date: Thu, 15 May 2014 14:26:34 +0200
From: Antonio Fernández Pérez <antoniofernan...@fabergroup.es>
To: mysql <mysql@lists.mysql.com>

My server's database has some tables that are too big and produce some slow
queries, even with the correct indexes created.

For my application it's necessary to keep all the data, because we run an
authentication process for RADIUS users (AAA protocol) to determine whether a
user can browse the Internet or not (depending on the total time of all his
sessions).

So, with 8GB of data in one table, what is your advice?
Partitioning and sharding are discarded because we are working with disk
arrays, so they don't apply. Another option is to delete rows, but in this
case I can't. On the other hand, maybe the only possible solution is to
increase the resources (RAM).

Any ideas?
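
For reference, a minimal sketch of how the table and index sizes can be
checked, and how to confirm that a slow query really uses an index. The
schema name 'radius' and the radacct/username/acctsessiontime names are only
assumptions (a FreeRADIUS-style layout), not taken from the original mails:

  -- approximate data and index size per table, in MB
  SELECT table_name,
         ROUND(data_length / 1024 / 1024)  AS data_mb,
         ROUND(index_length / 1024 / 1024) AS index_mb
  FROM information_schema.tables
  WHERE table_schema = 'radius'          -- assumed schema name
  ORDER BY data_length DESC;

  -- check that the optimizer uses the expected index for a typical query
  EXPLAIN SELECT SUM(acctsessiontime)    -- hypothetical accounting column
  FROM radacct                           -- hypothetical accounting table
  WHERE username = 'someuser';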

-------- Original Message --------
Subject: Re: Big innodb tables, how can I work with them?
Date: Thu, 15 May 2014 14:45:36 +0200
From: Reindl Harald <h.rei...@thelounge.net>
To: mysql@lists.mysql.com

On 15.05.2014 14:26, Antonio Fernández Pérez wrote:
> My server's database has some tables that are too big and produce some slow
> queries, even with the correct indexes created.
>
> For my application it's necessary to keep all the data, because we run an
> authentication process for RADIUS users (AAA protocol) to determine whether a
> user can browse the Internet or not (depending on the total time of all his
> sessions).
>
> So, with 8GB of data in one table, what is your advice?
> Partitioning and sharding are discarded because we are working with disk
> arrays, so they don't apply. Another option is to delete rows, but in this
> case I can't. On the other hand, maybe the only possible solution is to
> increase the resources (RAM).

rule of thumb is innodb_buffer_pool_size = database size, or at least
enough RAM that the frequently accessed data always stays in the buffer pool
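
A minimal sketch of how that rule of thumb can be checked and applied; the
10G value below is only an assumption and has to fit the amount of RAM
actually available on the server:

  -- current buffer pool size in GB
  SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;

  -- total InnoDB data + index size in GB, to compare against the pool
  SELECT ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 1) AS total_gb
  FROM information_schema.tables
  WHERE engine = 'InnoDB';

  # my.cnf (example value; needs a server restart on MySQL 5.5/5.6)
  [mysqld]
  innodb_buffer_pool_size = 10G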

