Hello Antonio,

On 5/16/2014 9:49 AM, Antonio Fernández Pérez wrote:
Hi,

I'm writing to the list because I need your advice.

I'm working with a database that has some tables with a lot of rows; for
example, I have one table with 8 GB of data.


How you design your tables can have a huge impact on performance. If you are frequently running aggregation (GROUP BY ...) queries for reports, then you may need to pre-aggregate your data at various levels. Perhaps, as an example, you want to run a weekly report of how often each user logs in. Every day, you have an average of 100 users each logging in 10 times a day. That is 1,000 rows of connection information per day. Multiply that by a week and you have 7,000 rows; multiply it by a year and you have roughly 365,000 rows.

If you create a table or set of tables where you have already summarized your most frequently used data (for example: login, date, total minutes connected for that date, total number of connections for that day, ...), then you have reduced the work your weekly report has to do: for each user it drops from 70 detail rows to just 7 summary rows, and across all 100 users from 7,000 rows to 700. How much faster would that be?

Each day, you add the previous day's totals to your summary tables.
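
As a rough sketch only (the table and column names below are invented for illustration, they are not taken from your actual schema), the summary table and the daily rollup could look something like this:

CREATE TABLE daily_login_summary (
  login             VARCHAR(64)  NOT NULL,
  summary_date      DATE         NOT NULL,
  total_minutes     INT UNSIGNED NOT NULL DEFAULT 0,
  total_connections INT UNSIGNED NOT NULL DEFAULT 0,
  PRIMARY KEY (login, summary_date)
);

-- Run once a day (from cron or a MySQL EVENT) to roll up yesterday's detail rows.
INSERT INTO daily_login_summary (login, summary_date, total_minutes, total_connections)
SELECT login,
       DATE(connect_time),
       SUM(TIMESTAMPDIFF(MINUTE, connect_time, disconnect_time)),
       COUNT(*)
  FROM connection_log
 WHERE connect_time >= CURRENT_DATE - INTERVAL 1 DAY
   AND connect_time <  CURRENT_DATE
 GROUP BY login, DATE(connect_time);

-- The weekly report then reads a handful of summary rows per user
-- instead of thousands of detail rows:
SELECT login, SUM(total_connections), SUM(total_minutes)
  FROM daily_login_summary
 WHERE summary_date >= CURRENT_DATE - INTERVAL 7 DAY
 GROUP BY login;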

For more information on how to do this kind of pre-computation analysis and optimization, do some research on the topic of OLAP (online analytical processing):
http://en.wikipedia.org/wiki/OLAP

What can I do to keep working smoothly with this table?


Stop trying to use just the one table for everything?


My server uses a disk cabinet (storage array), and I think sharding and
partitioning are technologies that do not apply here. Working with this much
data means there are some slow queries, even with the correct indexes created.


Partition pruning is a very good way of improving query performance. The trick is to design your partitions to match the majority of your query patterns.

http://dev.mysql.com/doc/refman/5.6/en/partitioning.html
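
As a sketch of the idea (again, table and column names are made up for illustration): if most of your slow queries filter on a date or datetime column, partitioning on that column lets the optimizer skip whole partitions instead of scanning the entire 8 GB table.

CREATE TABLE connection_log (
  id              BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  login           VARCHAR(64)     NOT NULL,
  connect_time    DATETIME        NOT NULL,
  disconnect_time DATETIME        NULL,
  -- the partitioning column must be part of every unique key, including the PK
  PRIMARY KEY (id, connect_time)
)
PARTITION BY RANGE (YEAR(connect_time)) (
  PARTITION p2012 VALUES LESS THAN (2013),
  PARTITION p2013 VALUES LESS THAN (2014),
  PARTITION p2014 VALUES LESS THAN (2015),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- A query like this only reads partition p2014, not the whole table;
-- EXPLAIN PARTITIONS SELECT ... will show you which partitions are touched.
SELECT login, COUNT(*)
  FROM connection_log
 WHERE connect_time >= '2014-01-01'
   AND connect_time <  '2015-01-01'
 GROUP BY login;

A nice side effect is that expiring old data becomes ALTER TABLE ... DROP PARTITION, which is far cheaper than a huge DELETE.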


So, one option is to delete data, but I use a RADIUS system to authenticate
and authorize users connecting to the Internet, and for this reason I need to
work with almost all of the data. Another option is to increase the server resources.

Any ideas?


See above.


Thanks in advance.

Regards,

Antonio.


--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN
