Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Sergio Salvi
Bryan,

Can you send the output of show create table matrix? You've just sent 
the location table output, but your select command refers to a table 
called matrix.

[]s,
Sergio.

On Wed, 22 Dec 2004, Bryan Heitman wrote:

 I am experiencing extreme slowness performing a query in which 2 rows are 
 returned, hanging in the "Sending data" status.
 
 Performing an index-only query such as SELECT COUNT(*) is extremely quick, so 
 I know the only extra step is retrieving the data from the MYD.
 
 I am looking for thoughts on why this is slow and what can be done to speed 
 it up.  I find it unusual that it would take this long to simply grab 2 rows 
 from the MYD.  vmstat reports high reads and strace confirms pread()'s on 
 the MYD file.
 
 The only abnormality is my table size: the MYD is 26 gig and the MYI is about 30 
 gig.
 
 Test system details, tests were performed with no load.
 System: Redhat Linux 2.4.28
 Mysql: tested on versions 4.0.22 and latest 4.1 tree
 IDE (WD 2500 JB 8 mb buff) disk1 used for MYD (written contiguously at 
 beginning of disk)
 IDE (WD 2500 JB 8 mb buff) disk2 used for MYI
 dual xeon 2.4ghz
 1gb ddr266 mem
 
 Here are the query tests, details below:
 
 mysql> select count(*) from matrix where accountid = 11 and wordid = 72 and 
 position = 'Body' and date > now() - interval 10 day;
 +----------+
 | count(*) |
 +----------+
 |        2 |
 +----------+
 1 row in set (0.06 sec)
 
 
 mysql> select locationid from matrix where accountid = 11 and wordid = 72 
 and position = 'Body' and date > now() - interval 10 day;
 +------------+
 | locationid |
 +------------+
 |      47932 |
 |      29571 |
 +------------+
 2 rows in set (5 min 35.93 sec)
 
 mysql> explain select locationid from matrix where accountid = 11 and wordid 
 = 71 and position = 'Body' and date > now() - interval 10 day;
 +--------+------+---------------+-------+---------+-------------------+-------+-------------+
 | table  | type | possible_keys | key   | key_len | ref               | rows  | Extra       |
 +--------+------+---------------+-------+---------+-------------------+-------+-------------+
 | matrix | ref  | myKey         | myKey |       9 | const,const,const | 56909 | Using where |
 +--------+------+---------------+-------+---------+-------------------+-------+-------------+
 
 CREATE TABLE `location` (
   `LocationID` int(11) unsigned NOT NULL auto_increment,
   `ImapUID` int(11) unsigned NOT NULL default '0',
   `AccountID` int(11) unsigned NOT NULL default '0',
   `Date` timestamp(19) NOT NULL,
   `FromWho` tinyblob,
   `Subject` tinyblob,
   `SentTo` tinyblob,
   `mailbox` varchar(255) default NULL,
   `body` longblob,
   PRIMARY KEY  (`LocationID`),
   KEY `myKey` (`LocationID`,`AccountID`,`Date`)
 ) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 
   DATA DIRECTORY='/home/SQL/' INDEX DIRECTORY='/home/SQL/'
 
 
 
 Best regards,
 
 Bryan Heitman
 FuseMail Team 
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: scanning 2 rows slow index fast 26GB MyISAM

2004-12-23 Thread Sergio Salvi

On Thu, 23 Dec 2004, Bryan Heitman wrote:

 My mistake!  Here you go:

Ok, no prob :)

 
 CREATE TABLE `matrix` (
   `WordID` int(11) unsigned NOT NULL default '0',
   `LocationID` int(11) unsigned NOT NULL default '0',
   `Position` enum('Body','From','Subject','To','Mailbox','File') NOT NULL 
 default 'Body',
   `times` int(11) unsigned NOT NULL default '0',
   `MyOrder` int(11) unsigned NOT NULL default '0',
   `AccountID` int(11) unsigned NOT NULL default '0',
   `date` timestamp(19) NOT NULL,
   KEY `myKey` (`AccountID`,`WordID`,`Position`,`date`),
   KEY `myKey2` (`LocationID`)
 ) TYPE=MyISAM MAX_ROWS=10 AVG_ROW_LENGTH=300 
   DATA DIRECTORY='/home/imap/fuse3.disk2/SQL/search/'
 

Oops, I forgot to ask you to send the output of show index from matrix. 
But your index myKey looks good; you could still try changing the order of 
the fields in your key. Try creating an index with the fields ordered by 
the Cardinality value from the show index from matrix output 
(in ascending order).

Also, what happens if you don't specify the date value in your query? 
Check the time it takes and the explain output.
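
For example, something like this (the same query as above, just without the 
date condition), and compare its explain output and timing:

explain select locationid from matrix where accountid = 11 and wordid = 72 
and position = 'Body';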

Another thing I would suggest is to create (or replace) your index, 
trying all (or almost all) of the possible combinations regarding the 
order of the keys in your index. It helped me in some situations, and 
sometimes it's better for me to keep two indices with the same keys but 
different order, because of my different selects.
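
Just as a sketch -- the column order and the index name myKey3 below are only 
examples; take the real order from the Cardinality values you see:

alter table matrix add index myKey3 (`Position`, `AccountID`, `WordID`, `date`);

Then compare the explain output (and the actual query time) against the 
existing myKey before dropping anything.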

Hope that helps!

[]s,
Sergio


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: how to bind mysql to a certain ip-address?

2004-06-25 Thread Sergio Salvi
bind-address is what you need.

http://dev.mysql.com/doc/mysql/en/Server_options.html
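
For example, in my.cnf (the address below is just a placeholder for the 
interface you want mysqld to listen on):

[mysqld]
bind-address = 192.168.0.10

You can also pass it on the command line: mysqld_safe --bind-address=192.168.0.10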

[]s,
Sergio.

On Fri, 25 Jun 2004, Marten Lehmann wrote:

 Hello,
 
 I was reading through the documentation, but I couldn't find anything on 
 how to bind mysql to a certain ip-address. The only thing I found was 
 how to bind mysql to a different port:
 
 mysqld_safe --port=port_number
 
 But I don't want mysql to listen on all interfaces.
 
 Regards
 Marten
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: strange table speed issue

2004-06-24 Thread Sergio Salvi
Hi!

Can you send the output of the following command?

show index from ip2org;

It seems you don't have an index on both fields (even though it says you 
have a multi-field index)...

[]s,
Sergio Salvi.

On Thu, 24 Jun 2004, MerchantSense wrote:

 Hi - I need some help :)
 
  I set up a simple mysql table:
  
  mysql> describe ip2org;
  +----------+-------------+------+-----+---------+-------+
  | Field    | Type        | Null | Key | Default | Extra |
  +----------+-------------+------+-----+---------+-------+
  | ip_start | bigint(20)  |      | MUL | 0       |       |
  | ip_end   | bigint(20)  |      | MUL | 0       |       |
  | org      | varchar(80) |      |     |         |       |
  +----------+-------------+------+-----+---------+-------+
  3 rows in set (0.00 sec)
  
  It's got just over 2 million  rows, and is populated with integers.
 
  I do this:
  SELECT org from ip2org where ip_start<=1094799892 and 
  ip_end>=1094799892;
  
  and it takes 12 seconds...!
  
  all my indexes seem to be there:
  mysql> explain SELECT org from ip2org where ip_start<=1094799892 and
  ip_end>=1094799892;
  
 +--------+------+-----------------+------+---------+------+---------+------------+
 | table  | type | possible_keys   | key  | key_len | ref  | rows    | Extra      |
 +--------+------+-----------------+------+---------+------+---------+------------+
 | ip2org | ALL  | ip_start,ip_end | NULL |    NULL | NULL | 2943079 | where used |
 +--------+------+-----------------+------+---------+------+---------+------------+
 
  is this normal?  the fields are simple numbers, so it should be 
  lightning fast, no?  I realize it's a big table, but it's just numbers.
 this should come back instantly, no?
  thanks!
  
 
 
 
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: strange table speed issue

2004-06-24 Thread Sergio Salvi
On Fri, 25 Jun 2004, Michael Stassen wrote:

 
 Sergio Salvi wrote:
  Hi!
  
  Can you send the output of the following command?
  
  show index from ip2org;
  
  It seems you don't have an index on both fields (even though it says you 
  have a multi-field index)...
 
 MUL doesn't mean part of a multi-field index.  From the manual 
 (http://dev.mysql.com/doc/mysql/en/DESCRIBE.html): "The MUL value indicates 
 that multiple occurrences of a given value are allowed within the field."
 

Right! Thanks for noting that.

[]s,
Salvi.

 
 
  



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: strange table speed issue

2004-06-24 Thread Sergio Salvi
On Thu, 24 Jun 2004, MerchantSense wrote:

 Yes, but I now have multi-column indexes, but still have the problem.
 It's not using the indexes at all... very strange - some sort of bug?
 
 Look at this:
 mysql> explain SELECT org from ip2org use key (ip_start,ip_end) where
 ip_start<=1094799892 and ip_end>=1094799892;
 +--------+------+--------------------------+------+---------+------+---------+------------+
 | table  | type | possible_keys            | key  | key_len | ref  | rows    | Extra      |
 +--------+------+--------------------------+------+---------+------+---------+------------+
 | ip2org | ALL  | ip_start,ip_end,ip_end_2 | NULL |    NULL | NULL | 2943079 | where used |
 +--------+------+--------------------------+------+---------+------+---------+------------+
 
 With these indexes:
 

Oops, you forgot to paste it :)

Anyway, make sure you've used the following command to create this index:

alter table ip2org add index test (ip_start,ip_end);

[]s,
Salvi.

 
 
 -Original Message-
 From: Michael Stassen [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, June 24, 2004 10:37 PM
 To: Sergio Salvi
 Cc: MerchantSense; [EMAIL PROTECTED]
 Subject: Re: strange table speed issue
 
 
 Sergio Salvi wrote:
  Hi!
  
  Can you send the output of the following command?
  
  show index from ip2org;
  
   It seems you don't have an index on both fields (even though it says you 
   have a multi-field index)...
  
  MUL doesn't mean part of a multi-field index.  From the manual 
  (http://dev.mysql.com/doc/mysql/en/DESCRIBE.html): "The MUL value indicates 
  that multiple occurrences of a given value are allowed within the field."
 
 
 
  


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance issues

2004-06-22 Thread Sergio Salvi
Hi!

Can you give more details on the problematic inserts you're doing (table 
structure, indexes and insert command)?

Also, do you believe your queries would benefit from MySQL's query cache? 
Maybe it's worth upgrading to version 4 and using this feature, even if 
you allocate just a small amount of memory for it.
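
If you do upgrade, something along these lines in my.cnf would turn the 
query cache on (the size here is only illustrative, not a tuned value):

[mysqld]
set-variable    = query_cache_size=32M
query_cache_type = 1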

[]s,
Sergio Salvi.

On Tue, 22 Jun 2004, Aram Mirzadeh wrote:

 
 
 We have an internal SNMP monitoring system that is monitoring about 
 10,000 devices.  Each device is pinged, then polled for an average 
 of 25-30 elements.  Each of the ping results and elements is then 
 stored in a text file, then another system picks them up (NFS) and inserts 
 them into a MyISAM (3.23.54) database.  The data is kept for 13 weeks.
 
 The database system is a Xeon 4 way, 12GB of ram with a striped raid 
 array dedicated to the database files and its indexes and such.
 
 Every 5 minutes another process goes through the last set of inserts and 
 compares them for any threshold breaches, so the entire last set of data 
 is looked at.
 
 We're falling behind on the inserts because the system can't seem to 
 handle the amount of inserts, and the front end that generates the web pages 
 based on the previous records is bogging down.
 
 I have read the regular optimization papers and have done as much as I 
 felt safe.  Are there any huge database optimization papers?  Anything I 
 should be looking at?
 
 Here are the relevant my.cnf entries:
 
 set-variable= key_buffer=256M
 set-variable= max_allowed_packet=1M
 set-variable= table_cache=256
 set-variable= sort_buffer=1M
 set-variable= record_buffer=1M
 set-variable= myisam_sort_buffer_size=64M
 set-variable= thread_cache=8
 set-variable= thread_concurrency=8
 
 [mysql]
 no-auto-rehash
 
 [isamchk]
 set-variable= key_buffer=128M
 set-variable= sort_buffer=128M
 set-variable= read_buffer=2M
 set-variable= write_buffer=2M
 
 [myisamchk]
 set-variable= key_buffer=128M
 set-variable= sort_buffer=128M
 set-variable= read_buffer=2M
 set-variable= write_buffer=2M
 
 And here is my top output:
 
 MySQL on 1.2.3.4 (3.23.54)  up 2+06:36:05 [13:10:01]
   Queries: 191.5M   qps: 1022  Slow:   296.0  Se/In/Up/De(%): 22/10/62/00
   qps now:  147  Slow qps: 0.0  Threads:    9 (   2/   7)  50/00/00/00
   Key Efficiency: 96.7%  Bps in/out:  4.0k/ 1.6k   Now in/out:  6.2k/767.7k
 
 Any suggestions would be greatly appreciated.
 
 Aram
 
 





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do you deal with URL's?

2004-06-21 Thread Sergio Salvi
On Mon, 21 Jun 2004, David Blomstrom wrote:

 Suppose I have a field with the names of states,
 linked to their home pages:
 
 <a href="http://www.alabama.gov/">Alabama</a>
 <a href="http://access.wa.gov/">Washington</a>
 
 If I display this on a webpage, I'll get the names of
 the states, linked to their home pages. But is there a
 simple strategy that will let me display the names
 UNLINKED on another page, or do I have to create a
 second field that lists simple state names, with no
 URL's?

Doing it the way you're suggesting would create some problems:

- data redundancy (<a href="http:// and </a> on every field)
- storing two different pieces of information in a single field (state name and URL)
- the MySQL index would be filled with at least 8 bytes of useless data 
(<a href=), wasting more disk space
- no database-friendly way to search for states or even to sort them by 
name

Separate data from how it's displayed. I mean, create a table called 
"states" with the fields "state_id", "state_name" and "state_url". Put 
the data in the corresponding field:

state_id  state_name  state_url
1         Alabama     http://www.alabama.gov
2         Washington  http://access.wa.gov
...and so on

Then you can easily list only state names (ordered by state_name, for 
example), not showing the URL.
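
A minimal sketch of that table (the column types and sizes are just a 
suggestion):

CREATE TABLE states (
  state_id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  state_name VARCHAR(100) NOT NULL,
  state_url  VARCHAR(255) NOT NULL,
  PRIMARY KEY (state_id)
);

SELECT state_name FROM states ORDER BY state_name;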

[]s,
Salvi.

 
 Thanks.
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How do you deal with URL's?

2004-06-21 Thread Sergio Salvi

On Mon, 21 Jun 2004, David Blomstrom wrote:

 Sergio Salvi wrote:
 

[...]

 
 Separate data from how it's displayed. I mean, create
 a table called 
 states with the fields state_id, state_name and
 state_url. Put 
  the data in the corresponding field:
 
 state_id state_name state_url
 1 Alabama http://www.alabama.gov
 2 Washington http://access.wa.gov
 and so on
 
 OK, I see. But I assume you mean http://www, rather
 than http without the beginning tag - or do you insert
 that with PHP? It just occurred to me that I probably
 don't need to put </a> in the database, because I
 probably add that with PHP somehow.

I would add the <a href> HTML tag in the application and store only the 
URL in the database. Just do not remove the "http://" and assume in your 
application that every URL starts with that, because some of them may be 
"https://". If you are space usage paranoid (hehe), you could create an 
enum('y','n') field called use_ssl and remove the "http://" from every 
URL, setting the use_ssl field to 'y' when the URL is an "https://" one.
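
A rough sketch of that idea, on the "states" table from my earlier message 
(the names are just examples):

ALTER TABLE states ADD use_ssl ENUM('y','n') NOT NULL DEFAULT 'n';

-- store state_url without the leading http:// or https://, and rebuild
-- the full URL in the application, e.g. in PHP:
--   $full_url = ($use_ssl == 'y' ? 'https://' : 'http://') . $state_url;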

Do not assume you'll always display the URLs in an HTML output. What if 
you decide to send it by plain-text mail? Or output in XML to export it to 
another system?

HTH.

[]s,
Salvi

 
 
 --- Bob Ramsey [EMAIL PROTECTED] wrote:
  Personally, I'd split that into 2 fields.  I think
  that's a better way 
  to model the data unless there's something I don't
  know.
  
  Otherwise, try this:
  
   mysql> select * from url;
   +-----------------------------------------------+
   | url                                           |
   +-----------------------------------------------+
   | <a href="http://www.alabama.gov">Alabama</a>  |
   | <a href="http://access.wa.gov">Washington</a> |
   +-----------------------------------------------+
   2 rows in set (0.00 sec)
  
   mysql> select substring(url,locate('">',url)+2, 
   char_length(url)-locate('">',url
   )-5) as state from url;
   +------------+
   | state      |
   +------------+
   | Alabama    |
   | Washington |
   +------------+
   2 rows in set (0.00 sec)
   
   mysql>
  
  
   What I had to do was to have mysql take the string:
   
   <a href="http://www.alabama.gov">Alabama</a>
   
   and give me the part between "> and </a>.  First, I
   had to find the 
   position of "> and then add 2 to it.  The substring
   function in mysql 
   takes the parameters string, starting_position, and
   length.  Using 
   locate, I got the starting position and added 2 to
   it.  For length, I had 
   to use locate again; locating "> gives me the
   position of the " in ">.  
   Subtracting 5 gives me the right length after
   discounting the </a> and 
   the 2 positions I'm off from ">.
  
  
   Someone more experienced than I can tell you if
   there's a more efficient 
   way.  My inclination would be that for best results,
   you should split 
   the field in two and build your webpage like this:
   
   <a href="$URL">$STATE</a>
  
  Hope this helps.
 
 I think I understand. Thanks.
 
 
 
   
   
 __
 Do you Yahoo!?
 New and Improved Yahoo! Mail - 100MB free storage!
 http://promotions.yahoo.com/new_mail 
 
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Downgrade from 4.1 to 4.0

2003-10-30 Thread Sergio Salvi
Run this command to list your currently installed packages:

rpm -qa | grep -i mysql

Then for each package listed, run:

rpm -e PACKAGE_NAME

This rpm -e will remove your MySQL package. There should be separate 
packages for MySQL: server, client, shared libraries, etc. Remove all of 
them that are related to your 4.1 installation. Your datafiles and my.cnf 
files SHOULD not be removed when you run rpm -e, but you could move them 
to another directory to make sure that they won't be deleted.

After that, just install your 4.0 RPMs and move your datafiles to the 
original directory.
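
Roughly, the whole procedure looks like this (package names and paths below 
are only examples -- use what rpm -qa actually lists and your real datadir):

rpm -qa | grep -i mysql                     # see what is installed
cp -a /var/lib/mysql /var/lib/mysql.backup  # safety copy of the datafiles
rpm -e MySQL-server MySQL-client            # repeat for every package listed
rpm -ivh MySQL-server-4.0.*.rpm MySQL-client-4.0.*.rpm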

Also check the documentation:

http://www.mysql.com/documentation/mysql/bychapter/manual_Installing.html#Upgrade

[]s,
Sergio Salvi.

On Thu, 30 Oct 2003, Mike Nelson wrote:

 Hello,
  
 Can somebody give me some instructions on how to uninstall Mysql
 4.1.  I have a redhat 2.1 AS box.  I installed the rpm dist from
 mysql.com.  I want to put MySql 4.0 on that machine.  I would think this
 is something fairly easy to do.  I cannot seem to locate any
 instructions though.  I'm fairly new to Linux by the way, if you have not
 already guessed.
  
 Any help would be greatly appreciated.
  
 Thank You,
  
 Mike.
 
 
 **
 This email and any files transmitted with it are confidential and
 intended solely for the use of the individual or entity to whom they
 are addressed. If you have received this email in error please notify
 the system manager.
 
 This footnote also confirms that this email message has been swept by
 MIMEsweeper for the presence of computer viruses.
 
 www.mimesweeper.com
 **
 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Mysql Performance Question

2003-10-29 Thread Sergio Salvi
Rainer,

To find out if there is any query that is taking too long to run, enable 
the log-slow-queries option and set long_query_time to some 
initial value like 5 or 10 (it's in seconds).

Then check your slow queries log file and try to understand why those 
queries are taking too much time to run. After you optimize those queries, 
lower the long_query_time value and keep looking at the slow queries log, 
until you get almost no slow queries or your long_query_time is near 1 (or 
zero!)
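
In my.cnf that would look something like this (the log file path is just an 
example):

[mysqld]
log-slow-queries = /var/lib/mysql/slow-queries.log
set-variable     = long_query_time=10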

If you didn't understand, just let me know :)

Regards,
Sergio.

On Wed, 22 Oct 2003, Rainer Sip wrote:

 Thanks for your input. I'll give my.cnf a try.
 
 I'm not technical at all and am unable to describe the queries myself.
 Attached some output from the server (Sorry for the long post):
 
 
 SHOW STATUS
 -
 +--++
 | Variable_name| Value  |
 +--++
 | Aborted_clients  | 106|
 | Aborted_connects | 1  |
 | Bytes_received   | 1386152010 |
 | Bytes_sent   | 1017147314 |
 | Com_alter_table  | 0  |
 | Com_analyze  | 0  |
 | Com_backup_table | 0  |
 | Com_change_db| 404084 |
 | Com_delete   | 954139 |
 | Com_flush| 0  |
 | Com_insert   | 38972  |
 | Com_insert_select| 0  |
 | Com_kill | 0  |
 | Com_load | 0  |
 | Com_select   | 5274720|
 | Com_set_option   | 0  |
 | Com_show_binlog_events   | 0  |
 | Com_show_binlogs | 0  |
 | Com_show_create  | 0  |
 | Com_show_databases   | 4  |
 | Com_show_fields  | 2  |
 | Com_show_grants  | 0  |
 | Com_show_keys| 0  |
 | Com_show_logs| 0  |
 | Com_show_master_status   | 0  |
 | Com_show_new_master  | 0  |
 | Com_show_open_tables | 0  |
 | Com_show_processlist | 8  |
 | Com_show_slave_hosts | 0  |
 | Com_show_slave_status| 0  |
 | Com_show_status  | 9  |
 | Com_show_innodb_status   | 0  |
 | Com_show_tables  | 56 |
 | Com_show_variables   | 20 |
 | Com_slave_start  | 0  |
 | Com_slave_stop   | 0  |
 | Com_truncate | 0  |
 | Com_unlock_tables| 0  |
 | Com_update   | 1412924|
 | Connections  | 403875 |
 | Created_tmp_disk_tables  | 688|
 | Created_tmp_tables   | 105261 |
 | Created_tmp_files| 0  |
 | Delayed_insert_threads   | 0  |
 | Delayed_writes   | 0  |
 | Delayed_errors   | 0  |
 | Flush_commands   | 1  |
 | Handler_commit   | 0  |
 | Handler_delete   | 27226  |
 | Handler_read_first   | 611592 |
 | Handler_read_key | 2133888023 |
 | Handler_read_next| 2198593130 |
 | Handler_read_prev| 278|
 | Handler_read_rnd | 34278228   |
 | Handler_read_rnd_next| 249766468  |
 | Handler_rollback | 0  |
 | Handler_update   | 1007151|
 | Handler_write| 184271195  |
 | Key_blocks_used  | 106432 |
 | Key_read_requests| 3936331483 |
 | Key_reads| 100865 |
 | Key_write_requests   | 1897202|
 | Key_writes   | 168|
 | Max_used_connections | 234|
 | Not_flushed_key_blocks   | 0  |
 | Not_flushed_delayed_rows | 0  |
 | Open_tables  | 1024   |
 | Open_files   | 1055   |
 | Open_streams | 0  |
 | Opened_tables| 6601   |
 | Questions| 17691302   |
 | Qcache_queries_in_cache  | 1564   |
 | Qcache_inserts   | 5109816|
 | Qcache_hits  | 9173397|
 | Qcache_lowmem_prunes | 89550  |
 | Qcache_not_cached| 183073 |
 | Qcache_free_memory   | 4490312|
 | Qcache_free_blocks   | 1239   |
 | Qcache_total_blocks  | 4393   |
 | Rpl_status   | NULL   |
 | Select_full_join | 712|
 | Select_full_range_join   | 119|
 | Select_range | 24655  |
 | Select_range_check   | 0  |
 | Select_scan  | 407032 |
 | Slave_open_temp_tables   | 0  |
 | Slave_running| OFF|
 | Slow_launch_threads  | 0  |
 | Slow_queries | 2892   |
 | Sort_merge_passes| 0  |
 | Sort_range   | 244321 |
 | Sort_rows| 2279797622 |
 | Sort_scan| 151217 |
 | Table_locks_immediate| 7551643