RE: date-IFNULL-sum bug?
2012/10/08 14:52 -0700, Rick James:
> Do not use + for DATE arithmetic! Use, for example, + INTERVAL 1 YEAR

No, those operations are well defined. Among the timestamp functions there is constant reference to numeric context and character context, and well there is, because there are no time constants, only numerals and character strings taken for timestamps. It is also the only means of doing some things.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql
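The difference the two posters are arguing can be sketched in a few statements. This is a rough illustration against a stock MySQL server; the dates and result comments are examples, not output taken from the thread:

```sql
-- INTERVAL arithmetic keeps the value in the DATE domain:
SELECT CURDATE() + INTERVAL 1 YEAR;   -- a DATE, one year ahead

-- Bare + forces numeric context: the DATE collapses to an integer
-- such as 20121009, and + 1 is plain integer addition:
SELECT CURDATE() + 0;   -- e.g. 20121009
SELECT CURDATE() + 1;   -- e.g. 20121010, but 20121032 near a month end

-- Character context is indeed how literals reach the date functions,
-- since SQL here has no date-typed constants:
SELECT DATEDIFF('2012-10-08', '2012-01-01');   -- 281
```

Both posters are right in part: + INTERVAL is the safe spelling for date arithmetic, while numeric and character context are well defined and are sometimes the only tool available.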
Query Cache Crashing
Hi MySql,

Since upgrading to 5.5.27 (5.5.27-ndb-7.2.8-cluster-gpl-log to be exact) I have experienced problems with the query cache. I am wondering if I am the only one?

Twice I have had a core dump (shown at end of mail) on two separate servers running this version. Now I have had a complete lock, where 'SHOW PROCESSLIST' showed this for every connection:

  Waiting for query cache lock

This resulted in connections building up until the limit was hit. I could only cure this with a 'kill -9' on the mysqld process.

Are any other users experiencing this? Do any users know of a fix?

Thanks!

Ben Clewett.

Thread pointer: 0x7f6ea014cf90
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 410220e8 thread_stack 0x4
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0x81c7f5]
/usr/sbin/mysqld(handle_fatal_signal+0x403)[0x6e9383]
/lib64/libpthread.so.0[0x7f6f491b2b30]
/usr/sbin/mysqld(_ZN11Query_cache12unlink_tableEP23Query_cache_block_table+0x1b)[0x5bd1ab]
/usr/sbin/mysqld(_ZN11Query_cache19free_query_internalEP17Query_cache_block+0x7a)[0x5bdb5a]
/usr/sbin/mysqld(_ZN11Query_cache27invalidate_query_block_listEP3THDP23Query_cache_block_table+0x7e)[0x5be8ee]
/usr/sbin/mysqld(_ZN11Query_cache12insert_tableEjPcP23Query_cache_block_tablejhPFcP3THDS0_jPyEy+0x91)[0x5bf7e1]
/usr/sbin/mysqld(_ZN11Query_cache25register_tables_from_listEP10TABLE_LISTjP23Query_cache_block_table+0x176)[0x5bfa76]
/usr/sbin/mysqld(_ZN11Query_cache19register_all_tablesEP17Query_cache_blockP10TABLE_LISTj+0x15)[0x5bfbb5]
/usr/sbin/mysqld(_ZN11Query_cache11store_queryEP3THDP10TABLE_LIST+0x39e)[0x5bff5e]
/usr/sbin/mysqld[0x5eb02d]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x269b)[0x5f0aeb]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x18a)[0x5f2cba]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1321)[0x5f4861]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x167)[0x68d807]
/usr/sbin/mysqld(handle_one_connection+0x54)[0x68d874]
/lib64/libpthread.so.0[0x7f6f491ab040]
/lib64/libc.so.6(clone+0x6d)[0x7f6f4847c08d]
Re: Slow queries / inserts InnoDB
You are forcing mysql to do full table scans with the substr... Use EXPLAIN to see that you aren't using any index. Avoid the use of substr in the where clause by splitting your data; index that field and do your query over that field. That is why your query is so slow. The slow insert is due to your OS...

On Mon, Oct 8, 2012 at 2:59 AM, Adrián Espinosa Moreno <aespinosamor...@gmail.com> wrote:

  Hi. I have developed my first application with VB.NET at work (junior developer) and basically it reads several thousands of lines from X number of files and inserts them into a mysql database. The application consists of several steps:

  1) Dump all lines with all fields into a temp table.
     a. This works fine. Optimal speed for the hardware we have.
  2) Query the temp table to obtain certain data. I query by a unique ID. I insert all unique IDs (isn field). If the ID matches my interests, I insert it into an ArrayList.
     a. Query: SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(isn, 2) FROM SMS.TEMPDATADUMP WHERE error = 0 GROUP BY SUBSTR(ISN,2) HAVING COUNT(SUBSTR(ISN,2)) = 4
     b. The isn is not unique per line, but per data (sms).
     c. Once I have all isn in an arraylist, I do the following query:
        i. SELECT select, my, fields,of,interest FROM SMS.TEMPDATADUMP WHERE SUBSTR(ISN, 2) = ' isn ' ORDER BY type LIMIT 1
     d. To obtain some data. During the process I query another table around 10 times per ISN.
     e. Here is the problem. If I have a few files to process (around 3000-4000 lines in total, small array) these steps work fine, good speed. But if I have big files or a lot of files (more than 1 lines in total, big array), these steps are incredibly slow. Queries and inserts are too slow. Meaning, one to two inserts per second, while in the other case inserts run at around 800 per second.

  Our hardware is not optimized for a database server, but I don't have any other choice. It is mostly a desktop computer: Intel Core i5, Windows 32 bits, 3GB RAM, one 500GB 7200rpm disk.

  I have tried some optimizations commented on in the mysqlperformance blog, without success. Any way to optimize this?

  Thank you very much in advance.
  Adrián Espinosa Moreno.
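A quick way to confirm the full-scan diagnosis is the EXPLAIN suggested above. This is a sketch against the table and columns named in the post, not output from the actual server:

```sql
EXPLAIN SELECT COUNT(SUBSTR(ISN,2)), SUBSTR(ISN,2)
FROM SMS.TEMPDATADUMP
WHERE error = 0
GROUP BY SUBSTR(ISN,2)
HAVING COUNT(SUBSTR(ISN,2)) = 4;
-- With no usable index, the "type" column of the EXPLAIN output will
-- read ALL (full table scan), and "Extra" will typically show
-- "Using temporary; Using filesort" because the GROUP BY is on an
-- expression rather than a plain column.
```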
RE: Query Cache Crashing
As for the crash, I don't know. Instead, I recommend either shrinking the size (if you use a big QC) or turning it off. This would make the issue go away.

ANY modification to a particular table leads to ALL entries in the Query Cache that refer to that table being purged. For that reason, we (Yahoo) almost never use the QC on any of our many servers.

Please provide:
  SHOW GLOBAL VARIABLES LIKE 'query%';
  SHOW GLOBAL STATUS LIKE 'Qc%';

Because of inefficiencies in 'pruning', having a query_cache_size bigger than 50M may actually degrade performance.

If you have a reproducible test case, submit it to bugs.mysql.com.

-----Original Message-----
From: Ben Clewett [mailto:b...@clewett.org.uk]
Sent: Tuesday, October 09, 2012 4:47 AM
To: mysql@lists.mysql.com
Subject: Query Cache Crashing
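For reference, the diagnostics Rick asks for, together with the runtime knobs for shrinking or disabling the cache, look like this. A sketch for 5.5; the 50M figure is the threshold mentioned above, not a recommendation for this particular server:

```sql
-- Configuration and hit/prune counters:
SHOW GLOBAL VARIABLES LIKE 'query%';
SHOW GLOBAL STATUS LIKE 'Qc%';

-- Shrink the cache without a restart:
SET GLOBAL query_cache_size = 50 * 1024 * 1024;

-- Or disable it entirely while diagnosing the crash:
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;
```

A high ratio of Qcache_lowmem_prunes to Qcache_hits in the status output is the usual sign that the cache is costing more than it saves.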
RE: Slow queries / inserts InnoDB
When creating the temp table, add another column, isn2, that is SUBSTR(ISN, 2). Add INDEX(isn2, type). Change the query to use ... isn2 = ' isn ' ORDER BY type LIMIT 1.

If that does not work, please provide clearer details, including:
  SHOW CREATE TABLE
  SHOW TABLE STATUS
  EXPLAIN SELECT (with substitutions filled in)

-----Original Message-----
From: Andrés Tello [mailto:mr.crip...@gmail.com]
Sent: Tuesday, October 09, 2012 7:04 AM
To: Adrián Espinosa Moreno
Cc: mysql@lists.mysql.com
Subject: Re: Slow queries / inserts InnoDB
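Rick's suggestion, spelled out as DDL against the table named in the post. The VARCHAR length and the field names in the final SELECT are guesses to be adjusted to the real schema, and ' isn ' stands for the value substituted by the application:

```sql
ALTER TABLE SMS.TEMPDATADUMP
  ADD COLUMN isn2 VARCHAR(20) NULL,       -- match ISN's real length
  ADD INDEX idx_isn2_type (isn2, type);

-- Backfill rows already loaded:
UPDATE SMS.TEMPDATADUMP SET isn2 = SUBSTR(ISN, 2);

-- The per-ISN lookup then becomes an index lookup instead of a full scan,
-- and the (isn2, type) index also satisfies the ORDER BY:
SELECT field1, field2
FROM SMS.TEMPDATADUMP
WHERE isn2 = ' isn '
ORDER BY type
LIMIT 1;
```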
Monitoring Sessions
Hi All,

How can I monitor, for an individual session, the number of rows selected or updated and the number of bytes sent and received in a given time period? Sessions connect, run commands and then disconnect, so SHOW GLOBAL STATUS is not helping me here; I want low-level, per-session details. There are nearly 50 application servers making requests to one database server, and I need to monitor which server is generating more traffic and what kind of queries it is firing. The binlog files are growing by nearly 7-8 GB daily. We can also get data traffic at the network level, but can we get more detail, as mentioned?

--Anupam
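One hedged sketch: on MySQL 5.6+ with performance_schema enabled, the per-host statement summaries come close to what is being asked for; per-host byte counters need 5.7. The tables and columns below are from performance_schema, but verify them against your server version before relying on this:

```sql
-- Statement counts and rows sent/affected, broken out by client host (5.6+):
SELECT HOST, EVENT_NAME, COUNT_STAR, SUM_ROWS_SENT, SUM_ROWS_AFFECTED
FROM performance_schema.events_statements_summary_by_host_by_event_name
ORDER BY SUM_ROWS_SENT DESC
LIMIT 20;

-- Per-host byte counters (5.7+):
SELECT * FROM performance_schema.status_by_host
WHERE VARIABLE_NAME IN ('Bytes_sent', 'Bytes_received');
```

Unlike per-session SHOW STATUS, these aggregates survive the disconnects described above, which is what makes them usable for connect-run-disconnect workloads.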
Re: Slow queries / inserts InnoDB
Tello is right! Moreno, I suggest this: if you want to query by SUBSTR(ISN,2), you should create a field named something like isnInfo, and save SUBSTR(ISN,2) into that field when you insert. Don't forget to create an index on this field.

On 12-10-9 10:04 PM, Andrés Tello wrote:
> You are forcing mysql to do full table scans with the substr... Use
> explain to see that you aren't using any index. Avoid the use of substr
> in the where clause, by splitting your data; index that field and do
> your query over that field.
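The insert-time half of this advice can be sketched as a BEFORE INSERT trigger, so the application does not even have to compute the substring itself. The trigger name and the isnInfo column are invented for illustration; this assumes the column has already been added to the table:

```sql
CREATE TRIGGER tempdatadump_bi
BEFORE INSERT ON SMS.TEMPDATADUMP
FOR EACH ROW
SET NEW.isnInfo = SUBSTR(NEW.ISN, 2);

-- One-time index so lookups on the derived field are cheap:
CREATE INDEX idx_isninfo ON SMS.TEMPDATADUMP (isnInfo, type);
```

A trigger keeps the derived column correct no matter which code path does the insert, at the cost of a small per-row overhead during the bulk load.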