Re: Slower performance with LOCK TABLES
Thanks for your reply. I failed to mention that these are MyISAM tables, so no transactions. And like I said, this is not a production box, nor is there any application running, so there's no contention for the tables being locked. I'm trying to update a database design on two tables with 200 million records each, so anything I can do to increase the performance of these long-running queries will shorten the migration running time.

What I was referring to is the statement in the documentation that when you use LOCK TABLES, MySQL does not update the key cache until the lock is released, whereas without LOCK TABLES it updates the key cache on each insert/update/delete. See: http://tuxradar.com/practicalphp/18/2/22

In my testing, I'm seeing a slowdown when I use LOCK TABLES versus running the same queries without it. I'm just trying to find a reason why that might be the case.

-Hank

On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis antonycur...@verizon.net wrote:

LOCK TABLES...WRITE is very likely to reduce performance if you are using a transactional storage engine, such as InnoDB/XtraDB or PBXT. The reason is that only one connection is holding the write lock, and no other concurrent operation may occur on the table. LOCK TABLES is only really useful for non-transactional tables, and maybe a few specialized operations where it has its advantages, but for 99.9% of cases it should not be used. What does increase performance is the proper use of transactions, with appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE MODE.

Regards, Antony.

On 21 Sep 2011, at 20:34, Hank wrote:

According to everything I've read, using LOCK TABLES...WRITE for updates, inserts and deletes should improve the performance of the MySQL server, but I think I've been seeing the opposite effect. I've been doing quite a bit of testing on a 64-bit install of CentOS 5.5 installed as a guest OS on a VMware ESXi 4.0 hypervisor on a Dell R610. There are no other VMs on this box, and there are no other users or threads running on the OS. Just me. I'm using this box strictly for testing of large database migration scripts. It seems like when I execute some of these long-running statements without locking the tables, the code runs quite a bit faster than when I do lock the tables. And before testing each run, I restart the server so there is no query caching, and I also use FLUSH TABLES between each test run. All I'm asking is this: can anyone think of a scenario, on a single-user box and MySQL instance, in which locking tables would cause these DML statements to slow down compared to not locking the tables? Thanks, -Hank
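(For reference, the documented MyISAM behavior Hank cites looks like this in practice -- a sketch with a placeholder table `t`: while the WRITE lock is held, index changes accumulate in the key cache, and the flush to disk happens once at unlock instead of after every statement:)

    LOCK TABLES t WRITE;
    INSERT INTO t (id) VALUES (1);   -- index blocks changed in the key cache only
    DELETE FROM t WHERE id = 1;      -- still buffered, not yet flushed
    UNLOCK TABLES;                   -- key cache for t is flushed to disk here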
Community Support better than Official Support? (was: Can I Develop using Community Edition and Deploy onto Enterprise Edition??)
This comment has me intrigued:

On 9/21/2011 17:50, John Daisley wrote:

Partitioning is available in the community edition and has been for a while now. Support is the only real difference, and since Oracle took over, the support available in the community is usually faster and better than you get from Oracle.

I work in MySQL Support, and other than the tools that we were given to work with, very little should have changed in our attitude, our knowledge, or our level of professionalism (that I am aware of). Perhaps there are things that the other support providers are doing better?

Please use this thread as a forum in which you can vent all of your complaints or concerns about MySQL support, or describe ways in which the other support systems are better. If it's policy changes, tell us. If it's response times, tell us. If it's our level of service, tell us. If you don't like the font on the web site, tell us. This is your chance to completely rip us a new one and to brag about your favorite service offerings at the same time. All opinions about any support providers are welcome.

Thank you kindly,

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work Together.
Office: Blountville, TN
Re: Slower performance with LOCK TABLES
Even for MyISAM tables, LOCK TABLES is not usually the best solution for increasing performance. When there is little to no contention, LOCK TABLES doesn't offer much value. MyISAM works best when you can get more work done in a statement: instead of executing a bunch of INSERT statements, combine them into a single multi-row INSERT statement, as an example.
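(As an illustration of the multi-row form, with a placeholder table `t`:)

    -- Several single-row statements, each paying per-statement overhead:
    INSERT INTO t (id, val) VALUES (1, 'a');
    INSERT INTO t (id, val) VALUES (2, 'b');
    INSERT INTO t (id, val) VALUES (3, 'c');

    -- One multi-row statement doing the same work:
    INSERT INTO t (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c');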
Re: Slower performance with LOCK TABLES
That is what I'm doing. I'm doing a correlated update on 200 million records. One UPDATE statement.

Also, I'm not asking for a tutorial on when not to use LOCK TABLES. I'm trying to figure out why, despite what the documentation says, using LOCK TABLES hinders performance for large UPDATE statements on MyISAM tables, when it is supposed to increase performance on exactly the type of queries I am performing. If you can't help answer *that* question, please stop lecturing me on the reasons not to use LOCK TABLES. Thanks.

-Hank
RE: Community Support better than Official Support? (was: Can I Develop using Community Edition and Deploy onto Enterprise Edition??)
That's a great attitude. I always appreciate it when a support organization listens to users. I've been on both sides of the fence, and I always hated it when there //was// a fence.

One thing I always favored, again as both a user and as a tech support professional, was a public list of known bugs (excluding security-related ones, of course). It saves a lot of head-banging, the kind of frustration that can lead to a very high level of anger. Just last week I spent a day trying to get a particular feature to work (in a completely unrelated product), not knowing that it was flat-out broken. The company in question has three options:

- Paid support: they gave me a free trial, and I quickly discovered that it was useless. Their only response was "Take two reboots and call me in the morning."
- E-mail support: bitbuc...@blackhole.com
- The user forum: it has many experienced users, some beta testers, and (because the product is used world-wide) a response time measured in hours. What it doesn't have is any presence from the company.

As you can imagine, if not for the other users, this program would be just a bad memory.

Regards,
Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032
860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.giiresearch.com
Re: Slower performance with LOCK TABLES
Hi,

Why don't you use a stored procedure to update the rows, committing every 1k or 10k rows? This would be much faster than your individual update statements.

regards
anandkl
Re: Slower performance with LOCK TABLES
Actually, that would be orders of magnitude slower. I'm using MyISAM tables, so there's no commit.
Re: Slower performance with LOCK TABLES
No. Use a cursor (select the column to be used in the WHERE condition of the update statement) and loop through it, issuing one update per row.

regards
anandkl
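(A minimal sketch of what is being suggested, combining the cursor with the earlier commit-every-N-rows idea. Table and column names are placeholders -- `key_col` stands in for Hank's key column, since KEY itself is a reserved word -- and the COMMIT only has an effect on a transactional engine:)

    DELIMITER //
    CREATE PROCEDURE copy_seq()
    BEGIN
      DECLARE done INT DEFAULT 0;
      DECLARE k BIGINT;
      DECLARE v BIGINT;
      DECLARE n INT DEFAULT 0;
      DECLARE cur CURSOR FOR SELECT key_col, seq FROM source;
      DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
      OPEN cur;
      upd_loop: LOOP
        FETCH cur INTO k, v;
        IF done THEN LEAVE upd_loop; END IF;
        UPDATE dest SET seq = v WHERE key_col = k;  -- one indexed update per row
        SET n = n + 1;
        IF n % 10000 = 0 THEN COMMIT; END IF;       -- a no-op for MyISAM
      END LOOP;
      CLOSE cur;
    END //
    DELIMITER ;

(As Hank notes above, MyISAM applies each UPDATE immediately and has nothing to commit, so the per-row loop only adds overhead compared to his single correlated UPDATE.)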
Re: Slower performance with LOCK TABLES
Sorry, but you do not understand my original issue or question.

-Hank
Re: Slower performance with LOCK TABLES
Maybe if you let the audience see a snippet of your SQL, someone can help you.
query optimization
I have a table with 24 million rows, and I need to figure out how to optimize a query. It has to do with MAC addresses and RADIUS packets - I want to see the number of connections and the min/max date. So I basically want all this data:

select cpe_mac, count(*) c, min(r3_dt) mindt, max(r3_dt) maxdt, max(rec_num) recn
from rad_r3cap
where r3_dt >= SUBDATE(NOW(), INTERVAL 30 DAY)
  and r3_type = 'Access'
group by cpe_mac
order by cpe_mac;

This piece of the query takes 30 seconds to run and produces 3500 rows. I have r3_dt indexed. I also want a status field from the row with the highest r3_dt:

select rec_num, cpe_mac, req_status
from rad_r3cap
where r3_type = 'Access'
  and (cpe_mac, r3_dt) in (select cpe_mac, max(r3_dt) from rad_r3cap);

This piece takes forever; I let it run for an hour and it still didn't finish, so it's obviously not using indexes. I have no idea how far along it got. I wrote a PHP script to run the first query and then do 3500 individual lookups for the status using the max(rec_num) field from the first query, and I can get the data in 31 seconds. So I CAN produce this data, but very slowly, and not in one SQL query. I want to consolidate this into one SQL statement so I can make a view. If anyone can point me in the right direction, I'd appreciate it!

mysql> desc rad_r3cap;
+-------------+-------------+------+-----+---------+----------------+
| Field       | Type        | Null | Key | Default | Extra          |
+-------------+-------------+------+-----+---------+----------------+
| rec_num     | int(11)     | NO   | PRI | NULL    | auto_increment |
| r3_dt       | datetime    | YES  | MUL | NULL    |                |
| r3_micros   | int(11)     | YES  |     | NULL    |                |
| r3_type     | varchar(16) | YES  |     | NULL    |                |
| req_status  | varchar(16) | YES  |     | NULL    |                |
| req_comment | varchar(64) | YES  |     | NULL    |                |
| asn_ip      | varchar(16) | YES  | MUL | NULL    |                |
| asn_name    | varchar(16) | YES  |     | NULL    |                |
| bsid        | varchar(12) | YES  | MUL | NULL    |                |
| cpe_ip      | varchar(16) | YES  |     | NULL    |                |
| cpe_mac     | varchar(12) | YES  | MUL | NULL    |                |
| filename    | varchar(32) | YES  |     | NULL    |                |
| linenum     | int(11)     | YES  |     | NULL    |                |
| r3_hour     | datetime    | YES  | MUL | NULL    |                |
| user_name   | varchar(64) | YES  |     | NULL    |                |
+-------------+-------------+------+-----+---------+----------------+

mysql> show indexes in rad_r3cap;
+-----------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table     | Non_unique | Key_name     | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-----------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| rad_r3cap |          0 | PRIMARY      |            1 | rec_num     | A         |    23877677 |     NULL | NULL   |      | BTREE      |         |
| rad_r3cap |          0 | r3cap_dt     |            1 | r3_dt       | A         |        NULL |     NULL | NULL   | YES  | BTREE      |         |
| rad_r3cap |          0 | r3cap_dt     |            2 | r3_micros   | A         |        NULL |     NULL | NULL   | YES  | BTREE      |         |
| rad_r3cap |          1 | r3cap_bsid   |            1 | bsid        | A         |         346 |     NULL | NULL   | YES  | BTREE      |         |
| rad_r3cap |          1 | r3cap_asnip  |            1 | asn_ip      | A         |          55 |     NULL | NULL   | YES  | BTREE      |         |
| rad_r3cap |          1 | r3cap_cpemac |            1 | cpe_mac     | A         |        4758 |     NULL | NULL   | YES  | BTREE      |         |
| rad_r3cap |          1 | r3cap_date   |            1 | r3_hour     | A         |        1548 |     NULL | NULL   | YES  | BTREE      |         |
+-----------+------------+--------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
7 rows in set (0.00 sec)
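(An illustrative first diagnostic step, not from the original post: EXPLAIN shows whether the slow statement uses any index at all -- a type of ALL, or a DEPENDENT SUBQUERY with an empty key column, would confirm the full scans:)

    EXPLAIN
    SELECT rec_num, cpe_mac, req_status
    FROM rad_r3cap
    WHERE r3_type = 'Access'
      AND (cpe_mac, r3_dt) IN (SELECT cpe_mac, MAX(r3_dt) FROM rad_r3cap);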
RE: Community Support better than Official Support? (was: Can I Develop using Community Edition and Deploy onto Enterprise Edition??)
2011/09/22 13:08 -0400, Jerry Schwartz wrote:

The user forum: it has many experienced users, some beta testers, and (because the product is used world-wide) a response time measured in hours. What it doesn't have is any presence from the company.

Isn't that what companies nowadays want? Computers are now often used to get workers and patrons to pay for that which formerly the company paid for: forms, instruction books, and the like. With online banking, the bank pays fewer tellers. The company's only business is to sell something, and after the sale, to vanish if it may.
Re: Slower performance with LOCK TABLES
Like I said, the problem is not just one particular SQL statement. It is several dozen statements operating on tables with several hundred million records. The problem is that I am finding that when I use LOCK TABLES, these queries run slower (please read my ORIGINAL post with all this information). I am trying to find a logical or reasonable explanation WHY this would be the case, despite the fact that the documentation states otherwise (see: http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html).

But if seeing some SQL will make you happy, here is just one example:

UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

This runs against 140 million records in dest and 220 million records in source. Source is indexed by key+seq (key is the primary key, with seq included as a covering index). There is no index on dest.seq -- that index is built once the update is complete. This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4 hours when I do use LOCK TABLES.

-Hank
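(For reference, the two timed variants presumably look like this -- a sketch; the exact LOCK statement Hank uses appears later in the thread:)

    -- Variant A, about 3.5 hours: no explicit lock
    UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

    -- Variant B, over 4 hours: the same statement under an explicit table lock
    LOCK TABLES dest d WRITE, source s READ;
    UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;
    UNLOCK TABLES;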
Re: query optimization
Your subquery -- select cpe_mac, max(r3_dt) from rad_r3cap -- is doing a full table scan. You might want to check on this and add a WHERE condition so that it can use an indexed column.
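(A sketch of the usual rewrite for this kind of groupwise-maximum lookup, not from the original thread. Note that the original IN subquery has no GROUP BY, so as written it yields one global maximum rather than one per cpe_mac. Filtering and grouping in a derived table lets the join back to rad_r3cap use the cpe_mac index:)

    SELECT r.rec_num, r.cpe_mac, r.req_status
    FROM rad_r3cap r
    JOIN (SELECT cpe_mac, MAX(r3_dt) AS max_dt
          FROM rad_r3cap
          WHERE r3_type = 'Access'
            AND r3_dt >= SUBDATE(NOW(), INTERVAL 30 DAY)
          GROUP BY cpe_mac) m
      ON r.cpe_mac = m.cpe_mac AND r.r3_dt = m.max_dt
    WHERE r.r3_type = 'Access';

(This computes each MAC's latest r3_dt once and joins back on the indexed cpe_mac column, instead of re-evaluating a correlated IN test per row.)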
Re: Slower performance with LOCK TABLES
Do you have an index on dest.key?
Re: Slower performance with LOCK TABLES
On Thu, Sep 22, 2011 at 3:11 PM, Hassan Schroeder hassan.schroe...@gmail.com wrote:

Wandering out of my area of expertise here :-) but have you done any key cache tuning, or are you running with the defaults?

mysql> show variables like 'key_%';

Also, what is the exact LOCK TABLE statement you're using?

No, I haven't done any key cache tuning, as that's out of my area of expertise as well! I have 8GB of memory on this box, and I can go up to 12GB. Here are the variables:

| key_buffer_size          | 402653184 |
| key_cache_age_threshold  | 300       |
| key_cache_block_size     | 1024      |
| key_cache_division_limit | 100       |

The lock statement is quite simple:

LOCK TABLE dest d write, source s read;

thanks.
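(For completeness, a sketch of the kind of key cache tuning Hassan is hinting at -- illustrative numbers, not a recommendation from the thread. With 8GB of RAM and a MyISAM-only migration workload, the 384MB key_buffer_size shown above could be raised, and the cache hit rate checked afterwards:)

    -- Raise the MyISAM key buffer, here to 2GB:
    SET GLOBAL key_buffer_size = 2147483648;

    -- Verify how often index reads are served from memory:
    SHOW GLOBAL STATUS LIKE 'Key_read%';
    -- The ratio Key_reads / Key_read_requests should stay well under 0.01.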