Re: Slower performance with LOCK TABLES

2011-09-26 Thread Johan De Meersman
- Original Message -
 From: Hank hes...@gmail.com
 
  Just an update. Using the load index into cache statement for the
 200 million row indexed source table, my correlated update
 statement ran in 1 hour, 45 minutes to update 144 million rows.   A 50%
 increase in performance!

Good to hear :-)

Ignore leaves might be a nice trick in this situation, actually. I never 
thought of it, but the leaves contain the record pointers, which you don't need 
because you have a covering index. Nice thinking :-)
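For reference, the statement in question (quoted in full further down the
thread) was:

    LOAD INDEX INTO CACHE source IGNORE LEAVES;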


-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slower performance with LOCK TABLES

2011-09-23 Thread Johan De Meersman


- Original Message -
 From: Hank hes...@gmail.com
 
 (please read my ORIGINAL post with all this information).

Welcome to the Internet, where everyone will tell you everything you need to 
know, except for what you want to know :-)

 I am trying to find a logical or reasonable explanation WHY this would be the
 case, despite the fact that the documentation states otherwise (see:
 http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html)

I believe you're misinterpreting that, as is the author of the blog you 
originally referenced.

What it says is "If you are going to run many operations". You are updating 
many rows, but you are only doing ONE operation: a single update statement.

While this explains why you're not seeing benefit, I have to admit that I'm at 
a loss, too, as to why you are experiencing an actual slowdown - the update 
statement will lock the tables, too, so it shouldn't really make any difference 
at all.

 But if seeing some SQL will make you happy, here is just one example:
 
 UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

See, this is why people ask to see your queries. You never mentioned you were 
doing a join in the update :-)

I'll ignore the join condition in the where clause, as it makes little 
difference in this case; but I do note that you're using a straight join. Is 
the optimizer really reading the tables in the wrong order, or is it just 
something you picked up off a blog without realising the impact?

  Source is indexed by key+seq (key is primary key, but seq is
  included as a covering index).

Good practice, that should prevent source from being read from disk, if your 
index is fully in the cache. Do you have an index on dest.key, too? That might 
help performance as well if it fits in memory, too, because you'll only need 
disk access for flushing writes, then.
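For anyone following along, a covering index like the one described would be
declared roughly like this (the index name is assumed here, not taken from
Hank's actual schema):

    -- `key` is already the PRIMARY KEY; this adds the covering index
    ALTER TABLE source ADD INDEX key_seq (`key`, seq);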

 This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4 
 hours when I do use LOCK TABLES.

Most peculiar. Is the difference in performance consistent in repeated 
executions?

 And before testing each run, I do restart the server so there is no
 query caching and I also use FLUSH TABLES between each test run.

That's good, as it will give you the worst-case scenario. However, since such 
an update is going to wreck your index cache anyway, you may just as well 
preload the appropriate indices into it beforehand, if the cache is sized big 
enough to hold them. That might give a minor performance boost, too, as the 
server won't have to go to disk every so often to fetch index blocks.
See http://dev.mysql.com/doc/refman/5.0/en/index-preloading.html for that.
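A minimal sketch of what that page describes, assuming a dedicated key cache
(the name hot_cache and the 512MB size are placeholders):

    SET GLOBAL hot_cache.key_buffer_size = 536870912;
    CACHE INDEX source IN hot_cache;
    LOAD INDEX INTO CACHE source;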

 | key_buffer_size  | 402653184 |

400MB of key buffer, that's not bad at all. Do a /show table status/ in your 
database, and sum the index sizes. If your key buffer is larger than this (and 
why not scale it for growth a bit?) all your indices will fit, which will save 
on disk access for index lookups *and* for index-covered queries.
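Summing them by hand works, but the same number can be pulled in one query
(the schema name is a placeholder):

    SELECT SUM(INDEX_LENGTH) AS total_index_bytes
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = 'your_db' AND ENGINE = 'MyISAM';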



-- 
Bier met grenadyn
Is als mosterd by den wyn
Sy die't drinkt, is eene kwezel
Hy die't drinkt, is ras een ezel

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slower performance with LOCK TABLES

2011-09-23 Thread Hank
Hello Johan,
 Thanks for your comprehensive reply. I'll try to answer each of your
questions below.
-Hank

  But if seeing some SQL will make you happy, here is just one example:
 
  UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

 See, this is why people ask to see your queries. You never mentioned you were 
 doing a join in the update :-)

 I'll ignore the join condition in the where clause, as it makes little 
 difference in this case; but I do note that
 you're using a straight join. Is the optimizer really reading the tables in 
 the wrong order, or is it just
 something you picked up off a blog without realising the impact?

Yes, I'm using a straight join intentionally.  I have 144 million
unindexed rows in dest. I want MySQL to start with those rows
sequentially, then join them to the matching record in source using
its index (244 million rows).  If I don't do that, MySQL tries to use
the indexed table first, causing full table scans on dest.  So with
straight_join, it does it in the proper order.  During experimentation
with different joins, a regular join would run for days.  A straight
join runs for 3-4 hours.
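One way to double-check the join order (MySQL 5.5 can't EXPLAIN an UPDATE, so
rewrite it as the equivalent SELECT; dest should come out first, with type
ALL, and source second via its index):

    EXPLAIN SELECT s.seq
    FROM dest d STRAIGHT_JOIN source s
    WHERE d.key = s.key;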

   Source is indexed by key+seq (key is primary key, but seq is  included as 
  a covering index).

 Good practice, that should prevent source from being read from disk, if your 
 index is fully in the cache.

With 244 million records in the source table, I'm not sure that
would fit in the cache.

 Do you have an index on dest.key, too? That might help performance as well if 
 it fits in memory, too,
 because you'll only need disk access for flushing writes, then.

I do not have an index on dest.key, also intentionally,  for two
reasons. First, updating 144 million records in place is slow enough,
but trying to update 144 million records AND the index on that field
would absolutely kill the performance of the update. Once the update
is complete, I re-create the index with a sort using myisamchk.
Second, the starting value of dest.seq for all 144 million records
is 0, so an index on that field wouldn't really help, I think.
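(An in-server alternative to the myisamchk rebuild, for what it's worth:
the index can also be added with plain DDL once the update finishes; the
index name here is made up.)

    ALTER TABLE dest ADD INDEX idx_seq (seq);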

  This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4 
  hours when I do
  use LOCK TABLES.

 Most peculiar. Is the difference in performance consistent in repeated 
 executions?

Yes, I've done these tests about a dozen times now, and while not
exactly scientific, the results are that LOCK TABLES always results in
longer running times. Not just for this query, but other full table
update/select/delete/insert queries.  Not more than twice as long, but
easily a 10% to 25% increase.

  And before testing each run, I do restart the server so there is no
  query caching and I also use FLUSH TABLES between each test run.

 That's good, as it will give you the worst-case scenario. However, since such
 an update is going to wreck your index cache anyway, you may just as well
 preload the appropriate indices into it beforehand, if the cache is sized big
 enough to hold them. That might give a minor performance boost, too, as the
 server won't have to go to disk every so often to fetch index blocks.
 See http://dev.mysql.com/doc/refman/5.0/en/index-preloading.html for that.

I'll look into that, but the index is huge.  Ok, I just preloaded the
source index using "LOAD INDEX INTO CACHE source IGNORE LEAVES;".
It took two minutes, 15 seconds to pre-load the index. I then ran two
small tests on smaller tables using the same update statement, and
they both yielded a massive increase in update speed. I'll have to
rebuild the large dest table again to try it on the biggest UPDATE,
but wow, if this is any indication, this was a great suggestion. I'll
report back on the results later today. Thank you!

  | key_buffer_size          | 402653184 |

 400MB of key buffer, that's not bad at all. Do a /show table status/ in your 
 database, and sum the index sizes. If your key buffer is larger than this 
 (and why not scale it for growth a bit?) all your indices will fit, which 
 will save on disk access for index lookups *and* for index-covered queries.

The index length for source is 5,889,037,312.

Thanks again for your assistance.

-Hank

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slower performance with LOCK TABLES

2011-09-23 Thread Hank
Hello Johan,

 Just an update. Using the load index into cache statement for the
200 million row indexed source table, my correlated update statement
ran in 1 hour, 45 minutes to update 144 million rows.   A 50% increase
in performance!

Thank you very much,

-Hank





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slower performance with LOCK TABLES

2011-09-23 Thread Halász Sándor
 2011/09/23 12:56 +0200, Johan De Meersman wrote:
What it says is "If you are going to run many operations". You are updating 
many rows, but you are only doing ONE operation: a single update statement.

For what it's worth, the downloadable HTML help claims this only for MyISAM 
tables, because between LOCK TABLES and UNLOCK TABLES there is no key-cache 
flushing. InnoDB is not mentioned.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
Thanks for your reply.  I failed to mention that these are MYISAM tables, so
no transactions.  And like I said, this is not a production box nor is there
any application running, so there's no contention for the tables being
locked.  I'm trying to update a database design on two tables with 200
million records each, so anything I can do to increase the performance of
these long running queries will shorten the migration running time.

What I was referring to was that, in the documentation, when using LOCK
TABLES, MySQL does not update the key cache until the lock is released,
whereas when not using LOCK TABLES it does update the key cache on each
insert/update/delete.

see: http://tuxradar.com/practicalphp/18/2/22
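In other words, the documented win is for the pattern below (a sketch using
this migration's table names):

    LOCK TABLES dest WRITE, source READ;
    -- run the batch of insert/update/delete statements here; with
    -- MyISAM, the key cache for the locked tables is not flushed
    -- until the lock is released
    UNLOCK TABLES;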

In my testing, I'm seeing a slowdown when I use LOCK TABLES versus running
the same queries without it.  I'm just trying to find a reason why that
might be the case.

-Hank







Re: Slower performance with LOCK TABLES

2011-09-22 Thread Antony T Curtis
Even for MyISAM tables, LOCK TABLES is not usually the best solution  
for increasing performance. When there is little to no contention,  
LOCK TABLES doesn't offer much value.


MyISAM works best when you can get more work done in a statement:  
Instead of executing a bunch of insert statements, combine them into a  
single multi-row insert statement, as an example.
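For instance (hypothetical table and values):

    -- instead of a bunch of single-row inserts:
    INSERT INTO t (id, val) VALUES (1, 'a');
    INSERT INTO t (id, val) VALUES (2, 'b');
    -- send one multi-row statement:
    INSERT INTO t (id, val) VALUES (1, 'a'), (2, 'b');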









Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
That is what I'm doing. I'm doing a correlated update on 200 million
records. One UPDATE statement.

Also, I'm not asking for a tutorial on when not to use LOCK TABLES.  I'm trying
to figure out why, despite what the documentation says, using LOCK TABLES
hinders performance for large update statements on MYISAM tables when it is
supposed to increase performance on exactly the type of queries I am
performing.

If you can't help answer *that* question, please stop lecturing me on the
reasons not to use LOCK TABLES. Thanks.

-Hank








Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Hi,
Why don't you use a stored proc to update rows, where you commit every 1k or
10k rows?
This will be much faster than your individual update statement.

regards
anandkl




Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
Actually, that would be orders of magnitude slower.

  I'm using MYISAM tables, so there's no commit.









Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
No,
Use a cursor (select the column_name to be used in the where condition of the
update statement), and loop through it for each update.

regards
anandkl







Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
Sorry, but you do not understand my original issue or question.

-Hank









Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Maybe if you can let the audience know a snippet of your SQL, some can help you.









Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
Like I said, the problem is not just one particular SQL statement. It is
several dozen statements operating on tables with several hundred million
records.  The problem is that I am finding that when I use LOCK TABLES,
these queries run slower (please read my ORIGINAL post with all this
information).  I am trying to find a logical or reasonable explanation WHY
this would be the case, despite the fact that the documentation states
otherwise (see:
http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html)

But if seeing some SQL will make you happy, here is just one example:

UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

for 140 million records in dest and  220 million records in source.
 Source is indexed by key+seq (key is primary key, but seq is included as a
covering index). There is no index on dest.seq -- that index is built once
the update is complete.  This query takes about 3.5 hours when I don't use
LOCK TABLES, and over 4 hours when I do use LOCK TABLES.

-Hank



Re: Slower performance with LOCK TABLES

2011-09-22 Thread Ananda Kumar
Do you have an index on dest.key?


Re: Slower performance with LOCK TABLES

2011-09-22 Thread Hank
On Thu, Sep 22, 2011 at 3:11 PM, Hassan Schroeder 
hassan.schroe...@gmail.com wrote:

 On Thu, Sep 22, 2011 at 11:51 AM, Hank hes...@gmail.com wrote:
  Like I said, the problem is not just one particular SQL statement. It is
  several dozen statements operating on tables with several hundred million
  records.  The problem is that I am finding that when I use LOCK TABLES,
  these queries run slower (please read my ORIGINAL post with all this
  information).

 Wandering out of my area of expertise here :-) but have you done any
 key cache tuning or are you running with the defaults?

 mysql> show variables like 'key_%';

 Also, what is the exact LOCK TABLE statement you're using?


No, I haven't done any key cache tuning, as that's out of my area of
expertise as well! I have 8GB of memory on this box, and I can go up to
12GB.

Here are the variables:

| key_buffer_size  | 402653184 |
| key_cache_age_threshold  | 300   |
| key_cache_block_size | 1024  |
| key_cache_division_limit | 100   |

The lock statement is quite simple:

LOCK TABLE dest d write, source s read;

thanks.


Re: Slower performance with LOCK TABLES

2011-09-21 Thread Antony T Curtis
LOCK TABLES...WRITE is very likely to reduce performance if you are  
using a transactional storage engine, such as InnoDB/XtraDB or PBXT.  
The reason is that only one connection is holding the write lock and  
no other concurrent operation may occur on the table.


LOCK TABLES is only really useful for non-transactional tables and  
maybe a few specialized operations where it has its advantages but for  
99.9% of cases, it should not be used.


What does increase performance is the proper use of transactions with  
appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE MODE.
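A sketch of that pattern, for an InnoDB table (table and column names are
invented for the example):

    START TRANSACTION;
    -- locks only the matching rows, not the whole table
    SELECT qty FROM stock WHERE item_id = 42 FOR UPDATE;
    UPDATE stock SET qty = qty - 1 WHERE item_id = 42;
    COMMIT;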


Regards,

Antony.


On 21 Sep 2011, at 20:34, Hank wrote:

According to everything I've read, using LOCK TABLES...WRITE for updates,
inserts and deletes should improve performance of mysql server, but I think
I've been seeing the opposite effect.

I've been doing quite a bit of testing on a 64bit install of CentOS 5.5
installed as a guest OS on a VMWare ESXi 4.0 hypervisor on a Dell R610.
There are no other VMs on this box, and there are no other users or threads
running on the OS. Just me.  I'm using this box strictly for testing of
large database migration scripts.

It seems like when I execute some of these long running statements without
locking the tables, the code runs quite a bit faster than when I do lock the
tables.  And before testing each run, I do restart the server so there is no
query caching and I also use FLUSH TABLES between each test run.

All I'm asking is this:  Can anyone think of a scenario on a single
user-box and mysql instance, that locking tables would cause these DML
statements to slow down compared to not locking the tables?

Thanks,

-Hank



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org