Actually, that would be orders of magnitude slower.

  I'm using MyISAM tables, so there's no commit.




On Thu, Sep 22, 2011 at 1:48 PM, Ananda Kumar <anan...@gmail.com> wrote:

> Hi,
> Why don't you use a stored proc to update the rows, where you commit for
> every 1k or 10k rows?
> This will be much faster than your individual update statement.
>
> regards
> anandkl
>
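A minimal sketch of the kind of batched-commit procedure being suggested
(the table t, key column id, updated column col, and the 10k chunk size
are placeholders, not anyone's actual schema; on MyISAM the COMMIT is
effectively a no-op, which is the point of the reply at the top):

    DELIMITER //
    CREATE PROCEDURE batch_update()
    BEGIN
      DECLARE v_min INT DEFAULT 0;
      DECLARE v_max INT;
      SELECT MAX(id) INTO v_max FROM t;
      WHILE v_min < v_max DO
        -- update one chunk of rows, then commit before moving on
        -- (assumes autocommit is off for the session, so COMMIT batches the chunk)
        UPDATE t SET col = col + 1
         WHERE id > v_min AND id <= v_min + 10000;
        COMMIT;
        SET v_min = v_min + 10000;
      END WHILE;
    END//
    DELIMITER ;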
> On Thu, Sep 22, 2011 at 8:24 PM, Hank <hes...@gmail.com> wrote:
>
>> That is what I'm doing. I'm doing a correlated update on 200 million
>> records. One UPDATE statement.
>>
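For illustration, a correlated update of that general shape (table and
column names are invented here, not the actual schema) looks roughly like:

    -- copy old_table.old_id into new_table.new_id for matching rows
    UPDATE new_table AS n
       SET n.new_id = (SELECT o.old_id
                         FROM old_table AS o
                        WHERE o.shared_key = n.shared_key);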
>> Also, I'm not asking for a tutorial on when not to use LOCK TABLES.  I'm
>> trying to figure out why, despite what the documentation says, using LOCK
>> TABLES hinders performance for large update statements on MyISAM tables
>> when it is supposed to increase performance on exactly the type of queries
>> I am performing.
>>
>> If you can't help answer *that* question, please stop lecturing me on the
>> reasons not to use LOCK TABLES. Thanks.
>>
>> -Hank
>>
>>
>> On Thu, Sep 22, 2011 at 10:19 AM, Antony T Curtis
>> <antonycur...@verizon.net> wrote:
>>
>> > Even for MyISAM tables, LOCK TABLES is not usually the best solution for
>> > increasing performance. When there is little to no contention, LOCK
>> > TABLES doesn't offer much value.
>> >
>> > MyISAM works best when you can get more work done in a statement:
>> > instead of executing a bunch of insert statements, combine them into a
>> > single multi-row insert statement, as an example.
>> >
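A trivial illustration of the multi-row form, using a placeholder table:

    -- instead of several separate statements...
    INSERT INTO t (a, b) VALUES (1, 'x');
    INSERT INTO t (a, b) VALUES (2, 'y');
    INSERT INTO t (a, b) VALUES (3, 'z');

    -- ...send one multi-row INSERT:
    INSERT INTO t (a, b) VALUES (1, 'x'), (2, 'y'), (3, 'z');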
>> >
>> > On 22 Sep 2011, at 06:13, Hank wrote:
>> >
>> > Thanks for your reply.  I failed to mention that these are MyISAM
>> > tables, so no transactions.  And like I said, this is not a production
>> > box nor is there any application running, so there's no contention for
>> > the tables being locked.  I'm trying to update a database design on two
>> > tables with 200 million records each, so anything I can do to increase
>> > the performance of these long-running queries will shorten the migration
>> > running time.
>> >
>> > What I was referring to is that, according to the documentation, when
>> > using LOCK TABLES, MySQL does not update the key cache until the lock is
>> > released, whereas without LOCK TABLES it updates the key cache on each
>> > insert/update/delete.
>> >
>> > see: http://tuxradar.com/practicalphp/18/2/22
>> >
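In other words, the pattern the documentation describes is (placeholder
table name; the claimed benefit is that the key cache is flushed once, at
unlock time, rather than per statement):

    LOCK TABLES big_table WRITE;
    -- bulk INSERT/UPDATE/DELETE statements run here while the lock is held
    UNLOCK TABLES;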
>> > In my testing, I'm seeing a slowdown when I use LOCK TABLES versus
>> > running the same queries without it.  I'm just trying to find a reason
>> > why that might be the case.
>> >
>> > -Hank
>> >
>> >
>> > On Thu, Sep 22, 2011 at 12:42 AM, Antony T Curtis <
>> > antonycur...@verizon.net> wrote:
>> >
>> >> LOCK TABLES...WRITE is very likely to reduce performance if you are
>> >> using a transactional storage engine, such as InnoDB/XtraDB or PBXT.
>> >> The reason is that only one connection is holding the write lock and no
>> >> other concurrent operation may occur on the table.
>> >>
>> >> LOCK TABLES is only really useful for non-transactional tables and
>> >> maybe a few specialized operations where it has its advantages, but for
>> >> 99.9% of cases, it should not be used.
>> >>
>> >> What does increase performance is the proper use of transactions with
>> >> appropriate use of SELECT...FOR UPDATE and SELECT...LOCK IN SHARE MODE.
>> >>
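For instance, on a transactional engine the row-level pattern is roughly
(the account table and values are invented for illustration):

    START TRANSACTION;
    -- lock only the rows this transaction will modify
    SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;
    UPDATE accounts SET balance = balance - 10 WHERE id = 42;
    COMMIT;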
>> >> Regards,
>> >>
>> >> Antony.
>> >>
>> >>
>> >>
>> >> On 21 Sep 2011, at 20:34, Hank wrote:
>> >>
>> >>> According to everything I've read, using LOCK TABLES...WRITE for
>> >>> updates, inserts and deletes should improve the performance of the
>> >>> MySQL server, but I think I've been seeing the opposite effect.
>> >>>
>> >>> I've been doing quite a bit of testing on a 64-bit install of CentOS
>> >>> 5.5 installed as a guest OS on a VMware ESXi 4.0 hypervisor on a Dell
>> >>> R610.  There are no other VMs on this box, and there are no other
>> >>> users or threads running on the OS. Just me.  I'm using this box
>> >>> strictly for testing of large database migration scripts.
>> >>>
>> >>> It seems like when I execute some of these long-running statements
>> >>> without locking the tables, the code runs quite a bit faster than when
>> >>> I do lock the tables.  And before each test run, I restart the server
>> >>> so there is no query caching, and I also use FLUSH TABLES between test
>> >>> runs.
>> >>>
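On the SQL side, the per-run reset described above amounts to (the server
restart itself happens outside the client):

    -- after restarting mysqld and reconnecting, before the next timed run
    FLUSH TABLES;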
>> >>> All I'm asking is this: can anyone think of a scenario, on a
>> >>> single-user box and MySQL instance, where locking tables would cause
>> >>> these DML statements to slow down compared to not locking the tables?
>> >>>
>> >>> Thanks,
>> >>>
>> >>> -Hank
>> >>>
>> >>
>> >>
>> >
>> >
>>
>
>
