[question]any performance tools about UPDATE

2011-09-23 Thread jiangwen jiang
Hi,

Are there any performance tools for UPDATE/INSERT queries?
I want to monitor UPDATE/INSERT performance and check whether there is any
performance bottleneck, for example:
slow INSERT/UPDATE
high I/O when executing INSERT

Regards

Thanks
J.W


Re: Slower performance with LOCK TABLES

2011-09-23 Thread Johan De Meersman


- Original Message -
 From: Hank hes...@gmail.com
 
 (please read my ORIGINAL post with all this information).

Welcome to the Internet, where everyone will tell you everything you need to 
know, except for what you want to know :-)

 I am trying to find a logical or reasonable explanation WHY this would be the
 case, despite the fact that the documentation states otherwise (see right
 here:
 http://dev.mysql.com/doc/refman/5.5/en/lock-tables-restrictions.html)

I believe you're misinterpreting that, as is the author of the blog you 
originally referenced.

What it says is "if you are going to run many operations". You are updating 
many rows, but you are only doing ONE operation: a single UPDATE statement.

While this explains why you're not seeing benefit, I have to admit that I'm at 
a loss, too, as to why you are experiencing an actual slowdown - the update 
statement will lock the tables, too, so it shouldn't really make any difference 
at all.
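
To illustrate the case the manual is talking about, here is a sketch with 
made-up table and column names: the documented win comes from wrapping MANY 
statements, because MyISAM skips the key-cache flush between statements until 
UNLOCK TABLES.

  LOCK TABLES t1 WRITE;
  UPDATE t1 SET counter = counter + 1 WHERE id = 1;
  UPDATE t1 SET counter = counter + 1 WHERE id = 2;
  -- ...many more statements share the single lock and key cache...
  UNLOCK TABLES;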

 But if seeing some SQL will make you happy, here is just one example:
 
 UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

See, this is why people ask to see your queries. You never mentioned you were 
doing a join in the update :-)

I'll ignore the join condition in the where clause, as it makes little 
difference in this case; but I do note that you're using a straight join. Is 
the optimizer really reading the tables in the wrong order, or is it just 
something you picked up off a blog without realising the impact?

  Source is indexed by key+seq (key is primary key, but seq is
  included as a covering index).

Good practice; that should prevent source from being read from disk, if your 
index is fully in the cache. Do you have an index on dest.key, too? That might 
help performance as well if it fits in memory, because then you'll only need 
disk access for flushing writes.
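
For reference, a sketch of what that indexing scheme could look like; the real 
table definitions weren't posted, so this DDL is hypothetical:

  CREATE TABLE source (
    `key` INT NOT NULL,
    seq   INT NOT NULL,
    PRIMARY KEY (`key`),
    -- covering index: lookups on key that read seq never touch the data file
    KEY key_seq (`key`, seq)
  ) ENGINE=MyISAM;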

 This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4 
 hours when I do use LOCK TABLES.

Most peculiar. Is the difference in performance consistent in repeated 
executions?

 And before testing each run, I do restart the server so there is no
 query caching and I also use FLUSH TABLES between each test run.

That's good, as it will give you the worst-case scenario. However, since such 
an update is going to wreck your index cache anyway, you may just as well 
preload the appropriate indices into it beforehand, if the cache is sized big 
enough to hold them. That might give a minor performance boost, too, as the 
server won't have to go to disk every so often to fetch index blocks.
See http://dev.mysql.com/doc/refman/5.0/en/index-preloading.html for that.
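
For MyISAM that boils down to something like:

  -- preload the index blocks of `source` into the key cache;
  -- IGNORE LEAVES loads only the non-leaf index blocks, saving cache space
  LOAD INDEX INTO CACHE source IGNORE LEAVES;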

 | key_buffer_size  | 402653184 |

400MB of key buffer, that's not bad at all. Do a /show table status/ in your 
database, and sum the index sizes. If your key buffer is larger than this (and 
why not scale it for growth a bit?) all your indices will fit, which will save 
on disk access for index lookups *and* for index-covered queries.
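
On 5.0 and later you can also let the server do the summing; the schema name 
below is a placeholder:

  SELECT SUM(index_length) AS total_index_bytes
  FROM information_schema.TABLES
  WHERE table_schema = 'your_database' AND engine = 'MyISAM';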



-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon an ass




mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Shafi AHMED
 

Folks,

I have a MySQL database of 200G size and the backup fails due to the
following issue:

mysqldump: Got error: 1017: Can't find file:
'./ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

Can someone assist, please?

Best Rgs,

Shafi AHMED



Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Johan De Meersman
Someone seems to have deleted that file, which contains the description of the 
corresponding table. Recreate the exact same table (EXACT, including keys, 
indices, datatypes, encoding, the lot) and copy that table's .frm file to 
replace the lost one. Then pray to the elder gods and restart your mysqld to 
see if it works. 
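
Roughly, and assuming you still have the original CREATE TABLE statement 
around (the paths below are examples only; use your actual datadir):

  -- recreate the exact definition in a scratch database
  CREATE DATABASE IF NOT EXISTS scratch;
  USE scratch;
  CREATE TABLE ss_requestmaster ( /* exact original definition */ );

  # then, with mysqld stopped, copy the .frm over the missing one
  cp /var/lib/mysql/scratch/ss_requestmaster.frm /var/lib/mysql/ssconsole/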

- Original Message -

 From: Shafi AHMED shafi.ah...@sifycorp.com
 To: mysql@lists.mysql.com
 Sent: Friday, 23 September, 2011 1:42:26 PM
 Subject: mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK
 TABLES

 Folks

 I have a MySQL database of 200G size and the backup fails due to the
 following issue:

 mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK
 TABLES

 Can someone assist, please?

 Best Rgs,
 Shafi AHMED

-- 
Beer with grenadine
Is like mustard with wine
She who drinks it is a prude
He who drinks it is soon an ass


Re: Slower performance with LOCK TABLES

2011-09-23 Thread Hank
Hello Johan,
 Thanks for your comprehensive reply. I'll try to answer each of your
questions below.
-Hank

  But if seeing some SQL will make you happy, here is just one example:
 
  UPDATE dest d straight_join source s set d.seq=s.seq WHERE d.key=s.key;

 See, this is why people ask to see your queries. You never mentioned you were 
 doing a join in the update :-)

 I'll ignore the join condition in the where clause, as it makes little 
 difference in this case; but I do note that
 you're using a straight join. Is the optimizer really reading the tables in 
 the wrong order, or is it just
 something you picked up off a blog without realising the impact?

Yes, I'm using a straight join intentionally.  I have 144 million
unindexed rows in dest. I want MySQL to start with those rows
sequentially, then join each to the matching record in source using
its index (244 million rows).  If I don't do that, MySQL tries to use
the indexed table first, causing full table scans on dest.  So with
straight_join, it does it in the proper order.  During experimentation
with different joins, a regular join would run for days; a straight
join runs in 3-4 hours.
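
(For anyone wanting to verify the join order: EXPLAIN doesn't accept UPDATE 
statements in current MySQL versions, but the equivalent SELECT shows the plan 
the optimizer picks; a sketch:

  EXPLAIN
  SELECT d.seq, s.seq
  FROM dest d STRAIGHT_JOIN source s
  WHERE d.key = s.key;

With STRAIGHT_JOIN, dest should appear first as a full scan, with source 
looked up through its primary key.)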

   Source is indexed by key+seq (key is primary key, but seq is  included as 
  a covering index).

 Good practice, that should prevent source from being read from disk, if your 
 index is fully in the cache.

With 244 million records in the source table, I'm not sure that
would fit in the cache.

 Do you have an index on dest.key, too? That might help performance as well if 
 it fits in memory, too,
 because you'll only need disk access for flushing writes, then.

I do not have an index on dest.key, also intentionally, for two
reasons. First, updating 144 million records in place is slow enough,
but trying to update 144 million records AND the index on that field
would absolutely kill the performance of the update. Once the update
is complete, I re-create the index with a sort using myisamchk.
Second, the starting value of dest.key for all 144 million records
is 0, so an index on that field wouldn't really help, I think.
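
(A related trick, for the record: MyISAM can also postpone non-unique index 
maintenance without dropping the index outright; whether it would beat the 
myisamchk approach here is untested:

  ALTER TABLE dest DISABLE KEYS;  -- stop maintaining non-unique indexes
  -- ...run the big UPDATE...
  ALTER TABLE dest ENABLE KEYS;   -- rebuild them, by sorting when possible
)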

  This query takes about 3.5 hours when I don't use LOCK TABLES, and over 4 
  hours when I do
  use LOCK TABLES.

 Most peculiar. Is the difference in performance consistent in repeated 
 executions?

Yes, I've done these tests about a dozen times now, and while not
exactly scientific, the result is that LOCK TABLES always leads to
longer running times. Not just for this query, but for other full-table
update/select/delete/insert queries too.  Not more than twice as long, but
easily a 10% to 25% increase.

  And before testing each run, I do restart the server so there is no
  query caching and I also use FLUSH TABLES between each test run.

 That's good, as it will give you the worst-case scenario. However, since such 
 an update is going to wreck your index cache anyway, you may just as well 
 preload the appropriate indices into it beforehand, if the cache is sized big 
 enough to hold them. That might give a minor performance boost, too, as the 
 server won't have to go to disk every so often to fetch index blocks.
 See http://dev.mysql.com/doc/refman/5.0/en/index-preloading.html for that.

I'll look into that, but the index is huge.  OK, I just preloaded the
source index using LOAD INDEX INTO CACHE source IGNORE LEAVES; it
took two minutes and 15 seconds to pre-load the index. I then ran two
small tests on smaller tables using the same update statement, and
they both yielded a massive increase in update speed. I'll have to
rebuild the large dest table again to try it on the biggest UPDATE,
but wow, if this is any indication, this was a great suggestion. I'll
report back on the results later today. Thank you!

  | key_buffer_size          | 402653184 |

 400MB of key buffer, that's not bad at all. Do a /show table status/ in your 
 database, and sum the index sizes. If your key buffer is larger than this 
 (and why not scale it for growth a bit?) all your indices will fit, which 
 will save on disk access for index lookups *and* for index-covered queries.

The index length for source is 5,889,037,312 bytes (about 5.5 GB), so it
will not fit entirely in the 400 MB key buffer.

Thanks again for your assistance.

-Hank




Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Dan Nelson
In the last episode (Sep 23), Shafi AHMED said:
 I have a MySQL database of 200G size and the backup fails due to the
 following issue:
 
 mysqldump: Got error: 1017: Can't find file:
 './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
 
 Can someone assist, please?

$ perror 24
OS error code  24:  Too many open files

You need to bump up the max open files limit in your OS.  It may be defaulting
to a small number like 1024.  If you can't change that limit, edit your my.cnf
and lower the table_open_cache number.  You'll lose performance, though,
since MySQL will have to keep closing some tables to open others.

http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html
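
Concretely, that could look like this; the numbers are examples, tune them to 
your table count:

  # my.cnf, [mysqld] section
  open_files_limit = 8192    # ask mysqld to raise its descriptor limit
  table_open_cache = 512     # or lower this if the OS limit can't be raised

  # and/or raise the OS limit for the mysql user before starting mysqld
  ulimit -n 8192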

-- 
Dan Nelson
dnel...@allantgroup.com




Re: Slower performance with LOCK TABLES

2011-09-23 Thread Hank
Hello Johan,

 Just an update: using the LOAD INDEX INTO CACHE statement for the
200 million row indexed source table, my correlated update statement
ran in 1 hour, 45 minutes to update 144 million rows.  That's a 50%
reduction in running time!

Thank you very much,

-Hank






Re: Slower performance with LOCK TABLES

2011-09-23 Thread Halász Sándor
 2011/09/23 12:56 +0200, Johan De Meersman:
 What it says is "if you are going to run many operations". You are updating
 many rows, but you are only doing ONE operation: a single UPDATE statement.

For what it's worth, the downloadable HTML help claims this only for MyISAM 
tables, because between LOCK TABLES and UNLOCK TABLES there is no key-cache 
flushing. InnoDB is not mentioned.





Re: mysqldump: Got error: 1017: Can't find file: './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES

2011-09-23 Thread Prabhat Kumar
Correct.  mysqldump has --lock-tables enabled by default, which means it
tries to lock all tables to be dumped before starting the dump, and doing
LOCK TABLES t1, t2, ... for a really big number of tables will inevitably
exhaust all available file descriptors, as LOCK needs all the tables to be
opened.

Workarounds: --skip-lock-tables will disable such locking completely.
Alternatively, --lock-all-tables will make mysqldump use FLUSH TABLES
WITH READ LOCK, which locks all tables in all databases (without opening
them); in this case mysqldump automatically disables --lock-tables,
because it makes no sense when --lock-all-tables is used. Or try adding
--single-transaction to your mysqldump command.
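
For example (database name taken from the error message; add your usual 
connection options):

  # no table locking at all (dump may be inconsistent while writes happen)
  mysqldump --skip-lock-tables ssconsole > ssconsole.sql

  # one global read lock instead of per-table LOCK TABLES
  mysqldump --lock-all-tables ssconsole > ssconsole.sql

  # consistent snapshot without locking, InnoDB tables only
  mysqldump --single-transaction ssconsole > ssconsole.sql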

On Fri, Sep 23, 2011 at 9:49 AM, Dan Nelson dnel...@allantgroup.com wrote:

 In the last episode (Sep 23), Shafi AHMED said:
  I have a MySQL database of 200G size and the backup fails due to the
  following issue:
 
  mysqldump: Got error: 1017: Can't find file:
  './ssconsole/ss_requestmaster.frm' (errno: 24) when using LOCK TABLES
 
  Can someone assist, please?

 $ perror 24
 OS error code  24:  Too many open files

 You need to bump up the max open files limit in your OS.  It may be defaulting
 to a small number like 1024.  If you can't change that limit, edit your my.cnf
 and lower the table_open_cache number.  You'll lose performance, though,
 since MySQL will have to keep closing some tables to open others.

 http://dev.mysql.com/doc/refman/5.5/en/not-enough-file-handles.html

 --
Dan Nelson
dnel...@allantgroup.com





-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat


Re: [question]any performance tools about UPDATE

2011-09-23 Thread Prabhat Kumar
I don't think there is anything beyond SHOW FULL PROCESSLIST, which shows
what state each query is in, e.g. whether it is waiting on a lock.
I/O-related things you check at the OS level.
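
One thing that does help on 5.1 and later is the slow query log, which will 
catch slow INSERT/UPDATE statements; a sketch, with example paths and 
thresholds:

  # my.cnf, [mysqld] section
  slow_query_log      = 1
  slow_query_log_file = /var/log/mysql/slow.log
  long_query_time     = 1   # seconds; log anything slower than this

or at runtime:

  SET GLOBAL slow_query_log = 1;
  SET GLOBAL long_query_time = 1;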

On Thu, Sep 22, 2011 at 11:07 PM, jiangwen jiang jiangwen...@gmail.comwrote:

 Hi,

 Are there any performance tools for UPDATE/INSERT queries?
 I want to monitor UPDATE/INSERT performance and check whether there is any
 performance bottleneck, for example:
 slow INSERT/UPDATE
 high I/O when executing INSERT

 Regards

 Thanks
 J.W




-- 
Best Regards,

Prabhat Kumar
MySQL DBA

My Blog: http://adminlinux.blogspot.com
My LinkedIn: http://www.linkedin.com/in/profileprabhat