RE: Concurrent read performance problems

2013-08-12 Thread Rick James
Please provide SHOW CREATE TABLE and SHOW TABLE STATUS for each table.
It smells like there is an inconsistency in the datatype of facts.accounts.id 
and what it is JOINing to.

Also provide the full SELECT.
How much RAM do you have?
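
A quick way to gather that (a sketch; the table names accounts and facts are
taken from the EXPLAIN later in this thread, and account_id is a hypothetical
name for the column facts joins on):

SHOW CREATE TABLE accounts;
SHOW CREATE TABLE facts;
SHOW TABLE STATUS LIKE 'accounts';
SHOW TABLE STATUS LIKE 'facts';

-- Compare the joined columns side by side:
SELECT table_name, column_name, column_type, character_set_name, collation_name
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND ((table_name = 'accounts' AND column_name = 'id')
    OR (table_name = 'facts'    AND column_name = 'account_id'));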

 -Original Message-
 From: Johan De Meersman [mailto:vegiv...@tuxera.be]
 Sent: Sunday, August 11, 2013 2:16 PM
 To: Brad Heller
 Cc: Johnny Withers; MySQL General List
 Subject: Re: Concurrent read performance problems
 
 Good to hear. A word of warning, though: make sure you don't have more
 connections allocating those buffers than your machine can handle
 memory-wise, or you'll start swapping and performance will REALLY go down
 the drain.
 
 A query/index based solution would still be preferred. Could you for
 instance materialize that subselect and periodically refresh it? Other
 tricks may also be available.
 
 Brad Heller b...@cloudability.com wrote:
 Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
 fixed the issue. Bumping them both to 512MB got our performance back
 on-par. I came up with a way to avoid the contention using a complex
 set of temp tables, but performance was abysmal.
 
 By reverting to the more straightforward query with the subselect as
 well as tweaking the max_heap_table_size and tmp_table_size I saw no
 resource contention causing slowdowns, as well as a 12x performance
 boost.
 Thanks
 for your help!
 
 *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
 Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
 @cloudability <http://www.twitter.com/cloudability>
 
 
 On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman
 vegiv...@tuxera.be wrote:
 
  True, which is why I said I suspected file-based sort :-) At one million
  rows, that seems to be an accurate guess, too. Still on the phone, though,
  and in bed. I'll read the thread better tomorrow, but you might get some
  benefit from cutting out the subselect if that's possible.
 
  If you have plenty of memory, have a look at the tmp_table_size and
  max_heap_table_size variables, too; those decide when the sort goes to
  disk.
 
 
  Johnny Withers joh...@pixelated.net wrote:
 
  Just because it says filesort doesn't mean it'll create a file on
 disk.
  Table schema and full query would be helpful here too
 
 
 http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
  On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com
 wrote:
 
   Yes sorry, here's the explain. It was taken from MariaDB 5.5.32.
 Looks like
   there is a lot of filesort goin' on here. Also note that I'm only
 using the
   first two fields of the covering index (intentionally).
 
 
    | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
    |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
    |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
    |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |
 
 
    *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
    Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
    @cloudability <http://www.twitter.com/cloudability>
 
 
   On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman
 vegiv...@tuxera.be
 
  wrote:
 
 
   On my phone now, but it smells of file-based sorting, making disk
 access
   the bottleneck. Can you provide the explain?
 
 
   Brad Heller b...@cloudability.com wrote:
 
 
   Hey list, first time posting here so apologies if this is the
 wrong
 
   forum
 
   for this but I'm really out of options on how to solve this
 problem!
 
   *Short version

Re: Concurrent read performance problems

2013-08-11 Thread Johan De Meersman
On my phone now, but it smells of file-based sorting, making disk access the 
bottleneck. Can you provide the explain?

Brad Heller b...@cloudability.com wrote:
Hey list, first time posting here so apologies if this is the wrong
forum
for this but I'm really out of options on how to solve this problem!

*Short version:*

1. High concurrent reads, performing the same well-indexed query type
to
the same two tables.
2. No additional traffic at all--just reads from these two tables. No
writes anywhere.
3. Very fast (sub-second) when server is only servicing one request at
a
time.
4. Very slow (30s+) when server is servicing 2-3 requests at a time.

Questions:

1. What resource is under contention for servicing read-only queries if
you
have a large buffer pool and a plenty-big thread cache?
2. What parameters can I tune to increase concurrent reads to these two
tables?

*Long version:*

I've got a MySQL server that has only about 50 connections open to it
at
any given time. It basically only has one OLAP query type being run
against
it that amounts to something like this:

SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;

These queries are well indexed and run very well
individually--sub-second,
usually even faster. When I run a few of these queries simultaneously
(2-3
on my laptop, 6-7 on our production boxes) performance grinds to a
halt:
Consistently about 30 seconds to service a query.

Ideally, I'd like to be able to run 50+ of these queries concurrently.

I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on
both
machines. I've tried tweaking the following my.cnf parameters to be
higher:

thread_concurrency = 20
thread_cache_size = 340
innodb_buffer_pool_size=2G

A few other my.cnf parameters that I have set:

innodb_file_format=Barracuda
innodb_file_format_max=Barracuda
innodb_file_per_table=1
skip-external-locking
innodb_log_files_in_group=2
innodb_log_file_size=2000M
max_allowed_packet=64M

Thanks in advance,

Brad Heller

-- 
Sent from Kaiten Mail. Please excuse my brevity.

Re: Concurrent read performance problems

2013-08-11 Thread Brad Heller
Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
there is a lot of filesort goin' on here. Also note that I'm only using the
first two fields of the covering index (intentionally).

| id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
|  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
|  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
|  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |


*Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 | Skype:
brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability <http://www.twitter.com/cloudability>


On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 On my phone now, but it smells of file-based sorting, making disk access
 the bottleneck. Can you provide the explain?


 Brad Heller b...@cloudability.com wrote:

 Hey list, first time posting here so apologies if this is the wrong forum
 for this but I'm really out of options on how to solve this problem!

 *Short version:*


 1. High concurrent reads, performing the same well-indexed query type to
 the same two tables.
 2. No additional traffic at all--just reads from these two tables. No
 writes anywhere.
 3. Very fast (sub-second) when server is only servicing one request at a
 time.
 4. Very slow (30s+) when server is servicing 2-3 requests at a time.

 Questions:

 1. What resource is under contention for servicing read-only queries if you
 have a large buffer pool and a plenty-big thread cache?
 2. What parameters can I tune to increase concurrent reads to these two
 tables?

 *Long version:*


 I've got a MySQL server that has only about 50 connections open to it at
 any given time. It basically only has one OLAP query type being run against
 it that amounts to something like this:

 SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
 FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;

 These queries are well indexed and run very well individually--sub-second,
 usually even faster. When I run a few of these queries simultaneously (2-3
 on my laptop, 6-7 on our production boxes) performance grinds to a halt:
 Consistently about 30 seconds to service a query.

 Ideally, I'd like to be able to run 50+ of these queries concurrently.

 I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on both
 machines. I've tried tweaking the following my.cnf parameters to be higher:

 thread_concurrency = 20
 thread_cache_size = 340
 innodb_buffer_pool_size=2G

 A few other my.cnf parameters that I have set:

 innodb_file_format=Barracuda
 innodb_file_format_max=Barracuda
 innodb_file_per_table=1
 skip-external-locking
 innodb_log_files_in_group=2
 innodb_log_file_size=2000M
 max_allowed_packet=64M

 Thanks in advance,

 Brad Heller


 --
 Sent from Kaiten Mail. Please excuse my brevity.



Re: Concurrent read performance problems

2013-08-11 Thread Johnny Withers
Just because it says filesort doesn't mean it'll create a file on disk.
Table schema and full query would be helpful here too

http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com wrote:

 Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
 there is a lot of filesort goin' on here. Also note that I'm only using the
 first two fields of the covering index (intentionally).


 | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
 |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
 |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
 |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |


 *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
 Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
 @cloudability <http://www.twitter.com/cloudability>


 On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:

  On my phone now, but it smells of file-based sorting, making disk access
  the bottleneck. Can you provide the explain?
 
 
  Brad Heller b...@cloudability.com wrote:
 
  Hey list, first time posting here so apologies if this is the wrong
 forum
  for this but I'm really out of options on how to solve this problem!
 
  *Short version:*
 
 
  1. High concurrent reads, performing the same well-indexed query type to
  the same two tables.
  2. No additional traffic at all--just reads from these two tables. No
  writes anywhere.
  3. Very fast (sub-second) when server is only servicing one request at a
  time.
  4. Very slow (30s+) when server is servicing 2-3 requests at a time.
 
  Questions:
 
  1. What resource is under contention for servicing read-only queries if
 you
  have a large buffer pool and a plenty-big thread cache?
  2. What parameters can I tune to increase concurrent reads to these two
  tables?
 
  *Long version:*
 
 
  I've got a MySQL server that has only about 50 connections open to it at
  any given time. It basically only has one OLAP query type being run against
  it that amounts to something like this:
 
  SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
  FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
 
  These queries are well indexed and run very well
 individually--sub-second,
  usually even faster. When I run a few of these queries simultaneously
 (2-3
  on my laptop, 6-7 on our production boxes) performance grinds to a halt:
  Consistently about 30 seconds to service a query.
 
  Ideally, I'd like to be able to run 50+ of these queries concurrently.
 
  I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on
 both
  machines. I've tried tweaking the following my.cnf parameters to be
 higher:
 
  thread_concurrency = 20
  thread_cache_size = 340
  innodb_buffer_pool_size=2G
 
  A few other my.cnf parameters that I have set:
 
  innodb_file_format=Barracuda
  innodb_file_format_max=Barracuda
  innodb_file_per_table=1
  skip-external-locking
  innodb_log_files_in_group=2
  innodb_log_file_size=2000M
  max_allowed_packet=64M
 
  Thanks in advance,
 
  Brad Heller
 
 
  --
  Sent from Kaiten Mail. Please excuse my brevity.
 



Re: Concurrent read performance problems

2013-08-11 Thread Johan De Meersman
True, which is why I said I suspected file-based sort :-) At one million rows, 
that seems to be an accurate guess, too. Still on the phone, though, and in 
bed. I'll read the thread better tomorrow, but you might get some benefit from 
cutting out the subselect if that's possible.

If you have plenty of memory, have a look at the tmp_table_size and
max_heap_table_size variables, too; those decide when the sort goes to disk.

Johnny Withers joh...@pixelated.net wrote:
Just because it says filesort doesn't mean it'll create a file on disk.
Table schema and full query would be helpful here too

http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com wrote:

 Yes sorry, here's the explain. It was taken from MariaDB 5.5.32.
Looks like
 there is a lot of filesort goin' on here. Also note that I'm only
using the
 first two fields of the covering index (intentionally).



 | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
 |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
 |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
 |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |


 *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
 Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
 @cloudability <http://www.twitter.com/cloudability>


 On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman
vegiv...@tuxera.be
 wrote:

  On my phone now, but it smells of file-based sorting, making disk
access
  the bottleneck. Can you provide the explain?
 
 
  Brad Heller b...@cloudability.com wrote:
 
  Hey list, first time posting here so apologies if this is the
wrong
 forum
  for this but I'm really out of options on how to solve this
problem!
 
  *Short version:*
 
 
  1. High concurrent reads, performing the same well-indexed query
type to
  the same two tables.
  2. No additional traffic at all--just reads from these two tables.
No
  writes anywhere.
  3. Very fast (sub-second) when server is only servicing one
request at a
  time.
  4. Very slow (30s+) when server is servicing 2-3 requests at a
time.
 
  Questions:
 
  1. What resource is under contention for servicing read-only
queries if
 you
  have a large buffer pool and a plenty-big thread cache?
  2. What parameters can I tune to increase concurrent reads to
these two
  tables?
 
  *Long version:*
 
 
  I've got a MySQL server that has only about 50 connections open to
it at
  any given time. It basically only has one OLAP query type being run against
  it that amounts to something like this:
 
  SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1
aggregate)
  FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
 
  These queries are well indexed and run very well
 individually--sub-second,
  usually even faster. When I run a few of these queries
simultaneously
 (2-3
  on my laptop, 6-7 on our production boxes) performance grinds to a
halt:
  Consistently about 30 seconds to service a query.
 
  Ideally, I'd like to be able to run 50+ of these queries
concurrently.
 
  I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results
on
 both
  machines. I've tried tweaking the following my.cnf parameters to
be
 higher:
 
  thread_concurrency = 20
  thread_cache_size = 340
  innodb_buffer_pool_size=2G
 
  A few other my.cnf parameters that I have set:
 
  innodb_file_format=Barracuda
  innodb_file_format_max=Barracuda
  

Re: Concurrent read performance problems

2013-08-11 Thread Brad Heller
Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
fixed the issue. Bumping them both to 512MB got our performance back
on-par. I came up with a way to avoid the contention using a complex set of
temp tables, but performance was abysmal.

By reverting to the more straightforward query with the subselect as well
as tweaking the max_heap_table_size and tmp_table_size I saw no resource
contention causing slowdowns, as well as a 12x performance boost. Thanks
for your help!
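
For reference, a minimal sketch of that change (the 512MB figure is the one
described above; note both variables are allocated per connection, so whether
to set them globally or per session is a judgment call):

# my.cnf, [mysqld] section
tmp_table_size      = 512M
max_heap_table_size = 512M

-- or, for the current session only:
SET SESSION tmp_table_size      = 512 * 1024 * 1024;
SET SESSION max_heap_table_size = 512 * 1024 * 1024;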

*Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 | Skype:
brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability <http://www.twitter.com/cloudability>


On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 True, which is why I said I suspected file-based sort :-) At one million
 rows, that seems to be an accurate guess, too. Still on the phone, though,
 and in bed. I'll read the thread better tomorrow, but you might get some
 benefit from cutting out the subselect if that's possible.

 If you have plenty of memory, have a look at the tmp_table_size and
 max_heap_table_size variables, too; those decide when the sort goes to disk.


 Johnny Withers joh...@pixelated.net wrote:

  Just because it says filesort doesn't mean it'll create a file on disk.
 Table schema and full query would be helpful here too

 http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
 On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com wrote:

  Yes sorry, here's the explain. It was taken from MariaDB 5.5.32. Looks like
  there is a lot of filesort goin' on here. Also note that I'm only using the
  first two fields of the covering index (intentionally).


  | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
  |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
  |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
  |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |


  *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
  Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
  @cloudability <http://www.twitter.com/cloudability>


  On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman vegiv...@tuxera.be

 wrote:


  On my phone now, but it smells of file-based sorting, making disk access
  the bottleneck. Can you provide the explain?


  Brad Heller b...@cloudability.com wrote:


  Hey list, first time posting here so apologies if this is the wrong

  forum

  for this but I'm really out of options on how to solve this problem!

  *Short version:*


  1. High concurrent reads, performing the same well-indexed query type to
  the same two tables.
  2. No additional traffic at all--just reads from these two tables. No


 writes anywhere.
  3. Very fast (sub-second) when server is only servicing one request at a
  time.
  4. Very slow (30s+) when server is servicing 2-3 requests at a time.

  Questions:

  1. What resource is under contention for servicing read-only queries if

  you

  have a large buffer pool and a plenty-big thread cache?
  2. What parameters can I tune to increase concurrent reads to these two
  tables?

  *Long version:*


  I've got a MySQL server that has only about 50 connections open to it at
   any given time. It basically only has one OLAP query type being run against
   it that amounts to something like this:

   SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
   FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;

  These queries are well indexed and run very well

  individually--sub-second,

  usually 

Re: Concurrent read performance problems

2013-08-11 Thread Johan De Meersman
Good to hear. A word of warning, though: make sure you don't have more 
connections allocating those buffers than your machine can handle memory-wise, 
or you'll start swapping and performance will REALLY go down the drain.
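
As a rough illustration of the risk (assuming the ~50 connections and the
512MB caps mentioned elsewhere in this thread): 50 connections x 512MB is up
to ~25GB of in-memory temporary tables in the worst case, on top of the 2GB
InnoDB buffer pool.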

A query/index based solution would still be preferred. Could you for instance 
materialize that subselect and periodically refresh it? Other tricks may also 
be available.
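
A sketch of what that could look like (all table and column names here are
hypothetical, loosely patterned on the EXPLAIN earlier in the thread; the
refresh interval depends on how stale the numbers may be):

-- One-time: materialize the inner aggregate into a summary table
CREATE TABLE facts_summary AS
SELECT f.account_id, f.period, SUM(f.amount) AS total
FROM facts f
INNER JOIN accounts a ON a.id = f.account_id
GROUP BY f.account_id, f.period;

-- Periodic refresh (e.g. from cron or the event scheduler)
TRUNCATE TABLE facts_summary;
INSERT INTO facts_summary
SELECT f.account_id, f.period, SUM(f.amount) AS total
FROM facts f
INNER JOIN accounts a ON a.id = f.account_id
GROUP BY f.account_id, f.period;

-- The outer GROUP BY query then reads facts_summary instead of the subselect.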

Brad Heller b...@cloudability.com wrote:
Johan, your suggestion to tweak max_heap_table_size and tmp_table_size
fixed the issue. Bumping them both to 512MB got our performance back
on-par. I came up with a way to avoid the contention using a complex
set of
temp tables, but performance was abysmal.

By reverting to the more straightforward query with the subselect as well
as tweaking the max_heap_table_size and tmp_table_size I saw no
resource
contention causing slowdowns, as well as a 12x performance boost.
Thanks
for your help!

*Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
@cloudability <http://www.twitter.com/cloudability>


On Sun, Aug 11, 2013 at 1:32 PM, Johan De Meersman
vegiv...@tuxera.be wrote:

 True, which is why I said I suspected file-based sort :-) At one
million
 rows, that seems to be an accurate guess, too. Still on the phone,
though,
 and in bed. I'll read the thread better tomorrow, but you might get
some
 benefit from cutting out the subselect if that's possible.

 If you have plenty of memory, have a look at the tmp_table_size and
 max_heap_table_size variables, too; those decide when the sort goes to
 disk.


 Johnny Withers joh...@pixelated.net wrote:

 Just because it says filesort doesn't mean it'll create a file on
disk.
 Table schema and full query would be helpful here too


http://www.mysqlperformanceblog.com/2009/03/05/what-does-using-filesort-mean-in-mysql/
 On Aug 11, 2013 1:28 PM, Brad Heller b...@cloudability.com
wrote:

  Yes sorry, here's the explain. It was taken from MariaDB 5.5.32.
Looks like
  there is a lot of filesort goin' on here. Also note that I'm only
using the
  first two fields of the covering index (intentionally).


  | id | select_type | table      | type  | possible_keys                      | key                        | key_len | ref               | rows    | Extra                                                      |
  |  1 | PRIMARY     | <derived2> | ALL   | NULL                               | NULL                       | NULL    | NULL              | 1004685 | Using temporary; Using filesort                            |
  |  2 | DERIVED     | accounts   | range | PRIMARY,unique_account_identifiers | unique_account_identifiers | 257     | NULL              |       3 | Using where; Using index; Using temporary; Using filesort |
  |  2 | DERIVED     | facts      | ref   | covering                           | covering                   | 4       | facts.accounts.id |  334895 | Using where                                                |


  *Brad Heller* | Engineering Lead | Cloudability.com | 541-231-1514 |
  Skype: brad.heller | @bradhe <http://www.twitter.com/bradhe> |
  @cloudability <http://www.twitter.com/cloudability>


  On Sun, Aug 11, 2013 at 8:45 AM, Johan De Meersman
vegiv...@tuxera.be

 wrote:


  On my phone now, but it smells of file-based sorting, making disk
access
  the bottleneck. Can you provide the explain?


  Brad Heller b...@cloudability.com wrote:


  Hey list, first time posting here so apologies if this is the
wrong

  forum

  for this but I'm really out of options on how to solve this
problem!

  *Short version:*


  1. High concurrent reads, performing the same well-indexed query
type to
  the same two tables.
  2. No additional traffic at all--just reads from these two
tables. No


 writes anywhere.
  3. Very fast (sub-second) when server is only servicing one
request at a
  time.
  4. Very slow (30s+) when server is servicing 2-3 requests at a
time.

  Questions:

  1. What resource is under contention for servicing read-only
queries if

  you

  have a large buffer pool and a plenty-big thread cache?
  2. What parameters can I tune to increase concurrent reads to
these two
  tables?

  *Long version:*


  

Concurrent read performance problems

2013-08-10 Thread Brad Heller
Hey list, first time posting here so apologies if this is the wrong forum
for this but I'm really out of options on how to solve this problem!

*Short version:*

1. High concurrent reads, performing the same well-indexed query type to
the same two tables.
2. No additional traffic at all--just reads from these two tables. No
writes anywhere.
3. Very fast (sub-second) when server is only servicing one request at a
time.
4. Very slow (30s+) when server is servicing 2-3 requests at a time.

Questions:

1. What resource is under contention for servicing read-only queries if you
have a large buffer pool and a plenty-big thread cache?
2. What parameters can I tune to increase concurrent reads to these two
tables?

*Long version:*

I've got a MySQL server that has only about 50 connections open to it at
any given time. It basically only has one OLAP query type being run against
it that amounts to something like this:

SELECT (3 fields with 1 count) FROM (SELECT (3 fields with 1 aggregate)
FROM table INNER JOIN ... WHERE ... GROUP BY ...) dt GROUP BY ...;
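
For illustration, a hypothetical query of that shape (the table and column
names are made up, loosely patterned on the EXPLAIN posted elsewhere in this
thread):

SELECT account_id, period, COUNT(*) AS n
FROM (
    SELECT f.account_id, f.period, SUM(f.amount) AS total
    FROM facts f
    INNER JOIN accounts a ON a.id = f.account_id
    WHERE a.identifier = 'some-account'
    GROUP BY f.account_id, f.period
) dt
GROUP BY account_id, period;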

These queries are well indexed and run very well individually--sub-second,
usually even faster. When I run a few of these queries simultaneously (2-3
on my laptop, 6-7 on our production boxes) performance grinds to a halt:
Consistently about 30 seconds to service a query.

Ideally, I'd like to be able to run 50+ of these queries concurrently.

I've tried MySQL 5.5.27 and MySQL 5.6.13 and get the same results on both
machines. I've tried tweaking the following my.cnf parameters to be higher:

thread_concurrency = 20
thread_cache_size = 340
innodb_buffer_pool_size=2G

A few other my.cnf parameters that I have set:

innodb_file_format=Barracuda
innodb_file_format_max=Barracuda
innodb_file_per_table=1
skip-external-locking
innodb_log_files_in_group=2
innodb_log_file_size=2000M
max_allowed_packet=64M

Thanks in advance,

Brad Heller


Re: Performance problems on MySQL

2010-09-06 Thread Alexandre Vieira
Hi,

We're changing it to INT(9). Apparently someone remembered to change the type
of data in this field from an alphanumeric value to an INT(9).

I'm going to change this asap.
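
A minimal sketch of that change (assuming the values really are always
numeric; INT(9) only sets the display width, the storage type is still a
4-byte INT, so BIGINT would be needed if values can exceed ~4.2 billion
unsigned):

ALTER TABLE clientinfo MODIFY userid INT(9) UNSIGNED NOT NULL;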

Thanks

BR
AJ

On Mon, Sep 6, 2010 at 5:17 AM, mos mo...@fastmail.fm wrote:

 At 04:44 AM 9/3/2010, Alexandre Vieira wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len
 | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY | 23
 | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ


 Alexandre,
 Do you have UserId declared as CHAR? It looks numeric to me. If it is
 stored as an integer then don't use the ' ' in the select statement
 otherwise it needs to convert it.
 If UserId values are integers and you have the column defined as CHAR, then
 declare the column UserId as integer or BigInt and the searches should be
 faster than searching on CHAR.

 Mike




  On Thu, Sep 2, 2010 at 8:52 PM, Johnny Withers joh...@pixelated.net
 wrote:

  What about an explain of this query:
 
 
  SELECT * FROM clientinfo WHERE userid='182106617';
 
  -JW
 
 
  On Thu, Sep 2, 2010 at 9:35 AM, Alexandre Vieira nul...@gmail.com
 wrote:
 
  John, Johnny,
 
  Thanks for the prompt answer.
 
  mysql SHOW CREATE TABLE clientinfo;
 
 
 ++--+
  | Table  | Create
  Table
  |
 
 
 ++--+
  | clientinfo | CREATE TABLE `clientinfo` (
`userid` varchar(21) NOT NULL default '',
`units` float default NULL,
`date_last_query` datetime default NULL,
`last_acc` int(10) unsigned default NULL,
`date_last_units` datetime default NULL,
`notification` int(10) unsigned NOT NULL default '0',
`package` char(1) default NULL,
`user_type` varchar(5) default NULL,
PRIMARY KEY  (`userid`)
  ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
 
 
 ++--+
  1 row in set (0.00 sec)
  mysql SHOW INDEX FROM clientinfo;
 
 
 +++--+--+-+---+-+--++--++-+
  | Table  | Non_unique | Key_name | Seq_in_index | Column_name |
  Collation | Cardinality | Sub_part | Packed | Null | Index_type |
 Comment |
 
 
 +++--+--+-+---+-+--++--++-+
  | clientinfo |  0 | PRIMARY  |1 | userid  |
  A |  460056 | NULL | NULL   |  | BTREE  |
   |
 
 
 +++--+--+-+---+-+--++--++-+
  1 row in set (0.00 sec)
 
 
  SELECT * FROM clientinfo WHERE userid='182106617';
 
  UPDATE clientinfo SET
 
 units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units=now(),notification=0
  WHERE userid='152633876';
 
  INSERT INTO clientinfo VALUES
  ('171918726',101.0,now(),1,now(),0,'D','PRE') ON DUPLICATE KEY UPDATE
  units=101.0, date_last_query=now(), last_acc=1, date_last_units=now(),
  notification=0, package='D', user_type='PRE';
 
  DELETE FROM clientinfo WHERE units='155618918';
 
  There are no other type of queries.
 
  We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of
 RAM.
 
  We also run some 

Re: Performance problems on MySQL

2010-09-06 Thread Alexandre Vieira
Hi List,

In a 20m interval at our max load I have:

OS WAIT ARRAY INFO: reservation count 637, signal count 625
Mutex spin waits 0, rounds 19457, OS waits 428
RW-shared spins 238, OS waits 119; RW-excl spins 13, OS waits 8

(The values are the difference between the start and end of this 20m
interval)

The machine has 2 CPU's and usually has 40-50% of idle CPU.

Our workload consists of lots of parallel simple queries (SELECTs and
UPDATEs with a simple condition on the PK) on a 500k record/40MB table with
an INDEX on the PK.

| innodb_sync_spin_loops  | 20 |
| innodb_thread_concurrency   | 16 |
| innodb_thread_sleep_delay   | 1  |

I've been sampling my innodb status and there are always 16 queries inside
InnoDB and some 20-30 in queue. Therefore lowering thread_sleep_delay won't
help.

Since I have about 47 spin rounds per OS wait, would InnoDB gain something
from raising sync_spin_loops a little?
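
A cautious experiment along those lines (assuming the variable is dynamic on
this 5.0 build; otherwise it goes in my.cnf and needs a restart):

SET GLOBAL innodb_sync_spin_loops = 40;   -- up from the current 20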

Also, should I be capping thread_concurrency with a 2 CPU machine?

Unfortunately this machine only has 2 RAID1 disks. I can't spread the disk
load (datafile/logfiles) between disks.

extended device statistics
 r/s   w/s  kr/s   kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
 0.0  83.7   0.0  379.3   0.0   1.0     0.1    11.5   1  94  d2 (/var)

Usually the iostat busy indicator is near 100%.

Any hints on something I could tune to have fewer OS waits and help with
the Disk I/O?

=
100906 18:33:40 INNODB MONITOR OUTPUT
=
Per second averages calculated from the last 47 seconds
--
SEMAPHORES
--
OS WAIT ARRAY INFO: reservation count 266140, signal count 259267
Mutex spin waits 0, rounds 7407879, OS waits 179189
RW-shared spins 93878, OS waits 46196; RW-excl spins 9473, OS waits 7311

---
LOG
---
Log sequence number 62 2833945222
Log flushed up to   62 2833944847
Last checkpoint at  62 2828803314
1 pending log writes, 0 pending chkp writes
18419416 log i/o's done, 37.64 log i/o's/second

--
ROW OPERATIONS
--
16 queries inside InnoDB, 27 queries in queue
1 read views open inside InnoDB
Main thread id 11, state: sleeping
Number of rows inserted 603196, updated 9006533, deleted 111028, read
30145300
0.17 inserts/s, 18.49 updates/s, 0.00 deletes/s, 41.47 reads/s

If nothing else can be done I'll advise the client to acquire new HW for
this DB.

By the way, would upgrading from 5.0.45-log to 5.1.50 make a huge difference
in terms of performance?

BR
AJ

On Mon, Sep 6, 2010 at 10:46 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 We're changing it to INT(9). Apparently someone remembered to change the
 type of data in this field from an alphanumeric value to an INT(9).

 I'm going to change this asap.

 Thanks

 BR
 AJ


 On Mon, Sep 6, 2010 at 5:17 AM, mos mo...@fastmail.fm wrote:

 At 04:44 AM 9/3/2010, Alexandre Vieira wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len
 | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY | 23
 | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ


 Alexandre,
 Do you have UserId declared as CHAR? It looks numeric to me. If it is
 stored as an integer then don't use the ' ' in the select statement
 otherwise it needs to convert it.
 If UserId values are integers and you have the column defined as CHAR,
 then declare the column UserId as integer or BigInt and the searches should
 be faster than searching on CHAR.

 Mike




  On Thu, Sep 2, 2010 at 8:52 PM, Johnny Withers joh...@pixelated.net
 wrote:

  What about an explain of this query:
 
 
  SELECT * FROM clientinfo WHERE userid='182106617';
 
  -JW
 
 
  On Thu, Sep 2, 2010 at 9:35 AM, Alexandre Vieira nul...@gmail.com
 wrote:
 
  John, Johnny,
 
  Thanks for the prompt answer.
 
  mysql SHOW CREATE TABLE clientinfo;
 
 
 ++--+
  | Table  | Create
  Table
  |
 
 
 

Re: Performance problems on MySQL

2010-09-05 Thread Shawn Green (MySQL)

On 9/3/2010 3:15 PM, Johnny Withers wrote:

It seems that when your index is PRIMARY on InnoDB tables, it's magic and is
part of the data, so it is not included in the index_length field.

I have never noticed this. I don't think adding a new index will make a
difference.

You could try moving your log files to a different disk array than where
your data is. If you have binary and query logging enabled, it's probably a
good idea.

Johnny is correct. The PRIMARY KEY of an InnoDB table is indeed part of 
the data:


http://dev.mysql.com/doc/refman/5.1/en/innodb-index-types.html

That explains why there is no length to this index.

--
Shawn Green
MySQL Principal Technical Support Engineer
Oracle USA, Inc.
Office: Blountville, TN




Re: Performance problems on MySQL

2010-09-05 Thread mos

At 04:44 AM 9/3/2010, Alexandre Vieira wrote:

Hi Johnny,

mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table  | type  | possible_keys | key | key_len
| ref   | rows | Extra |
++-++---+---+-+-+---+--+---+
|  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY | 23
| const |1 |   |
++-++---+---+-+-+---+--+---+
1 row in set (0.53 sec)

Thanks

BR
AJ


Alexandre,
 Do you have UserId declared as CHAR? It looks numeric to me. If it is 
stored as an integer, then don't use the quotes (' ') in the SELECT statement; 
otherwise it needs to convert the value.
If UserId values are integers and you have the column defined as CHAR, then 
declare the column UserId as integer or BigInt and the searches should be 
faster than searching on CHAR.
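
The cost of that implicit conversion is visible in EXPLAIN. A sketch, against
the current VARCHAR column: the unquoted form cannot use the PRIMARY key
(MySQL compares the values as numbers, so it scans), while the quoted form can:

EXPLAIN SELECT * FROM clientinfo WHERE userid = 911930694;
EXPLAIN SELECT * FROM clientinfo WHERE userid = '911930694';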


Mike




On Thu, Sep 2, 2010 at 8:52 PM, Johnny Withers joh...@pixelated.net wrote:

 What about an explain of this query:


 SELECT * FROM clientinfo WHERE userid='182106617';

 -JW


 On Thu, Sep 2, 2010 at 9:35 AM, Alexandre Vieira nul...@gmail.com wrote:

 John, Johnny,

 Thanks for the prompt answer.

 mysql SHOW CREATE TABLE clientinfo;

 
++--+

 | Table  | Create
 Table
 |

 
++--+

 | clientinfo | CREATE TABLE `clientinfo` (
   `userid` varchar(21) NOT NULL default '',
   `units` float default NULL,
   `date_last_query` datetime default NULL,
   `last_acc` int(10) unsigned default NULL,
   `date_last_units` datetime default NULL,
   `notification` int(10) unsigned NOT NULL default '0',
   `package` char(1) default NULL,
   `user_type` varchar(5) default NULL,
   PRIMARY KEY  (`userid`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

 
++--+

 1 row in set (0.00 sec)
 mysql SHOW INDEX FROM clientinfo;

 
+++--+--+-+---+-+--++--++-+

 | Table  | Non_unique | Key_name | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | 
Comment |


 
+++--+--+-+---+-+--++--++-+

 | clientinfo |  0 | PRIMARY  |1 | userid  |
 A |  460056 | NULL | NULL   |  | 
BTREE  | |


 
+++--+--+-+---+-+--++--++-+

 1 row in set (0.00 sec)


 SELECT * FROM clientinfo WHERE userid='182106617';

 UPDATE clientinfo SET
 
units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units=now(),notification=0

 WHERE userid='152633876';

 INSERT INTO clientinfo VALUES
 ('171918726',101.0,now(),1,now(),0,'D','PRE') ON DUPLICATE KEY UPDATE
 units=101.0, date_last_query=now(), last_acc=1, date_last_units=now(),
 notification=0, package='D', user_type='PRE';

 DELETE FROM clientinfo WHERE units='155618918';

 There are no other type of queries.

 We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of RAM.

 We also run some other applications in the server, but nothing that
 consumes all the CPU/Memory. The machine has almost 1GB of free memory and
 50% of idle CPU time at any time.

 TIA

 BR
 Alex


 --
 Alexandre Vieira - nul...@gmail.com




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




--
Alexandre 

Re: Performance problems on MySQL

2010-09-03 Thread Alexandre Vieira
Hi Johnny,

mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';
++-++---+---+-+-+---+--+---+
| id | select_type | table  | type  | possible_keys | key | key_len
| ref   | rows | Extra |
++-++---+---+-+-+---+--+---+
|  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY | 23
| const |1 |   |
++-++---+---+-+-+---+--+---+
1 row in set (0.53 sec)

Thanks

BR
AJ

On Thu, Sep 2, 2010 at 8:52 PM, Johnny Withers joh...@pixelated.net wrote:

 What about an explain of this query:


 SELECT * FROM clientinfo WHERE userid='182106617';

 -JW


 On Thu, Sep 2, 2010 at 9:35 AM, Alexandre Vieira nul...@gmail.com wrote:

 John, Johnny,

 Thanks for the prompt answer.

 mysql SHOW CREATE TABLE clientinfo;

 ++--+
 | Table  | Create
 Table
 |

 ++--+
 | clientinfo | CREATE TABLE `clientinfo` (
   `userid` varchar(21) NOT NULL default '',
   `units` float default NULL,
   `date_last_query` datetime default NULL,
   `last_acc` int(10) unsigned default NULL,
   `date_last_units` datetime default NULL,
   `notification` int(10) unsigned NOT NULL default '0',
   `package` char(1) default NULL,
   `user_type` varchar(5) default NULL,
   PRIMARY KEY  (`userid`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

 ++--+
 1 row in set (0.00 sec)
 mysql SHOW INDEX FROM clientinfo;

 +++--+--+-+---+-+--++--++-+
 | Table  | Non_unique | Key_name | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 +++--+--+-+---+-+--++--++-+
 | clientinfo |  0 | PRIMARY  |1 | userid  |
 A |  460056 | NULL | NULL   |  | BTREE  | |

 +++--+--+-+---+-+--++--++-+
 1 row in set (0.00 sec)


 SELECT * FROM clientinfo WHERE userid='182106617';

 UPDATE clientinfo SET
 units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units=now(),notification=0
 WHERE userid='152633876';

 INSERT INTO clientinfo VALUES
 ('171918726',101.0,now(),1,now(),0,'D','PRE') ON DUPLICATE KEY UPDATE
 units=101.0, date_last_query=now(), last_acc=1, date_last_units=now(),
 notification=0, package='D', user_type='PRE';

 DELETE FROM clientinfo WHERE units='155618918';

 There are no other type of queries.

 We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of RAM.

 We also run some other applications in the server, but nothing that
 consumes all the CPU/Memory. The machine has almost 1GB of free memory and
 50% of idle CPU time at any time.

 TIA

 BR
 Alex


 --
 Alexandre Vieira - nul...@gmail.com




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




-- 
Alexandre Vieira - nul...@gmail.com


Re: Performance problems on MySQL

2010-09-03 Thread Jangita

On 02/09/2010 6:05 p, Alexandre Vieira wrote:

Hi Jangita,

I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's
246MB of 350MB free.

| Innodb_buffer_pool_pages_data | 6020   |
| Innodb_buffer_pool_pages_dirty| 1837   |
| Innodb_buffer_pool_pages_flushed  | 673837 |
| Innodb_buffer_pool_pages_free | 15779  |
| Innodb_buffer_pool_pages_latched  | 0  |
| Innodb_buffer_pool_pages_misc | 601|
| Innodb_buffer_pool_pages_total| 22400  |
| Innodb_buffer_pool_read_ahead_rnd | 1  |
| Innodb_buffer_pool_read_ahead_seq | 0  |
| Innodb_buffer_pool_read_requests  | 48471963   |
| Innodb_buffer_pool_reads  | 3497   |
| Innodb_buffer_pool_wait_free  | 0  |
| Innodb_buffer_pool_write_requests | 21700478   |

Why would I need to increase it?

Thanks

BR
AJ
I'm guessing (just a guess) that you have a lot of buffer_pool_size free 
because MySQL cannot fit the information into it in one go, so it doesn't 
use it at all. Try upping it; if it doesn't work you can always set it back.


--
Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com




Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
Ok, so I'm stumped?

What kind of hardware is behind this thing?

-JW

On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key | key_len
 | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY | 23
 | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





Re: Performance problems on MySQL

2010-09-03 Thread Alexandre Vieira
Hi,

I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
makes a little difference but not enough for the application to run in real
time processing.

It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

MySQL is eating 179MB of RAM and 5,4% of CPU.

  PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
  6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

The machine has ~1GB of free memory. MySQL and InnoDB have free pages to grow
and we have ~50% of free CPU time.

Currently I can't use the replication server since the application running
on top of this DB can only talk to one data source.
At the moment it's also not possible to change the application in order to
make it use the DB more wisely.

Basically we have a table with lots of selects, lots of update, lots of
inserts and deletes. Data manipulation is random, doesn't follow any
specific pattern. All working concurrently.

A big bottleneck is:

8 queries inside InnoDB, 28 queries in queue
1 read views open inside InnoDB

Increasing innodb_thread_concurrency might help without causing any problems
to the overall performance.

Makes total sense if you read:
http://peter-zaitsev.livejournal.com/9138.html
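
A minimal experiment along those lines (assuming the variable is dynamic on
this 5.0.45 build; otherwise set it under [mysqld] in my.cnf and restart):

SET GLOBAL innodb_thread_concurrency = 32;  -- up from the current 16
-- or, to compare against no cap at all (0 disables the limit on recent 5.0 releases):
SET GLOBAL innodb_thread_concurrency = 0;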

Thanks in advance.

BR
AJ

On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

 On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY |
 23  | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





-- 
Alexandre Vieira - nul...@gmail.com


Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
What does

SHOW TABLE STATUS LIKE 'table_name'

Say about this table?

-JW

On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and it
 makes a little difference but not enough for the application to run in real
 time processing.

 It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

 MySQL is eating 179MB of RAM and 5,4% of CPU.

   PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
   6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

 The machine has ~1GB of free memory. MySQL and InnoDB has free pages to
 grow and we have ~50% of free CPU time.

 Currently I can't use the replication server since the application running
 on top if this BD can only talk to 1 data source.
 At the moment it's also not possible to change the application in order to
 make it use the DB more wisely.

 Basically we have a table with lots of selects, lots of update, lots of
 inserts and deletes. Data manipulation is random, doesn't follow any
 specific pattern. All working concurrently.

 A big bottleneck is:

 8 queries inside InnoDB, 28 queries in queue

 1 read views open inside InnoDB

 Increasing innodb_thread_concurrency might help without causing any
 problems to the overall performance.

 Makes total sense if you read:
 http://peter-zaitsev.livejournal.com/9138.html

 Thanks in advance.

 BR
 AJ


 On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

 On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY |
 23  | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





 --
 Alexandre Vieira - nul...@gmail.com




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Performance problems on MySQL

2010-09-03 Thread Alexandre Vieira
Hi,

mysql SHOW TABLE STATUS LIKE 'clientinfo';
           Name: clientinfo
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 504762
 Avg_row_length: 82
    Data_length: 41500672
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2010-09-01 03:21:36
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 276480 kB
1 row in set (0.02 sec)

BR
AJ

On Fri, Sep 3, 2010 at 3:18 PM, Johnny Withers joh...@pixelated.net wrote:

 What does

 SHOW TABLE STATUS LIKE 'table_name'

 Say about this table?

 -JW


 On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and
 it makes a little difference but not enough for the application to run in
 real time processing.

 It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

 MySQL is eating 179MB of RAM and 5,4% of CPU.

   PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
   6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

 The machine has ~1GB of free memory. MySQL and InnoDB has free pages to
 grow and we have ~50% of free CPU time.

 Currently I can't use the replication server since the application running
 on top if this BD can only talk to 1 data source.
 At the moment it's also not possible to change the application in order to
 make it use the DB more wisely.

 Basically we have a table with lots of selects, lots of update, lots of
 inserts and deletes. Data manipulation is random, doesn't follow any
 specific pattern. All working concurrently.

 A big bottleneck is:

 8 queries inside InnoDB, 28 queries in queue

 1 read views open inside InnoDB

 Increasing innodb_thread_concurrency might help without causing any
 problems to the overall performance.

 Makes total sense if you read:
 http://peter-zaitsev.livejournal.com/9138.html

 Thanks in advance.

 BR
 AJ


  On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

  On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY |
 23  | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





 --
 Alexandre Vieira - nul...@gmail.com




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




-- 
Alexandre Vieira - nul...@gmail.com


Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
Very confusing...

Why is index_length zero ?

On top of that, there's only 500K rows in the table with a data size of
41MB. Maybe InnoDB is flushing to disk too often?

What's the output of iostat -dxk 60 ? (run for a minute+ to get 2 output
grids)



  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112


On Fri, Sep 3, 2010 at 9:20 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 mysql SHOW TABLE STATUS LIKE 'clientinfo';

            Name: clientinfo
          Engine: InnoDB
         Version: 10
      Row_format: Compact
            Rows: 504762
  Avg_row_length: 82
     Data_length: 41500672
 Max_data_length: 0
    Index_length: 0
       Data_free: 0
  Auto_increment: NULL
     Create_time: 2010-09-01 03:21:36
     Update_time: NULL
      Check_time: NULL
       Collation: latin1_swedish_ci
        Checksum: NULL
  Create_options:
         Comment: InnoDB free: 276480 kB
 1 row in set (0.02 sec)

 BR
 AJ


  On Fri, Sep 3, 2010 at 3:18 PM, Johnny Withers joh...@pixelated.net wrote:

 What does

 SHOW TABLE STATUS LIKE 'table_name'

 Say about this table?

 -JW


  On Fri, Sep 3, 2010 at 8:59 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 I've done some tests with INT(8) vs the VARCHAR(23) on the userid PK and
 it makes a little difference but not enough for the application to run in
 real time processing.

 It's a Sun Fire V240 2x 1.5ghz UltraSparc IIIi with 2GB of RAM.

 MySQL is eating 179MB of RAM and 5,4% of CPU.

   PID USERNAME  SIZE   RSS STATE  PRI NICE  TIME  CPU PROCESS/NLWP
   6229 mysql 455M  179M sleep   580   0:03.11 5,4% mysqld/68

 The machine has ~1GB of free memory. MySQL and InnoDB has free pages to
 grow and we have ~50% of free CPU time.

 Currently I can't use the replication server since the application
 running on top if this BD can only talk to 1 data source.
 At the moment it's also not possible to change the application in order
 to make it use the DB more wisely.

 Basically we have a table with lots of selects, lots of update, lots of
 inserts and deletes. Data manipulation is random, doesn't follow any
 specific pattern. All working concurrently.

 A big bottleneck is:

 8 queries inside InnoDB, 28 queries in queue

 1 read views open inside InnoDB

 Increasing innodb_thread_concurrency might help without causing any
 problems to the overall performance.

 Makes total sense if you read:
 http://peter-zaitsev.livejournal.com/9138.html

 Thanks in advance.

 BR
 AJ


  On Fri, Sep 3, 2010 at 2:31 PM, Johnny Withers joh...@pixelated.net wrote:

 Ok, so I'm stumped?

 What kind of hardware is behind this thing?

 -JW

  On Fri, Sep 3, 2010 at 4:44 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi Johnny,

 mysql EXPLAIN SELECT * FROM clientinfo WHERE userid='911930694';

 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key |
 key_len | ref   | rows | Extra |

 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | clientinfo | const | PRIMARY   | PRIMARY |
 23  | const |1 |   |

 ++-++---+---+-+-+---+--+---+
 1 row in set (0.53 sec)

 Thanks

 BR
 AJ





 --
 Alexandre Vieira - nul...@gmail.com




 --
 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net




 --
 Alexandre Vieira - nul...@gmail.com




-- 
-
Johnny Withers
601.209.4985
joh...@pixelated.net


Re: Performance problems on MySQL

2010-09-03 Thread Alexandre Vieira
Hi,

The DB is working on /var, which is md2 / md12 / md22.

                  extended device statistics
device     r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
md2        0.1   80.0    0.4   471.4   0.0   1.0   12.2   0  94
md10       0.0    5.7    0.0    78.8   0.0   0.1   19.7   0   9
md11       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
md12       0.0   80.0    0.3   471.4   0.0   0.8    9.9   0  76
md20       0.0    5.7    0.0    78.8   0.0   0.1   21.1   0   9
md21       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
md22       0.0   80.0    0.1   471.3   0.0   0.8   10.6   0  81
sd0        0.2   86.8    0.3   550.5   0.0   0.9   10.6   0  78
sd1        0.2   86.8    0.2   550.4   0.0   1.0   11.3   0  83
sd30       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
nfs1       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
                  extended device statistics
device     r/s    w/s   kr/s    kw/s  wait  actv  svc_t  %w  %b
md0        0.0    5.6    0.0    83.2   0.0   0.2   28.0   0  10
md1        0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
md2        0.1   84.2    0.7   527.2   0.0   1.0   11.8   0  93
md10       0.0    5.6    0.0    83.2   0.0   0.1   19.0   0   8
md11       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
md12       0.0   84.2    0.3   527.2   0.0   0.8    9.7   0  77
md20       0.0    5.6    0.0    83.2   0.0   0.1   19.9   0   8
md21       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
md22       0.0   84.1    0.4   527.2   0.0   0.9   10.3   0  82
sd0        0.2   91.1    0.3   610.7   0.0   0.9   10.4   0  79
sd1        0.2   91.0    0.4   610.7   0.0   1.0   11.0   0  84
sd30       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0
nfs1       0.0    0.0    0.0     0.0   0.0   0.0    0.0   0   0

I really can't say why Index_length is 0... It might be something with the
index?

mysql SHOW INDEX FROM clientinfo;
+++--+--+-+---+-+--++--++-+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name |
Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+++--+--+-+---+-+--++--++-+
| clientinfo |  0 | PRIMARY  |1 | userid  |
A |  503836 | NULL | NULL   |  | BTREE  | |
+++--+--+-+---+-+--++--++-+
1 row in set (0.00 sec)

mysql SHOW TABLE STATUS LIKE 'clientinfo';
           Name: clientinfo
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 508170
 Avg_row_length: 81
    Data_length: 41500672
Max_data_length: 0
   Index_length: 0
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2010-09-01 03:21:36
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 276480 kB
1 row in set (0.00 sec)

BR
AJ

On Fri, Sep 3, 2010 at 3:46 PM, Johnny Withers joh...@pixelated.net wrote:

 Very confusing...

 Why is index_length zero ?

 On top of that, there's only 500K rows in the table with a data size of
 41MB. Maybe InnoDB is flushing to disk too often?

 What's the output of iostat -dxk 60 ? (run for a minute+ to get 2 output
  grids)



   --

 *Johnny Withers*
 jwith...@ecashsoftware.com
 601.919.2275 x112


 On Fri, Sep 3, 2010 at 9:20 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 mysql SHOW TABLE STATUS LIKE 'clientinfo';

 +++-++++-+-+--+---++-+-++---+--+++
 | Name   | Engine | Version | Row_format | Rows  

Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
I think your MySQL instance is disk bound.

If you look at your iostats, md2, 12 and 22 have a ~10ms wait time before a
request can be processed. iostat is also reporting those disks are 75%+
utilized which means they are doing about all they can do.

Any way you can add more disks? Add faster disks?

I'm not an iostat expert, use my research and recommendations with caution
=)


  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112
 [image: eCash Software Systems]


On Fri, Sep 3, 2010 at 10:37 AM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 The DB is working on /var, which is md2 / md12 / md22.

   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b

 md2  0.1   80.00.4  471.4  0.0  1.0   12.2   0  94
 md10 0.05.70.0   78.8  0.0  0.1   19.7   0   9
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   80.00.3  471.4  0.0  0.89.9   0  76
 md20 0.05.70.0   78.8  0.0  0.1   21.1   0   9
 md21 0.00.00.00.0  0.0  0.00.0   0   0
 md22 0.0   80.00.1  471.3  0.0  0.8   10.6   0  81
 sd0  0.2   86.80.3  550.5  0.0  0.9   10.6   0  78
 sd1  0.2   86.80.2  550.4  0.0  1.0   11.3   0  83
 sd30 0.00.00.00.0  0.0  0.00.0   0   0
 nfs1 0.00.00.00.0  0.0  0.00.0   0   0
   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
 md0  0.05.60.0   83.2  0.0  0.2   28.0   0  10
 md1  0.00.00.00.0  0.0  0.00.0   0   0
 md2  0.1   84.20.7  527.2  0.0  1.0   11.8   0  93
 md10 0.05.60.0   83.2  0.0  0.1   19.0   0   8
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   84.20.3  527.2  0.0  0.89.7   0  77
 md20 0.05.60.0   83.2  0.0  0.1   19.9   0   8
 md21 0.00.00.00.0  0.0  0.00.0   0   0
 md22 0.0   84.10.4  527.2  0.0  0.9   10.3   0  82
 sd0  0.2   91.10.3  610.7  0.0  0.9   10.4   0  79
 sd1  0.2   91.00.4  610.7  0.0  1.0   11.0   0  84
 sd30 0.00.00.00.0  0.0  0.00.0   0   0
 nfs1 0.00.00.00.0  0.0  0.00.0   0   0

 I really can't say why Index_Length is 0... It might be something with the
 index?


 mysql SHOW INDEX FROM clientinfo;

 +++--+--+-+---+-+--++--++-+
 | Table  | Non_unique | Key_name | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 +++--+--+-+---+-+--++--++-+
 | clientinfo |  0 | PRIMARY  |1 | userid  |
 A |  503836 | NULL | NULL   |  | BTREE  | |

 +++--+--+-+---+-+--++--++-+
 1 row in set (0.00 sec)


 mysql SHOW TABLE STATUS LIKE 'clientinfo';

 +++-++++-+-+--+---++-+-++---+--+++
 | Name   | Engine | Version | Row_format | Rows   | Avg_row_length |
 Data_length | Max_data_length | Index_length | Data_free | Auto_increment |
 Create_time | Update_time | Check_time | Collation |
 Checksum | Create_options | Comment|

 +++-++++-+-+--+---++-+-++---+--+++
 | clientinfo | InnoDB |  10 | Compact| 508170 | 81 |
 41500672 |   0 |0 | 0 |   NULL |
 2010-09-01 03:21:36 | NULL| NULL   | latin1_swedish_ci |
 NULL || InnoDB free: 276480 kB |

 +++-++++-+-+--+---++-+-++---+--+++
 1 row in set (0.00 sec)

 BR
 AJ


 On Fri, Sep 3, 2010 at 3:46 PM, Johnny Withers joh...@pixelated.netwrote:

 Very confusing...

 Why is index_length zero ?

 On top of that, there's only 500K rows in the table with a data size of
 41MB. Maybe InnoDB is flushing to disk too often?

 What's the output 

Re: Performance problems on MySQL

2010-09-03 Thread Alexandre Vieira
Hi,

When creating a table in MySQL with a PK it automatically creates an INDEX,
correct?

The Index_Length: 0 is rather strange... I've created a new INDEX on top of my
PK column on my test system and Index_Length shows a big value different
from 0. Do you think this might have any impact?

mysql show index from gwtraffic.clientinfo;
        Table: clientinfo
   Non_unique: 0
     Key_name: PRIMARY
 Seq_in_index: 1
  Column_name: userid
    Collation: A
  Cardinality: 548216
     Sub_part: NULL
       Packed: NULL
         Null:
   Index_type: BTREE
      Comment:

        Table: clientinfo
   Non_unique: 1
     Key_name: userid_idx
 Seq_in_index: 1
  Column_name: userid
    Collation: A
  Cardinality: 548216
     Sub_part: NULL
       Packed: NULL
         Null:
   Index_type: BTREE
      Comment:
2 rows in set (0.01 sec)

mysql show table status LIKE 'clientinfo';
           Name: clientinfo
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 548216
 Avg_row_length: 62
    Data_length: 34144256
Max_data_length: 0
   Index_length: 5783552
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2010-09-03 17:38:16
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 1214464 kB
1 row in set (0.00 sec)

I'm trying to stress my test DB but can't measure any different results with
or without the second INDEX.

Regarding the disks: the DB is updated 20+ times every second. Can writing the
log, checkpointing to disk, etc. cause that much load?
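
One way to get a feel for that from the server side (a sketch; these are the same counters that appear in the SHOW STATUS output earlier in the thread, so sampling them a minute apart gives log fsyncs/s versus pages flushed/s):

SHOW GLOBAL STATUS LIKE 'Innodb_os_log_fsyncs';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';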

BR
AJ

On Fri, Sep 3, 2010 at 4:54 PM, Johnny Withers joh...@pixelated.net wrote:

 I think your MySQL instance is disk bound.

 If you look at your iostats, md2, 12 and 22 have a ~10ms wait time before a
 request can be processed. iostat is also reporting those disks are 75%+
 utilized which means they are doing about all they can do.

 Any way you can add more disks? Add faster disks?

 I'm not an iostat expert, use my research and recommendations with caution
 =)


   --

 *Johnny Withers*
 jwith...@ecashsoftware.com
 601.919.2275 x112
  [image: eCash Software Systems]


 On Fri, Sep 3, 2010 at 10:37 AM, Alexandre Vieira nul...@gmail.comwrote:

 Hi,

 The DB is working on /var, which is md2 / md12 / md22.

   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b

 md2  0.1   80.00.4  471.4  0.0  1.0   12.2   0  94
 md10 0.05.70.0   78.8  0.0  0.1   19.7   0   9
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   80.00.3  471.4  0.0  0.89.9   0  76
 md20 0.05.70.0   78.8  0.0  0.1   21.1   0   9
 md21 0.00.00.00.0  0.0  0.00.0   0   0
 md22 0.0   80.00.1  471.3  0.0  0.8   10.6   0  81
 sd0  0.2   86.80.3  550.5  0.0  0.9   10.6   0  78
 sd1  0.2   86.80.2  550.4  0.0  1.0   11.3   0  83
 sd30 0.00.00.00.0  0.0  0.00.0   0   0
 nfs1 0.00.00.00.0  0.0  0.00.0   0   0
   extended device statistics
 device   r/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
 md0  0.05.60.0   83.2  0.0  0.2   28.0   0  10
 md1  0.00.00.00.0  0.0  0.00.0   0   0
 md2  0.1   84.20.7  527.2  0.0  1.0   11.8   0  93
 md10 0.05.60.0   83.2  0.0  0.1   19.0   0   8
 md11 0.00.00.00.0  0.0  0.00.0   0   0
 md12 0.0   84.20.3  527.2  0.0  0.89.7   0  77
 md20 0.05.60.0   83.2  0.0  

Re: Performance problems on MySQL

2010-09-03 Thread Johnny Withers
It seems that when your index is the PRIMARY key on InnoDB tables, it's magic and is
part of the data, so it is not included in the Index_length field.

I have never noticed this. I don't think adding a new index will make a
difference.

You could try moving your log files to a different disk array than where
your data is. If you have binary and query logging enabled, it's probably a
good idea.
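
As a sketch, that separation is only a couple of my.cnf settings (the /logdisk paths are hypothetical; InnoDB wants a clean shutdown, with the old ib_logfile* moved or removed, before it will start against a new log directory):

# hypothetical mount on a separate disk array
innodb_log_group_home_dir = /logdisk/mysql/
log-bin                   = /logdisk/mysql/mysql-bin
# log                     = /logdisk/mysql/query.log   (general query log, if enabled)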



  --

*Johnny Withers*
jwith...@ecashsoftware.com
601.919.2275 x112
 [image: eCash Software Systems]


On Fri, Sep 3, 2010 at 12:45 PM, Alexandre Vieira nul...@gmail.com wrote:

 Hi,

 When creating a table in MySQL with a PK it automatically creates an INDEX,
 correct?

 The Index_Length: 0 is rather strange... I've created a new INDEX on top of
 my PK column on my test system and Index_Length shows a big value different
 from 0. Do you think this might have any impact?

 mysql show index from gwtraffic.clientinfo;


 ++++--+-+---+-+--++--++-+
 | Table  | Non_unique | Key_name   | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 ++++--+-+---+-+--++--++-+
 | clientinfo |  0 | PRIMARY|1 | userid  |
 A |  548216 | NULL | NULL   |  | BTREE  | |
 | clientinfo |  1 | userid_idx |1 | userid  |
 A |  548216 | NULL | NULL   |  | BTREE  | |

 ++++--+-+---+-+--++--++-+

 2 rows in set (0.01 sec)

 mysql show table status LIKE 'clientinfo';


 +++-++++-+-+--+---++-+-++---+--++-+
 | Name   | Engine | Version | Row_format | Rows   | Avg_row_length |
 Data_length | Max_data_length | Index_length | Data_free | Auto_increment |
 Create_time | Update_time | Check_time | Collation |
 Checksum | Create_options | Comment |

 +++-++++-+-+--+---++-+-++---+--++-+
 | clientinfo | InnoDB |  10 | Compact| 548216 | 62 |
 34144256 |   0 |  5783552 | 0 |   NULL |
 2010-09-03 17:38:16 | NULL| NULL   | latin1_swedish_ci |
 NULL || InnoDB free: 1214464 kB |


 +++-++++-+-+--+---++-+-++---+--++-+
 1 row in set (0.00 sec)

 I'm trying to stress my test DB but can't measure any different results
 with or without the second INDEX.

 Regarding the disks: the DB is updated 20+ times every second. Can writing the
 log, checkpointing to disk, etc. cause that much load?

 BR
 AJ





Performance problems on MySQL

2010-09-02 Thread Alexandre Vieira
Hi list,

I'm having some performance problems on my 5.0.45-log DB running on Solaris
8 (V240).

We only have one table and two apps selecting, updating, inserting and
deleting massively and randomly from this table.

The table is very simple. All SELECTs, INSERTs, UPDATEs and DELETEs have only
one condition on a unique, indexed varchar column.

The table has 500k records and was OPTIMIZED 32h ago.

I've run some sampling and:

A SELECT costs between 400ms and 600ms.
An UPDATE costs between 800ms and 1300ms.
A DELETE costs between 900ms and 1300ms.
An INSERT always costs between 900ms and 2000ms.

At any given time the DB is handling 60-80 operations every second. It does
not scale any more than this because all the application connections to the
DB are being used and waiting for the DB to move. Our application queues
requests and it lags our clients.

The perl mysqltuner only whines about "Query cache disabled", but since I get
~20 updates every second I can't get any query cache hits, so I disabled
it.

If it makes any difference, we're replicating everything to another server
that doesn't serve any queries.

The DB has a 32 hour uptime.

Any help is most welcome.

You can find my.cnf, show status and show innodb status below.

Kind regards
Alex

###
my.cnf:

sql-mode=STRICT_ALL_TABLES
old_passwords   =1
skip-bdb
max_connections =100
max_allowed_packet  =1M
table_cache =512
sort_buffer_size=2M
read_buffer_size=4M
read_rnd_buffer_size=8M
thread_cache_size   =16
query_cache_limit   =32M
thread_concurrency  =8
max_heap_table_size =28M
tmp_table_size  =12M
innodb_buffer_pool_size =350M
innodb_additional_mem_pool_size =15M
innodb_log_buffer_size  =6M
innodb_flush_log_at_trx_commit  =1
innodb_lock_wait_timeout=50

###

mysql show status where Value NOT LIKE 0;
+---++
| Variable_name | Value  |
+---++
| Aborted_clients   | 88 |
| Aborted_connects  | 37590  |
| Binlog_cache_use  | 2148392|
| Bytes_received| 1117   |
| Bytes_sent| 8772   |
| Com_change_db | 1  |
| Com_delete| 4  |
| Com_insert| 3  |
| Com_select| 2  |
| Com_show_databases| 1  |
| Com_show_fields   | 3  |
| Com_show_status   | 2  |
| Com_show_tables   | 1  |
| Compression   | OFF|
| Connections   | 276096 |
| Created_tmp_files | 5  |
| Created_tmp_tables| 4  |
| Flush_commands| 1  |
| Handler_commit| 14 |
| Handler_prepare   | 14 |
| Handler_read_key  | 8  |
| Handler_read_rnd_next | 263|
| Handler_write | 395|
| Innodb_buffer_pool_pages_data | 6019   |
| Innodb_buffer_pool_pages_dirty| 1858   |
| Innodb_buffer_pool_pages_flushed  | 593993 |
| Innodb_buffer_pool_pages_free | 15784  |
| Innodb_buffer_pool_pages_misc | 597|
| Innodb_buffer_pool_pages_total| 22400  |
| Innodb_buffer_pool_read_ahead_rnd | 1  |
| Innodb_buffer_pool_read_requests  | 42797013   |
| Innodb_buffer_pool_reads  | 3497   |
| Innodb_buffer_pool_write_requests | 19096507   |
| Innodb_data_fsyncs| 4319683|
| Innodb_data_pending_fsyncs| 1  |
| Innodb_data_read  | 60231680   |
| Innodb_data_reads | 3514   |
| Innodb_data_writes| 4496721|
| Innodb_data_written   | 1259458560 |
| Innodb_dblwr_pages_written| 593993 |
| Innodb_dblwr_writes   | 12967  |
| Innodb_log_write_requests | 2111208|
| Innodb_log_writes | 4285654|
| Innodb_os_log_fsyncs  | 4303114|
| Innodb_os_log_pending_fsyncs  | 1  |
| Innodb_os_log_written | 3264897024 |
| Innodb_page_size  | 16384  |
| Innodb_pages_created  | 2476   |
| Innodb_pages_read | 3543   |
| Innodb_pages_written  | 593993 |
| Innodb_row_lock_time  | 1339668|
| Innodb_row_lock_time_avg  | 379|
| Innodb_row_lock_time_max  | 10631  |
| Innodb_row_lock_waits | 3531   |
| Innodb_rows_deleted

Re: Performance problems on MySQL

2010-09-02 Thread Johnny Withers
Can you show us the table structure and sample queries?

On Thursday, September 2, 2010, Alexandre Vieira nul...@gmail.com wrote:
 Hi list,

 I'm having some performance problems on my 5.0.45-log DB running on Solaris
 8 (V240).

 We only have one table and two apps selecting, updating, inserting and
 deleting massively and randomly from this table.

 The table is very simple. All SELECTs, INSERTs, UPDATEs and DELETEs have only
 one condition on a unique, indexed varchar column.

 The table has 500k records and was OPTIMIZED 32h ago.

 I've run some sampling and:

 A SELECT costs between 400ms and 600ms.
 An UPDATE costs between 800ms and 1300ms.
 A DELETE costs between 900ms and 1300ms.
 An INSERT always costs between 900ms and 2000ms.

 At any given time the DB is handling 60-80 operations every second. It does
 not scale any more than this because all the application connections to the
 DB are being used and waiting for the DB to move. Our application queues
 requests and it lags our clients.

 The perl mysqltuner only whines about "Query cache disabled", but since I get
 ~20 updates every second I can't get any query cache hits, so I disabled
 it.

 If it makes any difference, we're replicating everything to another server
 that doesn't serve any queries.

 The DB has a 32 hour uptime.

 Any help is most welcome.

 You can find my.cnf, show status and show innodb status below.

 Kind regards
 Alex

 ###
 my.cnf:

 sql-mode                        =STRICT_ALL_TABLES
 old_passwords                   =1
 skip-bdb
 max_connections                 =100
 max_allowed_packet              =1M
 table_cache                     =512
 sort_buffer_size                =2M
 read_buffer_size                =4M
 read_rnd_buffer_size            =8M
 thread_cache_size               =16
 query_cache_limit               =32M
 thread_concurrency              =8
 max_heap_table_size             =28M
 tmp_table_size                  =12M
 innodb_buffer_pool_size         =350M
 innodb_additional_mem_pool_size =15M
 innodb_log_buffer_size          =6M
 innodb_flush_log_at_trx_commit  =1
 innodb_lock_wait_timeout        =50

 ###

 mysql show status where Value NOT LIKE 0;
 +---++
 | Variable_name                     | Value      |
 +---++
 | Aborted_clients                   | 88         |
 | Aborted_connects                  | 37590      |
 | Binlog_cache_use                  | 2148392    |
 | Bytes_received                    | 1117       |
 | Bytes_sent                        | 8772       |
 | Com_change_db                     | 1          |
 | Com_delete                        | 4          |
 | Com_insert                        | 3          |
 | Com_select                        | 2          |
 | Com_show_databases                | 1          |
 | Com_show_fields                   | 3          |
 | Com_show_status                   | 2          |
 | Com_show_tables                   | 1          |
 | Compression                       | OFF        |
 | Connections                       | 276096     |
 | Created_tmp_files                 | 5          |
 | Created_tmp_tables                | 4          |
 | Flush_commands                    | 1          |
 | Handler_commit                    | 14         |
 | Handler_prepare                   | 14         |
 | Handler_read_key                  | 8          |
 | Handler_read_rnd_next             | 263        |
 | Handler_write                     | 395        |
 | Innodb_buffer_pool_pages_data     | 6019       |
 | Innodb_buffer_pool_pages_dirty    | 1858       |
 | Innodb_buffer_pool_pages_flushed  | 593993     |
 | Innodb_buffer_pool_pages_free     | 15784      |
 | Innodb_buffer_pool_pages_misc     | 597        |
 | Innodb_buffer_pool_pages_total    | 22400      |
 | Innodb_buffer_pool_read_ahead_rnd | 1          |
 | Innodb_buffer_pool_read_requests  | 42797013   |
 | Innodb_buffer_pool_reads          | 3497       |
 | Innodb_buffer_pool_write_requests | 19096507   |
 | Innodb_data_fsyncs                | 4319683    |
 | Innodb_data_pending_fsyncs        | 1          |
 | Innodb_data_read                  | 60231680   |
 | Innodb_data_reads                 | 3514       |
 | Innodb_data_writes                | 4496721    |
 | Innodb_data_written               | 1259458560 |
 | Innodb_dblwr_pages_written        | 593993     |
 | Innodb_dblwr_writes               | 12967      |
 | Innodb_log_write_requests         | 2111208    |
 | Innodb_log_writes                 | 4285654    |
 | Innodb_os_log_fsyncs              | 4303114    |
 | Innodb_os_log_pending_fsyncs      | 1          |
 | Innodb_os_log_written             | 3264897024 |
 | Innodb_page_size                  | 16384      |
 | Innodb_pages_created              | 2476       |
 | Innodb_pages_read                 | 3543       |
 | Innodb_pages_written              | 593993

Re: Performance problems on MySQL

2010-09-02 Thread John Daisley
What is the hardware spec? Anything else running on the box?

Why are you replicating but not making use of the slave?

Can you post the output of SHOW CREATE TABLE?

Regards
John

On 2 September 2010 12:50, Alexandre Vieira nul...@gmail.com wrote:

 Hi list,

 I'm having some performance problems on my 5.0.45-log DB running on Solaris
 8 (V240).

 We only have one table and two apps selecting, updating, inserting and
 deleting massively and randomly from this table.

 The table is very simple. All SELECTs, INSERTs, UPDATEs and DELETEs have only
 one condition on a unique, indexed varchar column.

 The table has 500k records and was OPTIMIZED 32h ago.

 I've run some sampling and:

 A SELECT costs between 400ms and 600ms.
 An UPDATE costs between 800ms and 1300ms.
 A DELETE costs between 900ms and 1300ms.
 An INSERT always costs between 900ms and 2000ms.

 At any given time the DB is handling 60-80 operations every second. It does
 not scale any more than this because all the application connections to the
 DB are being used and waiting for the DB to move. Our application queues
 requests and it lags our clients.

 The perl mysqltuner only whines about "Query cache disabled", but since I get
 ~20 updates every second I can't get any query cache hits, so I disabled
 it.

 If it makes any difference, we're replicating everything to another server
 that doesn't serve any queries.

 The DB has a 32 hour uptime.

 Any help is most welcome.

 You can find my.cnf, show status and show innodb status below.

 Kind regards
 Alex

 ###
 my.cnf:

 sql-mode=STRICT_ALL_TABLES
 old_passwords   =1
 skip-bdb
 max_connections =100
 max_allowed_packet  =1M
 table_cache =512
 sort_buffer_size=2M
 read_buffer_size=4M
 read_rnd_buffer_size=8M
 thread_cache_size   =16
 query_cache_limit   =32M
 thread_concurrency  =8
 max_heap_table_size =28M
 tmp_table_size  =12M
 innodb_buffer_pool_size =350M
 innodb_additional_mem_pool_size =15M
 innodb_log_buffer_size  =6M
 innodb_flush_log_at_trx_commit  =1
 innodb_lock_wait_timeout=50

 ###

 mysql show status where Value NOT LIKE 0;
 +---++
 | Variable_name | Value  |
 +---++
 | Aborted_clients   | 88 |
 | Aborted_connects  | 37590  |
 | Binlog_cache_use  | 2148392|
 | Bytes_received| 1117   |
 | Bytes_sent| 8772   |
 | Com_change_db | 1  |
 | Com_delete| 4  |
 | Com_insert| 3  |
 | Com_select| 2  |
 | Com_show_databases| 1  |
 | Com_show_fields   | 3  |
 | Com_show_status   | 2  |
 | Com_show_tables   | 1  |
 | Compression   | OFF|
 | Connections   | 276096 |
 | Created_tmp_files | 5  |
 | Created_tmp_tables| 4  |
 | Flush_commands| 1  |
 | Handler_commit| 14 |
 | Handler_prepare   | 14 |
 | Handler_read_key  | 8  |
 | Handler_read_rnd_next | 263|
 | Handler_write | 395|
 | Innodb_buffer_pool_pages_data | 6019   |
 | Innodb_buffer_pool_pages_dirty| 1858   |
 | Innodb_buffer_pool_pages_flushed  | 593993 |
 | Innodb_buffer_pool_pages_free | 15784  |
 | Innodb_buffer_pool_pages_misc | 597|
 | Innodb_buffer_pool_pages_total| 22400  |
 | Innodb_buffer_pool_read_ahead_rnd | 1  |
 | Innodb_buffer_pool_read_requests  | 42797013   |
 | Innodb_buffer_pool_reads  | 3497   |
 | Innodb_buffer_pool_write_requests | 19096507   |
 | Innodb_data_fsyncs| 4319683|
 | Innodb_data_pending_fsyncs| 1  |
 | Innodb_data_read  | 60231680   |
 | Innodb_data_reads | 3514   |
 | Innodb_data_writes| 4496721|
 | Innodb_data_written   | 1259458560 |
 | Innodb_dblwr_pages_written| 593993 |
 | Innodb_dblwr_writes   | 12967  |
 | Innodb_log_write_requests | 2111208|
 | Innodb_log_writes | 4285654|
 | Innodb_os_log_fsyncs  | 4303114|
 | Innodb_os_log_pending_fsyncs  | 1  |
 | Innodb_os_log_written | 3264897024 |
 | Innodb_page_size  | 16384  |
 | Innodb_pages_created  | 2476

Performance problems on MySQL

2010-09-02 Thread Alexandre Vieira
John, Johnny,

Thanks for the prompt answer.

mysql SHOW CREATE TABLE clientinfo;
++--+
| Table  | Create
Table
|
++--+
| clientinfo | CREATE TABLE `clientinfo` (
  `userid` varchar(21) NOT NULL default '',
  `units` float default NULL,
  `date_last_query` datetime default NULL,
  `last_acc` int(10) unsigned default NULL,
  `date_last_units` datetime default NULL,
  `notification` int(10) unsigned NOT NULL default '0',
  `package` char(1) default NULL,
  `user_type` varchar(5) default NULL,
  PRIMARY KEY  (`userid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
++--+
1 row in set (0.00 sec)
mysql SHOW INDEX FROM clientinfo;
+++--+--+-+---+-+--++--++-+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name |
Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+++--+--+-+---+-+--++--++-+
| clientinfo |  0 | PRIMARY  |1 | userid  |
A |  460056 | NULL | NULL   |  | BTREE  | |
+++--+--+-+---+-+--++--++-+
1 row in set (0.00 sec)


SELECT * FROM clientinfo WHERE userid='182106617';

UPDATE clientinfo SET
units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units=now(),notification=0
WHERE userid='152633876';

INSERT INTO clientinfo VALUES ('171918726',101.0,now(),1,now(),0,'D','PRE')
ON DUPLICATE KEY UPDATE units=101.0, date_last_query=now(), last_acc=1,
date_last_units=now(), notification=0, package='D', user_type='PRE';

DELETE FROM clientinfo WHERE units='155618918';

There are no other types of queries.
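
A quick sanity check that these really resolve through the primary key (a sketch; the expected plan for a unique lookup is type=const, key=PRIMARY, rows=1 - an assumption, not output captured from this server):

EXPLAIN SELECT * FROM clientinfo WHERE userid='182106617';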

We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of RAM.

We also run some other applications in the server, but nothing that consumes
all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.

TIA

BR
Alex


On Thu, Sep 2, 2010 at 1:52 PM, John Daisley daisleyj...@googlemail.comwrote:

 What is the hardware spec? Anything else running on the box?

 Why are you replicating but not making use of the slave?

 Can you post the output of SHOW CREATE TABLE?

 Regards
 John





Re: Performance problems on MySQL

2010-09-02 Thread Jangita

On 02/09/2010 4:46 p, Alexandre Vieira wrote:

John, Johnny,

Thanks for the prompt answer.


...

We also run some other applications in the server, but nothing that consumes
all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.

TIA

BR
Alex


Increase innodb_buffer_pool_size say to 1GB?
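
As a sketch, that is a single my.cnf change plus a restart (assuming the box can spare the memory next to the other applications):

innodb_buffer_pool_size = 1024M
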
--
Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
Skype: jangita | GTalk: jangita.nyag...@gmail.com

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Performance problems on MySQL

2010-09-02 Thread Alexandre Vieira
Hi Jangita,

I have 15779 innodb_buffer_pool_pages_free out of a total of 22400. That's 246MB
of the 350MB buffer pool free.

| Innodb_buffer_pool_pages_data | 6020   |
| Innodb_buffer_pool_pages_dirty| 1837   |
| Innodb_buffer_pool_pages_flushed  | 673837 |
| Innodb_buffer_pool_pages_free | 15779  |
| Innodb_buffer_pool_pages_latched  | 0  |
| Innodb_buffer_pool_pages_misc | 601|
| Innodb_buffer_pool_pages_total| 22400  |
| Innodb_buffer_pool_read_ahead_rnd | 1  |
| Innodb_buffer_pool_read_ahead_seq | 0  |
| Innodb_buffer_pool_read_requests  | 48471963   |
| Innodb_buffer_pool_reads  | 3497   |
| Innodb_buffer_pool_wait_free  | 0  |
| Innodb_buffer_pool_write_requests | 21700478   |

Why would I need to increase?

Thanks

BR
AJ

On Thu, Sep 2, 2010 at 4:47 PM, Jangita jang...@jangita.com wrote:

 On 02/09/2010 4:46 p, Alexandre Vieira wrote:

 John, Johnny,

 Thanks for the prompt answer.

  ...

  We also run some other applications in the server, but nothing that
 consumes
 all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
 idle CPU time at any time.

 TIA

 BR
 Alex


 Increase innodb_buffer_pool_size say to 1GB?
 --
 Jangita | +256 76 91 8383 | Y!  MSN: jang...@yahoo.com
 Skype: jangita | GTalk: jangita.nyag...@gmail.com

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=nul...@gmail.com




-- 
Alexandre Vieira - nul...@gmail.com


RE: Performance problems on MySQL

2010-09-02 Thread Travis Ard
Have you considered adding a secondary index on the units column for your
delete queries?

DELETE FROM clientinfo WHERE units='155618918';
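
If that access pattern were real, the index would be a one-liner (a sketch; the index name is made up, and on 5.0 this rebuilds the InnoDB table, so it is not free on ~500k rows):

ALTER TABLE clientinfo ADD INDEX idx_units (units);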

-Original Message-
From: Alexandre Vieira [mailto:nul...@gmail.com] 
Sent: Thursday, September 02, 2010 8:46 AM
To: John Daisley; joh...@pixelated.net
Cc: mysql@lists.mysql.com
Subject: Performance problems on MySQL

John, Johnny,

Thanks for the prompt answer.

mysql SHOW CREATE TABLE clientinfo;
++--




+
| Table  | Create
Table
|
++--




+
| clientinfo | CREATE TABLE `clientinfo` (
  `userid` varchar(21) NOT NULL default '',
  `units` float default NULL,
  `date_last_query` datetime default NULL,
  `last_acc` int(10) unsigned default NULL,
  `date_last_units` datetime default NULL,
  `notification` int(10) unsigned NOT NULL default '0',
  `package` char(1) default NULL,
  `user_type` varchar(5) default NULL,
  PRIMARY KEY  (`userid`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
++--




+
1 row in set (0.00 sec)
mysql SHOW INDEX FROM clientinfo;
+++--+--+-+-
--+-+--++--++-+
| Table  | Non_unique | Key_name | Seq_in_index | Column_name |
Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+++--+--+-+-
--+-+--++--++-+
| clientinfo |  0 | PRIMARY  |1 | userid  |
A |  460056 | NULL | NULL   |  | BTREE  | |
+++--+--+-+-
--+-+--++--++-+
1 row in set (0.00 sec)


SELECT * FROM clientinfo WHERE userid='182106617';

UPDATE clientinfo SET
units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units
=now(),notification=0
WHERE userid='152633876';

INSERT INTO clientinfo VALUES ('171918726',101.0,now(),1,now(),0,'D','PRE')
ON DUPLICATE KEY UPDATE units=101.0, date_last_query=now(), last_acc=1,
date_last_units=now(), notification=0, package='D', user_type='PRE';

DELETE FROM clientinfo WHERE units='155618918';

There are no other types of queries.

We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of RAM.

We also run some other applications in the server, but nothing that consumes
all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
idle CPU time at any time.

TIA

BR
Alex


On Thu, Sep 2, 2010 at 1:52 PM, John Daisley
daisleyj...@googlemail.comwrote:

 What is the hardware spec? Anything else running on the box?

 Why are you replicating but not making use of the slave?

 Can you post the output of SHOW CREATE TABLE?

 Regards
 John





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Performance problems on MySQL

2010-09-02 Thread Alexandre Vieira
Hi Travis,

Sorry, bad copy/paste. That DELETE statement is wrong.

The application executes:

DELETE FROM clientinfo WHERE userid='x';

BR
AJ

On Thu, Sep 2, 2010 at 5:23 PM, Travis Ard travis_...@hotmail.com wrote:

 Have you considered adding a secondary index on the units column for your
 delete queries?

 DELETE FROM clientinfo WHERE units='155618918';

 -Original Message-
 From: Alexandre Vieira [mailto:nul...@gmail.com]
 Sent: Thursday, September 02, 2010 8:46 AM
 To: John Daisley; joh...@pixelated.net
 Cc: mysql@lists.mysql.com
 Subject: Performance problems on MySQL

 John, Johnny,

 Thanks for the prompt answer.

 mysql SHOW CREATE TABLE clientinfo;

 ++--

 

 

 

 
 +
 | Table  | Create
 Table
 |

 ++--

 

 

 

 
 +
 | clientinfo | CREATE TABLE `clientinfo` (
  `userid` varchar(21) NOT NULL default '',
  `units` float default NULL,
  `date_last_query` datetime default NULL,
  `last_acc` int(10) unsigned default NULL,
  `date_last_units` datetime default NULL,
  `notification` int(10) unsigned NOT NULL default '0',
  `package` char(1) default NULL,
  `user_type` varchar(5) default NULL,
  PRIMARY KEY  (`userid`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1 |

 ++--

 

 

 

 
 +
 1 row in set (0.00 sec)
 mysql SHOW INDEX FROM clientinfo;

 +++--+--+-+-
 --+-+--++--++-+
 | Table  | Non_unique | Key_name | Seq_in_index | Column_name |
 Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |

 +++--+--+-+-
 --+-+--++--++-+
 | clientinfo |  0 | PRIMARY  |1 | userid  |
 A |  460056 | NULL | NULL   |  | BTREE  | |

 +++--+--+-+-
 --+-+--++--++-+
 1 row in set (0.00 sec)


 SELECT * FROM clientinfo WHERE userid='182106617';

 UPDATE clientinfo SET

 units=0.0,date_last_query=now(),user_type='POS',last_acc=167,date_last_units
 =now(),notification=0
 WHERE userid='152633876';

 INSERT INTO clientinfo VALUES ('171918726',101.0,now(),1,now(),0,'D','PRE')
 ON DUPLICATE KEY UPDATE units=101.0, date_last_query=now(), last_acc=1,
 date_last_units=now(), notification=0, package='D', user_type='PRE';

 DELETE FROM clientinfo WHERE units='155618918';

 There are no other types of queries.

 We're running this DB on a Sun-Fire V240. 2xUIIIi 1.5ghz with 2GB of RAM.

 We also run some other applications in the server, but nothing that
 consumes
 all the CPU/Memory. The machine has almost 1GB of free memory and 50% of
 idle CPU time at any time.

 TIA

 BR
 Alex


 On Thu, Sep 2, 2010 at 1:52 PM, John Daisley
 daisleyj...@googlemail.comwrote:

  What is the hardware spec? Anything else running on the box?
 
  Why are you replicating but not making use of the slave?
 
  Can you post the output of SHOW CREATE TABLE?
 
  Regards
  John
 
 
 




-- 
Alexandre Vieira - nul...@gmail.com


Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I'm loading the data through the command below: mysql -f -u root -p
enwiki < enwiki.sql


The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.


The import fell over overnight with a "table full" error as it hit 1T (I
think this may be a file system problem). As it's not importing any more,
show status isn't going to provide any interesting info; however,
I did notice that mysql was not consuming much CPU time (~10%).


I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just the 
DBA at the backend.


Cheers

Simon

mos wrote:

Simon,
As someone else mentioned, how are you loading the data? Can you post 
the SQL?


You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available ram to your key_buffer_size in your my.cnf file and restart 
the server. How much RAM do you have on your machine and how many 
CPU's do you have? What version of MySQL are you using? Also can you 
post your Show Status output after it has started to slow down? How 
much CPU is being used after the import slows down?


Now from what you've said, it looks like you are using this table as a 
lookup table, so if it just has an id and a blob field, you probably 
return the blob field for a given id, correct? If it were up to me, I 
would break the data into more manageable tables. If you have 100 
million rows, then I'd break it into 10x10 million row tables. Table_1 
would have id's from 1 to 9,999,999, and table_2 with id's from 10 
million to 10,999,999 etc. Your lookup would call a stored procedure 
which determines which table to use based on the Id it was given. If 
you really had to search all the tables you can then use a Merge table 
based on those 10 tables. I use Merge tables quite a bit and the 
performance is quite good.
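
A minimal sketch of that layout (table and column names here are assumptions; all underlying tables must be MyISAM with identical definitions):

CREATE TABLE enwiki_1 (id INT UNSIGNED NOT NULL, body LONGTEXT) ENGINE=MyISAM;
CREATE TABLE enwiki_2 LIKE enwiki_1;
-- ... one table per 10-million-id range ...
CREATE TABLE enwiki_all (id INT UNSIGNED NOT NULL, body LONGTEXT)
  ENGINE=MERGE UNION=(enwiki_1, enwiki_2) INSERT_METHOD=LAST;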


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an 
id field and a blob field. There are no constraints / indexes on this 
table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60 G per hour but now the MyISAM 
table is ~1TB it's slowed to a load of about 5G per hour. At this 
rate the import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way 
to improve performance. If there's a more suitable list to discuss 
this, please let me know.


Regards

Simon






--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Ananda Kumar
Simon,
Why don't you split the file and use the LOAD DATA INFILE command, which would
improve the performance while loading into an empty table with keys
disabled.

regards
anandkl
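
A rough sketch of that approach (file and table names are assumptions; with no indexes on the table the DISABLE KEYS step is a no-op here, so the main win is LOAD DATA itself):

# shell: split the flattened dump into ~1M-line chunks
split -l 1000000 enwiki.tsv enwiki_part_

-- mysql: load each chunk in turn
ALTER TABLE enwiki_text DISABLE KEYS;
LOAD DATA INFILE '/data/enwiki_part_aa' INTO TABLE enwiki_text
  FIELDS TERMINATED BY '\t' LINES TERMINATED BY '\n'
  (id, body);
ALTER TABLE enwiki_text ENABLE KEYS;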


On 6/5/08, Simon Collins [EMAIL PROTECTED] wrote:

 I'm loading the data through the command below: mysql -f -u root -p enwiki <
 enwiki.sql

 The version is MySQL 5.0.51a-community

 I've disabled the primary key, so there are no indexes. The CPU has 2 cores
 and 2 Gigs memory.

 The import fell over overnight with a "table full" error as it hit 1T (I
 think this may be a file system problem). As it's not importing any more,
 show status isn't going to provide any interesting info; however, I
 did notice that mysql was not consuming much CPU time (~10%).

 I wouldn't like to split the data up into separate tables as it would
 change the schema and I'm not in charge of the schema design - just the DBA
 at the backend.

 Cheers

 Simon

 mos wrote:

 Simon,
 As someone else mentioned, how are you loading the data? Can you post the
 SQL?

 You have an Id field, so is that not the primary key? If so, the slowdown
 could be maintaining the index. If so, add up to 30% of your available ram
 to your key_buffer_size in your my.cnf file and restart the server. How much
 RAM do you have on your machine and how many CPU's do you have? What version
 of MySQL are you using? Also can you post your Show Status output after it
 has started to slow down? How much CPU is being used after the import slows
 down?

 Now from what you've said, it looks like you are using this table as a
 lookup table, so if it just has an id and a blob field, you probably return
 the blob field for a given id, correct? If it were up to me, I would break
 the data into more manageable tables. If you have 100 million rows, then I'd
 break it into 10x10 million row tables. Table_1 would have id's from 1 to
 9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
 lookup would call a stored procedure which determines which table to use
 based on the Id it was given. If you really had to search all the tables you
 can then use a Merge table based on those 10 tables. I use Merge tables
 quite a bit and the performance is quite good.

 Mike

 At 11:42 AM 6/4/2008, you wrote:

 Dear all,

 I'm presently trying to import the full wikipedia dump for one of our
 research users. Unsurprisingly it's a massive import file (2.7T)

 Most of the data is importing into a single MyISAM table which has an id
 field and a blob field. There are no constraints / indexes on this table.
 We're using an XFS filesystem.

 The import starts off quickly but gets increasingly slower as it
 progresses, starting off at about 60 G per hour but now the MyISAM table is
 ~1TB it's slowed to a load of about 5G per hour. At this rate the import
 will not finish for a considerable time, if at all.

 Can anyone suggest to me why this is happening and if there's a way to
 improve performance. If there's a more suitable list to discuss this, please
 let me know.

 Regards

 Simon





 --
 Dr Simon Collins
 Data Grid Consultant
 National Grid Service
 University of Manchester
 Research Computing Services
 Kilburn Building
 Oxford Road
 Manchester
 M13 9PL

 Tel 0161 275 0604


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Large import into MYISAM - performance problems

2008-06-05 Thread Adrian Bruce
You could load the data into several smaller tables and combine them 
into a merged table which would have no real effect on the schema.


Ade

Simon Collins wrote:
I'm loading the data through the command below: mysql -f -u root -p
enwiki < enwiki.sql


The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.


The import fell over overnight with a "table full" error as it hit 1T
(I think this may be a file system problem). As it's not importing
any more, show status isn't going to provide any interesting info;
however, I did notice that mysql was not consuming much CPU time (~10%).


I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just 
the DBA at the backend.


Cheers

Simon

mos wrote:

Simon,
As someone else mentioned, how are you loading the data? Can you post 
the SQL?


You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available ram to your key_buffer_size in your my.cnf file and restart 
the server. How much RAM do you have on your machine and how many 
CPU's do you have? What version of MySQL are you using? Also can you 
post your Show Status output after it has started to slow down? How 
much CPU is being used after the import slows down?


Now from what you've said, it looks like you are using this table as 
a lookup table, so if it just has an id and a blob field, you 
probably return the blob field for a given id, correct? If it were up 
to me, I would break the data into more manageable tables. If you 
have 100 million rows, then I'd break it into 10x10 million row 
tables. Table_1 would have id's from 1 to 9,999,999, and table_2 with 
id's from 10 million to 10,999,999 etc. Your lookup would call a 
stored procedure which determines which table to use based on the Id 
it was given. If you really had to search all the tables you can then 
use a Merge table based on those 10 tables. I use Merge tables quite 
a bit and the performance is quite good.


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of 
our research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has 
an id field and a blob field. There are no constraints / indexes on 
this table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60 G per hour but now the MyISAM 
table is ~1TB it's slowed to a load of about 5G per hour. At this 
rate the import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way 
to improve performance. If there's a more suitable list to discuss 
this, please let me know.


Regards

Simon









--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I can do that - if the load data infile command definitely improves 
performance and splitting the file does the same, I have no problem with 
doing this. It just seems strange that the problem would be with the way the 
import file is configured. I thought the problem would be somehow with 
the table getting bigger.


Regards

Simon


Ananda Kumar wrote:

Simon,
Why don't you split the file and use the LOAD DATA INFILE command, which would
improve the performance while loading into an empty table with keys
disabled.

regards
anandkl


On 6/5/08, Simon Collins [EMAIL PROTECTED] wrote:
  

I'm loading the data through the command below: mysql -f -u root -p enwiki <
enwiki.sql

The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 cores
and 2 Gigs memory.

The import fell over overnight with a "table full" error as it hit 1T (I
think this may be a file system problem). As it's not importing any more,
show status isn't going to provide any interesting info; however, I
did notice that mysql was not consuming much CPU time (~10%).

I wouldn't like to split the data up into separate tables as it would
change the schema and I'm not in charge of the schema design - just the DBA
at the backend.

Cheers

Simon

mos wrote:



Simon,
As someone else mentioned, how are you loading the data? Can you post the
SQL?

You have an Id field, so is that not the primary key? If so, the slowdown
could be maintaining the index. If so, add up to 30% of your available ram
to your key_buffer_size in your my.cnf file and restart the server. How much
RAM do you have on your machine and how many CPU's do you have? What version
of MySQL are you using? Also can you post your Show Status output after it
has started to slow down? How much CPU is being used after the import slows
down?

Now from what you've said, it looks like you are using this table as a
lookup table, so if it just has an id and a blob field, you probably return
the blob field for a given id, correct? If it were up to me, I would break
the data into more manageable tables. If you have 100 million rows, then I'd
break it into 10x10 million row tables. Table_1 would have id's from 1 to
9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
lookup would call a stored procedure which determines which table to use
based on the Id it was given. If you really had to search all the tables you
can then use a Merge table based on those 10 tables. I use Merge tables
quite a bit and the performance is quite good.

Mike

At 11:42 AM 6/4/2008, you wrote:

  

Dear all,

I'm presently trying to import the full wikipedia dump for one of our
research users. Unsurprisingly it's a massive import file (2.7T)

Most of the data is importing into a single MyISAM table which has an id
field and a blob field. There are no constraints / indexes on this table.
We're using an XFS filesystem.

The import starts off quickly but gets increasingly slower as it
progresses, starting off at about 60 G per hour but now the MyISAM table is
~1TB it's slowed to a load of about 5G per hour. At this rate the import
will not finish for a considerable time, if at all.

Can anyone suggest to me why this is happening and if there's a way to
improve performance. If there's a more suitable list to discuss this, please
let me know.

Regards

Simon




  

--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





  



--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Saravanan
Hi Simon,

What kind of table are you using? If it is MyISAM you can increase the max size
of the table by changing the following variable:

myisam_data_pointer_size = 7

By default it should be 6.

Please let me know if that helps you.

Thanks,
Saravanan
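
One caveat, as a sketch (assuming the 1T ceiling really is MyISAM's row pointer and not the filesystem): myisam_data_pointer_size only applies to tables created after the change, so the existing table would need its MAX_ROWS raised instead, which rebuilds it:

SET GLOBAL myisam_data_pointer_size = 7;   -- affects newly created tables only
ALTER TABLE enwiki_text MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 30000;   -- table name assumed; full rebuild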



--- On Thu, 6/5/08, Simon Collins <[EMAIL PROTECTED]> wrote:
From: Simon Collins <[EMAIL PROTECTED]>
Subject: Re: Large import into MYISAM - performance problems
To: mysql@lists.mysql.com
Date: Thursday, June 5, 2008, 3:05 PM

I'm loading the data through the command below: mysql -f -u root -p
enwiki < enwiki.sql

The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2
cores and 2 Gigs memory.

The import fell over overnight with a "table full" error as it hit 1T (I
think this may be a file system problem). As it's not importing any more,
show status isn't going to provide any interesting info; however,
I did notice that mysql was not consuming much CPU time (~10%).

I wouldn't like to split the data up into separate tables as it would
change the schema and I'm not in charge of the schema design - just the
DBA at the backend.

Cheers

Simon

mos wrote:
> Simon,
> As someone else mentioned, how are you loading the data? Can you post
> the SQL?
>
> You have an Id field, so is that not the primary key? If so, the
> slowdown could be maintaining the index. If so, add up to 30% of your
> available ram to your key_buffer_size in your my.cnf file and restart
> the server. How much RAM do you have on your machine and how many
> CPU's do you have? What version of MySQL are you using? Also can you
> post your "Show Status" output after it has started to slow down? How
> much CPU is being used after the import slows down?
>
> Now from what you've said, it looks like you are using this table as a
> lookup table, so if it just has an id and a blob field, you probably
> return the blob field for a given id, correct? If it were up to me, I
> would break the data into more manageable tables. If you have 100
> million rows, then I'd break it into 10x10 million row tables. Table_1
> would have id's from 1 to 9,999,999, and table_2 with id's from 10
> million to 10,999,999 etc. Your lookup would call a stored procedure
> which determines which table to use based on the Id it was given. If
> you really had to search all the tables you can then use a Merge table
> based on those 10 tables. I use Merge tables quite a bit and the
> performance is quite good.
>
> Mike
>
> At 11:42 AM 6/4/2008, you wrote:
>> Dear all,
>>
>> I'm presently trying to import the full wikipedia dump for one of our
>> research users. Unsurprisingly it's a massive import file (2.7T)
>>
>> Most of the data is importing into a single MyISAM table which has an
>> id field and a blob field. There are no constraints / indexes on this
>> table. We're using an XFS filesystem.
>>
>> The import starts off quickly but gets increasingly slower as it
>> progresses, starting off at about 60 G per hour but now the MyISAM
>> table is ~1TB it's slowed to a load of about 5G per hour. At this
>> rate the import will not finish for a considerable time, if at all.
>>
>> Can anyone suggest to me why this is happening and if there's a way
>> to improve performance. If there's a more suitable list to discuss
>> this, please let me know.
>>
>> Regards
>>
>> Simon
>
>


-- 
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


  

Re: Large import into MYISAM - performance problems

2008-06-05 Thread Olaf Stein
Simon,

In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file

Olaf
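
A sketch of what the load itself could look like once the dump is flattened to one row per line (file, table and column names are assumptions; tabs and newlines inside the text column must be escaped or the rows will shear):

LOAD DATA INFILE '/data/enwiki.tsv' INTO TABLE enwiki_text
  FIELDS TERMINATED BY '\t' ESCAPED BY '\\'
  LINES TERMINATED BY '\n'
  (id, body);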


On 6/5/08 4:52 AM, Simon Collins [EMAIL PROTECTED] wrote:

 I can do that - if the load data infile command definitely improves
 performance and splitting the file does the same, I have no problem with
 doing this. It just seems strange that the problem would be with the way the
 import file is configured. I thought the problem would be somehow with
 the table getting bigger.
 
 Regards
 
 Simon
 
 
 Ananda Kumar wrote:
 Simon,
 Why don't you split the file and use the LOAD DATA INFILE command, which would
 improve the performance while loading into an empty table with keys
 disabled.
 
 regards
 anandkl
 
 
 On 6/5/08, Simon Collins [EMAIL PROTECTED] wrote:
   
 I'm loading the data through the command below: mysql -f -u root -p enwiki <
 enwiki.sql
 
 The version is MySQL 5.0.51a-community
 
 I've disabled the primary key, so there are no indexes. The CPU has 2 cores
 and 2 Gigs memory.
 
 The import fell over overnight with a "table full" error as it hit 1T (I
 think this may be a file system problem). As it's not importing any more,
 show status isn't going to provide any interesting info; however, I
 did notice that mysql was not consuming much CPU time (~10%).
 
 I wouldn't like to split the data up into separate tables as it would
 change the schema and I'm not in charge of the schema design - just the DBA
 at the backend.
 
 Cheers
 
 Simon
 
 mos wrote:
 
 
 Simon,
 As someone else mentioned, how are you loading the data? Can you post the
 SQL?
 
 You have an Id field, so is that not the primary key? If so, the slowdown
 could be maintaining the index. If so, add up to 30% of your available ram
 to your key_buffer_size in your my.cnf file and restart the server. How much
 RAM do you have on your machine and how many CPU's do you have? What
 version
 of MySQL are you using? Also can you post your Show Status output after
 it
 has started to slow down? How much CPU is being used after the import slows
 down?
 
 Now from what you've said, it looks like you are using this table as a
 lookup table, so if it just has an id and a blob field, you probably return
 the blob field for a given id, correct? If it were up to me, I would break
 the data into more manageable tables. If you have 100 million rows, then
 I'd
 break it into 10x10 million row tables. Table_1 would have id's from 1 to
 9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
 lookup would call a stored procedure which determines which table to use
 based on the Id it was given. If you really had to search all the tables
 you
 can then use a Merge table based on those 10 tables. I use Merge tables
 quite a bit and the performance is quite good.
 
 Mike
 
 At 11:42 AM 6/4/2008, you wrote:
 
   
 Dear all,
 
 I'm presently trying to import the full wikipedia dump for one of our
 research users. Unsurprisingly it's a massive import file (2.7T)
 
 Most of the data is importing into a single MyISAM table which has an id
 field and a blob field. There are no constraints / indexes on this table.
 We're using an XFS filesystem.
 
 The import starts off quickly but gets increasingly slower as it
 progresses, starting off at about 60 G per hour but now the MyISAM table
 is
 ~1TB it's slowed to a load of about 5G per hour. At this rate the import
 will not finish for a considerable time, if at all.
 
 Can anyone suggest to me why this is happening and if there's a way to
 improve performance. If there's a more suitable list to discuss this,
 please
 let me know.
 
 Regards
 
 Simon
 
 
 
   
 --
 Dr Simon Collins
 Data Grid Consultant
 National Grid Service
 University of Manchester
 Research Computing Services
 Kilburn Building
 Oxford Road
 Manchester
 M13 9PL
 
 Tel 0161 275 0604
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 
 
 
   
 


-- 
MySQL General Mailing List
For list archives: 

Re: Large import into MYISAM - performance problems

2008-06-05 Thread mos

At 10:30 AM 6/5/2008, you wrote:

Simon,

In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file

Olaf


Olaf,
   Using Load Data on an empty table is at least 10x faster than 
using SQL inserts. But I thought his blob field was binary (images) in 
which case Load Data won't work. If his blob field is plain text, then of 
course Load Data will work nicely. :)


Mike 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins

Olaf, Mike

Thanks for the input, the blob data is just text, I'll have a go at 
using the load data command


Regards

Simon

mos wrote:

At 10:30 AM 6/5/2008, you wrote:

Simon,

In my experience load data infile is a lot faster than a sql file 
through

the client.
I would parse the sql file and create a csv file with just the 
columns of

your table and then use load data infile using the created csv file

Olaf


Olaf,
   Using Load Data on an empty table is at least 10x faster than 
using SQL inserts. But I thought his blob field was binary (images) in 
which case Load Data won't work. If his blob field is plain text, then 
of course Load Data will work nicely. :)


Mike




--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Olaf Stein
Even more so when you compare it to a script executing the inserts instead of the
mysql client...

Olaf



On 6/5/08 12:06 PM, mos [EMAIL PROTECTED] wrote:

 At 10:30 AM 6/5/2008, you wrote:
 Simon,
 
  In my experience load data infile is a lot faster than a sql file through
 the client.
 I would parse the sql file and create a csv file with just the columns of
 your table and then use load data infile using the created csv file
 
 Olaf
 
 Olaf,
 Using Load Data on an empty table is at least 10x faster than
 using SQL inserts. But I thought his blob field was binary (images) in
 which case Load Data won't work. If his blob field is plain text, then of
 course Load Data will work nicely. :)
 
 Mike 
 


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Large import into MYISAM - performance problems

2008-06-04 Thread Simon Collins

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an id 
field and a blob field. There are no constraints / indexes on this 
table. We're using an XFS filesystem.


The import starts off quickly but gets progressively slower as it 
proceeds, starting off at about 60G per hour; now that the MyISAM table 
is ~1TB it has slowed to a load rate of about 5G per hour. At this rate the 
import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way to 
improve performance. If there's a more suitable list to discuss this, 
please let me know.


Regards

Simon

--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-04 Thread Ananda Kumar
Hi Simon,
How are you doing this import into your table?


On 6/4/08, Simon Collins [EMAIL PROTECTED] wrote:

 Dear all,

 I'm presently trying to import the full wikipedia dump for one of our
 research users. Unsurprisingly it's a massive import file (2.7T)

 Most of the data is importing into a single MyISAM table which has an id
 field and a blob field. There are no constraints / indexes on this table.
 We're using an XFS filesystem.

 The import starts of quickly but gets increasingly slower as it progresses,
 starting off at about 60 G per hour but now the MyISAM table is ~1TB it's
 slowed to a load of about 5G per hour. At this rate the import will not
 finish for a considerable time, if at all.

 Can anyone suggest to me why this is happening and if there's a way to
 improve performance. If there's a more suitable list to discuss this, please
 let me know.

 Regards

 Simon

 --
 Dr Simon Collins
 Data Grid Consultant
 National Grid Service
 University of Manchester
 Research Computing Services
 Kilburn Building
 Oxford Road
 Manchester
 M13 9PL

 Tel 0161 275 0604


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




Re: Large import into MYISAM - performance problems

2008-06-04 Thread mos

Simon,
As someone else mentioned, how are you loading the data? Can you 
post the SQL?


   You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available RAM to your key_buffer_size in your my.cnf file and restart the 
server. How much RAM do you have on your machine and how many CPUs do you 
have? What version of MySQL are you using? Also can you post your SHOW 
STATUS output after it has started to slow down? How much CPU is being 
used after the import slows down?
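
As an illustration of that setting (a sketch only; the 384M value is just an
example of "up to 30% of RAM", and putting it in my.cnf as described works
just as well):

-- Check the current MyISAM key buffer, then enlarge it at runtime.
SHOW VARIABLES LIKE 'key_buffer_size';
SET GLOBAL key_buffer_size = 402653184;   -- 384 MB, illustrative value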


 Now from what you've said, it looks like you are using this table as 
a lookup table, so if it just has an id and a blob field, you probably 
return the blob field for a given id, correct? If it were up to me, I would 
break the data into more manageable tables. If you have 100 million rows, 
then I'd break it into 10x10 million row tables. Table_1 would have id's 
from 1 to 9,999,999, table_2 id's from 10,000,000 to 19,999,999, etc. Your 
lookup would call a stored procedure which determines which table 
to use based on the Id it was given. If you really had to search all the 
tables you can then use a Merge table based on those 10 tables. I use Merge 
tables quite a bit and the performance is quite good.
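
A rough sketch of that split-plus-MERGE layout; the table and column names
are illustrative (not Simon's actual schema) and only two underlying tables
are shown:

-- Identical MyISAM tables, each holding one id range.
CREATE TABLE wiki_pages_1 (id INT NOT NULL, body LONGBLOB) ENGINE=MyISAM;
CREATE TABLE wiki_pages_2 LIKE wiki_pages_1;

-- A MERGE table that lets one statement search every range.
CREATE TABLE wiki_pages_all (id INT NOT NULL, body LONGBLOB)
    ENGINE=MERGE UNION=(wiki_pages_1, wiki_pages_2) INSERT_METHOD=LAST;

-- Lookups can hit one range table directly, or go through the merge table:
SELECT body FROM wiki_pages_1   WHERE id = 123456;
SELECT body FROM wiki_pages_all WHERE id = 123456;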


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an id 
field and a blob field. There are no constraints / indexes on this table. 
We're using an XFS filesystem.


The import starts of quickly but gets increasingly slower as it 
progresses, starting off at about 60 G per hour but now the MyISAM table 
is ~1TB it's slowed to a load of about 5G per hour. At this rate the 
import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way to 
improve performance. If there's a more suitable list to discuss this, 
please let me know.


Regards

Simon



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-04 Thread Krishna Chandra Prajapati
Hi,

Break up the file into small chunks and then import one by one.


On Wed, Jun 4, 2008 at 10:12 PM, Simon Collins 
[EMAIL PROTECTED] wrote:

 Dear all,

 I'm presently trying to import the full wikipedia dump for one of our
 research users. Unsurprisingly it's a massive import file (2.7T)

 Most of the data is importing into a single MyISAM table which has an id
 field and a blob field. There are no constraints / indexes on this table.
 We're using an XFS filesystem.

 The import starts of quickly but gets increasingly slower as it progresses,
 starting off at about 60 G per hour but now the MyISAM table is ~1TB it's
 slowed to a load of about 5G per hour. At this rate the import will not
 finish for a considerable time, if at all.

 Can anyone suggest to me why this is happening and if there's a way to
 improve performance. If there's a more suitable list to discuss this, please
 let me know.

 Regards

 Simon

 --
 Dr Simon Collins
 Data Grid Consultant
 National Grid Service
 University of Manchester
 Research Computing Services
 Kilburn Building
 Oxford Road
 Manchester
 M13 9PL

 Tel 0161 275 0604


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]




-- 
Krishna Chandra Prajapati
MySQL DBA,
Ed Ventures e-Learning Pvt.Ltd.
1-8-303/48/15, Sindhi Colony
P.G.Road, Secunderabad.
Pin Code: 53
Office Number: 040-66489771
Mob: 9912924044
URL: ed-ventures-online.com
Email-id: [EMAIL PROTECTED]


Performance Problems With JOINS - Tuning required or upgrade hardware?

2007-09-14 Thread [ Triadbrasil ] Filipe Tomita
Hi all,

First, sorry for my bad English :)

I'm having a problem with a large join across 10 tables with 70GB of text data;
some joins are executed via an index but some others are not.

I'm working with an HP server (Proliant NL-150) with two dual-core Xeons, 3GB
RAM and RAID 0.

When executed for a client with small datasets, retrieval is fast, but when I
try it with a large-dataset client the database goes down or takes around 10
minutes to execute a query.

This is my.cnf

[client]
 port= 3306
 socket= /var/lib/mysql/mysql.sock

 [mysqld]
 port= 3306
 socket= /var/lib/mysql/mysql.sock
 skip-locking
 tmp_table_size  =256M
 key_buffer_size = 750M
 max_allowed_packet = 10M
 max_connections=400
 table_cache = 4000
 sort_buffer_size = 100M
 read_buffer_size = 100M
 read_rnd_buffer_size = 50M
 myisam_sort_buffer_size = 64M
 thread_cache = 8
 query_cache_type=1
 query_cache_size = 256M
 query_cache_limit=25M
 join_buffer_size=128M
 thread_concurrency = 16
 log-bin=mysql-bin

 server-id= 1

 innodb_buffer_pool_size = 1512M
 innodb_additional_mem_pool_size = 100M
 innodb_thread_concurrency=16

 [mysqldump]
 quick
 max_allowed_packet = 16M

 [mysql]
 no-auto-rehash

 [isamchk]
 key_buffer = 256M
 sort_buffer_size = 256M
 read_buffer = 2M
 write_buffer = 2M

 [myisamchk]
 key_buffer = 256M
 sort_buffer_size = 256M
 read_buffer = 2M
 write_buffer = 2M

 [mysqlhotcopy]
 interactive-timeout


and these are the problem queries:

### SQL 1 ###
SELECT NAC.id, NAC.nome assunto, NAC.ordem
FROM Noticias N
INNER JOIN (
    SELECT NC.noticiaId, A.id, A.nome, AC.ordem
    FROM NoticiasClientes NC
    INNER JOIN (AssuntosClientes AC, Assuntos A)
        ON (NC.clienteId = '.$clienteId.'
            AND NC.clienteId = AC.clienteId
            AND NC.assuntoId = AC.assuntoId
            AND AC.assuntoId = A.id)
) NAC ON (N.dataInsercao = '.$clippingDate.')

### SQL 2 ###
SELECT NT.id, NT.titulo AS tituloNoticia, NC.tipo tipoNoticia, NI.id Imagem,
       VI.nome AS veiculoNome, VI.id veiculoId, NC.impactoId
FROM (SELECT NCL.* FROM NoticiasClientes NCL
      WHERE NCL.assuntoId = '.$filter.' AND NCL.clienteId = '.$clienteId.') NC
INNER JOIN (Noticias NT, Veiculos VI,
            (SELECT VCL.veiculoId, VCL.clienteId, VCL.ordem
             FROM VeiculosClientes VCL
             WHERE VCL.clienteId = '.$clienteId.'
             ORDER BY VCL.ordem) VC)
    ON (NT.id = NC.noticiaId
        AND NT.dataInsercao = '.$clippingDate.'
        AND VI.tipoVeiculoId IN (.$tiposVeiculos.)
        AND VI.id = NT.veiculoId)
LEFT JOIN (ImagemNoticia NI)
    ON (NI.noticiaId = NC.noticiaId)
GROUP BY NC.noticiaId

### SQL 3 ###
SELECT N.id, N.titulo, VCT.id veiculoId, VCT.veiculo, VCT.tipoVeiculo,
       VCT.ordemVeiculo, NAC.assuntoId, NAC.impactoId, NAC.assunto,
       NAC.ordemAssunto, IMN.id AS imgId
FROM (Noticias N
INNER JOIN ((SELECT NC.noticiaId, I.id AS impactoId, A.nome AS assunto,
                    AC.ordem AS ordemAssunto, AC.assuntoId
             FROM NoticiasClientes NC
             INNER JOIN (AssuntosClientes AC, Assuntos A, Impactos I)
                 ON (NC.clienteId = '.$clienteId.'
                     AND NC.clienteId = AC.clienteId
                     AND NC.assuntoId = AC.assuntoId
                     AND AC.assuntoId = A.id
                     AND NC.impactoId = I.id)) NAC,
            (SELECT V.id, V.nome AS Veiculo, VC.ordem AS ordemVeiculo,
                    TV.nome AS tipoVeiculo
             FROM Veiculos V
             INNER JOIN (VeiculosClientes VC, TiposVeiculos TV)
                 ON (VC.clienteId = '.$clienteId.'
                     AND (TV.id IN (.$tiposVeiculos.))
                     AND V.id = VC.veiculoId
                     AND V.tipoVeiculoId = TV.id)) VCT)
    ON (N.id = NAC.noticiaId AND N.veiculoId = VCT.id))
LEFT JOIN 

Re: Performance Problems With JOINS - Tuning required or upgrade hardware?

2007-09-14 Thread Baron Schwartz

Hi,

Your English is fine :)  Your queries don't look too bad.  It could be 
there are no good indexes.  Have you tried running EXPLAIN on them? 
What version of MySQL are you using?  You can also try profiling the 
queries (by hand with SHOW STATUS, or more easily with MySQL Query 
Profiler) to see what they are doing.  I don't know what your operating 
system is, but if it's Linux or UNIX you can also run vmstat -n 5 
while the query runs so you can see what it is doing.  (For example, it 
may be swapping, which is bad).


If you need help understanding EXPLAIN, please write back and include 
the output of SHOW CREATE TABLE for the tables, your query, and the 
result of EXPLAIN for the query.
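
For example, a by-hand profiling pass of the kind mentioned above might look
like this (a sketch only: the SELECT is a stand-in rather than one of the
real queries, the date literal is invented, and SHOW SESSION STATUS needs
MySQL 5.0 or later):

FLUSH STATUS;                                 -- reset session counters
SELECT COUNT(*) FROM Noticias
 WHERE dataInsercao = '2007-09-14';           -- stand-in for the slow query
SHOW SESSION STATUS LIKE 'Handler%';          -- rows read per access method
SHOW SESSION STATUS LIKE 'Created_tmp%';      -- temp tables (memory vs disk)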


[ Triadbrasil ] Filipe Tomita wrote:

Hi all,

First sorry my bad english :)

I having a problem with a large join with 10 tables with 70Gb of text data,
some joins executed by index but some others not.

I´m work with HP SERVER (Proliant NL-150) a 2 Xeon 2 Duo with 3Gb Ram and
RAID 0.

When executed to a client with small datasets the retrive is fastest, but
when i try with a large dataset client the database down or left a 10 min to
execute a query.

This is my.cnf

[client]

port= 3306
socket= /var/lib/mysql/mysql.sock

[mysqld]
port= 3306
socket= /var/lib/mysql/mysql.sock
skip-locking
tmp_table_size  =256M
key_buffer_size = 750M
max_allowed_packet = 10M
max_connections=400
table_cache = 4000
sort_buffer_size = 100M
read_buffer_size = 100M
read_rnd_buffer_size = 50M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_type=1
query_cache_size = 256M
query_cache_limit=25M
join_buffer_size=128M
thread_concurrency = 16
log-bin=mysql-bin

server-id= 1

innodb_buffer_pool_size = 1512M
innodb_additional_mem_pool_size = 100M
innodb_thread_concurrency=16

[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout



and that is a trouble SQL

### SQL 1 ##
SELECT NAC.id, NAC.nome assunto, NAC.ordem
FROM Noticias N
INNER JOIN (
SELECT NC.noticiaId, A.id, A.nome, AC.ordem
FROM NoticiasClientes NC
INNER JOIN (AssuntosClientes AC, Assuntos A)

ON (NC.clienteId = '.$clienteId.'
AND NC.clienteId = AC.clienteId
AND NC.assuntoId =
AC.assuntoId
AND AC.assuntoId = A.id)
)NAC ON (N.dataInsercao = '.$clippingDate.')
 SQL 2 ###

SELECT NT.id, NT.titulo as tituloNoticia, NC.tipo tipoNoticia, NI.id Imagem,
VI.nome AS veiculoNome, VI.id veiculoId, NC.impactoId
FROM (SELECT NCL.* FROM NoticiasClientes
NCL WHERE NCL.assuntoId = '.$filter.' AND NCL.clienteId='.$clienteId.')
NC
INNER JOIN (Noticias NT, Veiculos
VI, (SELECT VCL.veiculoId, VCL.clienteId, VCL.ordem

FROM VeiculosClientes VCL

WHERE VCL.clienteId='.$clienteId.'

ORDER BY VCL.ordem) VC)
ON (NT.id = NC.noticiaId
AND NT.dataInsercao =
'.$clippingDate.'
AND
VI.tipoVeiculoIdIN (.$tiposVeiculos.)
AND VI.id =
NT.veiculoId
)
LEFT JOIN (ImagemNoticia NI)
ON (NI.noticiaId = NC.noticiaId)
GROUP BY NC.noticiaId

 SQL 3 ###

SELECT N.id, N.titulo,VCT.id veiculoId,  VCT.veiculo, VCT.tipoVeiculo,
VCT.ordemVeiculo, NAC.assuntoId, NAC.impactoId, NAC.assunto,
NAC.ordemAssunto, IMN.id as imgId
FROM (Noticias N
INNER JOIN ((SELECT NC.noticiaId,
I.id as impactoId, A.nome as assunto, AC.ordem as ordemAssunto, AC.assuntoId
FROM
NoticiasClientes NC
INNER JOIN
(AssuntosClientes AC, Assuntos A, Impactos I)
ON (
NC.clienteId = '.$clienteId.'
AND
NC.clienteId = AC.clienteId
AND
NC.assuntoId = AC.assuntoId

AND AC.assuntoId = A.id

AND NC.impactoId = I.id)) NAC,
(SELECT V.id, V.nome as
Veiculo, VC.ordem as 

Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-27 Thread Dominik Klein

My innodb_buffer_pool_size is:
innodb_buffer_pool_size | 8388608

That looks like 8MB... that sounds small if I have a DB with over 1M
rows to process.  No?


Yes, that's extremely small.  I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.



Did you do this yet?

This should speed it up imho.

Regards
Dominik

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-27 Thread Duncan Hill
On Saturday 25 November 2006 17:54, John Kopanas wrote:

 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id =
 535263)

What does EXPLAIN say about that query?

Have you done an optimize recently?
-- 
Scanned by iCritical.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

Thanks a lot for your help.

The query should, and does, only return 1-6 rows depending on the id,
never more than that.  Here are the comparative EXPLAINs:

mysql> EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table              | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | purchased_services | const | PRIMARY       | PRIMARY | 4       | const |    1 |       |
+----+-------------+--------------------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)

mysql> EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
| id | select_type | table              | type | possible_keys                       | key  | key_len | ref  | rows   | Extra       |
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | purchased_services | ALL  | purchased_services_company_id_index | NULL | NULL    | NULL | 626188 | Using where |
+----+-------------+--------------------+------+-------------------------------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

Here is the explain for the SELECT COUNT(id):

mysql> EXPLAIN SELECT count(id) FROM companies;
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
| id | select_type | table     | type  | possible_keys | key     | key_len | ref  | rows   | Extra       |
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
|  1 | SIMPLE      | companies | index | NULL          | PRIMARY | 4       | NULL | 533821 | Using index |
+----+-------------+-----------+-------+---------------+---------+---------+------+--------+-------------+
1 row in set (0.10 sec)

The explain takes a fraction of a second and returns roughly the number of
rows.  But when I just ask for the count it took me 5 seconds.
Something is broken.

My innodb_buffer_pool_size is:
innodb_buffer_pool_size | 8388608

That looks like 8MB... that sounds small if I have a DB with over 1M
rows to process.  No?

Thanks again for your help.

Your Friend,

John


On 11/25/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 25), John Kopanas said:
 Sorry about these questions.  I am used to working with DBs with less
 then 10K rows and now I am working with tables with over 500K rows
 which seems to be changing a lot for me.  I was hoping I can get some
 people's advice.

 I have a 'companies' table with over 500K rows and a
 'purchased_services' table with over 650K rows.

 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

How many rows does it return, and what's its explain plan?

 purchased_services.company_id has an index on it.

 The following query takes over 3 seconds:
 SELECT count(id) FROM companies;

An explain plan here would be useful too.

 To me the time it takes to run these queries makes no sense.  I would
 imagine both of these queries would take a fraction of a second.

 When running some of these queries and looking at 'SHOW processlist'
 I was getting a lot of 'Writing to tmp'.

That doesn't make much sense, since both queries should simply be doing
index scans (or full table scans depending on how many rows are
expected to match in the first query, but the explain plans will show
that).

 My DB engine is InnoDB.  I am running this on my laptop that is a
 PowerBook 867 with 756 MB of Ram.

What's your innodb_buffer_pool_size set to?

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

The application is not in production yet, but when it goes into
production the server will be considerably faster and have much more
RAM.  But before I put the app in production I want to make sure it is
working properly.  500K rows does not sound like that much in this
day and age.  If I understand what is going on on my laptop I will be
confident it will work well in production.

:-)

On 11/25/06, Dan Buettner [EMAIL PROTECTED] wrote:

This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop.  You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers.  If your query has to access multiple
records spread out throughout the table off a slow laptop disk, this
makes sense to me.

Do you normally run this database on a real server in production?
Is response time better?

Still, it does seem a bit slow, even for an 867 MHz laptop, and seeing
the output of an EXPLAIN might be helpful.

Also, to answer your question about the speed of selecting the count
of rows in a table - MyISAM always knows exactly how many rows are in
a table, so it answers quickly without checking the data.  InnoDB does
not keep track of how many rows are in a table, so it has to go count
them when you do a SELECT COUNT(*) or in your case SELECT COUNT(id).
That takes a little bit of time.

Dan


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 If I just SELECT id:
 SELECT id FROM purchased_services WHERE (company_id = 1000)

 It takes approx 2-2.5s.  When I look at the process list it looks like
 that it's state seems to always be in sending data...

 This is after killing the db and repopulating it again.  So what is going on?

 On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
  I tried the same tests with the database replicated in a MyISAM
  engine.  The count was instantaneous but the following still took
  3-6seconds:
 
  SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
 
  The following though was instantaneous:
 
  SELECT * FROM purchased_services WHERE (id = 1000)
 
  This is the result from my SHOW INDEX FROM purchased_services:
 
  
+++-+--+-+---+-+--++--++-+
  | Table  | Non_unique | Key_name
   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part |
  Packed | Null | Index_type | Comment |
  
+++-+--+-+---+-+--++--++-+
  | purchased_services |  0 | PRIMARY
   |1 | id  | A |  627546 | NULL |
  NULL   |  | BTREE  | |
  | purchased_services |  1 |
  purchased_services_company_id_index |1 | company_id  | A
|  627546 | NULL | NULL   | YES  | BTREE  |
  |
  
+++-+--+-+---+-+--++--++-+
 
  So I do have the necessary index.  I am so confused  Argh...
 
  Your Friend,
 
  John
 
 
  On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
   Sorry about these questions.  I am used to working with DBs with less
   then 10K rows and now I am working with tables with over 500K rows
   which seems to be changing a lot for me.  I was hoping I can get some
   people's advice.
  
   I have a 'companies' table with over 500K rows and a
   'purchased_services' table with over 650K rows.
  
   The following query takes over 6 seconds:
   SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
  
   purchased_services.company_id has an index on it.
  
   The following query takes over 3 seconds:
   SELECT count(id) FROM companies;
  
   To me the time it takes to run these queries makes no sense.  I would
   imagine both of these queries would take a fraction of a second.
  
   When running some of these queries and looking at 'SHOW processlist' I
   was getting a lot of 'Writing to tmp'.
  
   My DB engine is InnoDB.  I am running this on my laptop that is a
   PowerBook 867 with 756 MB of Ram.
  
   Feedback and insight would be greatly appreciated.
  
   Thanks my friends! :-)
  
   --
   John Kopanas
   [EMAIL PROTECTED]
  
   http://www.kopanas.com
   http://www.cusec.net
   http://www.soen.info
  
 
 
  --
  John Kopanas
  [EMAIL PROTECTED]
 
  http://www.kopanas.com
  http://www.cusec.net
  http://www.soen.info
 


 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]






--
John Kopanas
[EMAIL PROTECTED]


Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread Dan Nelson
In the last episode (Nov 26), John Kopanas said:
 Thanks a lot for your help.
 
 The query should and only does return 1-6 rows depending on the id.
 Never more then that.  Here are the comperative EXPLAINs:
 
 mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
 key_len | ref   | rows | Extra |
 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY |   
 4 | const |1 |   |
 ++-++---+---+-+-+---+--+---+
 1 row in set (0.00 sec)

This query definitely should run almost instantly, since it looks like a
direct lookup on the primary key.
 
 mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
 ++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys  | 
 key  | key_len | ref  | rows   | Extra   |
 ++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
 purchased_services_company_id_index | NULL |NULL | NULL | 626188 | Using 
 where |
 ++-++--+-+--+-+--++-+
 1 row in set (0.00 sec)

This EXPLAIN indicates that mysql thinks that the query would match
~600k rows and will do a full table scan.  Mysql only keeps a single
cardinality value for each index that estimates how many records have
a unique value in the index.  This can cause problems for the optimizer
if you have one value for say 60% of the table, and unique values for
the rest.  You can try adding a FORCE INDEX clause to the query and see
if that helps.
 
 Here is the explain for the SELECT COUNT(id)
 
 mysql EXPLAIN SELECT count(id) FROM companies;
 ++-+---+---+---+-+-+--++-+
 | id | select_type | table | type  | possible_keys | key | key_len | 
 ref  | rows   | Extra   |
 ++-+---+---+---+-+-+--++-+
 |  1 | SIMPLE  | companies | index | NULL  | PRIMARY |  4 | NULL 
 | 533821 | Using index |
 ++-+---+---+---+-+-+--++-+
 1 row in set (0.10 sec)
 
 The explain takes a fraction of a second and returns the amound of row
 plus some.  But when I just as for the count it took me 5 seconds.
 Something is broken.

Note that a primary index scan on an InnoDB table really is a full
table scan.  Try creating another index on just the id field and force
mysql to use it with a FORCE INDEX clause.  Innodb's query optimizer
will always prefer the primary index even if the secondary is smaller,
which is why you have to force it here.  The row count in the estimate
is off because Innodb's query optimizer doesn't know the exact row
count and has to guess.
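
For example (a sketch; the table and existing index names come from this
thread, while idx_companies_id is an invented name for the new secondary
index):

-- Force the secondary index for the company_id lookup:
SELECT * FROM purchased_services
 FORCE INDEX (purchased_services_company_id_index)
 WHERE company_id = 535263;

-- Add a plain secondary index on id and force it for the count, so the
-- scan does not have to walk InnoDB's clustered primary key:
ALTER TABLE companies ADD INDEX idx_companies_id (id);
SELECT COUNT(id) FROM companies FORCE INDEX (idx_companies_id);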
 
 My innodb_buffer_pool_size is:
 innodb_buffer_pool_size | 8388608
 
 That looks like 8MB... that sounds small if I have a DB with over 1M
 rows to process.  No?

Yes, that's extremely small.  I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.
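
Checking the current value is a one-liner; raising it means editing my.cnf
(for example innodb_buffer_pool_size = 256M) and restarting mysqld, since the
variable is not dynamic in the versions discussed here:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';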

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

When I did a:

SELECT * FROM purchased_services WHERE company_id = 1000;

It took me 7 seconds.  This is driving me crazy!

I am going to have to try this on another computer and see if I am
going to get the same results on another system.   Argh...

On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 26), John Kopanas said:
 Thanks a lot for your help.

 The query should and only does return 1-6 rows depending on the id.
 Never more then that.  Here are the comperative EXPLAINs:

 mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
 
++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
key_len | ref   | rows | Extra |
 
++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY |   
4 | const |1 |   |
 
++-++---+---+-+-+---+--+---+
 1 row in set (0.00 sec)

This query definitly should run almost instantly, since it looks like a
direct lookup on the primary key.

 mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
 
++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys  | 
key  | key_len | ref  | rows   | Extra   |
 
++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
purchased_services_company_id_index | NULL |NULL | NULL | 626188 | Using where 
|
 
++-++--+-+--+-+--++-+
 1 row in set (0.00 sec)

This EXPLAIN indicates that mysql thinks that the query would match
~600k rows and will do a full table scan.  Mysql only keeps a single
cardinality value for each index that estimates how many records have
a unique value in the index.  This can cause problems for the optimizer
if you have one value for say 60% of the table, and unique values for
the rest.  You can try adding a FORCE INDEX clause to the query and see
if that helps.

 Here is the explain for the SELECT COUNT(id)

 mysql EXPLAIN SELECT count(id) FROM companies;
 
++-+---+---+---+-+-+--++-+
 | id | select_type | table | type  | possible_keys | key | key_len | 
ref  | rows   | Extra   |
 
++-+---+---+---+-+-+--++-+
 |  1 | SIMPLE  | companies | index | NULL  | PRIMARY |  4 | NULL 
| 533821 | Using index |
 
++-+---+---+---+-+-+--++-+
 1 row in set (0.10 sec)

 The explain takes a fraction of a second and returns the amound of row
 plus some.  But when I just as for the count it took me 5 seconds.
 Something is broken.

Note that a primary index scan on an InnoDB table really is a full
table scan.  Try creating another index on just the id field and force
mysql to use it with a FORCE INDEX clause.  Innodb's query optimizer
will always prefer the primary index even if the secondary is smaller,
which is why you have to force it here.  The row count in the estimate
is off because Innodb's query optimizer doesn't know the exact row
count and has to guess.

 My innodb_buffer_pool_size is:
 innodb_buffer_pool_size | 8388608

 That looks like 8MB... that sounds small if I have a DB with over 1M
 rows to process.  No?

Yes, that's extremely small.  I'd go for at least 256M, and maybe 512M
if your machine will primarily be doing mysql duties.

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread Dan Nelson
In the last episode (Nov 26), John Kopanas said:
 On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Nov 26), John Kopanas said:
  Thanks a lot for your help.
 
  The query should and only does return 1-6 rows depending on the id.
  Never more then that.  Here are the comperative EXPLAINs:
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
  
 ++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
 key_len | ref   | rows | Extra |
 ++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY | 
   4 | const |1 |   |
 ++-++---+---+-+-+---+--+---+
  1 row in set (0.00 sec)
 
 This query definitly should run almost instantly, since it looks like a
 direct lookup on the primary key.
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
  
 ++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys   | 
 key  | key_len | ref  | rows   | Extra   |
 ++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
 purchased_services_company_id_index | NULL |NULL | NULL | 626188 | 
 Using where |
 ++-++--+-+--+-+--++-+
  1 row in set (0.00 sec)
 
 This EXPLAIN indicates that mysql thinks that the query would match
 ~600k rows and will do a full table scan.  Mysql only keeps a single
 cardinality value for each index that estimates how many records have
 a unique value in the index.  This can cause problems for the optimizer
 if you have one value for say 60% of the table, and unique values for
 the rest.  You can try adding a FORCE INDEX clause to the query and see
 if that helps.

 When I did a:
 
 SELECT * FROM purchased_services WHERE company_id = 1000;
 
 It took me 7 seconds.  This is driving me crazy!

 I am going to have to try this on another computer and see if I am
 going to get the same results on another system.  Argh...

Yes, a full table scan on 500k rows may very well take 7 seconds.  Did
you try with a FORCE INDEX (purchased_services_company_id) clause?

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread John Kopanas

Yes... with FORCE INDEX it still takes 7 seconds.

On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:

In the last episode (Nov 26), John Kopanas said:
 On 11/26/06, Dan Nelson [EMAIL PROTECTED] wrote:
 In the last episode (Nov 26), John Kopanas said:
  Thanks a lot for your help.
 
  The query should and only does return 1-6 rows depending on the id.
  Never more then that.  Here are the comperative EXPLAINs:
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE id = 1000;
 
 
++-++---+---+-+-+---+--+---+
 | id | select_type | table  | type  | possible_keys | key   | 
key_len | ref   | rows | Extra |
 
++-++---+---+-+-+---+--+---+
 |  1 | SIMPLE  | purchased_services | const | PRIMARY   | PRIMARY | 
  4 | const |1 |   |
 
++-++---+---+-+-+---+--+---+
  1 row in set (0.00 sec)
 
 This query definitly should run almost instantly, since it looks like a
 direct lookup on the primary key.
 
  mysql EXPLAIN SELECT * FROM purchased_services WHERE company_id = 1000;
 
 
++-++--+-+--+-+--++-+
 | id | select_type | table  | type | possible_keys   | 
key  | key_len | ref  | rows   | Extra   |
 
++-++--+-+--+-+--++-+
 |  1 | SIMPLE  | purchased_services | ALL  | 
purchased_services_company_id_index | NULL |NULL | NULL | 626188 | Using where |
 
++-++--+-+--+-+--++-+
  1 row in set (0.00 sec)
 
 This EXPLAIN indicates that mysql thinks that the query would match
 ~600k rows and will do a full table scan.  Mysql only keeps a single
 cardinality value for each index that estimates how many records have
 a unique value in the index.  This can cause problems for the optimizer
 if you have one value for say 60% of the table, and unique values for
 the rest.  You can try adding a FORCE INDEX clause to the query and see
 if that helps.

 When I did a:

 SELECT * FROM purchased_services WHERE company_id = 1000;

 It took me 7 seconds.  This is driving me crazy!

 I am going to have to try this on another computer and see if I am
 going to get the same results on another system.  Argh...

Yes, a full table scan on 500k rows may very well take 7 seconds.  Did
you try with a FORCE INDEX (purchased_services_company_id) clause?

--
Dan Nelson
[EMAIL PROTECTED]




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-26 Thread mos

At 08:31 PM 11/26/2006, John Kopanas wrote:

When I did a:

SELECT * FROM purchased_services WHERE company_id = 1000;

It took me 7 seconds.  This is driving me crazy!

I am going to have to try this on another computer and see if I am
going to get the same results on another system.   Argh...


Try running OPTIMIZE TABLE on the table to rebuild the index. Maybe there is 
something wrong with the index.
I assume you have to use InnoDB because you need transactions? If not, copy 
the data to a MyISAM table and retest it.


Here is some sample code to copy the table:

create table TableNew type=MYISAM select * from TableOld limit 0;
insert into TableNew select * from TableOld;
alter table TableNew add index ix_Index1 (Company_Id);
select * from TableNew where Company_Id=1000;

Mike

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

Sorry about these questions.  I am used to working with DBs with less
than 10K rows, and now I am working with tables with over 500K rows,
which changes a lot for me.  I was hoping I could get some
people's advice.

I have a 'companies' table with over 500K rows and a
'purchased_services' table with over 650K rows.

The following query takes over 6 seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

purchased_services.company_id has an index on it.

The following query takes over 3 seconds:
SELECT count(id) FROM companies;

To me the time it takes to run these queries makes no sense.  I would
imagine both of these queries would take a fraction of a second.

When running some of these queries and looking at 'SHOW processlist' I
was getting a lot of 'Writing to tmp'.

My DB engine is InnoDB.  I am running this on my laptop that is a
PowerBook 867 with 756 MB of Ram.

Feedback and insight would be greatly appreciated.

Thanks my friends! :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

I tried the same tests with the database replicated in a MyISAM
engine.  The count was instantaneous, but the following still took
3-6 seconds:

SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

The following though was instantaneous:

SELECT * FROM purchased_services WHERE (id = 1000)

This is the result from my SHOW INDEX FROM purchased_services:

+--------------------+------------+--------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table              | Non_unique | Key_name                             | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+--------------------+------------+--------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| purchased_services |          0 | PRIMARY                              |            1 | id          | A         |      627546 | NULL     | NULL   |      | BTREE      |         |
| purchased_services |          1 | purchased_services_company_id_index  |            1 | company_id  | A         |      627546 | NULL     | NULL   | YES  | BTREE      |         |
+--------------------+------------+--------------------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+

So I do have the necessary index.  I am so confused  Argh...

Your Friend,

John


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

Sorry about these questions.  I am used to working with DBs with less
then 10K rows and now I am working with tables with over 500K rows
which seems to be changing a lot for me.  I was hoping I can get some
people's advice.

I have a 'companies' table with over 500K rows and a
'purchased_services' table with over 650K rows.

The following query takes over 6 seconds:
SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

purchased_services.company_id has an index on it.

The following query takes over 3 seconds:
SELECT count(id) FROM companies;

To me the time it takes to run these queries makes no sense.  I would
imagine both of these queries would take a fraction of a second.

When running some of these queries and looking at 'SHOW processlist' I
was getting a lot of 'Writing to tmp'.

My DB engine is InnoDB.  I am running this on my laptop that is a
PowerBook 867 with 756 MB of Ram.

Feedback and insight would be greatly appreciated.

Thanks my friends! :-)

--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread John Kopanas

If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)

It takes approx 2-2.5s.  When I look at the process list, its state
always seems to be stuck in 'Sending data'...

This is after killing the db and repopulating it again.  So what is going on?

On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

I tried the same tests with the database replicated in a MyISAM
engine.  The count was instantaneous but the following still took
3-6seconds:

SELECT * FROM purchased_services WHERE (purchased_services.company_id = 535263)

The following though was instantaneous:

SELECT * FROM purchased_services WHERE (id = 1000)

This is the result from my SHOW INDEX FROM purchased_services:

+++-+--+-+---+-+--++--++-+
| Table  | Non_unique | Key_name
 | Seq_in_index | Column_name | Collation | Cardinality | Sub_part |
Packed | Null | Index_type | Comment |
+++-+--+-+---+-+--++--++-+
| purchased_services |  0 | PRIMARY
 |1 | id  | A |  627546 | NULL |
NULL   |  | BTREE  | |
| purchased_services |  1 |
purchased_services_company_id_index |1 | company_id  | A
  |  627546 | NULL | NULL   | YES  | BTREE  |
|
+++-+--+-+---+-+--++--++-+

So I do have the necessary index.  I am so confused  Argh...

Your Friend,

John


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 Sorry about these questions.  I am used to working with DBs with less
 then 10K rows and now I am working with tables with over 500K rows
 which seems to be changing a lot for me.  I was hoping I can get some
 people's advice.

 I have a 'companies' table with over 500K rows and a
 'purchased_services' table with over 650K rows.

 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

 purchased_services.company_id has an index on it.

 The following query takes over 3 seconds:
 SELECT count(id) FROM companies;

 To me the time it takes to run these queries makes no sense.  I would
 imagine both of these queries would take a fraction of a second.

 When running some of these queries and looking at 'SHOW processlist' I
 was getting a lot of 'Writing to tmp'.

 My DB engine is InnoDB.  I am running this on my laptop that is a
 PowerBook 867 with 756 MB of Ram.

 Feedback and insight would be greatly appreciated.

 Thanks my friends! :-)

 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info




--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread Dan Nelson
In the last episode (Nov 25), John Kopanas said:
 Sorry about these questions.  I am used to working with DBs with less
 then 10K rows and now I am working with tables with over 500K rows
 which seems to be changing a lot for me.  I was hoping I can get some
 people's advice.
 
 I have a 'companies' table with over 500K rows and a
 'purchased_services' table with over 650K rows.
 
 The following query takes over 6 seconds:
 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
 535263)

How many rows does it return, and what's its explain plan?
 
 purchased_services.company_id has an index on it.
 
 The following query takes over 3 seconds:
 SELECT count(id) FROM companies;

An explain plan here would be useful too.
 
 To me the time it takes to run these queries makes no sense.  I would
 imagine both of these queries would take a fraction of a second.
 
 When running some of these queries and looking at 'SHOW processlist'
 I was getting a lot of 'Writing to tmp'.

That doesn't make much sense, since both queries should simply be doing
index scans (or full table scans depending on how many rows are
expected to match in the first query, but the explain plans will show
that).
 
 My DB engine is InnoDB.  I am running this on my laptop that is a
 PowerBook 867 with 756 MB of Ram.

What's your innodb_buffer_pool_size set to?

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Re: Re: Performance Problems With Two Tables With Over 500K Rows

2006-11-25 Thread Dan Buettner

This kind of timeframe (2 - 2.5 secs) could just be the result of
running on a laptop.  You've got a small amount of RAM compared to
many servers, a bit slower processor, and *much* slower hard disk
system than most servers.  If your query has to access multiple
records spread out throughout the table off a slow laptop disk, this
makes sense to me.

Do you normally run this database on a real server in production?
Is response time better?

Still, it does seem a bit slow, even for an 867 MHz laptop, and seeing
the output of an EXPLAIN might be helpful.

Also, to answer your question about the speed of selecting the count
of rows in a table - MyISAM always knows exactly how many rows are in
a table, so it answers quickly without checking the data.  InnoDB does
not keep track of how many rows are in a table, so it has to go count
them when you do a SELECT COUNT(*) or in your case SELECT COUNT(id).
That takes a little bit of time.

Dan


On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:

If I just SELECT id:
SELECT id FROM purchased_services WHERE (company_id = 1000)

It takes approx 2-2.5s.  When I look at the process list it looks like
that it's state seems to always be in sending data...

This is after killing the db and repopulating it again.  So what is going on?

On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
 I tried the same tests with the database replicated in a MyISAM
 engine.  The count was instantaneous but the following still took
 3-6seconds:

 SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)

 The following though was instantaneous:

 SELECT * FROM purchased_services WHERE (id = 1000)

 This is the result from my SHOW INDEX FROM purchased_services:

 
+++-+--+-+---+-+--++--++-+
 | Table  | Non_unique | Key_name
  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part |
 Packed | Null | Index_type | Comment |
 
+++-+--+-+---+-+--++--++-+
 | purchased_services |  0 | PRIMARY
  |1 | id  | A |  627546 | NULL |
 NULL   |  | BTREE  | |
 | purchased_services |  1 |
 purchased_services_company_id_index |1 | company_id  | A
   |  627546 | NULL | NULL   | YES  | BTREE  |
 |
 
+++-+--+-+---+-+--++--++-+

 So I do have the necessary index.  I am so confused  Argh...

 Your Friend,

 John


 On 11/25/06, John Kopanas [EMAIL PROTECTED] wrote:
  Sorry about these questions.  I am used to working with DBs with less
  then 10K rows and now I am working with tables with over 500K rows
  which seems to be changing a lot for me.  I was hoping I can get some
  people's advice.
 
  I have a 'companies' table with over 500K rows and a
  'purchased_services' table with over 650K rows.
 
  The following query takes over 6 seconds:
  SELECT * FROM purchased_services WHERE (purchased_services.company_id = 
535263)
 
  purchased_services.company_id has an index on it.
 
  The following query takes over 3 seconds:
  SELECT count(id) FROM companies;
 
  To me the time it takes to run these queries makes no sense.  I would
  imagine both of these queries would take a fraction of a second.
 
  When running some of these queries and looking at 'SHOW processlist' I
  was getting a lot of 'Writing to tmp'.
 
  My DB engine is InnoDB.  I am running this on my laptop that is a
  PowerBook 867 with 756 MB of Ram.
 
  Feedback and insight would be greatly appreciated.
 
  Thanks my friends! :-)
 
  --
  John Kopanas
  [EMAIL PROTECTED]
 
  http://www.kopanas.com
  http://www.cusec.net
  http://www.soen.info
 


 --
 John Kopanas
 [EMAIL PROTECTED]

 http://www.kopanas.com
 http://www.cusec.net
 http://www.soen.info



--
John Kopanas
[EMAIL PROTECTED]

http://www.kopanas.com
http://www.cusec.net
http://www.soen.info

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql performance problems.

2006-03-31 Thread Kishore Jalleda
As others have suggested, turn your slow query log on in my.cnf and set
your long_query_time; you can then view your slow queries in the *.log file
in your data dir and try to optimize them. You could also try mytop (
http://jeremy.zawodny.com/mysql/mytop/) and check your queries in real
time, and also check SHOW FULL PROCESSLIST to see what state the queries are
in.

Kishore Jalleda
http://kjalleda.googlepages.com/projects


On 3/29/06, Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:


 After a 23days of running mysql, I have a 3GB database. When I use an
 application
 called base(v.1.2.2) a web based intrusion detection analysis console, the
 mysqld utilization
 shoots up to over 90% and stays there until the application times out or
 is terminated.

 Question: Have I made some error in configuration?

 When I don't run the application base, mysqld utilization is between
 30-50%.
 Question: What hardware do I need to speed up queries?

 Question: How do determine if the query is the problem?

 Data:
 I used my-large.cnf as the basis of my.cnf.

 Hardware and OS info:
 ...
 FreeBSD 6.0-RELEASE-p5 #0:
 ...
 CPU: Intel Pentium III (997.46-MHz 686-class CPU)
 Origin = GenuineIntel  Id = 0x68a  Stepping = 10

 Features=0x383fbffFPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE
 real memory  = 1073676288 (1023 MB)
 avail memory = 1041784832 (993 MB)


 Observations:
 Disk Space used:
 du -am /var/db/mysql | sort -nr | head -20
 5259mysql/
 3055mysql/snort
 2184mysql/snort_archive
 1546mysql/snort_archive/data.MYD
 1546mysql/snort/data.MYD
 560 mysql/snort/acid_event.MYI
 311 mysql/snort/acid_event.MYD
 132 mysql/snort_archive/event.MYI
 132 mysql/snort/event.MYI
 116 mysql/snort_archive/iphdr.MYI
 116 mysql/snort/iphdr.MYI
 112 mysql/snort_archive/iphdr.MYD
 112 mysql/snort/iphdr.MYD
 74  mysql/snort_archive/event.MYD
 74  mysql/snort/event.MYD
 42  mysql/snort_archive/data.MYI
 42  mysql/snort/data.MYI
 40  mysql/snort_archive/icmphdr.MYI
 40  mysql/snort/icmphdr.MYI
 35  mysql/snort_archive/icmphdr.MYD
 ...
  snort is 3GB
  snort_archive is 2GB(snort_archive acid and base tables have not been
 built that is why snort archive is smaller)

 When the application searches the database, the mysqld utilization goes up
 to over 90% until the application
 times out.

 top
 last pid: 44263;  load averages:  0.95,  0.89,  0.76  up
 25+23:49:4416:07:17
 49 processes:  2 running, 47 sleeping

 Mem: 173M Active, 608M Inact, 186M Wired, 29M Cache, 111M Buf, 1660K Free
 Swap: 2048M Total, 156K Used, 2048M Free


 PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
 31890 mysql  15  200   103M 79032K kserel 768:38 93.46% mysqld
 49138 www 1   40 17432K 12848K accept   0:23  0.00% httpd
 46759 www 1  200 16584K 12084K lockf0:21  0.00% httpd
 46764 www 1   40 16632K 12072K accept   0:21  0.00% httpd
 46763 www 1   40 16580K 12012K accept   0:20  0.00% httpd
 46760 www 1   40 17452K 12872K accept   0:19  0.00% httpd
 46762 www 1   40 16568K 12000K accept   0:19  0.00% httpd
 46761 www 1   40 16608K 12088K sbwait   0:17  0.00% httpd
 68456 www 1   40 16572K 11980K accept   0:17  0.00% httpd
 68457 www 1   40 16724K 11824K accept   0:17  0.00% httpd
 68458 www 1   40 16980K 11920K accept   0:17  0.00% httpd

 Processes that run in the background:
 I run   an update  process  in the background with hope that if I
 process  the alerts from the snort table on a regular basis.o
 I won't have process a large number( 44,000) alerts first thing in the
 morning.
 The update process inserts records into the acid table
 that result from the join of certain fields from the snort tables.
 (Schema at
 http://www.andrew.cmu.edu/user/rdanyliw/snort/acid_db_er_v102.html )

 rabid# cat /var/log/base-update.2006-03-28.log
 2006-03-28, 00:05:00, Added 3808 alert(s) to the Alert cache
 2006-03-28, 01:05:00, Added 5855 alert(s) to the Alert cache
 2006-03-28, 02:05:00, Added 4096 alert(s) to the Alert cache
 2006-03-28, 03:05:00, Added 4473 alert(s) to the Alert cache
 2006-03-28, 04:05:00, Added 4378 alert(s) to the Alert cache
 2006-03-28, 05:05:00, Added 4087 alert(s) to the Alert cache
 2006-03-28, 06:05:00, Added 5163 alert(s) to the Alert cache
 2006-03-28, 07:05:00, Added 4789 alert(s) to the Alert cache
 2006-03-28, 08:05:00, Added 4411 alert(s) to the Alert cache
 2006-03-28, 09:05:00, Added 4830 alert(s) to the Alert cache
 2006-03-28, 10:05:00, Added 4739 alert(s) to the Alert cache
 2006-03-28, 11:05:00, Added 5360 alert(s) to the Alert cache
 2006-03-28, 12:05:00, Added 7305 alert(s) to the Alert cache
 2006-03-28, 13:05:00, Added 8481 alert(s) to the Alert cache
 2006-03-28, 14:05:00, Added 60731 alert(s) to the Alert cache
 2006-03-28, 15:05:00, Added 44328 

Re: mysql performance problems.

2006-03-30 Thread Prasanna Raj

Is that query the problem?

Then turn on your slow query log and try optimizing those slow queries.

Post your queries and table descriptions for further help :)

--Praj

On Wed, 29 Mar 2006 12:33:20 -0500
Jacob, Raymond A Jr [EMAIL PROTECTED] wrote:

 
 After a 23days of running mysql, I have a 3GB database. When I use an 
 application
 called base(v.1.2.2) a web based intrusion detection analysis console, the 
 mysqld utilization
 shoots up to over 90% and stays there until the application times out or is 
 terminated.
 
 Question: Have I made some error in configuration? 
 

mysql performance problems.

2006-03-29 Thread Jacob, Raymond A Jr

After 23 days of running MySQL, I have a 3GB database. When I use an application 
called BASE (v1.2.2), a web-based intrusion detection analysis console, mysqld 
utilization shoots up to over 90% and stays there until the application times out 
or is terminated.

Question: Have I made some error in configuration? 

When I don't run the application base, mysqld utilization is between 30-50%.
Question: What hardware do I need to speed up queries?

Question: How do I determine if the query is the problem?
 
Data:
I used my-large.cnf as the basis of my.cnf.

Hardware and OS info:
...
FreeBSD 6.0-RELEASE-p5 #0: 
...
CPU: Intel Pentium III (997.46-MHz 686-class CPU)
  Origin = GenuineIntel  Id = 0x68a  Stepping = 10
  
Features=0x383fbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE>
real memory  = 1073676288 (1023 MB)
avail memory = 1041784832 (993 MB)


Observations:
Disk Space used:
du -am /var/db/mysql | sort -nr | head -20
5259  mysql/
3055  mysql/snort
2184  mysql/snort_archive
1546  mysql/snort_archive/data.MYD
1546  mysql/snort/data.MYD
560   mysql/snort/acid_event.MYI
311   mysql/snort/acid_event.MYD
132   mysql/snort_archive/event.MYI
132   mysql/snort/event.MYI
116   mysql/snort_archive/iphdr.MYI
116   mysql/snort/iphdr.MYI
112   mysql/snort_archive/iphdr.MYD
112   mysql/snort/iphdr.MYD
74    mysql/snort_archive/event.MYD
74    mysql/snort/event.MYD
42    mysql/snort_archive/data.MYI
42    mysql/snort/data.MYI
40    mysql/snort_archive/icmphdr.MYI
40    mysql/snort/icmphdr.MYI
35    mysql/snort_archive/icmphdr.MYD
...
 snort is 3GB; snort_archive is 2GB (the snort_archive acid and base tables have 
 not been built yet, which is why snort_archive is smaller).

When the application searches the database, the mysqld utilization goes up to 
over 90% until the application
times out. 

top
last pid: 44263;  load averages:  0.95,  0.89,  0.76  up 25+23:49:44  16:07:17
49 processes:  2 running, 47 sleeping

Mem: 173M Active, 608M Inact, 186M Wired, 29M Cache, 111M Buf, 1660K Free
Swap: 2048M Total, 156K Used, 2048M Free


  PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
31890 mysql  15  200   103M 79032K kserel 768:38 93.46% mysqld
49138 www 1   40 17432K 12848K accept   0:23  0.00% httpd
46759 www 1  200 16584K 12084K lockf0:21  0.00% httpd
46764 www 1   40 16632K 12072K accept   0:21  0.00% httpd
46763 www 1   40 16580K 12012K accept   0:20  0.00% httpd
46760 www 1   40 17452K 12872K accept   0:19  0.00% httpd
46762 www 1   40 16568K 12000K accept   0:19  0.00% httpd
46761 www 1   40 16608K 12088K sbwait   0:17  0.00% httpd
68456 www 1   40 16572K 11980K accept   0:17  0.00% httpd
68457 www 1   40 16724K 11824K accept   0:17  0.00% httpd
68458 www 1   40 16980K 11920K accept   0:17  0.00% httpd

Processes that run in the background:
I run an update process in the background in the hope that if I process the 
alerts from the snort table on a regular basis, I won't have to process a large 
number (44,000) of alerts first thing in the morning.
The update process inserts records into the acid table
that result from the join of certain fields from the snort tables.
(Schema at http://www.andrew.cmu.edu/user/rdanyliw/snort/acid_db_er_v102.html )

rabid# cat /var/log/base-update.2006-03-28.log 
2006-03-28, 00:05:00, Added 3808 alert(s) to the Alert cache
2006-03-28, 01:05:00, Added 5855 alert(s) to the Alert cache
2006-03-28, 02:05:00, Added 4096 alert(s) to the Alert cache
2006-03-28, 03:05:00, Added 4473 alert(s) to the Alert cache
2006-03-28, 04:05:00, Added 4378 alert(s) to the Alert cache
2006-03-28, 05:05:00, Added 4087 alert(s) to the Alert cache
2006-03-28, 06:05:00, Added 5163 alert(s) to the Alert cache
2006-03-28, 07:05:00, Added 4789 alert(s) to the Alert cache
2006-03-28, 08:05:00, Added 4411 alert(s) to the Alert cache
2006-03-28, 09:05:00, Added 4830 alert(s) to the Alert cache
2006-03-28, 10:05:00, Added 4739 alert(s) to the Alert cache
2006-03-28, 11:05:00, Added 5360 alert(s) to the Alert cache
2006-03-28, 12:05:00, Added 7305 alert(s) to the Alert cache
2006-03-28, 13:05:00, Added 8481 alert(s) to the Alert cache
2006-03-28, 14:05:00, Added 60731 alert(s) to the Alert cache
2006-03-28, 15:05:00, Added 44328 alert(s) to the Alert cache
2006-03-28, 15:50:00, Added 13742 alert(s) to the Alert cache
2006-03-28, 15:55:00, Added 607 alert(s) to the Alert cache
2006-03-28, 16:00:00, Added 938 alert(s) to the Alert cache
2006-03-28, 16:05:00, Added 1348 alert(s) to the Alert cache
2006-03-28, 16:10:02, Added 687 alert(s) to the Alert cache
2006-03-28, 16:15:00, Added 1168 alert(s) to the Alert cache
2006-03-28, 16:20:00, Added 1760 alert(s) to the Alert cache
2006-03-28, 16:25:00, Added 814 alert(s) to the Alert cache
2006-03-28, 16:30:01, Added 617 alert(s) to the Alert cache
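
A side note on the "how do I determine if the query is the problem" question; what
follows is only a sketch and is not part of the original post. While mysqld is
pegged above 90%, the running statement can be caught and then explained:

  SHOW FULL PROCESSLIST;
  -- take the long-running statement the processlist shows and prefix it
  -- with EXPLAIN; the query text below is purely hypothetical:
  EXPLAIN SELECT COUNT(*) FROM acid_event WHERE timestamp > '2006-03-27 00:00:00';

If EXPLAIN reports type ALL (a full table scan) against the large snort/acid
tables, the query or a missing index is the bottleneck rather than the hardware;
the slow query log (log-slow-queries in my.cnf) records the same offenders over time.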

Re: mysql performance problems.

2006-03-29 Thread walt

Jacob, Raymond A Jr wrote:


After a 23days of running mysql, I have a 3GB database. When I use an 
application
called base(v.1.2.2) a web based intrusion detection analysis console, the 
mysqld utilization
shoots up to over 90% and stays there until the application times out or is 
terminated.

Question: Have I made some error in configuration? 


When I don't run the application base, mysqld utilization is between 30-50%.
Question: What hardware do I need to speed up queries?

Question: How do determine if the query is the problem?

Data:
I used my-large.cnf as the basis of my.cnf.

Hardware and OS info:
...
FreeBSD 6.0-RELEASE-p5 #0: 
...

CPU: Intel Pentium III (997.46-MHz 686-class CPU)
 Origin = GenuineIntel  Id = 0x68a  Stepping = 10
 
Features=0x383fbffFPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,MMX,FXSR,SSE
real memory  = 1073676288 (1023 MB)
avail memory = 1041784832 (993 MB)


Observations:
Disk Space used:
du -am /var/db/mysql | sort -nr | head -20
5259mysql/
3055mysql/snort
2184mysql/snort_archive
1546mysql/snort_archive/data.MYD
1546mysql/snort/data.MYD
560 mysql/snort/acid_event.MYI
311 mysql/snort/acid_event.MYD
132 mysql/snort_archive/event.MYI
132 mysql/snort/event.MYI
116 mysql/snort_archive/iphdr.MYI
116 mysql/snort/iphdr.MYI
112 mysql/snort_archive/iphdr.MYD
112 mysql/snort/iphdr.MYD
74  mysql/snort_archive/event.MYD
74  mysql/snort/event.MYD
42  mysql/snort_archive/data.MYI
42  mysql/snort/data.MYI
40  mysql/snort_archive/icmphdr.MYI
40  mysql/snort/icmphdr.MYI
35  mysql/snort_archive/icmphdr.MYD
...
 


snort is 3GB
snort_archive is 2GB(snort_archive acid and base tables have not been built 
that is why snort archive is smaller)
 



When the application searches the database, the mysqld utilization goes up to 
over 90% until the application
times out. 


top
last pid: 44263;  load averages:  0.95,  0.89,  0.76  up 25+23:49:4416:07:17
49 processes:  2 running, 47 sleeping

Mem: 173M Active, 608M Inact, 186M Wired, 29M Cache, 111M Buf, 1660K Free
Swap: 2048M Total, 156K Used, 2048M Free


 PID USERNAME  THR PRI NICE   SIZERES STATETIME   WCPU COMMAND
31890 mysql  15  200   103M 79032K kserel 768:38 93.46% mysqld
49138 www 1   40 17432K 12848K accept   0:23  0.00% httpd
46759 www 1  200 16584K 12084K lockf0:21  0.00% httpd
46764 www 1   40 16632K 12072K accept   0:21  0.00% httpd
46763 www 1   40 16580K 12012K accept   0:20  0.00% httpd
46760 www 1   40 17452K 12872K accept   0:19  0.00% httpd
46762 www 1   40 16568K 12000K accept   0:19  0.00% httpd
46761 www 1   40 16608K 12088K sbwait   0:17  0.00% httpd
68456 www 1   40 16572K 11980K accept   0:17  0.00% httpd
68457 www 1   40 16724K 11824K accept   0:17  0.00% httpd
68458 www 1   40 16980K 11920K accept   0:17  0.00% httpd

Processes that run in the background:
I run   an update  process  in the background with hope that if I  
process  the alerts from the snort table on a regular basis.o 
I won't have process a large number( 44,000) alerts first thing in the morning.

The update process inserts records into the acid table
that result from the join of certain fields from the snort tables.
(Schema at http://www.andrew.cmu.edu/user/rdanyliw/snort/acid_db_er_v102.html )

rabid# cat /var/log/base-update.2006-03-28.log 
2006-03-28, 00:05:00, Added 3808 alert(s) to the Alert cache

2006-03-28, 01:05:00, Added 5855 alert(s) to the Alert cache
2006-03-28, 02:05:00, Added 4096 alert(s) to the Alert cache
2006-03-28, 03:05:00, Added 4473 alert(s) to the Alert cache
2006-03-28, 04:05:00, Added 4378 alert(s) to the Alert cache
2006-03-28, 05:05:00, Added 4087 alert(s) to the Alert cache
2006-03-28, 06:05:00, Added 5163 alert(s) to the Alert cache
2006-03-28, 07:05:00, Added 4789 alert(s) to the Alert cache
2006-03-28, 08:05:00, Added 4411 alert(s) to the Alert cache
2006-03-28, 09:05:00, Added 4830 alert(s) to the Alert cache
2006-03-28, 10:05:00, Added 4739 alert(s) to the Alert cache
2006-03-28, 11:05:00, Added 5360 alert(s) to the Alert cache
2006-03-28, 12:05:00, Added 7305 alert(s) to the Alert cache
2006-03-28, 13:05:00, Added 8481 alert(s) to the Alert cache
2006-03-28, 14:05:00, Added 60731 alert(s) to the Alert cache
2006-03-28, 15:05:00, Added 44328 alert(s) to the Alert cache
2006-03-28, 15:50:00, Added 13742 alert(s) to the Alert cache
2006-03-28, 15:55:00, Added 607 alert(s) to the Alert cache
2006-03-28, 16:00:00, Added 938 alert(s) to the Alert cache
2006-03-28, 16:05:00, Added 1348 alert(s) to the Alert cache
2006-03-28, 16:10:02, Added 687 alert(s) to the Alert cache
2006-03-28, 16:15:00, Added 1168 alert(s) to the Alert cache
2006-03-28, 16:20:00, Added 1760 alert(s) to the Alert cache
2006-03-28, 16:25:00, Added 814 alert(s) to the Alert cache
2006-03-28, 16:30:01, Added 

Calling MySQL often - risk of performance problems?

2005-12-02 Thread zzapper
Hi,
When I started out I used to do a single query and store the data in a Perl/PHP 
data structure.

I've noticed with time that I'm treating MySQL as though it were part of 
PHP/Perl, i.e. I call a MySQL primitive every time I need to read a table or 
lookup table. I develop shopping-basket/CMS systems for smaller client websites, 
where speed is not the top criterion.

What are your views/experience?


-- 
zzapper
Success for Techies and Vim,Zsh tips
http://SuccessTheory.com/


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



mySQL Performance Problems - Help !!

2005-08-29 Thread Callum McGillivray

Hi all,

I'm pretty new to the list, so please be kind :)

I'm having serious problems with our core mysql server.

We are running a Dell Poweredge 2850 with dual Xeon 3.0 processors, RAID 
5 and 1Gb memory.


There are 3 main databases running on this machine, one is a freeradius 
database, one is for vpopmail and the other is for cacti (MRTG based tool).


At the moment, we find that, suddenly, our mail server or cacti server 
will establish several hundred connections to the database that just 
seem to sit in an unauthenticated state.


E.G.
| 6452 | unauthenticated user | xxx.xxx.xxx.xxx:40923 | NULL 
| Connect | NULL | login | NULL |
| 6453 | unauthenticated user | xxx.xxx.xxx.xxx:40924 | NULL 
| Connect | NULL | login | NULL |


The server starts dragging its heels, and mail and cacti virtually stop 
altogether.  There does not seem to be any unusual CPU / I/O or memory 
usage when this happens.


Can anyone point me in the right direction here ?

I'm more than happy to provide any configuration information that is 
required - but this is killing us and I need to know what is going on.


We have run this server for several years (recently we upgraded the 
hardware without problems) and we have never experienced anything like this.


I am at my wits end and not having had any formal training in mySQL 
servers, I simply don't know what I should be looking at next.


Please - if anyone can give me any insight at all on this it would be 
MOST appreciated.


Warm regards,

Callum

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mySQL Performance Problems - Help !!

2005-08-29 Thread Xiaodong Deng
In my.cnf, add skip-name-resolve under the [mysqld] section.


On 8/29/05, Callum McGillivray [EMAIL PROTECTED] wrote:
 Hi all,
 
 I'm pretty new to the list, so please be kind :)
 
 I'm having serious problems with our core mysql server.
 
 We are running a Dell Poweredge 2850 with dual Xeon 3.0 processors, RAID
 5 and 1Gb memory.
 
 There are 3 main databases running on this machine, one is a freeradius
 database, one is for vpopmail and the other is for cacti (MRTG based tool).
 
 At the moment, we find the suddenly, our mail server or cacti server
 will establish several hundred connections to the database that just
 seem to be sit in an unauthenticated state.
 
 E.G.
 | 6452 | unauthenticated user | xxx.xxx.xxx.xxx:40923 | NULL
 | Connect | NULL | login | NULL |
 | 6453 | unauthenticated user | xxx.xxx.xxx.xxx:40924 | NULL
 | Connect | NULL | login | NULL |
 
 The server starts dragging it's heals and mail and cacti virtually stop
 altogether.  There does not seem to be any unusual CPU / I/O or memory
 usage when this happens.
 
 Can anyone point me in the right direction here ?
 
 I'm more than happy to provide any configuration information that is
 required - but this is killing us and I need to know what is going on.
 
 We have run this server for several years (recently we upgraded the
 hardware without problems) and we have never experienced anything like this.
 
 I am at my wits end and not having had any formal training in mySQL
 servers, I simply don't know what I should be looking at next.
 
 Please - if anyone can give me any insight at all on this it would be
 MOST appreciated.
 
 Warm regards,
 
 Callum
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]
 
 


-- 
Best Wishes,

Xiaodong

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mySQL Performance Problems - Help !!

2005-08-29 Thread Brent Baisley
If you suddenly are spiking in unauthenticated connections, you may  
be the target of a network attack. This could be just a random probe,  
you may be a random target or someone may be targeting you. Although  
if someone were specifically targeting you, you would probably be down.


I would check where the connections are coming from, especially if  
this amount of traffic is not typical. Perhaps you can set a filter  
on your router to limit the number of connections from that IP range  
or deny that range altogether. Normally you don't allow direct access  
to MySQL from the outside; access is usually done through Sendmail/Postfix  
or Apache (PHP, Perl, etc.), but your requirements may be  
different.


On Aug 29, 2005, at 4:58 AM, Callum McGillivray wrote:


Hi all,

I'm pretty new to the list, so please be kind :)

I'm having serious problems with our core mysql server.

We are running a Dell Poweredge 2850 with dual Xeon 3.0 processors,  
RAID 5 and 1Gb memory.


There are 3 main databases running on this machine, one is a  
freeradius database, one is for vpopmail and the other is for cacti  
(MRTG based tool).


At the moment, we find the suddenly, our mail server or cacti  
server will establish several hundred connections to the database  
that just seem to be sit in an unauthenticated state.


E.G.
| 6452 | unauthenticated user | xxx.xxx.xxx.xxx:40923 |  
NULL | Connect | NULL | login | NULL |
| 6453 | unauthenticated user | xxx.xxx.xxx.xxx:40924 |  
NULL | Connect | NULL | login | NULL |


The server starts dragging it's heals and mail and cacti virtually  
stop altogether.  There does not seem to be any unusual CPU / I/O  
or memory usage when this happens.


Can anyone point me in the right direction here ?

I'm more than happy to provide any configuration information that  
is required - but this is killing us and I need to know what is  
going on.


We have run this server for several years (recently we upgraded the  
hardware without problems) and we have never experienced anything  
like this.


I am at my wits end and not having had any formal training in mySQL  
servers, I simply don't know what I should be looking at next.


Please - if anyone can give me any insight at all on this it would  
be MOST appreciated.


Warm regards,

Callum

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql? 
[EMAIL PROTECTED]






--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance problems through gateway

2005-06-06 Thread Philippe Poelvoorde

Hi,



The performance of the data transfers using the direct socket connection
goes from 15 milli sec (in the lab) to ~32 milli sec (in pseudo
production env). But the database calls go from 1 sec to several
seconds (have not measured this yet). The database was exactly the same
in both trials. We are moving small amounts of data (100 bytes) in any
query.


Bogus ethernet cards or network equipment? Last year one of the ethernet cards 
on our firewall started to produce errors, resulting in a really slow transfer 
rate and long latency. Could this apply to you?





Does this shed any light?




Celona, Paul - AES wrote:



I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql


database


on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and server, the
application performs well and data is transferred quickly. In the
deployed environment with a pair of gateways in between, socket
performance is not affected much, but the application gui bogs down on
the database queries. Performance is so slow that some simple GUI
updates take up to 5-7 seconds with only a simple 1 table update
occurring.



Does anyone have experience with this and/or can provide some insight?





From: gerald_clark
If your applet is making connections on each page, you might be having

reverse dns problems.

From: Shawn Green
It sounds like you don't have all of your indexes declared on your
production database.


There could also be an issue of network lag between your application
server and your database server. The best performing applications use
the fewest trips to the database to accomplish what they need. You may
want to examine your application design and minimize the number of trips
you make to the server.


For example, assume you run two queries, one to get a list of
departments and another to list the people in each department. If you
design your application to perform one lookup to get the departments
list then loop through that list to find the department's people, you
are making way too many trips to the database. A more efficient design
is to JOIN the two tables and submit just one query. Then, as you
process the results, you detect when the Department value changes and
adjust your OUTPUT accordingly.


Could it be the volume of data you are trying to present is just that
much larger with your production data set than it was with your
development dataset that it's taking that much longer to format the
output?


You provided so FEW details of the actual issue, it's VERY hard to be
more helpful. Can you provide more details of what isn't working the way
you want and why?


Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine








--
Philippe Poelvoorde
COS Trading Ltd.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance problems through gateway

2005-06-03 Thread Celona, Paul - AES

I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and server, the
application performs well and data is transferred quickly. In the
deployed environment with a pair of gateways in between, socket
performance is not affected much, but the application gui bogs down on
the database queries. Performance is so slow that some simple GUI
updates take up to 5-7 seconds with only a simple 1 table update
occurring.



Does anyone have experience with this and/or can provide some insight?




This e-mail and any files transmitted with it are proprietary and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this e-mail in error please notify the sender. Please note 
that any views or opinions presented in this e-mail are solely those of the 
author and do not necessarily represent those of ITT Industries, Inc. The 
recipient should check this e-mail and any attachments for the presence of 
viruses. ITT Industries accepts no liability for any damage caused by any virus 
transmitted by this e-mail.


Re: Performance problems through gateway

2005-06-03 Thread SGreen
Celona, Paul - AES [EMAIL PROTECTED] wrote on 06/03/2005 01:03:18 
PM:

 I am running mysql 4.0.18 on Windows 2003 server which also hosts my
 apache tomcat server. My applet makes a connection to the mysql database
 on the server as well as a socket connection to a service on the same
 server. In the lab with only a hub between the client and server, the
 application performs well and data is transferred quickly. In the
 deployed environment with a pair of gateways in between, socket
 performance is not affected much, but the application gui bogs down on
 the database queries. Performance is so slow that some simple GUI
 updates take up to 5-7 seconds with only a simple 1 table update
 occurring.

 
 Does anyone have experience with this and/or can provide some insight?

It sounds like you don't have all of your indexes declared on your 
production database. 

There could also be an issue of network lag between your application 
server and your database server. The best performing applications use the 
fewest trips to the database to accomplish what they need. You may want to 
examine your application design and minimize the number of trips you make 
to the server.

For example, assume you run two queries, one to get a list of departments 
and another to list the people in each department. If you design your 
application to perform one lookup to get the departments list then loop 
through that list to find the department's people, you are making way too 
many trips to the database. A more efficient design is to JOIN the two 
tables and submit just one query. Then, as you process the results, you 
detect when the Department value changes and adjust your OUTPUT 
accordingly.
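
A minimal sketch of that single-query shape (table and column names are invented 
for the example, not taken from Shawn's mail):

  SELECT d.dept_name, p.person_name
  FROM departments AS d
  JOIN people AS p ON p.dept_id = d.dept_id
  ORDER BY d.dept_name, p.person_name;

One round trip returns everything; the application then just watches for 
dept_name to change while walking the result set, instead of issuing one query 
per department.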

Could it be the volume of data you are trying to present is just that much 
larger with your production data set than it was with your development 
dataset that it's taking that much longer to format the output?

You provided so FEW details of the actual issue, it's VERY hard to be more 
helpful. Can you provide more details of what isn't working the way you 
want and why?

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine


Re: Performance problems through gateway

2005-06-03 Thread gerald_clark

Celona, Paul - AES wrote:


I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and server, the
application performs well and data is transferred quickly. In the
deployed environment with a pair of gateways in between, socket
performance is not affected much, but the application gui bogs down on
the database queries. Performance is so slow that some simple GUI
updates take up to 5-7 seconds with only a simple 1 table update
occurring.



Does anyone have experience with this and/or can provide some insight?
 

If your applet is making connections on each page, you might be having 
reverse DNS problems.

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Performance problems through gateway

2005-06-03 Thread Celona, Paul - AES

For further clarification, what we are observing is that pull down lists
(which are already built on the GUI) take a long time to complete
processing. The processing we are performing upon user selection is
taking the selected element, updating 1 database column in 1 table with
the value, and then writing a string (timestamp plus about 30 chars) to
our event log table. So we have no selects, just one update and one
insert.
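
Spelled out, the per-selection work is roughly the following (table and column 
names are hypothetical; only the shape, one UPDATE plus one INSERT, comes from 
the description above):

  UPDATE gui_state SET selected_value = 'chosen item' WHERE state_id = 1;
  INSERT INTO event_log (logged_at, message) VALUES (NOW(), 'operator selected: chosen item');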

The performance of the data transfers using the direct socket connection
goes from 15 milli sec (in the lab) to ~32 milli sec (in pseudo
production env). But the database calls go from 1 sec to several
seconds (have not measured this yet). The database was exactly the same
in both trials. We are moving small amounts of data (100 bytes) in any
query.

Does this shed any light?




Celona, Paul - AES wrote:

I am running mysql 4.0.18 on Windows 2003 server which also hosts my
apache tomcat server. My applet makes a connection to the mysql
database
on the server as well as a socket connection to a service on the same
server. In the lab with only a hub between the client and server, the
application performs well and data is transferred quickly. In the
deployed environment with a pair of gateways in between, socket
performance is not affected much, but the application gui bogs down on
the database queries. Performance is so slow that some simple GUI
updates take up to 5-7 seconds with only a simple 1 table update
occurring.



Does anyone have experience with this and/or can provide some insight?
 


From: gerald_clark
If your applet is making connections on each page, you might be having
reverse dns problems.

From: Shawn Green
It sounds like you don't have all of your indexes declared on your
production database.

There could also be an issue of network lag between your application
server and your database server. The best performing applications use
the fewest trips to the database to accomplish what they need. You may
want to examine your application design and minimize the number of trips
you make to the server.

For example, assume you run two queries, one to get a list of
departments and another to list the people in each department. If you
design your application to perform one lookup to get the departments
list then loop through that list to find the department's people, you
are making way too many trips to the database. A more efficient design
is to JOIN the two tables and submit just one query. Then, as you
process the results, you detect when the Department value changes and
adjust your OUTPUT accordingly.

Could it be the volume of data you are trying to present is just that
much larger with your production data set than it was with your
development dataset that it's taking that much longer to format the
output?

You provided so FEW details of the actual issue, it's VERY hard to be
more helpful. Can you provide more details of what isn't working the way
you want and why?

Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



This e-mail and any files transmitted with it are proprietary and intended 
solely for the use of the individual or entity to whom they are addressed. If 
you have received this e-mail in error please notify the sender. Please note 
that any views or opinions presented in this e-mail are solely those of the 
author and do not necessarily represent those of ITT Industries, Inc. The 
recipient should check this e-mail and any attachments for the presence of 
viruses. ITT Industries accepts no liability for any damage caused by any virus 
transmitted by this e-mail.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance problems

2004-01-30 Thread Matthew Stuart
I have built a web site and I am testing it locally on my PC. Testing 
through Internet Explorer is awfully slow and most of the time I am 
getting error 'ASP 0113: Script timed out'. The table I am calling 
records from is quite text-heavy (a few hundred to 1,000+ words per 

I have built a search facility too and in doing so I have added indexes 
to the table to try to make the search results appear quicker.

I have tried selects within the MySQL command window and they are much 
quicker. For example I have selected all from the table in question and 
it returned every record (2,000 of them) in 8.9 seconds. Selecting 
specific records takes fractions of a second. The web pages are taking 
minutes before timing out.

My system is a 1GHz processor, 650MB RAM, Windows ME running Personal Web 
Server and MySQL 4.0.15.

Any advice on how I might be able to improve performance with what I 
have would be appreciated.

TIA

Mat

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
Hi,

On Tue, Dec 16, 2003 at 10:23:05PM +1100, Chris Nolan wrote : 
 How heavy is your usage of TEMPORARY TABLES? I don't use them much
 myself, but I'm sure that the others on the list will have something
 to say in that regard.

Here are the relevant numbers:

  Created_tmp_disk_tables| 21911
  Created_tmp_tables | 329778

This is from an uptime of about 40 hours which makes the
Created_tmp_disk_tables around 6% during this time.

 To get a better look at MySQL's usage of memory, you could try looking 
 at the output of
 SHOW STATUS .

I don't want to look silly, but actually having the show status
output in front of me doesn't tell me much about the memory usage;
at least I can't find it ;)

There are things like Qcache_lowmem_prunes and Qcache_free_memory, but
these are the only ones related to memory. Can you advise me where to
look, exactly?
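
For what it's worth, a sketch of where such numbers usually live (the specific 
statements below are guesses at what to check, not something from this thread): 
most of the fixed allocations show up under SHOW VARIABLES rather than SHOW STATUS, e.g.

  SHOW VARIABLES LIKE 'key_buffer_size';
  SHOW VARIABLES LIKE '%buffer_size';
  SHOW VARIABLES LIKE 'tmp_table_size';
  SHOW STATUS LIKE 'Created_tmp%';

with the rough and very approximate rule that per-connection cost is the 
sort/join/read buffers multiplied by the number of busy connections, on top of 
the key buffer and query cache.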

- Markus

Ps: I'm using version 4.0.14

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote : 
 In the last episode (Dec 16), Markus Fischer said:
  I'm investigating a performance problem with mysql server set up. The
  server is running linux with 1GB ram. I'ld like to tune the
  configuration of the server to use as much RAM as possible without
  swapping to the disc because of the big slow down.
  
  [mysqld]
  set-variable= key_buffer=16M
 
 Way too low; this says only cache 16MB of index data.  Try 256MB.
 
  set-variable= max_allowed_packet=200M
 
 Probably way too high, but doesn't hurt since it won't actually
 allocate that much memory unless you have a 200MB field someplace.

Thanks for pointing this out. I've compensated max_allowed_packet
and increased the key_buffer.
 
  I'm also not sure whether the database is swapping temporary tables
  to the disc or not; is there a way to verify if this is happening?
 
 show status like 'created_tmp%';
 
 Raising sort_buffer_size and join_buffer_size may also help if your
 queries pull a lot of records.

From what I read from the manual, sort_buffer_size is only used for
the isamchk tools, isn't it? I've adapted join_buffer_size though;
thanks.

- Markus

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Dan Nelson
In the last episode (Dec 18), Markus Fischer said:
 On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote : 
  Raising sort_buffer_size and join_buffer_size may also help if your
  queries pull a lot of records.
 
 From what I read from the manual, sort_buffer_size is only used
 for the isamchk tools, isn't it? I've adapted join_buffer_size
 though; thanks.

It's used for any sorting:

   * `sort_buffer_size' Each thread that needs to do a sort allocates a
 buffer of this size. Increase this value for faster `ORDER BY' or
 `GROUP BY' operations.  *Note Temporary files::.
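
Since it is allocated per thread, it can also be raised for just one connection 
instead of globally; a sketch (the 4MB figure is arbitrary, and the SET SESSION 
form needs MySQL 4.0.3 or later):

  SET SESSION sort_buffer_size = 4194304;  -- only this connection's ORDER BY / GROUP BY sorts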

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-18 Thread Markus Fischer
On Thu, Dec 18, 2003 at 10:37:46AM -0600, Dan Nelson wrote : 
 In the last episode (Dec 18), Markus Fischer said:
  On Tue, Dec 16, 2003 at 10:38:14AM -0600, Dan Nelson wrote : 
   Raising sort_buffer_size and join_buffer_size may also help if your
   queries pull a lot of records.
  
  From what I read from the manual, sort_buffer_size is only used
  for the isamchk tools, isn't it? I've adapted join_buffer_size
  though; thanks.
 
 It's used for any sorting:
 
* `sort_buffer_size' Each thread that needs to do a sort allocates a
  buffer of this size. Increase this value for faster `ORDER BY' or
  `GROUP BY' operations.  *Note Temporary files::.

Of course, sorry, I got confused then.

thanks,

- Markus

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-16 Thread Markus Fischer
Hello,

I'm investigating a performance problem with a MySQL server setup.
The server is running Linux with 1GB of RAM. I'd like to tune the
configuration of the server to use as much RAM as possible without
swapping to disk, because of the big slowdown that causes.

The current configuration is as follows (I've just pasted the
relevant pieces):

[mysqld]
skip-innodb
skip-locking
set-variable= key_buffer=16M
set-variable= max_allowed_packet=200M
set-variable= thread_stack=128K
bind-address= 127.0.0.1
port= 3306
skip-networking
set-variable = query_cache_type=1
set-variable = query_cache_size=64M
set-variable = tmp_table_size=50M


The main purpose of the database is a heavily modified forum
application based on vBulletin.

One of the biggest tables is the 'post' table, which consumes about
617M currently; this table is often used in queries for index pages,
which are the most frequently visited pages; indexes on the relevant
columns in the WHERE clauses have all been created. The next tables in
size are around 22M, and most are much smaller.


I'm also having problems determining accurately how much memory
MySQL uses when running on the system; the information provided by
ps doesn't seem reliable due to the many processes/threads. Knowing
the currently used memory and whether MySQL needed to swap would be
very valuable in chasing down this issue.

I'm also not sure whether the database is swapping temporary tables
to the disc or not; is there a way to verify if this is happening?

Other beneficial things would be to know how much memory certain
queries need (mostly the queries on index pages).


Besides more physical memory, are there other ways to better tune the
server for the 1GB of RAM? What performance analysis tools are
recommended for use with MySQL?

thanks for any advice,

- Markus

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-16 Thread Chris Nolan
Hi!

How heavy is your usage of TEMPORARY TABLES? I don't use them much 
myself, but
I'm sure that the others on the list will have something to say in that 
regard.

To get a better look at MySQL's usage of memory, you could try looking 
at the output of
SHOW STATUS .

Regards,

Chris

Markus Fischer wrote:

   Hello,

   I'm investigating a performance problem with mysql server set up.
   The server is running linux with 1GB ram. I'ld like to tune the
   configuration of the server to use as much RAM as possible without
   swapping to the disc because of the big slow down.
   The current configuration is as follows (I've just pasted the
   relevant pieces):
[mysqld]
skip-innodb
skip-locking
set-variable= key_buffer=16M
set-variable= max_allowed_packet=200M
set-variable= thread_stack=128K
bind-address= 127.0.0.1
port= 3306
skip-networking
set-variable = query_cache_type=1
set-variable = query_cache_size=64M
set-variable = tmp_table_size=50M
   The main purpose of the database is a heavily modified forum
   application based on vBulletin.
   One of the biggest table is the 'post' table with consumes about
   617M currently; this table is often used in queries for index pages
   which are the most frequent visited pages; indexes for the relevant
   column in the WHERE statement are all created. The next tables in
   size are using 22M and most of them much less.
   I'm also having problems in determining accurately how much memory
   MySQL uses when running on the system; the information provided by
   ps doesn't seem relieable due the many processes/threads. Getting
   the currently used memory and whether MySQL needed to swap would be
   very valueable in chasing down this issue.
   I'm also not sure whether the database is swapping temporary tables
   to the disc or not; is there a way to verify if this is happening?
   Other beneficial things would be to know how much memory certain
   queries need (mostly the queries on index pages).
   Besides more physical memory, are the other ways to better tune the
   server for the 1GB ram? What performance analyses tool are
   recommended for use with MySQL?
   thanks for any advice,

   - Markus

 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]


Re: Performance problems using 1GB Linux server and trying to avoid swapping

2003-12-16 Thread Dan Nelson
In the last episode (Dec 16), Markus Fischer said:
 I'm investigating a performance problem with mysql server set up. The
 server is running linux with 1GB ram. I'ld like to tune the
 configuration of the server to use as much RAM as possible without
 swapping to the disc because of the big slow down.
 
 [mysqld]
 set-variable= key_buffer=16M

Way too low; this says only cache 16MB of index data.  Try 256MB.

 set-variable= max_allowed_packet=200M

Probably way too high, but doesn't hurt since it won't actually
allocate that much memory unless you have a 200MB field someplace.

 I'm also not sure whether the database is swapping temporary tables
 to the disc or not; is there a way to verify if this is happening?

show status like 'created_tmp%';

Raising sort_buffer_size and join_buffer_size may also help if your
queries pull a lot of records.
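
Put into the same set-variable form used elsewhere in this thread's my.cnf, the 
suggestions above come out roughly as follows (the sort/join figures are 
illustrative guesses, not values Dan prescribed):

  [mysqld]
  set-variable = key_buffer=256M
  set-variable = sort_buffer_size=2M
  set-variable = join_buffer_size=2M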

-- 
Dan Nelson
[EMAIL PROTECTED]

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



performance problems, maintence on one table slows down others

2003-11-17 Thread Dan Edwards
Hi, I have a performance issue I've tried resolving and I can't get rid 
of it. Basically I have a database called lobby where all queries to it, 
inserts and selects, must be as fast as possible. It must handle about 60 
queries a second with no query taking more than 50ms. I also have 
another database called Track that I need to do some maintenance on. 
Right now I need to archive off records to separate tables by month. I 
wrote a C program to do it. The problem is that when I run it, it slows 
down the queries to the lobby database so severely that queries that 
normally take less than 10ms take a second or more, which is absolutely 
unacceptable. (The tables are the backend of online card games, and 
database stalls cause lag for players.)

To solve this problem I bought a fast server with a fast scsi mirror 
raid dedicated to the lobby database. The server is a Redhat 9, kernel 
2.4.20-20.9smp, mysql 4.0.16, dual p4 2.4 xeon (Dell 1600SC), with 1 gig 
ram, 4 36 gig 15k rpm scsi HDs. 2 mirrored for the os and most databases 
and 2 mirrored dedicated to the lobby database.

The program that archives tables basically does this:
insert into Events_Sep select * from Events where EventID>=1000 and EventID<1100;
delete from Events where EventID>=1000 and EventID<1100;
And then pauses 1 second.

The Events table has over 2 million records in it, and is 2.3GB in 
Events.MYD and 170M in MYI.

I thought the original problem was that the lobby and track databases were 
on the same hard drive. But this new hardware hasn't solved the problem. 
It is better, but still far from good enough.

Here is a bit of top while the archiveing is running:
 18:39:40  up 11 days,  8:20,  2 users,  load average: 0.11, 0.12, 0.14
77 processes: 76 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states: 5.2% user 2.1% system  0.0% nice   0.0% iowait  92.2% idle
CPU1 states: 4.1% user 4.4% system  0.0% nice   0.0% iowait  91.0% idle
CPU2 states: 0.0% user 0.1% system  0.0% nice   0.0% iowait  99.4% idle
CPU3 states: 0.2% user 0.1% system  0.0% nice   0.0% iowait  99.2% idle
Mem:  1159456k av, 1149228k used, 10228k free,  0k shrd,   60804k buff
795764k actv,6452k in_d,   47432k in_c
Swap: 1052152k av,  308140k used,  744012k free   694920k cached
  PID USER PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
 7098 mysql 15   0  399M 304M  1348 S16.6 26.9 647:22   2 mysqld
Some dumps from iostat -k 2 with the archiver running; it pretty much alternates 
between these two:
Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
dev8-0  414.50   120.00  2408.00240   4816
dev8-1  427.5060.00  2408.00120   4816
dev8-2  139.5024.00   604.00 48   1208
dev8-3  135.50 0.00   604.00  0   1208

Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
dev8-05.0018.0034.00 36 68
dev8-1   13.5056.0034.00112 68
dev8-22.00 6.0012.00 12 24
dev8-31.50 2.0012.00  4 24
Some typical iostat dumps without the archive running:

Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
dev8-05.00 0.0040.00  0 80
dev8-15.50 2.0040.00  4 80
dev8-2  118.50 2.00   522.00  4   1044
dev8-3  119.00 4.00   522.00  8   1044
Device:tpskB_read/skB_wrtn/skB_readkB_wrtn
dev8-04.00 2.0018.00  4 36
dev8-13.50 0.0018.00  0 36
dev8-21.00 4.00 0.00  8  0
dev8-30.50 4.00 0.00  8  0
The OS/Track tables are on dev8-0 and dev8-1; the lobby tables are on 
dev8-2 and dev8-3.

Here is important bits of my.cnf:
[mysqld]
server-id=4
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-bin
skip-locking
log-slow-queries
set-variable= key_buffer=384M
set-variable= max_allowed_packet=16M
set-variable= table_cache=512
set-variable= sort_buffer=2M
set-variable= record_buffer=2M
set-variable= myisam_sort_buffer_size=64M
set-variable= thread_cache=8
set-variable= thread_concurrency=8
set-variable= tmp_table_size=256M


One theory I have is that the OS/Track hard drive is slowed down so 
much that writes to the bin log are delayed enough to cause delays in 
queries to the lobby tables.
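
If that theory holds, one experiment (a sketch only; the path is made up) would 
be to point the binary log at the dedicated lobby mirror rather than the 
OS/Track drive, e.g. in my.cnf:

  [mysqld]
  log-bin = /var/lib/mysql-lobby/binlog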

Does anyone have any other theories or suggestions for what I should try 
next? I would really like to avoid putting the databases on totally 
separate servers.

Thanks for any insight, I know mysql has the power to pull this off, we 
just need to 

Performance Problems

2003-09-18 Thread Schonder, Matthias
Hei :) 

I have an extreme performance problem with a MySQL-DB. 
The database consists of 21 tables, all of which except three store only a
few records. Two have about 150,000 records but are only used temporarily.
The main table is rather huge: it has 90 columns and, now after three months,
500,000 records... but in the end it has to store data for 36 months.
Since the table grew past 350,000 records I have run into massive
performance problems. Querying for one record (example: SELECT sendnr FROM
pool WHERE sendnr = 111073101180) takes 8 seconds via the command line!
The table is indexed, and I put my-huge.cnf into /etc/ as my.cnf.

How can I get more performance out of MySQL?
The server currently only hosts this database and runs Apache 1.3 with PHP4
to provide results via the intranet.
The OS is FreeBSD 5.1.
We are running two servers with the same environment:
one is a dual Xeon 2.4GHz (with HT), 3GB RAM, 105GB HD (RAID5);
the other is a P4 with 2.4GHz, 1GB RAM and 120GB HD.

The dual is the production server, the P4 the development and test server.
The query takes that long on BOTH machines, so it seems clear the DB itself
is causing the performance problem.

So, can anyone help? This is really urgent and will save my life :)

Big thanks in advance. 

Pacem, 

Matthias Schonder

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: Performance Problems

2003-09-18 Thread Andy Eastham
Matthias,

Can you send us your table index definitions and the output of an EXPLAIN
command on your query?

ie
DESCRIBE pool;
SHOW INDEX FROM pool;
EXPLAIN SELECT sendnr FROM pool where sendnr = 111073101180;

I'm pretty sure we can improve this - I've got a table with 55 million
records (though only 3 columns) and a search like the one you've got takes
0.07 seconds on a box similar to your dev box.

Andy

 -Original Message-
 From: Schonder, Matthias [mailto:[EMAIL PROTECTED]
 Sent: 18 September 2003 10:25
 To: '[EMAIL PROTECTED]'
 Subject: Performance Problems


 Hei :)

 I have an extreme performance problem with a MySQL-DB.
 The database consists of 21 tables where all except three are
 storing only a
 few records. Two have about 150.000 records but they are only used
 temporary. The main table is rather huge, it has 90 columns and now after
 three month it has 500.000 records... but in the end it has to
 store data of
 36 month.
 But since the table has grown to over 350.000 records I ran into massive
 performance problems. Querying for one record (Example: SELECT sendnr FROM
 pool where sendnr = 111073101180) takes 8 seconds via command line!
 The table is indexed and i put the my-huge.cnf into /etc/ as my.cnf

 How can I get more performance out of the mysql?
 The server which currently only hosts this database and is
 running apache1.3
 with php4 for providing results via intranet.
 The OS is FreeBSD 5.1.
 We are running two servers with the same enviroment
 One is a DualXeon 2.4ghz (with HT), 3GB RAM, 105GB HD (RAID5)
 the other one is a P4 with 2.4ghz, 1GB RAM and 120GB HD

 The Dual is the Productionserver, the P4 the developement- and
 test server.
 The querey takes that long on BOTH machines so it seems clear the
 DB itself
 is causing the performance problem.

 So anyone can help? This is really urgend and will save my life :)

 Big thanks in advance.

 Pacem,

 Matthias Schonder

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/[EMAIL PROTECTED]





-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems

2003-09-18 Thread Harald Tijink
Do you use indexes?

See http://www.mysql.com/doc/en/CREATE_INDEX.html.
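
For the query quoted in this thread, that would be something along these lines 
(a sketch; it assumes sendnr is not already indexed):

  ALTER TABLE pool ADD INDEX idx_sendnr (sendnr);
  EXPLAIN SELECT sendnr FROM pool WHERE sendnr = 111073101180;

EXPLAIN should then show the new key being used instead of a full table scan.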

In my system, a retrieval from a 24-million-record table (3 columns) returning
25 records took only 0.09 sec, and from a 24-million-record table with 5
columns, 0.25 sec.

Harald

- Original Message - 
From: Schonder, Matthias [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, September 18, 2003 11:24 AM
Subject: Performance Problems


 Hei :)

 I have an extreme performance problem with a MySQL-DB.
 The database consists of 21 tables where all except three are storing only
a
 few records. Two have about 150.000 records but they are only used
 temporary. The main table is rather huge, it has 90 columns and now after
 three month it has 500.000 records... but in the end it has to store data
of
 36 month.
 But since the table has grown to over 350.000 records I ran into massive
 performance problems. Querying for one record (Example: SELECT sendnr FROM
 pool where sendnr = 111073101180) takes 8 seconds via command line!
 The table is indexed and i put the my-huge.cnf into /etc/ as my.cnf

 How can I get more performance out of the mysql?
 The server which currently only hosts this database and is running
apache1.3
 with php4 for providing results via intranet.
 The OS is FreeBSD 5.1.
 We are running two servers with the same enviroment
 One is a DualXeon 2.4ghz (with HT), 3GB RAM, 105GB HD (RAID5)
 the other one is a P4 with 2.4ghz, 1GB RAM and 120GB HD

 The Dual is the Productionserver, the P4 the developement- and test
server.
 The querey takes that long on BOTH machines so it seems clear the DB
itself
 is causing the performance problem.

 So anyone can help? This is really urgend and will save my life :)

 Big thanks in advance.

 Pacem,

 Matthias Schonder

 -- 
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Performance Problems

2003-09-18 Thread Scott Helms
  The main table is rather huge, it has 90 columns and now after
 three month it has 500.000 records... but in the end it has to store data of
 36 month.
Hmm, I think you had better look at normalizing your data, and creating
indexes.  Start with the indexes since that won't force you to make any
other changes, but 90 columns in one table sounds like you put everything
in that one table rather than normalizing.  Take a look at this article
for some (somewhat dated) info.  

http://www.linux-mag.com/2001-06/mysql_01.html

Also what version of MySQL are you running and what table type? 
Have you looked at the processes that are running when you run your query?

mysqladmin -p process

Will show what is happening, another very useful tool is mytop:
http://jeremy.zawodny.com/mysql/mytop/


Scott Helms



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Performance problems after record deletion.

2003-01-14 Thread Gunnar Lunde
 Hi
 
 We got a problem with a slow database after deleting records using the
 MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the short
 story:
 
 We have a table with a lot of data, at the moment there are 85 million
 records in our table.  We developed a script that deleted old data during
 the night. The script runs as a cron job and starts each evening and runs
 trough the night. The script runs a loop where it selects the 10.000
 oldest records and runs a test to see if it is ok to delete them (some
 records are kept even if they are old if they have certain criterias
 fulfilled). The script builds a list and deletes 10 records at a time. The
 script runs fine, but after we have deleted a couple of million
 transactions the database is dog slow.  We can see that a select statement
 that used to complete in seconds now takes 30 minutes and this select
 statement locks the table so that everything else must wait.
 
 Our best guess is that something happened to the indexes during the delete
 operation, but we did an explain on the select statement that locks
 everything and it does seem to use the indexes it should. 
 We have read that a select statement will lock up the table if there are
 holes in the database, but the indexes should not be affected? There are
 incoming inserts concurrently with running the large query. When the
 process is locking, it says sending data - nothing about temporary
 tables. 
 
 Since we can afford a bit of downtime (we luckily have a backup system),
 and for debugging purposes, we have started an analyze table job, although
 we don't think it will make a huge difference. We hope this job will be
 completed by tomorrow. As a side note, is there any way of monitoring the
 progress of large jobs of this kind?
 
If you have experienced the same type of problem, or if anyone has
any thoughts on why this is happening and how to fix it, we would really like
to hear it.


 Best Regards
 
 Gunnar Lunde
 

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




RE: Performance problems after record deletion.

2003-01-14 Thread Vikash K Agarwal
This is what the MySQL 3.23.41 manual says; maybe it helps you:

OPTIMIZE TABLE should be used if you have deleted a large part of a
table or if you have made many changes to a table with variable-length
rows (tables that have VARCHAR, BLOB, or TEXT columns). Deleted records
are maintained in a linked list and subsequent INSERT operations reuse
old record positions. You can use OPTIMIZE TABLE to reclaim the unused
space and to defragment the data file.
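
Usage is just the statement plus the table name (the name below is a placeholder,
since the original mail doesn't give one):

  OPTIMIZE TABLE alerts;

Bear in mind that on a MyISAM table it rewrites the data file, so the table is
locked for the duration of the run.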

Rgds
Vikash K Agarwal

-Original Message-
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 14, 2003 4:05 PM
To: '[EMAIL PROTECTED]'
Subject: Performance problems after record deletion.

 Hi
 
 We got a problem with a slow database after deleting records using the
 MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the
short
 story:
 
 We have a table with a lot of data, at the moment there are 85 million
 records in our table.  We developed a script that deleted old data
during
 the night. The script runs as a cron job and starts each evening and
runs
 trough the night. The script runs a loop where it selects the 10.000
 oldest records and runs a test to see if it is ok to delete them (some
 records are kept even if they are old if they have certain criterias
 fulfilled). The script builds a list and deletes 10 records at a time.
The
 script runs fine, but after we have deleted a couple of million
 transactions the database is dog slow.  We can see that a select
statement
 that used to complete in seconds now takes 30 minutes and this select
 statement locks the table so that everything else must wait.
 
 Our best guess is that something happened to the indexes during the
delete
 operation, but we did an explain on the select statement that locks
 everything and it does seem to use the indexes it should. 
 We have read that a select statement will lock up the table if there
are
 wholes in the database, but it indexes should not be affected? There
are
 incoming inserts concurrently with running the large query. When the
 process is locking, it says sending data - nothing about temporary
 tables. 
 
 Since we can afford a bit of downtime (we luckily have a backup
system),
 and for debugging purposes, we have started an analyze table job,
although
 we don't think it will make a huge difference. We hope this job will
be
 completed by tomorrow. As a sidenote - is there any way of monitoring
the
 progress of large jobs of this kind?
 
If you have experienced the same type of problems, or if anyone has
any thoughts on why this is happening and how to fix it, we would really
like to hear them.


 Best Regards
 
 Gunnar Lunde
 







RE: Performance problems after record deletion.

2003-01-14 Thread Gunnar Lunde
Thank you for your reply, Vikash

We have decided not to use OPTIMIZE TABLE because of the size of the table.
Optimizing would halt the system too long, so we decided just to reuse the
space. Our problem is that after the deletion a SELECT query that used to
take a second now takes 30 minutes and locks up everything. We suspect
that something happened to the indexes and that the SELECT query does a
full table scan. However, an EXPLAIN on the query suggests that the indexes
are being used.

During normal operation reusing the space is OK, and it is OK that the
SELECT statement locks up everything, but we cannot live with a SELECT
statement that takes 30 minutes when it used to complete in no time.

Ideas on how to fix this are more than welcome!

Best regards
Gunnar Lunde

 -Original Message-
 From: Vikash K Agarwal [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 14, 2003 12:22 PM
 To: 'Gunnar Lunde'; [EMAIL PROTECTED]
 Subject: RE: Performance problems after record deletion.
 
 
 This is what the MySQL 3.23.41 manual says; maybe it helps you:
 
 OPTIMIZE TABLE should be used if you have deleted a large part of a
 table or if you have made many changes to a table with variable-length
 rows (tables that have VARCHAR, BLOB, or TEXT columns). 
 Deleted records
 are maintained in a linked list and subsequent INSERT operations reuse
 old record positions. You can use OPTIMIZE TABLE to reclaim the unused
 space and to defragment the data file.
 
 Rgds
 Vikash K Agarwal
 
 -Original Message-
 From: Gunnar Lunde [mailto:[EMAIL PROTECTED]] 
 Sent: Tuesday, January 14, 2003 4:05 PM
 To: '[EMAIL PROTECTED]'
 Subject: Performance problems after record deletion.
 
  Hi
  
  We got a problem with a slow database after deleting 
 records using the
  MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the
 short
  story:
  
  We have a table with a lot of data, at the moment there are 
 85 million
  records in our table.  We developed a script that deleted old data
 during
  the night. The script runs as a cron job and starts each evening and
 runs
 through the night. The script runs a loop where it selects the 10,000
  oldest records and runs a test to see if it is ok to delete 
 them (some
  records are kept even if they are old if they have certain criteria
  fulfilled). The script builds a list and deletes 10 records 
 at a time.
 The
  script runs fine, but after we have deleted a couple of million
  transactions the database is dog slow.  We can see that a select
 statement
  that used to complete in seconds now takes 30 minutes and 
 this select
  statement locks the table so that everything else must wait.
  
  Our best guess is that something happened to the indexes during the
 delete
  operation, but we did an explain on the select statement that locks
  everything and it does seem to use the indexes it should. 
  We have read that a select statement will lock up the table if there
 are
 holes in the database, but the indexes should not be affected? There
 are
  incoming inserts concurrently with running the large query. When the
  process is locking, it says sending data - nothing about temporary
  tables. 
  
  Since we can afford a bit of downtime (we luckily have a backup
 system),
  and for debugging purposes, we have started an analyze table job,
 although
  we don't think it will make a huge difference. We hope this job will
 be
  completed by tomorrow. As a sidenote - is there any way of 
 monitoring
 the
  progress of large jobs of this kind?
  
   If you have experienced the same type of problems, or if anyone
 have
 any thought on why this is happening and how to fix it we would really
 like
 to hear it.
 
 
  Best Regards
  
  Gunnar Lunde
  
 
 





RE: Performance problems after record deletion.

2003-01-14 Thread Vikash K Agarwal
Gunnar

Something you can try:

1. Export all data (mysqldump, SELECT INTO OUTFILE), import it into a new
table (mysqlimport, LOAD DATA INFILE), create indexes on the new table,
and drop the old table. This would remove the holes from the data, and the
indexes would be fresh again. It would also not hold up anything, BUT it
would require space and some time.
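
Very roughly, and with purely hypothetical table/column names and file paths,
that dump-and-reload could look like this:

SELECT * INTO OUTFILE '/tmp/transactions.txt' FROM transactions;
CREATE TABLE transactions_new (
    id         INT NOT NULL PRIMARY KEY,
    created_at DATETIME NOT NULL,
    payload    VARCHAR(255)
);
LOAD DATA INFILE '/tmp/transactions.txt' INTO TABLE transactions_new;
ALTER TABLE transactions_new ADD INDEX idx_created (created_at);
RENAME TABLE transactions TO transactions_old, transactions_new TO transactions;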

vikash


-Original Message-
From: Gunnar Lunde [mailto:[EMAIL PROTECTED]] 
Sent: Tuesday, January 14, 2003 5:17 PM
To: '[EMAIL PROTECTED]'
Subject: RE: Performance problems after record deletion.

Thank you for your reply, Vikash

We have decided not to use optimize because of the size of the table.
Optimize would halt the system too long. So we decided just to reuse the
space. Our problem is that after the deletion a select query that used
to
take a second now takes 30 minutes and locks up everything. So we
suspect
that something happened to the indexes and the select query does a full
table scan. However, an EXPLAIN on the query suggests that the indexes are
working. 

During normal operation reusing the space is ok, and it is ok that the
select statement locks up everything, but we can not live with a select
statement that takes 30 minutes when it used to complete in no time.

Ideas on how to fix this are more than welcome!

Best regards
Gunnar Lunde

 -Original Message-
 From: Vikash K Agarwal [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, January 14, 2003 12:22 PM
 To: 'Gunnar Lunde'; [EMAIL PROTECTED]
 Subject: RE: Performance problems after record deletion.
 
 
  This is what the MySQL 3.23.41 manual says; maybe it helps you:
 
 OPTIMIZE TABLE should be used if you have deleted a large part of a
 table or if you have made many changes to a table with variable-length
 rows (tables that have VARCHAR, BLOB, or TEXT columns). 
 Deleted records
 are maintained in a linked list and subsequent INSERT operations reuse
 old record positions. You can use OPTIMIZE TABLE to reclaim the unused
 space and to defragment the data file.
 
 Rgds
 Vikash K Agarwal
 
 -Original Message-
 From: Gunnar Lunde [mailto:[EMAIL PROTECTED]] 
 Sent: Tuesday, January 14, 2003 4:05 PM
 To: '[EMAIL PROTECTED]'
 Subject: Performance problems after record deletion.
 
  Hi
  
  We got a problem with a slow database after deleting 
 records using the
  MySQL released with RedHat 7.2 (Server version 3.23.41). Here is the
 short
  story:
  
  We have a table with a lot of data, at the moment there are 
 85 million
  records in our table.  We developed a script that deleted old data
 during
  the night. The script runs as a cron job and starts each evening and
 runs
  through the night. The script runs a loop where it selects the 10,000
  oldest records and runs a test to see if it is ok to delete 
 them (some
   records are kept even if they are old if they have certain criteria
  fulfilled). The script builds a list and deletes 10 records 
 at a time.
 The
  script runs fine, but after we have deleted a couple of million
  transactions the database is dog slow.  We can see that a select
 statement
  that used to complete in seconds now takes 30 minutes and 
 this select
  statement locks the table so that everything else must wait.
  
  Our best guess is that something happened to the indexes during the
 delete
  operation, but we did an explain on the select statement that locks
  everything and it does seem to use the indexes it should. 
  We have read that a select statement will lock up the table if there
 are
   holes in the database, but the indexes should not be affected? There
 are
  incoming inserts concurrently with running the large query. When the
  process is locking, it says sending data - nothing about temporary
  tables. 
  
  Since we can afford a bit of downtime (we luckily have a backup
 system),
  and for debugging purposes, we have started an analyze table job,
 although
  we don't think it will make a huge difference. We hope this job will
 be
  completed by tomorrow. As a sidenote - is there any way of 
 monitoring
 the
  progress of large jobs of this kind?
  
   If you have experienced the same type of problems, or if anyone
 have
 any thought on why this is happening and how to fix it we would really
 like
 to hear it.
 
 
  Best Regards
  
  Gunnar Lunde
  
 
 


RE: Performance Problems with InnoDB Row Level Locking...

2002-09-06 Thread Varshavchick Alexander

Hi Heikki,

one more question, please, about innodb_flush_log_at_trx_commit: if there
were some way of increasing the delay between log flushes to more than 1
second, can you estimate whether it would bring any real performance
benefit? I know it would raise the risk of losing the last few transactions
if something crashes, but we can accept that to gain speed. How could it be
done, if it is worth doing?

Thanks

sql, query

Alexander Varshavchick, Metrocom Joint Stock Company
Phone: (812)118-3322, 118-3115(fax)







Re: Performance Problems with InnoDB Row Level Locking...

2002-09-06 Thread Heikki Tuuri

Alexander,

- Original Message -
From: Varshavchick Alexander [EMAIL PROTECTED]
To: 'Heikki Tuuri' [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, September 06, 2002 10:08 AM
Subject: RE: Performance Problems with InnoDB Row Level Locking...


 Hi Heikki,

 one more question please about innodb_flush_log_at_trx_commit: if there
 was some way of increasing the delay between log flushes more than 1 sec,
 can you estimate will it bring any real effect in performance? I know
 it'll raise the risk of losing some last transactions if something
 crashes, but we can go for it gaining the speed. How can it be done if
 it's worth doing?

it should not be worth doing.

A disk can do some 70 random writes per second, and the log flush (calling
fsync on the log file) typically uses 2 disk writes:

(1) writing the end of the log to the log file on disk, and
(2) updating the file access timestamps in the 'inode' of the file, if we
are using a Unix file system.

Thus the performance benefit of less than 1 log flush per second is small.
On the other hand, we might add an option which allows flushing the log 1 -
50 times per second.

Note that the file flush method fdatasync is supposed to eliminate the write
(2) above. Unfortunately there was evidence that fdatasync caused file
corruption in Linux and Solaris, and it is currently mapped to the ordinary
fsync.

 Thanks

 sql, query
 
 Alexander Varshavchick, Metrocom Joint Stock Company
 Phone: (812)118-3322, 118-3115(fax)

Best regards,

Heikki Tuuri
Innobase Oy
---
InnoDB - transactions, hot backup, and foreign key support for MySQL
See http://www.innodb.com, download MySQL-Max from http://www.mysql.com








Re: Performance Problems with InnoDB Row Level Locking...

2002-09-06 Thread Heikki Tuuri

Joe,

- Original Message -
From: Joe Shear [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]
Sent: Friday, September 06, 2002 2:13 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking...


 Hi,
 On a side note, are there any plans to backport the spurious insert
 deadlock fix to the 3.23 series?

sorry, but 3.23 is now frozen from new 'features'. People want it to stay as
it is.

We are also encouraging people to test the 4.0 series.

 thanks
 joe

Regards,

Heikki

 On Thu, 2002-09-05 at 16:02, Heikki Tuuri wrote:
  Steve,
 
  - Original Message -
  From: Orr, Steve [EMAIL PROTECTED]
  To: 'Heikki Tuuri' [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Sent: Friday, September 06, 2002 1:23 AM
  Subject: RE: Performance Problems with InnoDB Row Level Locking...
 
 
   Heikki,
  
Next-key locking in InnoDB allows you to lock the non-existence of
rows
and thus prevents phantom rows from appearing.
   OK, now I understand what you're getting at with phantom rows.
  
   But given the tradeoff between the inadvertant next-key deadlocking
   challenge and a the phantom rows challenge, I think I'd rather have
the
   phantom rows challenge because: 1) it's not as common of a problem;
and 2)
   on the few occasions when I am confronted with it I can easily prevent
it
   with a table lock. The need to work around next-key deadlocking issues
is
   constant (EVERY database insert, update or delete) and it seems the
  solution
 
  normally you have to be prepared to reissue your transactions. Deadlocks
  generated by UNIQUE KEY checks and FOREIGN KEY checks are not the result
of
  next-key locking, but a general issue in row level locking. These checks
  involve implicit reading of tables in connection with
insert/delete/update.
  Thus you can get surprising deadlocks. Proving some system deadlock-free
is
  difficult.
 
  SHOW INNODB STATUS is the tool to analyze deadlock problems in 3.23.52
or
  4.0.3. With it we found the spurious insert deadlocks which were removed
in
  4.0.3. There may still be spurious deadlocks which appear in real-world
  applications and which are easy to remove by fine-tuning the next-key
  locking algorithm.
 
   is more difficult to achieve and may eventually come back to
serialization
   or table level locks which is what I'm trying to avoid to begin with.
I've
   already addressed some deadlock issues with frequent commits, smaller
   transactions, and better indexes and I only want to lock tables when I
   absolutely have to.
  
   I may be wrong but it just seems to me that the next-key locking
approach
   merely creates another concurrency issue and the subsequent next-key
   deadlock problem is just too significant to ignore.
 
  We need serializability to make MySQL replication and recovery from the
  binlog to work. That is why InnoDB cannot allow phantom rows. It would
be a
  relatively easy change in InnoDB itself to lower the isolation level so
that
  phantom rows would be allowed.
 
   Humbly,
   Steve
 
  Regards,
 
  Heikki
 
   -Original Message-
   From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
   Sent: Thursday, September 05, 2002 2:54 PM
   To: Orr, Steve
   Cc: [EMAIL PROTECTED]
   Subject: Re: Performance Problems with InnoDB Row Level Locking...
  
  
   Steve,
  
   - Original Message -
   From: Orr, Steve [EMAIL PROTECTED]
   To: 'Heikki Tuuri' [EMAIL PROTECTED]
   Sent: Thursday, September 05, 2002 11:04 PM
   Subject: RE: Performance Problems with InnoDB Row Level Locking...
  
  
Heikki,
   
You wrote...
 You are getting so many deadlocks that some transactions do not
pass
  at
 all?
No, the transactions eventually succeed but performance suffers.
Like I
said, this is a stress test to identify bottlenecks in database
   performance.
  
   cut
  
   some optimization of next-key locking is possible. I recently removed
some
   spurious deadlocks in 4.0 because a big customer complained of the
  problem.
  

 Consider a consistency rule: sum of all balances in table ACCOUNT
  must
 be 1,000,000. How do you keep that true if your database cannot
block
 phantom rows?
Can't that be accomplished by the SELECT ... FOR UPDATE syntax?
  
   That does not block new inserts to the table in Oracle. If you do:
  
   INSERT INTO account2 SELECT * FROM account;
  
   and someone else at the same time inserts within a single transaction
2
  rows
   ('Jones', 1000), ('Smith', -1000) to table account, you may end up
with
   table account2 where the sum of balances is not 1,000,000, though
table
   account always had 1,000,000 as the sum.
  
   The way to work around these serializability issues in Oracle is to
use
   table level locks. In the 1980's, when Oracle did not yet have foreign
  keys
   constraints, people used table level locks to implement referential
   integrity. You have to lock the NON-existence of child rows when

Re: Performance Problems with InnoDB Row Level Locking...

2002-09-06 Thread Varshavchick Alexander

Heikki, thank you for the answer. So on systems other than Linux or
Solaris the best flush method should be fdatasync, is that correct? In that
case, if I don't specify the innodb_flush_method option, fdatasync will not
be used - it will be fsync by default instead? My system is FreeBSD, so
which value of innodb_flush_method would be optimal?

Thanks


Alexander Varshavchick, Metrocom Joint Stock Company
Phone: (812)118-3322, 118-3115(fax)

On Fri, 6 Sep 2002, Heikki Tuuri wrote:

 Date: Fri, 6 Sep 2002 10:27:03 +0300
 From: Heikki Tuuri [EMAIL PROTECTED]
 To: Varshavchick Alexander [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Subject: Re: Performance Problems with InnoDB Row Level Locking...

 Alexander,

 - Original Message -
 From: Varshavchick Alexander [EMAIL PROTECTED]
 To: 'Heikki Tuuri' [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Friday, September 06, 2002 10:08 AM
 Subject: RE: Performance Problems with InnoDB Row Level Locking...


  Hi Heikki,
 
  one more question please about innodb_flush_log_at_trx_commit: if there
  was some way of increasing the delay between log flushes more than 1 sec,
  can you estimate will it bring any real effect in performance? I know
  it'll raise the risk of losing some last transactions if something
  crashes, but we can go for it gaining the speed. How can it be done if
  it's worth doing?

 it should not be worth doing.

 A disk can do some 70 random writes per second, and the log flush (calling
 fsync on the log file) typically uses 2 disk writes:

 (1) writing the end of the log to the log file on disk, and
 (2) updating the file access timestamps in the 'inode' of the file, if we
 are using a Unix file system.

 Thus the performance benefit of less than 1 log flush per second is small.
 On the other hand, we might add an option which allows flushing the log 1 -
 50 times per second.

 Note that the file flush method fdatasync is supposed to eliminate the write
 (2) above. Unfortunately there was evidence that fdatasync caused file
 corruption in Linux and Solaris, and it is currently mapped to the ordinary
 fsync.

  Thanks
 
  sql, query
  
  Alexander Varshavchick, Metrocom Joint Stock Company
  Phone: (812)118-3322, 118-3115(fax)

 Best regards,

 Heikki Tuuri
 Innobase Oy
 ---
 InnoDB - transactions, hot backup, and foreign key support for MySQL
 See http://www.innodb.com, download MySQL-Max from http://www.mysql.com











Re: Performance Problems with InnoDB Row Level Locking...

2002-09-06 Thread Heikki Tuuri

Alex,

- Original Message -
From: Varshavchick Alexander [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, September 06, 2002 11:49 AM
Subject: Re: Performance Problems with InnoDB Row Level Locking...


 Heikki, thank you for the answer. So on the systems other than Linux or
 Solaris the best flush method should be fdatasync, is it correct? In this
 case, if I don't specify innodb_flush_method option, fdatasync will not be
 used - it'll be fsync by default instead? My system is FreeBSD, so which
 value for innodb_flush_method can be optimal?

yes, but it is mapped to fsync on all Unixes. You can edit the source code
and test other options. Also O_SYNC and O_DSYNC.

 Thanks

 
 Alexander Varshavchick, Metrocom Joint Stock Company
 Phone: (812)118-3322, 118-3115(fax)

Regards,

Heikki

 On Fri, 6 Sep 2002, Heikki Tuuri wrote:

  Date: Fri, 6 Sep 2002 10:27:03 +0300
  From: Heikki Tuuri [EMAIL PROTECTED]
  To: Varshavchick Alexander [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Subject: Re: Performance Problems with InnoDB Row Level Locking...
 
  Alexander,
 
  - Original Message -
  From: Varshavchick Alexander [EMAIL PROTECTED]
  To: 'Heikki Tuuri' [EMAIL PROTECTED]
  Cc: [EMAIL PROTECTED]
  Sent: Friday, September 06, 2002 10:08 AM
  Subject: RE: Performance Problems with InnoDB Row Level Locking...
 
 
   Hi Heikki,
  
   one more question please about innodb_flush_log_at_trx_commit: if
there
   was some way of increasing the delay between log flushes more than 1
sec,
   can you estimate will it bring any real effect in performance? I know
   it'll raise the risk of losing some last transactions if something
   crashes, but we can go for it gaining the speed. How can it be done if
   it's worth doing?
 
  it should not be worth doing.
 
  A disk can do some 70 random writes per second, and the log flush
(calling
  fsync on the log file) typically uses 2 disk writes:
 
  (1) writing the end of the log to the log file on disk, and
  (2) updating the file access timestamps in the 'inode' of the file, if
we
  are using a Unix file system.
 
  Thus the performance benefit of less than 1 log flush per second is
small.
  On the other hand, we might add an option which allows flushing the log
1 -
  50 times per second.
 
  Note that the file flush method fdatasync is supposed to eliminate the
write
  (2) above. Unfortunately there was evidence that fdatasync caused file
  corruption in Linux and Solaris, and it is currently mapped to the
ordinary
  fsync.
 
   Thanks
  
   sql, query
   
   Alexander Varshavchick, Metrocom Joint Stock Company
   Phone: (812)118-3322, 118-3115(fax)
 
  Best regards,
 
  Heikki Tuuri
  Innobase Oy
  ---
  InnoDB - transactions, hot backup, and foreign key support for MySQL
  See http://www.innodb.com, download MySQL-Max from http://www.mysql.com
 
 
 
 
 








Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Orr, Steve

Background:
I've developed a simplistic Perl program to test database performance with
concurrent session queries. The queries involve inserts, updates, and
deletes in order to test database performance in an OLTP multi-user ACID
compliant scenario. Obviously this is not a real world test but it does
stress the database engine's ability to manage transactions so it is
somewhat valid for comparison purposes. 

Problem:
When I do an insert/update/delete I sporadically get the following:
DBD::mysql::st execute failed: Deadlock found when trying to get lock; Try
restarting transaction at dafunc.pm line 340... The word deadlock is
misleading because all the database changes are based on a session number
ensuring that no session is trying to change data that is also being changed
by another session. It appears a time out is occurring before the shared row
level locks are acquired. Changing innodb_lock_wait_timeout doesn't seem to
do much. How do I tune this? 

Given table locking problems associated with MyISAM, it was thought that
InnoDB would perform better but in my tests performance is worse. It appears
InnoDB is taking a long time to acquire shared row level locks and is timing
out. If that's the case then the benefit of row level locks over table level
locks is more than offset by the internal InnoDB overhead to manage the
locks. Any other explanations? Any known performance issues with InnoDB? Any
Perl DBI driver performance issues?

Here's a snippet of the Perl code for the curious:
-
## Update by session, rand_val...
sub updSessionRand {
 my $rtnval = 0;                       # status flag returned to the caller
 eval {
  $estart   = time();
  my $dbh   = $_[1];                   # database handle passed in by the driver script
  $sess_val = $_[2];                   # session number that owns the rows to update
  $sqlStmt  = "UPDATE bench_data SET text_val='updated text by rand_val',
               timestamp_val=$timestamp
               WHERE sess_val = ? AND rand_val BETWEEN ? AND ?";

  my $stmtHdl = $dbh->prepare($sqlStmt);
  $stmtHdl->execute($sess_val, $sess_val + 900, $sess_val + 1500);
  $dbh->commit();
  $edone  = time();
  $totsec = $edone - $estart;
  print "Session:$sess_val, upd02, seconds:$totsec\n";
 };
 if ($@) {
    warn "Session $sess_val upd02 failed.\n $@";
    $rtnval = 1;
 }
 return $rtnval;
}
-

(Side Note: In a separate process I found out that the syntax SELECT ...
FOR UPDATE produces exclusive locks so I changed it to SELECT ... LOCK IN
SHARE MODE and that helped matters. I also tried setting the transaction
isolation level to serializable but that was worse.)
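
A hedged illustration of that difference, using the bench_data table from the
snippet above (the literal session value 42 is made up):

-- FOR UPDATE takes exclusive next-key locks, so even other readers that want
-- to lock the same rows must wait:
SELECT * FROM bench_data WHERE sess_val = 42 FOR UPDATE;
-- LOCK IN SHARE MODE takes shared next-key locks, so concurrent shared-mode
-- readers of the same rows do not block each other:
SELECT * FROM bench_data WHERE sess_val = 42 LOCK IN SHARE MODE;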

I am running MySQL version 3.23.52-max-log with Linux 2.4.18-3 
(RedHat 7.3.2.96-110) on 2 PentiumIII processors with 2GB RAM.

Here are some current innodb related my.cnf settings:
set-variable = innodb_lock_wait_timeout=300
innodb_flush_log_at_trx_commit=1
set-variable = innodb_buffer_pool_size=384M
set-variable = innodb_additional_mem_pool_size=20M
set-variable = innodb_thread_concurrency=4


TIA! 
Steve Orr
sql,query





Re: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Varshavchick Alexander

Heikki, one little question - is that a typo, or can the flush log interval
really be controlled by this option? The value should only be 0 or 1,
according to the documentation...

On Thu, 5 Sep 2002, Heikki Tuuri wrote:

 You can try setting

 innodb_flush_log_at_trx_commit=2

 if you can afford losing some last transactions in a power outage or an
 operating system crash.



sql,query

Alexander Varshavchick, Metrocom Joint Stock Company
Phone: (812)118-3322, 118-3115(fax)






Re: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Heikki Tuuri

Alexander,

- Original Message -
From: Varshavchick Alexander [EMAIL PROTECTED]
To: Heikki Tuuri [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, September 05, 2002 6:51 PM
Subject: Re: Performance Problems with InnoDB Row Level Locking...


 Heikki, one little question - is it a mistype, or can a flush log interval
 duration be controlled by this option? The value should only be 0 or 1 as
 the documentation says...

there is a new value 2 introduced in 3.23.52:


A new setting innodb_flush_log_at_trx_commit=2 makes InnoDB to write the log
to the operating system file cache at each commit. This is almost as fast as
the setting innodb_flush_log_at_trx_commit=0, and the setting 2 also has the
nice feature that in a crash where the operating system does not crash, no
committed transaction is lost. If the operating system crashes or there is a
power outage, then the setting 2 is no safer than the setting 0.



0 = write to log file once per second
1 = write to log file and flush the log to disk at every commit
2 = write to log file at every commit, but only flush to disk once per
second

I think 2 will be the best setting for most high-end users.
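
A quick way to check which mode a running server is using (in these versions
the value itself is normally a my.cnf startup setting, as in the settings
quoted earlier in the thread):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';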

 On Thu, 5 Sep 2002, Heikki Tuuri wrote:

  You can try setting
 
  innodb_flush_log_at_trx_commit=2
 
  if you can afford losing some last transactions in a power outage or an
  operating system crash.
 

 
 Alexander Varshavchick, Metrocom Joint Stock Company
 Phone: (812)118-3322, 118-3115(fax)

Best regards,

Heikki Tuuri
Innobase Oy
---
InnoDB - transactions, hot backup, and foreign key support for MySQL
See http://www.innodb.com, download MySQL-Max from http://www.mysql.com








RE: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Orr, Steve

Hello again Heikki and thanks for your informative reply.

Regarding...
 innodb_flush_log_at_trx_commit=2
This is not an option as we must guarantee no lost transactions. But I will
test it out of curiosity just to see what the performance difference is. 

Regarding...
 InnoDB uses next-key locking to ensure serializability and that 'phantom
 rows' do not appear. You can get lock conflicts even if the queries
 seemingly would not overlap.
Does this mean that InnoDB is locking the next leaf in the B-Tree? That
would explain the problem as those rows could be updated by other sessions.
If that's the case then I think the next-key locking architecture is the
problem because it introduces artificial deadlocks on heavily used tables
and indexes that would otherwise not occur. 
(See http://portal.acm.org/citation.cfm?id=335461&dl=ACM&coll=portal#)

Using the same code on the same machine I'm getting dramatically better
performance with PostgreSQL and Oracle, both of which implement multi-version
concurrency control with an ANSI isolation level of read committed. I
understand that this isolation level allows non-repeatable reads, but this
is easily overcome programmatically (if needed). It seems like the repeatable
read isolation level isn't as practical and isn't really needed that often.
Based on the workarounds you listed in the coping-with-deadlocks link, I
don't see any way around my performance problem. (I had already tried
transaction resubmission but it just perpetuates the problem.) 

If the repeatable read isolation level presents a performance problem that I
can't work around then I'd rather have the read committed isolation level.
Are there any plans to enable the read committed isolation level in InnoDB?
It seems like read committed is the most commonly implemented isolation
level amongst the other database vendors so what was behind the decision to
implement the repeatable read isolation level in InnoDB? Just curious. :-)

In the link you gave you state:
You can get deadlocks even in the case of transactions which just insert or
delete a single row. That is because these operations are not really
'atomic': they automatically set locks on the (possibly several) index
records of the row inserted/deleted.

So... if these operations are not atomic then does that mean that MySQL
still does not pass the ACID test even with InnoDB?

Thanks again and I'm eagerly awaiting your reply.


Respectfully,
Steve Orr





-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 05, 2002 9:05 AM
To: [EMAIL PROTECTED]
Subject: Re: Performance Problems with InnoDB Row Level Locking...


Steve,

- Original Message -
From: Orr, Steve [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Thursday, September 05, 2002 5:52 PM
Subject: Performance Problems with InnoDB Row Level Locking...


 Background:
 I've developed a simplistic Perl program to test database performance with
 concurrent session queries. The queries involve inserts, updates, and
 deletes in order to test database performance in an OLTP mult-user ACID
 compliant scenario. Obviously this is not a real world test but it does
 stress the database engine's ability to manage transactions so it is
 somewhat valid for comparison purposes.

 Problem:
 When I do an insert/update/delete I sporadically get the following:
 DBD::mysql::st execute failed: Deadlock found when trying to get lock; Try
 restarting transaction at dafunc.pm line 340... The word deadlock is
 misleading because all the database changes are based on a session number
 ensuring that no session is trying to change data that is also being
changed

InnoDB uses next-key locking to ensure serializability and that 'phantom
rows' do not appear. You can get lock conflicts even if the queries
seemingly would not overlap.

For example,

CREATE TABLE emptytable(a INT NOT NULL, PRIMARY KEY(a)) TYPE = InnoDB;

user 1: SELECT * FROM emptytable WHERE a = 100 LOCK IN SHARE MODE;

will make

user 2: INSERT INTO emptytable VALUES (150);

wait for a next-key lock on the 'supremum' of the primary index.

These operations would not overlap when using so-called predicate locking,
but that is too expensive to implement.

In transactional databases deadlocks are a classic problem. Please refer to
http://www.innodb.com/ibman.html#Cope_with_deadlocks.

 by another session. It appears a time out is occurring before the shared
row
 level locks are acquired. Changing innodb_lock_wait_timeout doesn't seem
to
 do much. How do I tune this?

 Given table locking problems associated with MyISAM, it was thought that
 InnoDB would perform better but in my tests performance is worse. It
appears
 InnoDB is taking a long time to acquire shared row level locks and is
timing
 out. If that's the case then the benefit of row level locks over table
level
 locks is more than offset by the internal InnoDB overhead to manage the
 locks. Any other explanations? Any known performance issues

Re: Performance Problems with InnoDB Row Level Locking...

2002-09-05 Thread Heikki Tuuri

Steve,

- Original Message -
From: Orr, Steve [EMAIL PROTECTED]
To: 'Heikki Tuuri' [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, September 05, 2002 9:49 PM
Subject: RE: Performance Problems with InnoDB Row Level Locking...


 Hello again Heikki and thanks for your informative reply.

 Regarding...
  innodb_flush_log_at_trx_commit=2
 This is not an option as we must guarantee no lost transactions. But I
will
 test it out of curiosity just to see what the performance difference is.

if you want to get high performance for disk flushes, you should buy a disk
with a battery-backed cache, and check that the disk driver is aware of it.

Otherwise we are constrained by the disk rotation speed, some 150 rounds /
second.

 Regarding...
  InnoDB uses next-key locking to ensure serializability and that 'phantom
  rows' do not appear. You can get lock conflicts even if the queries
  seemingly would not overlap.
 Does this mean that InnoDB is locking the next leaf in the B-Tree? That

No, only the next index record in alphabetical order.

 would explain the problem as those rows could be updated by other
sessions.
 If that's the case then I think the next-key locking architecture is the
 problem because it introduces artificial deadlocks on heavily used
tables
 and indexes that would otherwise not occur.

Yes, that is true.

 (See http://portal.acm.org/citation.cfm?id=335461&dl=ACM&coll=portal#)

 Using the same code on the same machine I'm getting dramatically better
 performance with PostgreSQL and Oracle both of which implement
multi-version
 concurrency control with an ANSI isolation level of read committed. I
 understand that this isolation level allows for unrepeatable reads but
this
 is easily overcome programatically (if needed). It seems like the
repeatable
 read isolation level isn't as practical and isn't really needed that
often.
 Based on the work arounds you listed in the coping with deadlocks link, I
 don't see any way around my performance problem. (I had already tried
 transaction resubmission but it just perpetuates the problem.)

You are getting so many deadlocks that some transactions do not pass at all?
Then best to serialize them with LOCK TABLES or the 'semaphore table' method
explained in the manual. Too much concurrency sometimes degrades
performance, and it is better to fall back to coarser granularity locking.
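
A minimal sketch of the 'semaphore table' idea Heikki mentions, with
hypothetical names: every transaction locks the same single row first, so the
contended work runs one transaction at a time without table-level locks.

CREATE TABLE semaphore (id INT NOT NULL PRIMARY KEY) TYPE = InnoDB;
INSERT INTO semaphore VALUES (1);

BEGIN;
SELECT * FROM semaphore WHERE id = 1 FOR UPDATE;  -- blocks until the previous holder commits
-- ... the inserts/updates/deletes that were deadlocking go here ...
COMMIT;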

 If the repeatable read isolation level presents a performance problem that
I
 can't work around then I'd rather have the read committed isolation level.
 Are there any plans to enable the read committed isolation level in
InnoDB?
 It seems like read committed is the most commonly implemented isolation
 level amongst the other database vendors so what was behind the decision
to
 implement the repeatable read isolation level in InnoDB? Just curious. :-)

No, unfortunately MySQL replication and recovery from the binlog requires
that updating transactions have serializable execution. We log complete SQL
statements. They have to be performed in the exact same way in the slave or
in recovery. Consider

INSERT INTO ... SELECT ...

If phantom rows could appear in the result set of the SELECT, the number of
rows inserted in the slave could differ from what happened in the master.

 In the link you gave you state:
 You can get deadlocks even in the case of transactions which just insert
or
 delete a single row. That is because these operations are not really
 'atomic': they automatically set locks on the (possibly several) index
 records of the row inserted/deleted.

 So... if these operations are not atomic then does that mean that MySQL
 still does not pass the ACID test even with InnoDB?

InnoDB of course complies with the 'ACID' rules. What I meant by the above
is that even a single insert into a table is a complex sequence of
operations. People often forget that several indexes may be updated, foreign
key checks are made in other tables. Deadlocks can arise in unexpected ways.

Next-key locking actually makes InnoDB more 'ACID' than Oracle or
PostgreSQL. Phantom rows compromise the 'C' and 'I' in Oracle and
PostgreSQL. Consider a consistency rule: sum of all balances in table
ACCOUNT must be 1,000,000. How do you keep that true if your database
cannot block phantom rows?
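
A hedged illustration of that point: with next-key locking, a locking read
over the whole table also locks the gaps, so no phantom account row can be
inserted while the sum is being verified ('balance' is a hypothetical column
name for table ACCOUNT):

SELECT SUM(balance) FROM account LOCK IN SHARE MODE;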


Consistency
The database is transformed from one valid state to another valid state. A
transaction is legal only if it obeys user-defined integrity constraints.
Illegal transactions aren't allowed and, if an integrity constraint can't be
satisfied the transaction is rolled back. For example, suppose that you
define a rule that postings in a discussion forum table must be tied to a
valid user ID. Then you hire Joe Novice to write some admin pages. Joe
writes a delete-user page that doesn't bother to check whether or not the
deletion will result in an orphaned discussion forum posting. Oracle will
check, though, and abort any transaction that would result in you having a
discussion forum posting by a deleted
