MySQL 5.1: Views, queries, updates and performance issues

2011-12-29 Thread Bruce Ferrell
Hi all,

I've got some semi-general questions on the topics in the title.  What I'm 
looking for is more in the line of theory than query specifics.  I am but a 
poor peasant boy.

What I have is an application that makes heavy use of views.  If I understand 
views correctly (and I may not), views are representations of queries 
themselves. The guy who wrote
the app chose to do updates and joins against the views instead of against the 
underlying tables themselves.

I've tuned to meet the gross memory requirements, and mysqltuner.pl is saying 
that 45% of the joins are without indexes. With the slow query log on and 
log_queries_not_using_indexes enabled, I'm frequently seeing updates that 
take more than 2 seconds to complete... often MUCH longer (how does 157 
seconds grab you?).
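
For reference, the logging setup described here corresponds roughly to the
following 5.1 options (a minimal sketch; the 2-second threshold and the file
path are illustrative, not this server's actual values):

[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/mysql-slow.log
long_query_time               = 2
log_queries_not_using_indexes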

So, with that background, what would you do next, and is it possible that 
this use of views is a significant contributor to the problem?

Bruce Ferrell




Re: MySQL 5.1: Views, queries, updates and performance issues

2011-12-29 Thread Arthur Fuller
At first blush, your problem would appear to concern the lack of index-use.
 That's where I would begin my investigation. It might be painstaking, but
I would do something like this:

For each view
Look at the Join(s) and see what columns are being joined
Look at the tables and see what columns are being indexed
If any columns are not indexed that should be, create said index(es)
Next view

Of course, this process could be optimized by looking at the views in terms
of their frequency of use.
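
In SQL terms, one pass of that audit might look like this (a sketch;
orders_v, orders and customer_id are hypothetical stand-ins for the real
view, table and join column):

SHOW CREATE VIEW orders_v;   -- see which tables and columns the view joins
SHOW INDEX FROM orders;      -- check whether those join columns are indexed
EXPLAIN SELECT * FROM orders_v WHERE customer_id = 42;
                             -- key = NULL here means no index is being used
ALTER TABLE orders ADD INDEX idx_customer_id (customer_id);
                             -- add the missing index on the base table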

Finally, you didn't mention what sort of front end you're using. It's
possible that you might benefit by using stored procedures rather than
views. That switch would of course cost you some time invested in changing
the front end to pass explicit parameters.
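
For illustration, the kind of switch meant here looks roughly like this (a
sketch; the procedure, table and parameter names are hypothetical):

-- instead of: UPDATE orders_v SET status = 'shipped' WHERE customer_id = ?
DELIMITER //
CREATE PROCEDURE ship_orders(IN p_customer_id INT)
BEGIN
  -- the update now targets the base table directly
  UPDATE orders SET status = 'shipped' WHERE customer_id = p_customer_id;
END //
DELIMITER ;

CALL ship_orders(42);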

Hope this helps,
Arthur

On Thu, Dec 29, 2011 at 12:50 PM, Bruce Ferrell bferr...@baywinds.org wrote:

 Hi all,

 I've got some semi-general questions on the topics in the title.  What I'm
 looking for is more in the line of theory than query specifics.  I am but a
 poor peasant boy.

 What I have is an application that makes heavy use of views.  If I
 understand views correctly (and I may not), views are representations of
 queries themselves. The guy who wrote
 the app chose to do updates and joins against the views instead of against
 the underlying tables themselves.

 I've tuned to meet the gross memory requirements, and mysqltuner.pl is
 saying that 45% of the joins are without indexes. With the slow query log
 on and log_queries_not_using_indexes enabled, I'm frequently seeing
 updates that take more than 2 seconds to complete... often MUCH longer
 (how does 157 seconds grab you?).

 So, with that background, what would you do next, and is it possible that
 this use of views is a significant contributor to the problem?

 Bruce Ferrell


-- 
Cell: 647.710.1314

Thirty spokes converge on a hub
but it's the emptiness
that makes a wheel work
   -- from the Daodejing


Re: innodb/myisam performance issues

2008-09-06 Thread Brent Baisley
Hey Josh, I came in really late on this discussion. It's been my  
experience that InnoDB is great until the size of the database/indexes  
surpasses the amount of memory you can give to InnoDB for  caching.  
The performance drop off is pretty quick and dramatic. I've seen this  
happen on live tables that performed great one day and then horrible  
the next. Although this was on a table of about 20 million rows, not 130M.
Based on your table size, you would need to be running a 64-bit system  
and 64-bit mysql so you could allocate enough memory to InnoDB. You  
don't see the system swapping because InnoDB is working within its  
defined memory allocation limits. Using EXPLAIN on your queries  
probably isn't showing you anything helpful because MySQL is using the  
proper indexes, but InnoDB can't fit the entire index in memory. My  
best guess is that InnoDB is loading part of the index, searching,  
loading the next part, searching, etc. Which is why you don't see  
consistent high IO or CPU. If you run vmstat 1, that may show you that  
IO is occurring, followed by CPU, then back to IO.
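
A quick way to check whether the data/indexes have outgrown the buffer pool
is to compare InnoDB's total footprint against the pool size (a sketch;
information_schema is available from MySQL 5.0 on):

SELECT SUM(data_length + index_length) / 1024 / 1024 / 1024 AS innodb_gb
  FROM information_schema.TABLES
 WHERE engine = 'InnoDB';

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

If innodb_gb is well past the buffer pool size, the working set no longer
fits in memory and the load/search/load pattern described above is likely.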


For very large tables I stick with MyISAM and use MERGE tables if they  
are applicable.
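
For reference, a minimal MERGE setup looks like this (a sketch; the table
and column names are illustrative, and the member tables must be identically
defined MyISAM tables):

CREATE TABLE log_2008_08 (id INT NOT NULL, msg VARCHAR(255), KEY (id))
  ENGINE=MyISAM;
CREATE TABLE log_2008_09 (id INT NOT NULL, msg VARCHAR(255), KEY (id))
  ENGINE=MyISAM;

CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255), KEY (id))
  ENGINE=MERGE UNION=(log_2008_08, log_2008_09) INSERT_METHOD=LAST;
  -- rows inserted through log_all land in the last member table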


Hope that helps or points you in the right direction.

Brent Baisley


On Sep 4, 2008, at 4:26 PM, Josh Miller wrote:


Good afternoon,

I have recently converted a large table from MyISAM to InnoDB and am  
experiencing severe performance issues because of it.  HTTP response  
times have gone from avg .25 seconds to avg 2-3 seconds.  Details  
follow:


PHP/MySQL website, no memcached, 3 web nodes that interact with DB,  
one that serves images, one master DB that serves all reads/writes,  
backup DB that only serves for backup/failover at this time (app  
being changed to split reads/writes, not yet).


The one table that I converted is 130M rows, around 10GB data MyISAM  
to 22GB InnoDB.  There are around 110 tables on the DB total.



My.cnf abbreviated settings:

[mysqld]
port  = 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer= 3G
sort_buffer_size  = 45M
max_allowed_packet  = 16M
table_cache = 2048

tmp_table_size= 512M
max_heap_table_size = 512M

myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 10G
myisam_repair_threads   = 1
thread_cache_size   = 300

query_cache_type  = 1
query_cache_limit = 1M
query_cache_size  = 600M

thread_concurrency  = 8
max_connections   = 2048
sync_binlog = 1

innodb_buffer_pool_size = 14G
innodb_log_file_size  = 20M
innodb_flush_log_at_trx_commit=1
innodb_flush_method = O_DIRECT
skip-innodb-doublewrite
innodb_support_xa = 1
innodb_autoextend_increment = 16
innodb_data_file_path   = ibdata1:40G:autoextend

We're seeing a significantly higher percentage of IO wait on the  
system,  averaging 20% now with the majority of that being user IO.   
The system is not swapping at all.


Any ideas for what to check or modify to increase the performance  
here and let MyISAM and InnoDB play better together?  The plan is to  
convert all tables to InnoDB, which does not seem like a great idea  
at this point; we're considering moving back to MyISAM.


Thanks!
Josh Miller, RHCE








Re: innodb/myisam performance issues

2008-09-05 Thread Josh Miller

Aaron Blew wrote:

Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4.  You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the key_buffer
allocation and put it towards the innodb buffer pool

What are the system's specs?  What's its underlying storage?  What flags
were used when you created the filesystem(s)?  What OS/Version of MySQL are
you running?  Could you send us some iostat output?


Thanks for all of your suggestions -- we've switched back to MyISAM 
until we can test this better.


* increasing the innodb_buffer_pool had no apparent effect on performance.
* System is a Dell PE2950 4 core, 32GB RAM, RAID-10 local disks.
* File system is plain ext3, 'mke2fs -j'
* Running RHEL 4.4, MySQL 5.0.66a-enterprise (open ticket with MySQL, 
working all angles here).

* iostat output sample (iostat -x 5):

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.11    0.00    3.37   23.44    0.00   68.08

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             294.00       328.00      2560.00        656       5120
dm-0            359.50       328.00      2560.00        656       5120
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          14.27    0.00    6.63   22.28    0.00   56.82

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             360.70        55.72      4815.92        112       9680
dm-0            456.22        55.72      4815.92        112       9680
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          34.08    0.00   23.60   15.86    0.00   26.47

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             348.00       316.00      3304.00        632       6608
dm-0            446.00       316.00      3304.00        632       6608
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          29.59    0.00   27.84   15.23    0.00   27.34

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             475.00       152.00      4284.00        304       8568
dm-0            554.50       152.00      4284.00        304       8568
dm-1              0.00         0.00         0.00          0          0

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          23.28    0.00   15.77   18.15    0.00   42.80

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda             312.50       192.00      3044.00        384       6088
dm-0            401.50       192.00      3044.00        384       6088
dm-1              0.00         0.00         0.00          0          0



Interesting note:  when I switched to using the myisam version of the 
table with the old configuration, we still had very poor performance 
with significant CPU IO wait as you can see from the above iostat.  This 
was without any load on the InnoDB table at all.  Once I restarted with 
the new settings, the load and performance recovered immediately.  You 
can see from this iostat output where the restart occurred:


Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    61.68  260.88  610.58  2158.88  5384.43     8.66    78.96   90.60   1.05  91.52
dm-0              0.00     0.00  261.08  673.05  2158.88  5384.43     8.08    85.99   92.06   0.98  91.54
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.50    0.00    2.05   19.45    0.00   75.00

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    30.40  373.20  251.00  3340.80  2251.20     8.96    31.07   49.77   1.13  70.64
dm-0              0.00     0.00  373.00  281.40  3340.80  2251.20     8.55    33.85   51.72   1.08  70.72
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.15    0.00    2.00    2.40    0.00   91.45

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     8.40   20.60   54.20   206.40   500.80     9.45     5.30   70.80   1.37  10.28
dm-0              0.00     0.00   20.60   62.60   206.40   500.80     8.50     5.57   66.90   1.23  10.26
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.75    0.00    0.60    2.25


innodb/myisam performance issues

2008-09-04 Thread Josh Miller

Good afternoon,

I have recently converted a large table from MyISAM to InnoDB and am 
experiencing severe performance issues because of it.  HTTP response 
times have gone from avg .25 seconds to avg 2-3 seconds.  Details follow:


PHP/MySQL website, no memcached, 3 web nodes that interact with DB, one 
that serves images, one master DB that serves all reads/writes, backup 
DB that only serves for backup/failover at this time (app being changed 
to split reads/writes, not yet).


The one table that I converted is 130M rows, around 10GB data MyISAM to 
22GB InnoDB.  There are around 110 tables on the DB total.



My.cnf abbreviated settings:

[mysqld]
port  = 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer= 3G
sort_buffer_size  = 45M
max_allowed_packet  = 16M
table_cache = 2048

tmp_table_size= 512M
max_heap_table_size = 512M

myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 10G
myisam_repair_threads   = 1
thread_cache_size   = 300

query_cache_type  = 1
query_cache_limit = 1M
query_cache_size  = 600M

thread_concurrency  = 8
max_connections   = 2048
sync_binlog = 1

innodb_buffer_pool_size = 14G
innodb_log_file_size  = 20M
innodb_flush_log_at_trx_commit=1
innodb_flush_method = O_DIRECT
skip-innodb-doublewrite
innodb_support_xa = 1
innodb_autoextend_increment = 16
innodb_data_file_path   = ibdata1:40G:autoextend

We're seeing a significantly higher percentage of IO wait on the system, 
 averaging 20% now with the majority of that being user IO.  The system 
is not swapping at all.


Any ideas for what to check or modify to increase the performance here 
and let MyISAM and InnoDB play better together?  The plan is to convert 
all tables to InnoDB, which does not seem like a great idea at this 
point; we're considering moving back to MyISAM.


Thanks!
Josh Miller, RHCE




RE: innodb/myisam performance issues

2008-09-04 Thread Tom Horstmann
Hello Josh,

Why did you move your table to InnoDB? Your description doesn't sound like
the table's rows are accessed concurrently and need to be locked. Are you
sure you need InnoDB for this table?

If you do need InnoDB, you will probably need to redesign your queries and
table structure to make them a better fit for InnoDB.

With kind regards,

TomH

-Original Message-
From: Josh Miller [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 04, 2008 10:27 PM
To: mysql@lists.mysql.com
Subject: innodb/myisam performance issues

Good afternoon,

I have recently converted a large table from MyISAM to InnoDB and am 
experiencing severe performance issues because of it.  HTTP response 
times have gone from avg .25 seconds to avg 2-3 seconds.  Details follow:

PHP/MySQL website, no memcached, 3 web nodes that interact with DB, one 
that serves images, one master DB that serves all reads/writes, backup 
DB that only serves for backup/failover at this time (app being changed 
to split reads/writes, not yet).

The one table that I converted is 130M rows, around 10GB data MyISAM to 
22GB InnoDB.  There are around 110 tables on the DB total.


My.cnf abbreviated settings:

[mysqld]
port  = 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer= 3G
sort_buffer_size  = 45M
max_allowed_packet  = 16M
table_cache = 2048

tmp_table_size= 512M
max_heap_table_size = 512M

myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 10G
myisam_repair_threads   = 1
thread_cache_size   = 300

query_cache_type  = 1
query_cache_limit = 1M
query_cache_size  = 600M

thread_concurrency  = 8
max_connections   = 2048
sync_binlog = 1

innodb_buffer_pool_size = 14G
innodb_log_file_size  = 20M
innodb_flush_log_at_trx_commit=1
innodb_flush_method = O_DIRECT
skip-innodb-doublewrite
innodb_support_xa = 1
innodb_autoextend_increment = 16
innodb_data_file_path   = ibdata1:40G:autoextend

We're seeing a significantly higher percentage of IO wait on the system, 
  averaging 20% now with the majority of that being user IO.  The system 
is not swapping at all.

Any ideas for what to check or modify to increase the performance here 
and let MyISAM and InnoDB play better together?  The plan is to convert 
all tables to InnoDB, which does not seem like a great idea at this 
point; we're considering moving back to MyISAM.

Thanks!
Josh Miller, RHCE






Re: innodb/myisam performance issues

2008-09-04 Thread Josh Miller

Tom Horstmann wrote:

Hello Josh,

Why did you move your table to InnoDB? Your description doesn't sound like
the table's rows are accessed concurrently and need to be locked. Are you
sure you need InnoDB for this table?

If you do need InnoDB, you will probably need to redesign your queries and
table structure to make them a better fit for InnoDB.



Hi Tom,

The rows in this table are accessed concurrently, as any activity on the 
site is recorded/added/updated in this table.  We have several others 
which serve similar purposes (sessions, totaltraffic, etc.).


I don't disagree; the application needs to be written to perform better 
and use MySQL more efficiently.  I need to find a way to make it work 
better in the interim :)


Thanks!
Josh Miller, RHCE




RE: innodb/myisam performance issues

2008-09-04 Thread Tom Horstmann
 The rows in this table are accessed concurrently, as any activity on the 
 site is recorded/added/updated in this table.  We have several others 
 which serve similar purposes (sessions, totaltraffic, etc.).

Is the performance lag occurring with read-only queries as well as with
updates/inserts to the InnoDB table?
Is the table mostly read or mostly written?

You could set innodb_flush_log_at_trx_commit=2 if you can afford to lose
the latest InnoDB writes in case of a MySQL crash. It should give you much
less IO for writes on your InnoDB tables.
Please see http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html for
a
detailed description. 
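
innodb_flush_log_at_trx_commit is a dynamic variable, so it can be tried at
runtime before committing it to my.cnf (a sketch):

SET GLOBAL innodb_flush_log_at_trx_commit = 2;
-- the log is still written at each commit but only flushed to disk about
-- once per second, so an OS crash loses at most ~1 second of transactions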

Please also read about innodb_flush_method at this site and possibly try
other settings.

TomH





RE: innodb/myisam performance issues

2008-09-04 Thread Tom Horstmann
Addendum..
Please also try increasing your innodb_log_file_size to a much higher value
if you
have lots of writes/transactions. Maybe 250MB is a good first try. 
You need to delete/move the InnoDB logs before restart.

Not sure about this, but please also set innodb_log_buffer_size. Try
something
between 16-32MB if you have many transactions.
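
Put together, the change might look like this (a sketch of the usual
procedure for 4.x/5.0; the datadir path is an assumption):

[mysqld]
innodb_log_file_size   = 250M
innodb_log_buffer_size = 16M

# Before restarting with the new log size: shut mysqld down cleanly,
# move the old /var/lib/mysql/ib_logfile0 and ib_logfile1 out of the way,
# then start mysqld - InnoDB recreates the log files at the new size.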

TomH

-Original Message-
From: Tom Horstmann [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 04, 2008 11:15 PM
To: 'Josh Miller'
Cc: mysql@lists.mysql.com
Subject: RE: innodb/myisam performance issues

 The rows in this table are accessed concurrently, as any activity on the 
 site is recorded/added/updated in this table.  We have several others 
 which serve similar purposes (sessions, totaltraffic, etc.).

Is the performance lag occurring with read-only queries as well as with
updates/inserts to the InnoDB table?
Is the table mostly read or mostly written?

You could set innodb_flush_log_at_trx_commit=2 if you can afford to lose
the latest InnoDB writes in case of a MySQL crash. It should give you much
less IO for writes on your InnoDB tables.
Please see http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html for
a
detailed description. 

Please also read about innodb_flush_method at this site and possibly try
other settings.

TomH







Re: innodb/myisam performance issues

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 4:26 PM, Josh Miller [EMAIL PROTECTED] wrote:
 We're seeing a significantly higher percentage of IO wait on the system,
  averaging 20% now with the majority of that being user IO.  The system is
 not swapping at all.

O_DIRECT may not be the best setting for your hardware.  You might
want to go back to the default.

 Any ideas for what to check or modify to increase the performance here and
 let MyISAM and InnoDB play better together?

What you really need to do is look at which queries are slow and run
EXPLAIN plans for them.  Most big performance problems like you're
describing are due to index issues, so that's where you should be
looking.  Server tuning comes later.
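
The loop is: take a query from the slow query log and ask MySQL how it plans
to execute it (a sketch; the table and column names are hypothetical):

EXPLAIN SELECT * FROM hits WHERE account_id = 12345\G
-- type: ALL with a huge rows estimate means a full table scan;
-- key: NULL means no index satisfied the WHERE clause, so one is missing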

- Perrin




Re: innodb/myisam performance issues

2008-09-04 Thread Josh Miller

Tom Horstmann wrote:

Addendum..
Please also try increasing your innodb_log_file_size to a much higher value
if you
have lots of writes/transactions. Maybe 250MB is a good first try. 
You need to delete/move the InnoDB logs before restart.


Not sure about this, but please also set innodb_log_buffer_size. Try
something
between 16-32MB if you have many transactions.


Ok, we've increased the innodb_log_file_size to 500M, and that has not 
changed the IO wait at all so far (after 1 hour).



Thanks!
Josh Miller, RHCE




Re: innodb/myisam performance issues

2008-09-04 Thread Josh Miller

Perrin Harkins wrote:

What you really need to do is look at which queries are slow and run
EXPLAIN plans for them.  Most big performance problems like you're
describing are due to index issues, so that's where you should be
looking.  Server tuning comes later.



We definitely need to work on re-designing the queries and indexes.  We 
have a less than 50% index usage rate which is disastrous.


We'd like to prove InnoDB out and move on to that storage engine for the 
transaction support, MVCC, etc., but we're finding that performance is poor.


Thanks!
Josh Miller, RHCE




RE: innodb/myisam performance issues

2008-09-04 Thread Tom Horstmann
Perrin said it right.

If your app needs InnoDB (transactions, row-level locks...), write it that
way.
Don't expect performance from a MyISAM-oriented app when using InnoDB.

TomH

-Original Message-
From: Josh Miller [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 05, 2008 12:42 AM
To: Tom Horstmann
Cc: mysql@lists.mysql.com
Subject: Re: innodb/myisam performance issues

Tom Horstmann wrote:
 Addendum..
 Please also try increasing your innodb_log_file_size to a much higher
value
 if you
 have lots of writes/transactions. Maybe 250MB is a good first try. 
 You need to delete/move the InnoDB logs before restart.
 
 Not sure about this, but please also set innodb_log_buffer_size. Try
 something
 between 16-32MB if you have many transactions.

Ok, we've increased the innodb_log_file_size to 500M, and that has not 
changed the IO wait at all so far (after 1 hour).


Thanks!
Josh Miller, RHCE






Re: innodb/myisam performance issues

2008-09-04 Thread Perrin Harkins
On Thu, Sep 4, 2008 at 6:43 PM, Josh Miller [EMAIL PROTECTED] wrote:
 We'd like to prove InnoDB out and move on to that storage engine for the
 transaction support, MVCC, etc., but we're finding that performance is poor.

Well, thousands of large InnoDB database users prove that the engine
itself has good performance, so I'd say you're really at the stage of
working on your own indexes now.  You probably don't need to change
your queries, just the indexes.

- Perrin




Re: innodb/myisam performance issues

2008-09-04 Thread Aaron Blew
Here are a couple ideas:
* Decrease innodb_autoextend_increment to 8 or even 4.  You may see
additional IO wait because you're pre-allocating space in chunks
disproportionate to what you immediately need, causing bursty performance.
* If your remaining MyISAM tables don't need it, take 2GB of the key_buffer
allocation and put it towards the innodb buffer pool
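
Against the posted my.cnf, those two suggestions would look roughly like this
(a sketch, not tested values):

key_buffer                  = 1G    # was 3G; size it to the remaining MyISAM tables
innodb_buffer_pool_size     = 16G   # was 14G; give the reclaimed RAM to InnoDB
innodb_autoextend_increment = 8     # was 16; pre-allocate in smaller chunks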

What are the system's specs?  What's its underlying storage?  What flags
were used when you created the filesystem(s)?  What OS/Version of MySQL are
you running?  Could you send us some iostat output?

Thanks and good luck,
-Aaron

On Thu, Sep 4, 2008 at 1:26 PM, Josh Miller [EMAIL PROTECTED] wrote:

 Good afternoon,

 I have recently converted a large table from MyISAM to InnoDB and am
 experiencing severe performance issues because of it.  HTTP response times
 have gone from avg .25 seconds to avg 2-3 seconds.  Details follow:

 PHP/MySQL website, no memcached, 3 web nodes that interact with DB, one
 that serves images, one master DB that serves all reads/writes, backup DB
 that only serves for backup/failover at this time (app being changed to
 split reads/writes, not yet).

 The one table that I converted is 130M rows, around 10GB data MyISAM to
 22GB InnoDB.  There are around 110 tables on the DB total.


 My.cnf abbreviated settings:

 [mysqld]
 port  = 3306
 socket  = /tmp/mysql.sock
 skip-locking
 key_buffer= 3G
 sort_buffer_size  = 45M
 max_allowed_packet  = 16M
 table_cache = 2048

 tmp_table_size= 512M
 max_heap_table_size = 512M

 myisam_sort_buffer_size = 512M
 myisam_max_sort_file_size = 10G
 myisam_repair_threads   = 1
 thread_cache_size   = 300

 query_cache_type  = 1
 query_cache_limit = 1M
 query_cache_size  = 600M

 thread_concurrency  = 8
 max_connections   = 2048
 sync_binlog = 1

 innodb_buffer_pool_size = 14G
 innodb_log_file_size  = 20M
 innodb_flush_log_at_trx_commit=1
 innodb_flush_method = O_DIRECT
 skip-innodb-doublewrite
 innodb_support_xa = 1
 innodb_autoextend_increment = 16
 innodb_data_file_path   = ibdata1:40G:autoextend

 We're seeing a significantly higher percentage of IO wait on the system,
  averaging 20% now with the majority of that being user IO.  The system is
 not swapping at all.

 Any ideas for what to check or modify to increase the performance here and
 let MyISAM and InnoDB play better together?  The plan is to convert all
 tables to InnoDB, which does not seem like a great idea at this point; we're
 considering moving back to MyISAM.

 Thanks!
 Josh Miller, RHCE





Re: key_buffer, performance issues and considerations.

2006-10-26 Thread Chris

RV Tec wrote:

Folks,

A few months ago, I came here asking for directions on how I could 
improve performance of MySQL. Back then, I was using OpenBSD on a dual 
Opteron 248 with 2GB, LSI MegaRAID 320-1, 15k RPM SCSI discs, MySQL was 
(still is) 4.0.27, the database is MyISAM, reaching 50GB.


After some considerations, we have decided to upgrade things in three 
steps:


1) Bumped the OS to Gentoo Linux, GCC 3.4.4, glibc 2.4/NPTL, deadline 
scheduler, xfs and kernel 2.6.16-gentoo-r7. This, by itself, was already 
a great performance improvement. And it is stable/reliable as well.


2) Jumped from 2GB RAM to 16GB, changed RAID card to a dual-channel (so 
the database has a channel of its own). This proved that memory was our 
greatest bottleneck. I can honestly say that now I'm happy with the 
performance.


My question is: key_buffer seems to be the solution to all my problems. 
On a 16GB server, I'm using only 37.5% of it for the key_buffer (6144M). 
If I make this larger, will it be a performance improvement or a stability 
killer?


With stuff like this I'd suggest a conservative approach. Increase it 
slightly, see how the server copes.. increase it slightly again etc 
until you find the sweet spot.


You'll need to run the server for at least a couple of days at each 
stage to see what issues the server has with the settings.
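
From 4.1 on, key_buffer_size is a dynamic variable, so each step can be
applied and measured without a restart (a sketch; on 4.0 each step means a
my.cnf edit and a restart instead):

SET GLOBAL key_buffer_size = 7 * 1024 * 1024 * 1024;   -- e.g. 6G -> 7G

SHOW STATUS LIKE 'Key_read%';
-- watch Key_reads / Key_read_requests: when the miss ratio stops
-- improving, the extra memory is buying nothing and you've passed
-- the sweet spot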





key_buffer, performance issues and considerations.

2006-10-25 Thread RV Tec

Folks,

A few months ago, I came here asking for directions on how I could improve 
performance of MySQL. Back then, I was using OpenBSD on a dual Opteron 248 
with 2GB, LSI MegaRAID 320-1, 15k RPM SCSI discs, MySQL was (still is) 
4.0.27, the database is MyISAM, reaching 50GB.


After some considerations, we have decided to upgrade things in three 
steps:


1) Bumped the OS to Gentoo Linux, GCC 3.4.4, glibc 2.4/NPTL, deadline 
scheduler, xfs and kernel 2.6.16-gentoo-r7. This, by itself, was already a 
great performance improvement. And it is stable/reliable as well.


2) Jumped from 2GB RAM to 16GB, changed RAID card to a dual-channel (so 
the database has a channel of its own). This proved that memory was our 
greatest bottleneck. I can honestly say that now I'm happy with the 
performance.


My question is: key_buffer seems to be the solution to all my problems. On 
a 16GB server, I'm using only 37.5% of it for the key_buffer (6144M). If I 
make this larger, will it be a performance improvement or a stability killer?


3) Upgrade MySQL 4.0.27 to MySQL 5.0 -- this is going to be painful, but 
it is already on the way.


Thanks a lot!

RV





Re: key_buffer, performance issues and considerations.

2006-10-25 Thread Dan Buettner

RV, you may find that increasing the size of the key_buffer could have
a negative effect on performance.  You want to make sure that there's
enough physical RAM for all the processes on the machine, to avoid
paging/swapping to disk, which is very slow.

Here's an interesting note:
http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#optvar_key_buffer_size

I was surprised when I read that page as it says (for 5.0 anyway) that
4 GB is the largest size allowable for key_buffer, but you're using 6
GB.  This setting is for MyISAM only I believe, which I take it is
what you're using?

One suggestion if you truly can't utilize more than 4 GB for the
key_buffer is to look at setting up an in-memory filesystem to hold
temp space for MySQL, which will be quite a bit faster than temp space
on disk.  That would let you really use your 16GB well.  You can also
look at increasing some of the other buffers, sort buffers etc., but
do your homework as many of those are per-thread settings, not global
settings!  In other words, a 256 MB setting with 8 threads = 2 GB, not
256 MB.
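
Both suggestions in my.cnf form (a sketch; /dev/shm as the tmpfs mount point
is an assumption about the OS):

[mysqld]
tmpdir           = /dev/shm   # on-disk temp tables/files now live in RAM
sort_buffer_size = 4M         # per-thread: 8 busy threads x 4M = 32M total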

On the 5.0 upgrade - it's not that bad, honest!  Two things to be
aware of are the change in passwords that happened with 4.1, and also
more strict interpretation of certain joins as has been discussed on
the list a couple of times recently.  The password thing is easy enough
to work around if you're happy just enabling the old password scheme;
the join thing could be a little more problematic but with moderate
testing you should be able to iron things out ahead of time.

Dan

On 10/25/06, RV Tec [EMAIL PROTECTED] wrote:

Folks,

A few months ago, I came here asking for directions on how I could improve
performance of MySQL. Back then, I was using OpenBSD on a dual Opteron 248
with 2GB, LSI MegaRAID 320-1, 15k RPM SCSI discs, MySQL was (still is)
4.0.27, the database is MyISAM, reaching 50GB.

After some considerations, we have decided to upgrade things in three
steps:

1) Bumped the OS to Gentoo Linux, GCC 3.4.4, glibc 2.4/NPTL, deadline
scheduler, xfs and kernel 2.6.16-gentoo-r7. This, by itself, was already a
great performance improvement. And it is stable/reliable as well.

2) Jumped from 2GB RAM to 16GB, changed RAID card to a dual-channel (so
the database has a channel of its own). This proved that memory was our
greatest bottleneck. I can honestly say that now I'm happy with the
performance.

My question is: key_buffer seems to be the solution to all my problems. On
a 16GB server, I'm using only 37.5% of it for the key_buffer (6144M). If I
make this larger, will it be a performance improvement or a stability killer?

3) Upgrade MySQL 4.0.27 to MySQL 5.0 -- this is going to be painful, but
it is already on the way.

Thanks a lot!

RV









Views with functions and performance issues

2006-09-22 Thread Christopher Brooks
Hi,

I've got a view of a base table that is 100% identical to that base table
except for one column, which is a projection of a base table column after
it has been MD5 hashed.  The table is largish (~700,000 rows) and is growing
quickly. Queries on the base table are nice and fast, but on the hashed view
they are painfully slow - in particular when restricting the queries on the
view based on the hashed column.

I assume this is because there is no index for the hashed column.  Is there
a good way I can deal with this, without changing the base table?

TIA, running MySQL 5 on windows, can change to a later build of MySQL (as
long as I can replicate from 5) if there is some funky stuff in there that
will do what I need...

Chris

--
Christopher A. Brooks
Research Officer, ARIES Group
University of Saskatchewan 
 
Email: [EMAIL PROTECTED]
Mail: Christopher Brooks
  Department of Computer Science
  University of Saskatchewan
  57 Campus Drive
  Saskatoon, Saskatchewan, S7N 5A9
  Canada  





Re: Views with functions and performance issues

2006-09-22 Thread Dan Buettner

If I'm understanding right - the view contains an additional column
that is an MD5 hash of some or all of the data in the base table,
right?

Yes, I would expect that to be very very slow.  When selecting, your
database engine has to calculate 700K MD5 hashes.  Slow.  When
selecting a subset it has to do that plus what amounts to a table scan
of those 700K MD5 hashes.  Very slow.

Wonder if you could do this:

set up a second table, with two columns.  One is an id number corresponding
to the id number in the original table; the second holds the MD5 hash.
Create an index on the MD5 column for sure, possibly a two-column index.  I'm
not sure whether it would be better to create it as (id, md5_col) or
(md5_col, id); I'd test it each way I guess.

Set up insert/update/delete triggers on the first table to
add/update/delete records in the second table with ID and the MD5
hash.

Then re-create your view to show you all the columns from the first
table plus the MD5 column from the second table, joining on the id
column.

I think this should end up being pretty fast since the hashes will
only be calculated when the data changes, not all at once for every
select, and an index will speed things up tremendously when selecting
a subset.
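
In SQL, the whole arrangement might look like this (a sketch for MySQL 5.0;
base, secret, other_col and base_md5 are hypothetical names standing in for
the real schema):

CREATE TABLE base_md5 (
  id      INT NOT NULL PRIMARY KEY,   -- matches base.id
  md5_col CHAR(32) NOT NULL,
  KEY (md5_col)
);

CREATE TRIGGER base_md5_ins AFTER INSERT ON base FOR EACH ROW
  INSERT INTO base_md5 (id, md5_col) VALUES (NEW.id, MD5(NEW.secret));

CREATE TRIGGER base_md5_upd AFTER UPDATE ON base FOR EACH ROW
  UPDATE base_md5 SET md5_col = MD5(NEW.secret) WHERE id = NEW.id;

CREATE TRIGGER base_md5_del AFTER DELETE ON base FOR EACH ROW
  DELETE FROM base_md5 WHERE id = OLD.id;

CREATE OR REPLACE VIEW base_hashed AS
  SELECT b.id, b.other_col, h.md5_col
    FROM base b JOIN base_md5 h ON h.id = b.id;

With the index on md5_col, a WHERE on the hash becomes an index lookup
instead of 700K MD5() calculations per query.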

HTH,
Dan

On 9/22/06, Christopher Brooks [EMAIL PROTECTED] wrote:

Hi,

I've got a view of a base table that is 100% identical to that base table
except for one column, which is a projection of a base table column after
it has been MD5 hashed.  The table is largish (~700,000 rows) and is growing
quickly. Queries on the base table are nice and fast, but on the hashed view
they are painfully slow - in particular when restricting the queries on the
view based on the hashed column.

I assume this is because there is no index for the hashed column.  Is there
a good way I can deal with this, without changing the base table?

TIA, running MySQL 5 on windows, can change to a later build of MySQL (as
long as I can replicate from 5) if there is some funky stuff in there that
will do what I need...

Chris

--
Christopher A. Brooks
Research Officer, ARIES Group
University of Saskatchewan

Email: [EMAIL PROTECTED]
Mail: Christopher Brooks
  Department of Computer Science
  University of Saskatchewan
  57 Campus Drive
  Saskatoon, Saskatchewan, S7N 5A9
  Canada









RE: [others] Re: Views with functions and performance issues

2006-09-22 Thread Christopher Brooks
Hi, thanks for the comments,

 If I'm understanding right - the view contains an additional 
 column that is an MD5 hash of some or all of the data in the 
 base table, right?

Close.  It's got all of the data in the base table except for the column
that's being hashed - we show the hashed version of that instead.  Like a
password table, where the base table has the plain text passwords and
usernames, and the view only shows the hashed passwords and usernames
(though it's not for passwords).

 set up second table, with two columns.  One is id number to 
 correspond to id number in original table, second is to hold 
 Md5 hash.  Createn index on the MD5 column for sure, possibly 
 a two column index.  I'm not sure whether it would be better 
 to create it as (id, md5_col) or (md5_col, id); I'd test it 
 each way I guess.
 
 Set up insert/update/delete triggers on the first table to 
 add/update/delete records in the second table with ID and the 
 MD5 hash.
 
 Then re-create your view to show you all the columns from the 
 first table plus the MD5 column from the second table, 
 joining on the id column.

This sounds good, and this way I don't have to change the first table much
at all (as it's a replicated table and my luck with replication only lately
seems to have gotten good).  What kind of performance hit will I be taking
because of the join in the view (e.g. every query to the view is going to
have to do the join, yes?).  Or is this negligible as long as the
hash and pointer of the second table are both indexed...

Chris





Re: RE: [others] Re: Views with functions and performance issues

2006-09-22 Thread Dan Buettner

The hit with a join on indexed columns is negligible.  Relational
databases live for joins - they eat them for breakfast!  Seriously, as
long as it's indexed in both tables, it'll be super-speedy.

Dan

On 9/22/06, Christopher Brooks [EMAIL PROTECTED] wrote:

Hi, thanks for the comments,

 If I'm understanding right - the view contains an additional
 column that is an MD5 hash of some or all of the data in the
 base table, right?

Close.  It's got all of the data in the base table except for the column
that's being hashed - we show the hashed version of that instead.  Like a
password table, where the base table has the plain text passwords and
usernames, and the view only shows the hashed passwords and usernames
(though it's not for passwords).

 set up second table, with two columns.  One is id number to
 correspond to id number in original table, second is to hold
 Md5 hash.  Createn index on the MD5 column for sure, possibly
 a two column index.  I'm not sure whether it would be better
 to create it as (id, md5_col) or (md5_col, id); I'd test it
 each way I guess.

 Set up insert/update/delete triggers on the first table to
 add/update/delete records in the second table with ID and the
 MD5 hash.

 Then re-create your view to show you all the columns from the
 first table plus the MD5 column from the second table,
 joining on the id column.

This sounds good, and this way I don't have to change the first table much
at all (as it's a replicated table and my luck with replication only lately
seems to have gotten good).  What kind of performance hit will I be taking
because of the join in the view (e.g. every query to the view is going to
have to do the join, yes?).  Or is this negligible as long as the
hash and pointer of the second table are both indexed...

Chris







Re: ~performance issues~

2006-04-11 Thread Ravi Prasad LR
Yes, that is the reason for the slowness.

Thanks,
Ravi
On Tuesday 11 April 2006 10:51, Mohammed Abdul Azeem wrote:
 Hi Ravi,

 Since the sync'ing is done to disk with sync_binlog=1, the update
 queries to the server are slower compared to the server having
 sync_binlog=0, right?

 Thanks,
 Abdul.

 On Tue, 2006-04-11 at 10:51 +0530, Ravi Prasad LR wrote:
  Hi Abdul,
 When  sync_binlog is set to 1, innodb fsyncs the binary
  log to disk after every single write to binary log, but not in the case
  of sync_binlog=0.
 
  From MySQL manual:
 
  snip
   If the value of this variable is positive, the MySQL server synchronizes
  its binary log to disk (fdatasync()) after every sync_binlog writes to
  this binary log. Note that there is one write to the binary log per
  statement if in autocommit mode, and otherwise one write per transaction.
  The default value is 0 which does no sync'ing to disk. A value of 1 is
  the safest choice, because in case of crash you lose at most one
  statement/transaction from the binary log; but it is also the slowest
  choice (unless the disk has a battery-backed cache, which makes sync'ing
  very fast). This variable was added in MySQL 4.1.3.
  /snip
 
  --Ravi
 
  On Tuesday 11 April 2006 10:22, Mohammed Abdul Azeem wrote:
   Hi,
  
   I have a master/slave setup ( replication enabled ) for mysql in two
   different geographic locations ( one master/slave set up in each
   location). In one location i have configured the sync_binlog=1 . And
   the other location does not have the same.
  
   My problem is, when i run similar update processes on both the master
   servers, the server with sync_binlog=1 is very slower in terms of
   completing the update query as compared to the machine having
   sync_binlog=0.
  
   Is that a cause for slow performance ?
  
   Thanks in advance,
   Abdul.
  
   




Re: ~performance issues~

2006-04-11 Thread Mohammed Abdul Azeem
Thanks Ravi

On Tue, 2006-04-11 at 11:41 +0530, Ravi Prasad LR wrote:
 Yes, that is the reason for the slowness.
 
 Thanks,
 Ravi
 On Tuesday 11 April 2006 10:51, Mohammed Abdul Azeem wrote:
  Hi Ravi,
 
  Since the sync'ing is done to disk with sync_binlog=1, the update
  queries to the server are slower compared to the server having
  sync_binlog=0, right?
 
  Thanks,
  Abdul.
 
  On Tue, 2006-04-11 at 10:51 +0530, Ravi Prasad LR wrote:
   Hi Abdul,
  When  sync_binlog is set to 1, innodb fsyncs the binary
   log to disk after every single write to binary log, but not in the case
   of sync_binlog=0.
  
   From MySQL manual:
  
   snip
If the value of this variable is positive, the MySQL server synchronizes
   its binary log to disk (fdatasync()) after every sync_binlog writes to
   this binary log. Note that there is one write to the binary log per
   statement if in autocommit mode, and otherwise one write per transaction.
   The default value is 0 which does no sync'ing to disk. A value of 1 is
   the safest choice, because in case of crash you lose at most one
   statement/transaction from the binary log; but it is also the slowest
   choice (unless the disk has a battery-backed cache, which makes sync'ing
   very fast). This variable was added in MySQL 4.1.3.
   /snip
  
   --Ravi
  
   On Tuesday 11 April 2006 10:22, Mohammed Abdul Azeem wrote:
Hi,
   
I have a master/slave setup ( replication enabled ) for mysql in two
different geographic locations ( one master/slave set up in each
location). In one location i have configured the sync_binlog=1 . And
the other location does not have the same.
   
My problem is, when i run similar update processes on both the master
servers, the server with sync_binlog=1 is very slower in terms of
completing the update query as compared to the machine having
sync_binlog=0.
   
Is that a cause for slow performance ?
   
Thanks in advance,
Abdul.
   







~performance issues~

2006-04-10 Thread Mohammed Abdul Azeem
Hi,

I have a master/slave setup ( replication enabled ) for mysql in two
different geographic locations ( one master/slave set up in each
location). In one location i have configured the sync_binlog=1 . And the
other location does not have the same.

My problem is, when I run similar update processes on both the master
servers, the server with sync_binlog=1 is much slower in terms of
completing the update query as compared to the machine having
sync_binlog=0.

Is that a cause for slow performance ? 

Thanks in advance,
Abdul.








Re: ~performance issues~

2006-04-10 Thread Ravi Prasad LR
Hi Abdul, 
   When  sync_binlog is set to 1, innodb fsyncs the binary log to 
disk after every single write to binary log, but not in the case of 
sync_binlog=0.
From MySQL manual:
snip
 If the value of this variable is positive, the MySQL server synchronizes its 
binary log to disk (fdatasync()) after every sync_binlog writes to this 
binary log. Note that there is one write to the binary log per statement if 
in autocommit mode, and otherwise one write per transaction. The default 
value is 0 which does no sync'ing to disk. A value of 1 is the safest choice, 
because in case of crash you lose at most one statement/transaction from the 
binary log; but it is also the slowest choice (unless the disk has a 
battery-backed cache, which makes sync'ing very fast). This variable was 
added in MySQL 4.1.3.
/snip
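
sync_binlog is a dynamic global variable, so the trade-off is easy to
demonstrate on a test server (a sketch):

SET GLOBAL sync_binlog = 1;   -- fdatasync() after every binlog write:
                              -- safest, slowest without battery-backed cache
SET GLOBAL sync_binlog = 0;   -- let the OS decide when the binlog hits disk:
                              -- fast, but a crash can lose recent statements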

--Ravi







On Tuesday 11 April 2006 10:22, Mohammed Abdul Azeem wrote:
 Hi,

 I have a master/slave setup ( replication enabled ) for mysql in two
 different geographic locations ( one master/slave set up in each
 location). In one location i have configured the sync_binlog=1 . And the
 other location does not have the same.

 My problem is, when I run similar update processes on both the master
 servers, the server with sync_binlog=1 is much slower in terms of
 completing the update query as compared to the machine having
 sync_binlog=0.

 Is that a cause for slow performance ?

 Thanks in advance,
 Abdul.

 

 




Re: ~performance issues~

2006-04-10 Thread Mohammed Abdul Azeem
Hi Ravi,

Since the sync'ing is done to disk with sync_binlog=1, the update
queries to the server are slower compared to the server having
sync_binlog=0, right?

Thanks,
Abdul.

On Tue, 2006-04-11 at 10:51 +0530, Ravi Prasad LR wrote:
 Hi Abdul, 
When  sync_binlog is set to 1, innodb fsyncs the binary log to 
 disk after every single write to binary log, but not in the case of 
 sync_binlog=0.
 From MySQL manual:
 snip
  If the value of this variable is positive, the MySQL server synchronizes its 
 binary log to disk (fdatasync()) after every sync_binlog writes to this 
 binary log. Note that there is one write to the binary log per statement if 
 in autocommit mode, and otherwise one write per transaction. The default 
 value is 0 which does no sync'ing to disk. A value of 1 is the safest choice, 
 because in case of crash you lose at most one statement/transaction from the 
 binary log; but it is also the slowest choice (unless the disk has a 
 battery-backed cache, which makes sync'ing very fast). This variable was 
 added in MySQL 4.1.3.
 /snip
 
 --Ravi
 
 
 
 
 
 
 
 On Tuesday 11 April 2006 10:22, Mohammed Abdul Azeem wrote:
  Hi,
 
  I have a master/slave setup ( replication enabled ) for mysql in two
  different geographic locations ( one master/slave set up in each
  location). In one location i have configured the sync_binlog=1 . And the
  other location does not have the same.
 
  My problem is, when i run similar update processes on both the master
  servers, the server with sync_binlog=1 is very slower in terms of
  completing the update query as compared to the machine having
  sync_binlog=0.
 
  Is that a cause for slow performance ?
 
  Thanks in advance,
  Abdul.
 
  






4.1 x 5.0 INSERT Performance issues

2006-03-24 Thread Edilson Vasconcelos de Melo Junior
Hi,

 

My software used to run with MySQL 4.1, but now I have to use some of the new
features of MySQL 5.0, so I dumped all databases, uninstalled MySQL 4.1,
installed MySQL 5.0 and started loading the .sql file. This operation used
to take about 4 hours but now it is taking 1 day!!! Did anything happen with
INSERT on InnoDB tables? What's wrong?

 

Thanks,

Dirso



Re: innodb performance issues

2005-07-16 Thread tony
On Fri, 2005-07-15 at 13:28 -0700, David Griffiths wrote:


 
David,

Thanks for your suggestions, I'll give them a try.


 There are other tuning choices (including the thread-pool-cache). The 
 best resource is the page on innodb performance tuning, and it can be 
 found here:
 
 http://dev.mysql.com/doc/mysql/en/innodb-configuration.html

I did read through this before I posted; however, I am a programmer with
no real admin experience, in at the deep end, and it was all a little
over my head :( I guess I'll get to understand it in time.

Regards
tony





innodb performance issues

2005-07-15 Thread tony
Hi,

A few days ago I posted a question about performance; I now have a
little more info, hopefully someone can help.

I have a table, tblShoppingCart, with 3 fields:

cartUid (int 11 auto increment)
userUid (int 11, indexed) 
strCartHash (varchar 32) 

The table is innodb

Normally my server load is below 0.1 and everything's fine. I have a
process that runs occasionally that pushes the load up to 1.5; when this
happens, inserts into the table seem to get blocked, i.e. taking up to 20
seconds, and as soon as the load drops the inserts are fine again.
Interestingly, if I convert the table to MyISAM I don't get this
problem. However, I really want to keep the table InnoDB as I use it in
transactions later.

My my.cnf file is copied from the default huge.cnf file; I have dual
Xeons with 4GB of RAM and I'm running MySQL 4.1.1 on Red Hat Linux.

Any pointers on where I can look further are appreciated.

Tony





Re: innodb performance issues

2005-07-15 Thread David Griffiths


Tony,

You said that you copied the my.cnf file from huge.cnf - not sure what 
version you are using (I missed your original post), but the my-huge.cnf 
in MySQL 4.0.24 is for MyISAM. You should have used 
my-innodb-heavy-4G.cnf as the starting point for an InnoDB system. The 
my-huge.cnf allocates way too much memory to the MyISAM engine. All the 
InnoDB stuff is commented out.


If you want help, you'll need to post your my.cnf file and the full table 
definition (try SHOW CREATE TABLE tblShoppingCart; and paste the 
results in here).


You'll need to also post the queries that are hitting the database while 
you're having these issues.


David


tony wrote:


Hi,

A few days ago I posted a question about performance; I now have a
little more info, hopefully someone can help.

I have a table, tblShoppingCart, with 3 fields:


cartUid (int 11 auto increment)
userUid (int 11, indexed) 
strCartHash (varchar 32) 


The table is innodb

Normally my server load is below 0.1 and everything's fine. I have a
process that runs occasionally that pushes the load up to 1.5; when this
happens, inserts into the table seem to get blocked, i.e. taking up to 20
seconds, and as soon as the load drops the inserts are fine again.
Interestingly, if I convert the table to MyISAM I don't get this
problem. However, I really want to keep the table InnoDB as I use it in
transactions later.

My my.cnf file is copied from the default huge.cnf file; I have dual
Xeons with 4GB of RAM and I'm running MySQL 4.1.1 on Red Hat Linux.

Any pointers on where I can look further are appreciated.

Tony


 




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: innodb performance issues

2005-07-15 Thread tony
Hi David,


On Fri, 2005-07-15 at 10:25 -0700, David Griffiths wrote:
 Tony,
 
  - not sure what version you are using

4.1.11. The server is a dual Xeon machine with 4GB of RAM running MySQL and
the Apache webserver, and not much else.


 You should have used 
 my-innodb-heavy-4G.cnf as the starting point for an InnoDB system. 

I can use this instead if it's going to help.

 If you want help, you'll need to post your my.cnf file, 

[client]
port= 3306
socket  = /var/lib/mysql/mysql.sock
# The MySQL server
[mysqld]
port= 3306
socket  = /var/lib/mysql/mysql.sock
skip-locking
key_buffer = 384M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 64M
thread_cache = 8
query_cache_size = 32M
log = /var/log/mysql/mysql.log
log-slow-queries= /var/log/mysql/mysql-slow.log
set-variable= max_connections=250
server-id   = 1
innodb_data_home_dir = /var/lib/mysql/
innodb_log_group_home_dir = /var/lib/mysql/
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20



 the full table 
 definition (try SHOW CREATE TABLE tblSoppingCart; and pasting the 
 results in here).


tblCart | CREATE TABLE `tblCart` (
  `intCartUid` int(11) NOT NULL auto_increment,
  `intUserUid` int(11) NOT NULL default '0',
  `tsCartCreated` datetime NOT NULL default '-00-00 00:00:00',
  `tsLastUpdated` datetime NOT NULL default '-00-00 00:00:00',
  `strCartHash` varchar(32) NOT NULL default '',
  PRIMARY KEY  (`intCartUid`),
  KEY `intUserUid` (`intUserUid`),
  KEY `tsLastUpdated` (`tsLastUpdated`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1


 
 You'll need to also post the queries that are hitting the database while 
 you're having these issues.
 


# Query_time: 20  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
SET insert_id=34475,timestamp=1121407309;
INSERT INTO
  dbseThxWebOrders.tblCart
  (intUserUid,tsCartCreated,strCartHash)
  VALUES

(0,now(),'4e5d105f7cd34268e1a5e160d479ed91');

is an example from my slow query log. All of the offending queries today
were this same query.

Thanks for your help

Tony









Re: innodb performance issues

2005-07-15 Thread David Griffiths

Tony,

Your my.cnf file is set up for MyISAM, not InnoDB. MySQL allows you to 
allocate memory and resources to any and all storage engines. Yours is 
set up to give lots of resources to MyISAM, and none to InnoDB.


Reducing MyISAM:
key_buffer = 384M - this is way too much - I'd set it to 2-16MB, 
assuming that the only MyISAM tables you have are in the mysql database.
query_cache_size = 32M - read up on the query cache - it's only useful 
for oft-repeated queries that hit tables in which the data rarely 
changes. We turn ours off.


The big variable in InnoDB (the one that affects performance the most) is 
innodb_buffer_pool_size. Since you are running a Xeon, I am guessing 
it's a 32-bit architecture, so there is a limit on the maximum size of the
MySQL process.


The amount of memory MySQL will use is:

innodb_buffer_pool_size + key_buffer + 
max_connections*(sort_buffer_size+read_buffer_size+binlog_cache_size) + 
max_connections*2MB

You should make sure that stays under 2 gigabytes. If MySQL uses much 
more memory, it will crash.
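
To make that concrete, here is the arithmetic with the values from the
my.cnf posted above (binlog_cache_size isn't set there, so I'm assuming
0 for it):

   384M (innodb_buffer_pool_size) + 384M (key_buffer)     =  768M
 + 250 * (2M sort + 2M read + 0 binlog)                   = 1000M
 + 250 * 2MB (per-connection overhead)                    =  500M
 -----------------------------------------------------------------
   roughly 2268M

which is already over the 2-gigabyte ceiling.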



There are other tuning choices (including the thread-pool-cache). The 
best resource is the page on innodb performance tuning, and it can be 
found here:


http://dev.mysql.com/doc/mysql/en/innodb-configuration.html

You might also want to consider High Performance MySQL. There is lots of 
good info in there on setup, tuning, replication, etc.


David






Performance issues when deleting and reading on large table

2005-04-24 Thread Almar van Pel

Hi,

I'm managing a large free website running on Linux, with MySQL 4.0.23, on a
machine that has 2GB of memory (P4 2.6GHz).
MySQL is configured to have a 382MB key buffer.

There are two problems I have, which have to do with a large table. This
table contains 5 million records
and is the core of our application. It is a MyISAM table. It has two fields
in the key. The table itself is 1.2GB large and has a
200MB index, containing not only the key, but some additional indexes. The
website is visited by more than
100,000 unique visitors a day, which means a lot of selects and inserts, and
a very small number of updates and deletes.

Problem 1;

After some time (sometimes a week, sometimes a month) it appears that the
index of the table gets stuck.
It tries to read from the table but does not get a response. This causes the
connection queue to fill up
and the load on the system to increase dramatically. In other words, unless I
do an OPTIMIZE TABLE, the system
hangs. Most of the time you see that the index size is about 20MB off.
When I do CHECK TABLE (before optimizing) there are no errors.

Is there any way to see this problem coming, so I can outrun it? (Without
having to schedule OPTIMIZE, which = downtime, every week...)

Problem 2;

Trying to get the previous table clean, I created some jobs deleting old
records. When I delete a lot of records in one job,
the system also nearly hangs (+/- 10 to 15,000 records). The load again
increases dramatically. I tried every trick in the book, but cannot
understand
why this action is so heavy for the system.

Is there anyone who can explain why this is such a heavy operation, and
what I can do to optimize this process further?

A general question: I've seen that sometimes the system (mysqld) does a
FLUSH TABLES on its own. I can't seem to find out when, why or for what
reason. Does anyone know?



Best regards,


Almar van Pel








Re: Performance issues when deleting and reading on large table

2005-04-24 Thread Jigal van Hemert
From: Almar van Pel

 After some time (sometimes a week, sometimes a month) it appears that the
 index of the table gets stuck.
 It tries to read from the table but does not get a response. This causes
 the connection queue to fill up and the load on the system to increase
 dramatically. In other words, unless I do an OPTIMIZE TABLE, the system
 hangs. Most of the time you see that the index size is about 20MB off.
 When I do CHECK TABLE (before optimizing) there are no errors.

 Is there any way to see this problem coming, so I can outrun it? (Without
 having to schedule OPTIMIZE, which = downtime, every week...)

You should run optimize table regularly (once a week or so) in some cases:
http://dev.mysql.com/doc/mysql/en/optimize-table.html

It's probably a case of not having the cardinality of the indexes right and
thus making wrong decisions for queries.
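
A quick way to check what MySQL currently thinks (a sketch; substitute
your own table name):

SHOW INDEX FROM your_table;
-- the Cardinality column is MySQL's estimate of the number of distinct
-- values in each index; if it looks wildly wrong, OPTIMIZE TABLE (or
-- ANALYZE TABLE your_table) will refresh the statistics.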

 Trying to get the previous table clean, I created some jobs deleting old
 records. When I delete a lot of records in one job,
 the system also nearly hangs (+/- 10 to 15,000 records). The load again
 increases dramatically. I tried every trick in the book, but cannot
 understand
 why this action is so heavy for the system.

Deleting a lot of records will have impact on the indexes, so it's quite a
job. The inserts/updates/deletes will also block the table for reading in
case of MyISAM.

Such a large table in a high concurrency situation (many writes and many
reads) can be a job for InnoDB tables.
They seem slow for small tables, but have the tendency to keep the same
speed for large tables, while MyISAM will probably get slower the bigger the
table is under these circumstances.
If you can use the PRIMARY index in a query and keep the 'PRIMARY' index as
short as possible, InnoDB can be a very fast table handler.

Depending on the size of the resulting record sets, your system must have
enough memory to handle it. Otherwise a lot of temporary tables will end up
on disk (slow) and also indexes cannot be loaded in memory (slow).

Running large databases is sometimes a bit of a challenge; finding the right
queries, setting up the right index(es), etc.

Regards, Jigal.





RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
Hi all,
Know that indexes are good for selects, but very bad for massive
inserts, updates and deletes.

If you want to do a massive delete with a cron, it's better to:
 * select the rows to delete (using indexes)
 * delete the indexes
 * delete the rows (already marked)
 * recreate the indexes

Another way, if you want to delete a big percentage of your table, is to
copy the remaining records, drop the table and recreate it with those
records. Then recreate the indexes.

I assume that you're not in a massive transactional situation, and maybe
using MyISAM storage. If not, SHOW PROCESSLIST may help you track the use
(or not) of internal temporary tables, etc.


Mathias




RE: Performance issues when deleting and reading on large table

2005-04-24 Thread Almar van Pel

Hi Jigal, Mathias,

Thanks for the time you took to reply to my issues!

I would like to clear up some things.

 It's probably a case of not having the cardinality of the indexes right and
thus making wrong decisions for queries.
- Currently there is not a single query in the application that does not use
the correct index. We only have key reads. Which would mean that MySQL is
creating these incorrect indexes?

 Depending on the size of the resulting record sets, your system must have
enough memory to handle it. Otherwise a lot of temporary tables will end up
on disk (slow) and also indexes cannot be loaded in memory (slow).
- The system runs with a key buffer of 382MB, which is most of the time not
100% filled. The number of created temp tables is very low.

 Deleting a lot of records will have impact on the indexes, so it's quite a
job. The inserts/updates/deletes will also block the table for reading in
case of MyISAM.
- During deletion of records from the table there is no user interaction.
The only person manipulating the table/database is me. That's the reason why
I'm finding this 'strange'.

Changing to InnoDB would be a great risk, I think. Maybe we should think this
over again, but the way the system is configured right now should, in my
opinion, be sufficient.

Mathias, what do you mean by:

 If you want to do a massive delete with a cron, it's better to :
 * select the rows to delete (using indexes)
 * delete indexes
 * delete rows (already marked)
 * recreate indexes

I don't really understand how you 'mark' the records for deletion before
deleting the indexes. However, I'm very interested.

Regards,

Almar van Pel






RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
If you have no active transactions and want a cron delete, an example is
(see the sketch below):
* create table tmp as select * from your_table where ... - here indexes
are used
* drop the indexes
* delete from your_table where ...
* insert into your_table select * from tmp
* recreate the indexes on your_table.
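
A minimal sketch of the copy-and-rebuild variant mentioned earlier,
assuming a hypothetical log_table with id and ts columns and a 90-day
retention rule:

CREATE TABLE tmp SELECT * FROM log_table
    WHERE ts >= DATE_SUB(NOW(), INTERVAL 90 DAY);  -- rows to keep
DROP TABLE log_table;
RENAME TABLE tmp TO log_table;
-- CREATE TABLE ... SELECT does not copy indexes, so recreate them
-- last, in one pass:
ALTER TABLE log_table ADD PRIMARY KEY (id), ADD INDEX (ts);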

You must test it to ensure that index creation is not slow when you have
a lot of indexes.

You can also disable constraints when deleting, and optimize your table
at the end of the deletion.

In MyISAM storage, since an update, insert or delete means locking the
table, there is a big transactional problem. InnoDB offers row locking,
but you seem to have a problem using it. Unfortunately!

To simulate transactions, you must split your queries. I remember having
worked on a load problem which took days to finish (or not) because the
load operation was combined with a lot of selects (verifying data).

My solution was to do a lot of selects (using indexes), spool the results
to files, delete the rows, and load the data from the files.
It took 1.5 hours to finish 650MB of data with all the checking
operations.
 

Mathias




Re: Performance issues when deleting and reading on large table

2005-04-24 Thread Jigal van Hemert
  It's probably a case of not having the cardinality of the indexes right
 and thus making wrong decisions for queries.
 - Currently there is not a single query in the application that does not
 use the correct index. We only have key reads. Which would mean that MySQL
 is creating these incorrect indexes?

The indexes are not necessarily incorrect, but MySQL also keeps a property
called 'cardinality' for each index. It is an estimate of the number of
different items in the index. An index on 'gender' may have a cardinality of
only two or three (male/female(/unknown)) for example. I've noticed that the
cardinality on MyISAM tables can be very wrong and will be updated to a
correct value after an OPTIMIZE TABLE. InnoDB seems to estimate the
cardinality constantly.

  Deleting a lot of records will have impact on the indexes, so it's quite
 a job. The inserts/updates/deletes will also block the table for reading
 in case of MyISAM.
 - During deletion of records from the table there is no user interaction.
 The only person manipulating the table/database is me. That's the reason
 why I'm finding this 'strange'.
It will still be a massive operation on indexes. If you have many indexes
the task will be even harder...

 Changing to InnoDB would be a great risk, I think. Maybe we should think
 this over again, but the way the system is configured right now should,
 in my opinion, be sufficient.

It's not a risk, but it may take a while to complete (rebuilding the tables).
Anyway, you should test it on a separate database or even a different
server. You may also need to redesign the index(es).

Regards, Jigal.





RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
 An index on 'gender' may have a cardinality of
 only two or three (male/female(/unknown)) for example.

Never B-tree index such columns!
Oracle (DB2, ... other RDBMSs) has bitmap indexes which work fine for such
data. Look at BIN(myset+0) in
http://dev.mysql.com/tech-resources/articles/mysql-set-datatype.html.

Massive loads are better without indexes, which are only good for selects.

Mathias




Full-text search performance issues

2005-02-10 Thread Homam S.A.
Since MySQL stores RowIDs with the indexed words
instead of the table's primary key IDs, and since it
uses only one index per table in any query, performing
a full-text search on a large table (several million
rows) and joining it with another large table proves
to be extremely slow!

The bottleneck seems to be the disk, as MySQL
generates an insane number of disk reads while trying
to join the two tables without using common indexes
between the two.

Basically I have two tables, A and B. These tables
cannot be merged, so there's no way around using a
join. One of these tables, A, has a TEXT column that
is full-text indexed. Now these tables are normally
joined using the indexes on their primary keys. But if
I use a full-text criterion in the WHERE clause against
A, then MySQL will NOT use the primary key index on A
to perform the join. It instead seems to get the
result from the full-text index and use it to perform
a huge number of random (unsorted) lookups on the
second table to perform the join.
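
A sketch of the query shape being described (table and
column names are hypothetical):

SELECT b.*
FROM a JOIN b ON b.a_id = a.id
WHERE MATCH(a.body) AGAINST('some term');
-- MySQL resolves MATCH ... AGAINST through the FULLTEXT
-- index on a.body, then probes b once per hit, in
-- effectively random order.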

I tried to perform a separate query on each table and
store the results (primary key IDs) in a couple of
temp tables, then join these tables manually. But
these tables can be huge, and doing this for each web
paging request is very inefficient. In addition,
limiting the size of these temp tables would render
the search dysfunctional because it would no longer
be comprehensive.

With non-full-text indexes, you can just merge the
indexes into a composite index to work around the
single-index-per-table limitation. But you can't merge
a regular primary key index and a full-text index into
a composite index, and I have no idea why MySQL
developers didn't just use the primary key IDs with
the indexed words, as in MS SQL Server full-text
index, instead of the physical row IDs. Maybe it's a
MyISAM limitation that an index, no matter what type
it is, has to use physical row IDs as table pointers
instead of primary keys.

Is there any way around this limitation? I'd like to
know if someone has been in such a situation and how
he or she solved it.

I appreciate your help!









Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
I'd consider bulking up the INSERTs, performing multi-row INSERTs
rather than doing them one by one.  That can speed things up quite a
bit in my experience.
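
For example (a sketch; the table and columns are hypothetical), instead of
one statement per row:

INSERT INTO samples (device_id, element, value) VALUES (1, 'ifInOctets', 42);

batch many rows into a single statement:

INSERT INTO samples (device_id, element, value)
VALUES (1, 'ifInOctets', 42),
       (1, 'ifOutOctets', 17),
       (2, 'ifInOctets', 99);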

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/




Re: Performance issues

2004-06-28 Thread SGreen

Have you thought about using MERGE tables?

If you have a sliding 5-minute monitoring window that you need to query
frequently, you could create a smaller MERGE table holding the most recent
6 minutes' worth of data, composed of six tables of one minute's data each.

At the end of each minute, you create a new table, change the merge
definition, then archive the old minute (the one that just left the
monitoring window) into a larger static table. Your indexes will be small
(only 1 minute's worth of data). The tables you need to query are smaller
(just 6 minutes' worth) and you still keep all of your historical data.  You
could even hold off archiving the old tables until you have some free
time if you needed to.
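
A minimal sketch of that rotation (table and column names are
hypothetical; ENGINE= is 4.x syntax, older servers use TYPE=):

CREATE TABLE minute_01 (device_id INT, element VARCHAR(30), value INT,
                        INDEX (device_id)) ENGINE=MyISAM;
-- ... minute_02 through minute_07 created identically ...
CREATE TABLE recent_window (device_id INT, element VARCHAR(30), value INT,
                            INDEX (device_id))
ENGINE=MERGE UNION=(minute_01, minute_02, minute_03,
                    minute_04, minute_05, minute_06);

-- once a minute, slide the window forward:
ALTER TABLE recent_window UNION=(minute_02, minute_03, minute_04,
                                 minute_05, minute_06, minute_07);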

You could also run tiers of tables: one weekly MERGE table containing 7
daily tables. The most recent daily table could be a MERGE table of up
to 24 hourly tables. The most recent hourly table could have the
results of archiving off your old minutes for the current hour. I
didn't see anything about NOT nesting MERGE tables, but I would assume it
could cause some serious headaches if you went overboard doing it.

You probably want to review: http://dev.mysql.com/doc/mysql/en/MERGE.html
and: http://dev.mysql.com/doc/mysql/en/MERGE_table_problems.html
for details. (Especially read about MERGE tables using lots of file
handles!!!)

I am sure if you thought about it you could break down your storage into
something more manageable than I described.

Best Wishes,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



   




Re: Performance issues

2004-06-28 Thread Andrew Pattison
By default MySQL flushes keys to disk with every INSERT, hence the
performance degradation when performing several single INSERTs one after the
other. The following extract from the MySQL documentation hints at one way
of changing this on a per-table basis:

- Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
index updates faster because they are not flushed to disk until the table is
closed. The downside is that if something kills the server while such a
table is open, you should ensure that they are okay by running the server
with the --myisam-recover option, or by running myisamchk before restarting
the server. (However, even in this case, you should not lose anything by
using DELAY_KEY_WRITE, because the key information can always be generated
from the data rows.)
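
For example (a sketch; the table name is hypothetical):

CREATE TABLE samples (
    device_id INT NOT NULL,
    element   VARCHAR(30) NOT NULL,
    value     INT,
    INDEX (device_id)
) TYPE=MyISAM DELAY_KEY_WRITE=1;

-- or, on an existing table:
ALTER TABLE samples DELAY_KEY_WRITE=1;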

There is also a way of getting MySQL to do lazy writing of indexes on a
global basis but I couldn't find a quick reference to that.

Cheers

Andrew.




Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
On Mon, Jun 28, 2004 at 09:21:04PM +0100, Andrew Pattison wrote:
 There is also a way of getting MySQL to do lazy writing of indexes on a
 global basis but I couldn't find a quick reference to that.

Delayed Key Writes:

  http://dev.mysql.com/doc/mysql/en/CREATE_TABLE.html

Search that page for delay and you'll find it.

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/




Performance issues

2004-06-22 Thread Aram Mirzadeh

We have an internal SNMP monitoring system that is monitoring about 
10,000 devices.  Each device is pinged, then polled for an average 
of 25-30 elements.  Each of the ping results and elements is then 
stored in a text file; another system picks them up (NFS) and inserts 
them into a MyISAM (3.23.54) database.  The data is kept for 13 weeks.

The database system is a Xeon 4 way, 12GB of ram with a striped raid 
array dedicated to the database files and its indexes and such.

Every 5 minutes another process goes through the last set of inserts and 
compares them for any threshold breaches, so the entire last set of data 
is looked at.

We're falling behind on the inserts because the system can't seem to 
handle the number of inserts, and the front end that generates the web pages 
based on the previous records is bogging down.

I have read the regular optimization papers and have done as much as I 
felt safe with. Are there any huge-database optimization papers?  Anything I 
should be looking at?

Here are the relevant my.cnf entries:
set-variable= key_buffer=256M
set-variable= max_allowed_packet=1M
set-variable= table_cache=256
set-variable= sort_buffer=1M
set-variable= record_buffer=1M
set-variable= myisam_sort_buffer_size=64M
set-variable= thread_cache=8
set-variable= thread_concurrency=8
[mysql]
no-auto-rehash
[isamchk]
set-variable= key_buffer=128M
set-variable= sort_buffer=128M
set-variable= read_buffer=2M
set-variable= write_buffer=2M
[myisamchk]
set-variable= key_buffer=128M
set-variable= sort_buffer=128M
set-variable= read_buffer=2M
set-variable= write_buffer=2M
And here is my mytop output:
MySQL on 1.2.3.4 (3.23.54) up 2+06:36:05 [13:10:01]
 Queries: 191.5M  qps: 1022 Slow:   296.0 Se/In/Up/De(%): 
22/10/62/00
 qps now:  147 Slow qps: 0.0  Threads:9 (   2/   7) 
50/00/00/00
 Key Efficiency: 96.7%  Bps in/out:  4.0k/ 1.6k   Now in/out:  6.2k/767.7k

Any suggestions would be greatly appreciated.
Aram


Re: Performance issues

2004-06-22 Thread Sergio Salvi
Hi!

Can you give more details on the problematic inserts you're doing (table
structure, indexes and insert command)?

Also, do you believe your queries would benefit from MySQL's query cache? 
Maybe it's worth upgrading to version 4 and using this feature, even if 
you allocate just a small amount of memory for that.

[]s,
Sergio Salvi.




Re: Performance issues

2004-06-22 Thread mos
Have you tried LOAD DATA INFILE? It is for loading data from a text file 
into a table and is much faster than using INSERT ... statements. For 
example, I can load 1 million rows of x(30) into a MyISAM table in 15 
seconds on a P4 2.4GHz machine. You can use either IGNORE or REPLACE to 
handle duplicate keys.
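
A sketch, assuming a hypothetical tab-separated file and table:

LOAD DATA INFILE '/var/snmp/results.txt'
REPLACE INTO TABLE samples
FIELDS TERMINATED BY '\t'
(device_id, element, value);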

Mike


Re: Where to start for performance issues?

2004-06-21 Thread Brent Baisley
Performance for anything, database or otherwise, comes down to finding 
where the bottleneck is. Thankfully, there are only a limited number of 
places the bottleneck can be: RAM, CPU, disk I/O, network. Sounds easy: 
you have a 1 in 4 chance of picking the right one without doing any 
analysis. Of course, it's not that easy.

If your my.cnf file is not configured optimally, MySQL may be loading 
up the RAM it is allowed to use (perhaps showing a CPU spike), but then 
start swapping out to disk (temp files) when it hits its RAM 
allocation. Since this is MySQL hitting a RAM allocation limit and not 
the system running out of RAM, you won't see the OS indicating page 
outs, which would indicate not enough RAM.

Usually, the two parameters that help the most in the my.cnf file are 
the key_buffer and sort_buffer_size. If your complaints are coming from 
the forum and picture gallery, your bottleneck may be in disk I/O. I always 
try to put the OS on its own disk like you, but if I only have two 
disks, I'll mirror the two disks to increase read speeds. Although it's 
then doubly important that your system has enough RAM, since your writes 
will be slower (think slow memory paging). Of course, if you want to 
live dangerously, which you are just about doing anyway with no 
mirroring, you can stripe your disks and get better read and write 
speeds.
But first play with your my.cnf parameters to see if things speed up. 
Also, read the manual:
http://dev.mysql.com/doc/mysql/en/Optimising_the_Server.html
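
For instance, a starting point might look like this (illustrative
numbers only; tune to the 512MB of RAM in that box):

[mysqld]
key_buffer       = 64M
sort_buffer_size = 2M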

--
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577


Where to start for performance issues?

2004-06-20 Thread Eric Wagar
I have a Sun Netra T1 with 512MB of RAM and 2 x 9GB drives.  (One is
exclusively for the MySQL 3.23.56 files, and the other is for the OS,
Apache, etc.)  I am also using the default medium (I think) my.cnf.

Currently, the complaints are coming from a PHP forum and a PHP picture
gallery.  From the OS side, I don't see any problems, along with the
Apache side.  So, I am leaning towards an improper MySQL configuration.

 

Where do I go to get more information on where to start looking for
performance issues for the DB?


Thanks

Eric



Re: InnoDB Performance issues

2003-07-13 Thread Heikki Tuuri
Nicholas,

- Original Message - 
From: Nicholas Elliott [EMAIL PROTECTED]
Newsgroups: mailing.database.mysql
Sent: Friday, July 11, 2003 6:04 PM
Subject: InnoDB Performance issues


 Hey all,

...
 create table basic_daily_grid(
 date DATE NOT NULL PRIMARY KEY,
 variable1 MEDIUMBLOB,
 variable2 MEDIUMBLOB ...
 variable9 MEDIUMBLOB
 );

...
 mysql> select date from basic_daily_grid_innodb;
 ...
 317 rows in set (0.00 sec)

 mysql> explain select date, count(*) from basic_daily_grid_innodb group by date;
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | table                   | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | basic_daily_grid_innodb | index | NULL          | PRIMARY | 3       | NULL |  301 | Using index |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+

 mysql> select date, count(*) from basic_daily_grid_innodb group by date;
 ...
 317 rows in set (2 min 54.95 sec)

 I assume this is due to versioning or some other transactional feature.
 Or, is this a bug, or am I doing something wrong? I don't quite see why
 grouping items that are all unique should be that much slower than not
 grouping. I need InnoDB for the unlimited table size, but I don't
 (really) need transactions, commit/rollback, or checkpoints.

It is a performance bug. In an ORDER BY, MySQL may use more columns than are
mentioned in the SELECT query, and InnoDB retrieves the whole row. If there
is a big BLOB in the row, it can take quite a while.

I may fix this in 4.1.1, but first I have to ask the MySQL developers if
handler::extra(KEYREAD) is aware that in a clustered index all columns are
in the index record.

Workaround: put the BLOBs in a separate table and use a surrogate key (=
auto-inc column) to join it to a smaller table where the other columns are.
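
A sketch of that split, using a hypothetical layout based on the schema in
the original post:

CREATE TABLE basic_daily_dates (
    date DATE NOT NULL PRIMARY KEY,
    blob_id INT UNSIGNED NOT NULL
) TYPE=InnoDB;

CREATE TABLE basic_daily_blobs (
    blob_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    variable1 MEDIUMBLOB,
    -- variable2 ... variable8 omitted
    variable9 MEDIUMBLOB
) TYPE=InnoDB;

-- the GROUP BY now scans only the narrow table:
SELECT date, COUNT(*) FROM basic_daily_dates GROUP BY date;
-- and the BLOBs are joined in only when actually needed:
SELECT d.date, b.variable1
FROM basic_daily_dates d JOIN basic_daily_blobs b USING (blob_id)
WHERE d.date = '2003-01-01';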


Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Transactions, foreign keys, and a hot backup tool for MySQL
Order MySQL technical support from https://order.mysql.com/






InnoDB Performance issues

2003-07-11 Thread Nicholas Elliott
Hey all,

I've been experimenting with the best way to store a large amount (~100GB) of data for 
retrieval. Essentially, I'm storing 9 variables for approximately 1,000,000 locations 
a day for the last ten years. This can work out at around 4MB per variable per day - but 
not all variables are always present (they could be null).

(If you don't care about the details, I'll summarize at the end of this email).


Inserting and retrieving from a MyISAM table seemed to be approaching impossible. (Not 
totally surprising.) I originally had a table along the lines of:

create table basic_daily_report(
date DATE NOT NULL,
location_id MEDIUMINT UNSIGNED NOT NULL,
variable1 float,
variable2 float
variable9 float,
primary key (date, location_id)
);

(Just a summary of the actual table)

With this I had a maximum table size of around 100GB - just barely enough to do it. I 
expected I would end up segmenting by year, or something similar, as ugly as that is. 
I tested InnoDB as an alternative to this, but we'll get to that in a second. 
Basically, inserting a day's worth of data would take ages, and pretty much required an 
ANALYZE TABLE for a couple of hours every morning. Selecting was getting to be pretty 
slow, as well. Eventually, I hit on the idea of including one row per day:


create table basic_daily_grid(
date DATE NOT NULL PRIMARY KEY,
variable1 MEDIUMBLOB,
variable2 MEDIUMBLOB ...
variable9 MEDIUMBLOB
);

I also wrote a UDF such that you pass it the variable and a location, and it'll return 
the exact value. This works well because every day has a constant number of locations 
in a grid format, so it's simply an array lookup. So, 

select grid_point(location_id, variable1) from basic_daily_grid where date=20030101

would return the right value for location_id. It turns out this is almost (95%) as fast 
as the first version in selecting, but it has the added bonus that inserts now only take 
~5 seconds per day!




Alas, after inserting 260 days (less than a year) I hit the MyISAM table size limit of 
4.2GB - because a BLOB is a variable length field.


-


I mention all the above in case someone has an alternative solution I'm overlooking. 
Possible solutions I've found are a) use InnoDB instead, b) work with the source to 
create a new field type BLOBARRAY of a constant width instead of dynamic, c) work with 
the source to somehow overcome the 4.2GB limit on a dynamic table.

c) Seems unlikely - if the actual developers can't do it, I probably can't
b) Seems possible; I assume no one saw a need for a constant-width column of 4MB, so 
hopefully it's not too difficult
a) Was my first try. Inserting takes about twice as long as MyISAM... sure, I can deal 
with that. Selecting a specific date is in the same ballpark as well, so little 
problem there. What I'm having severe performance issues on are queries that group, or 
do a count(*). For example:


mysql> explain select date from basic_daily_grid_innodb;
+-------------------------+-------+---------------+---------+---------+------+------+-------------+
| table                   | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+-------------------------+-------+---------------+---------+---------+------+------+-------------+
| basic_daily_grid_innodb | index | NULL          | PRIMARY | 3       | NULL |  301 | Using index |
+-------------------------+-------+---------------+---------+---------+------+------+-------------+

mysql> select date from basic_daily_grid_innodb;
...
317 rows in set (0.00 sec)

mysql> explain select date, count(*) from basic_daily_grid_innodb group by date;
+-------------------------+-------+---------------+---------+---------+------+------+-------------+
| table                   | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
+-------------------------+-------+---------------+---------+---------+------+------+-------------+
| basic_daily_grid_innodb | index | NULL          | PRIMARY | 3       | NULL |  301 | Using index |
+-------------------------+-------+---------------+---------+---------+------+------+-------------+

mysql> select date, count(*) from basic_daily_grid_innodb group by date;
...
317 rows in set (2 min 54.95 sec)


I assume this is due to versioning or some other transactional feature. Or, is this a 
bug, or am I doing something wrong? I don't quite see why grouping items that are all 
unique should be that much slower than not grouping. I need InnoDB for the unlimited 
table size, but I don't (really) need transactions, commit/rollback, or checkpoints.


Any suggestions on solving this last hurdle? It's entirely likely I'll need to group by 
year and average the results, or something similar - and at 3 minutes a pop, that's a 
little high for my liking. Perhaps I'm expecting too much?


Thanks,
Nick Elliott


Re: InnoDB Performance issues

2003-07-11 Thread Dan Nelson
In the last episode (Jul 11), Nicholas Elliott said:
 I've been experimenting with the best way to store a large (~100GB)
 of data for retrieval. Essentially, I'm storing 9 variables for
 approximately 1,000,000 locations a day for the last ten years. This
 can work out at around 4MB a variable a day - but not all variables
 are always present (could be null).
[...]
 Alas, after inserting 260 days (less than a year) I hit the MyISAM
 table size limit of 4.2GB - because a BLOB is a variable length
 field.

MyISAM doesn't have a hard 4GB table size limit...  It may default to a 4GB
limit if you forgot to give a hint as to the final table size when you
created the table, though.  Try running

ALTER TABLE mytable AVG_ROW_LENGTH=36000000 MAX_ROWS=3600

( 36MB average row length, since you have 9 4MB blobs, and 10 years'
worth of records. )

-- 
Dan Nelson
[EMAIL PROTECTED]




Re: InnoDB Performance issues

2003-07-11 Thread Nicholas Elliott
That did it -- SHOW TABLE STATUS lists the upper limit as approx 1TB now =].

I'm still curious about the InnoDB issues, but now at least I can avoid them
and work with the original plan!

Thanks,

Nick Elliott




Re: InnoDB Performance issues

2003-07-11 Thread Nils Valentin
Hi Nicholas,

How about storing the BLOBs outside of the DB and referring to them?

Best regards

Nils Valentin
Tokyo/Japan


2003 7 12  00:06Nicholas Elliott :
 Hey all,

 I've been experimenting with the best way to store a large (~100GB) of data
 for retrieval. Essentially, I'm storing 9 variables for approximately
 1,000,000 locations a day for the last ten years. This can work out at
 around 4MB a variable a day - but not all variables are always present
 (could be null).

 (If you don't care about the details, I'll summarize at the end of this
 email).


 Inserting and retrieving from a MyISAM table seemed to be approaching
 impossible. (Not totally surprising.) I originally had a table along the
 lines of:

 create table basic_daily_report(
 date DATE NOT NULL,
 location_id MEDIUMINT UNSIGNED NOT NULL,
 variable1 float,
 variable2 float
 variable9 float,
 primary key (date, location_id)
 );

 (Just a summary of the actual table)

 With this I had a maxiumum table size of around 100GB - just barely enough
 to do it. I expected I would end up segmenting by year, or something
 similar, as ugly as that is. I tested InnoDB as an alternative to this, but
 we'll get to that in a second. Basically, inserting a day's worth of data
 would take ages, and pretty much require an analyze table for a couple
 hours every morning. Selecting was getting to be pretty slow, as well.
 Eventually, I hit on the idea of including one row per day:


 create table basic_daily_grid(
 date DATE NOT NULL PRIMARY KEY,
 variable1 MEDIUMBLOB,
 variable2 MEDIUMBLOB ...
 variable9 MEDIUMBLOB
 );

 And wrote a UDF such that you pass it the variable and a location, and
 it'll return the exact value. This works well because every day has a
 constant number of locations in a grid format, so it's simply an array
 lookup. So,

 select grid_point(location_id, variable1) from basic_daily_grid where
 date=20030101

 would return the right value for locationid. It turns out this is almost
 (95%) as fast as the first version in selecting, but it has the added bonus
 of inserts now only take ~5 seconds per day!




 Alas, after inserting 260 days (less than a year) I hit the MyISAM table
 size limit of 4.2GB - because a BLOB is a variable length field.


 -


 I mention all the above in case someone has an alternative solution I'm
 looking over. Possible solutions I've found are a) use InnoDB instead, b)
 work with the source to create a new field type BLOBARRAY of a constant
 width instead of dynamic, c) work with the source to somehow overcome the
 4.2GB limit on a dynamic table.


 c) Seems unlikely - if the actual developers can't do it, I probably can't.
 b) Seems possible; I assume no one saw a need for a constant-width column
 of 4MB, so hopefully it's not too difficult.
 a) Was my first try. Inserting takes about twice as long as MyISAM... sure,
 I can deal with that. Selecting a specific date is in the same ballpark as
 well, so little problem there. What I'm having severe performance issues on
 are queries that group, or do a count(*). For example:


 mysql> explain select date from basic_daily_grid_innodb;
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | table                   | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | basic_daily_grid_innodb | index | NULL          | PRIMARY |       3 | NULL |  301 | Using index |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+




 mysql> select date from basic_daily_grid_innodb;
 ...
 317 rows in set (0.00 sec)

 mysql> explain select date, count(*) from basic_daily_grid_innodb group by date;
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | table                   | type  | possible_keys | key     | key_len | ref  | rows | Extra       |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+
 | basic_daily_grid_innodb | index | NULL          | PRIMARY |       3 | NULL |  301 | Using index |
 +-------------------------+-------+---------------+---------+---------+------+------+-------------+


 mysql> select date, count(*) from basic_daily_grid_innodb group by date;
 ...
 317 rows in set (2 min 54.95 sec)


 I assume this is due to versioning or some other transactional feature. Or,
 is this a bug, or am I doing something wrong? I don't quite see why
 grouping items that are all unique should be that much slower than not
 grouping. I need InnoDB for the unlimited table size, but I don't (Really)
 need transactions, commit/rollback, or checkpoints.
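
 If an approximate count is ever good enough: InnoDB keeps no exact row
 count, but SHOW TABLE STATUS returns its estimate instantly (the Rows
 column is only approximate for InnoDB tables):

 SHOW TABLE STATUS LIKE 'basic_daily_grid_innodb';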


 Any suggestions on solving this last hurdle? It's entirely likely I'll need
 to group by year and average the results, or something similar - and at 3
 minutes a pop, that's a little high for my liking. Perhaps I'm

MySQL performance issues - PLEASE ADVICE!!

2002-05-07 Thread webmaster

I am having performance problems with my MySQL installation - what would be an 
appropriate channel for requesting help with such a problem?
I have posted to this list twice and to another one as well. Traffic is very high 
with general MySQL/standard SQL questions, but not a single suggestion has been 
submitted in response to my request. 
Is the only valid alternative to go for an annual support deal with MySQL.org? My site 
can't afford that. Lots of enthusiastic 'community' members will be disappointed if it 
should go down because I can't afford proper support. 
Please respond if you have any suggestion where it might be wise to direct my request!

All the best; 

Eivind :-)







RE: MySQL performance issues - PLEASE ADVICE!!

2002-05-07 Thread Jay Blanchard

[snip]
I am having performance problems with my MySQL installation - what would be
an appropriate channel for requesting help to such a problem?
[/snip]

Repost your original concern and I will see if I can help.

Jay







Re: MySQL performance issues - PLEASE ADVICE!!

2002-05-07 Thread Gelu

Hi,
Don't give up. Try again... so, what's your problem?
More details about your problem are welcome.
Regards,
Gelu
_
G.NET SOFTWARE COMPANY

Permanent e-mail address : [EMAIL PROTECTED]
  [EMAIL PROTECTED]

- Original Message -
From: webmaster [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Tuesday, May 07, 2002 5:13 PM
Subject: MySQL performance issues - PLEASE ADVICE!!


I am having performance problems with my MySQL installation - what would be
an appropriate channel for requesting help with such a problem?
I have posted to this list twice and to another one as well. Traffic is very
high with general MySQL/standard SQL questions, but not a single suggestion
has been submitted in response to my request.
Is the only valid alternative to go for an annual support deal with
MySQL.org? My site can't afford that. Lots of enthusiastic 'community'
members will be disappointed if it should go down because I can't afford
proper support.
Please respond if you have any suggestion where it might be wise to direct
my request!

All the best;

Eivind :-)







Re: Performance issues between two servers

2002-05-06 Thread Brent Baisley

Have you checked the network connection? You should be running at
full-duplex so you don't get collisions. But more importantly, make sure you
aren't getting errors on the line. A poor crimp or pinched wire could really
slow things down, especially if you try running at 100Mbit.
Try copying a single large file between APPDEV1/SQLDEV0 and APPDEV1/SQLDEV1.
See what the speed difference is.

 Background: 
 We have a process that runs on a server (APPDEV1) that writes records to a
 mysql server (SQLDEV0). We attempted to move the databases from SQLDEV0 to
 SQLDEV1 and have run into some sort of performance bottleneck. The server
 SQLDEV0 is a Compac server with ~2GB of ram and two processors. The server
 SQLDEV1 is also a Compac server. It has 4GB of ram and two processors that
 are a bit faster than the ones in SQLDEV0.
 
 One big difference between SQLDEV0 and SQLDEV1 is the version of RedHat.
 SQLDEV0 is running RedHat 7.2. SQLDEV1 is running an enterprise version of
 RedHat 7.2 so that it can take advantage of the 4GB of ram.
 
 All the table spaces are using Innodb.
 
 Problem: 
 The process on APPDEV1 can write records to the box SQLDEV0 about eight
 times faster than to SQLDEV1. We've looked over the my.cnf configurations
 and they seem to be ok. In fact we adjusted the my.cnf file on SQLDEV1 so
 that it was identical to SQLDEV0 but it did not help. The systems are
 running ~70-95% cpu idle so CPU is not a bottleneck. In testing, raw disk
 I/O rates are about 50% faster on SQLDEV1 than on SQLDEV0. We don't see a
 bottleneck on I/O.
 
 This is the only process using mysql on SQLDEV1. On SQLDEV0 it shares access
 with several other programs but the box is not very busy.
 
 Thoughts? Comments? Criticism?
 
 Carl McNamee 
 Systems Administrator/DBA
 Billing Concepts 
 (210) 949-7282 
 

-- 
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577






Performance issues between two servers

2002-05-03 Thread Carl McNamee

Background: 
We have a process that runs on a server (APPDEV1) that writes records to a
mysql server (SQLDEV0). We attempted to move the databases from SQLDEV0 to
SQLDEV1 and have run into some sort of performance bottleneck. The server
SQLDEV0 is a Compac server with ~2GB of ram and two processors. The server
SQLDEV1 is also a Compac server. It has 4GB of ram and two processors that
are a bit faster than the ones in SQLDEV0. 

One big difference between SQLDEV0 and SQLDEV1 is the version of RedHat.
SQLDEV0 is running RedHat 7.2. SQLDEV1 is running an enterprise version of
RedHat 7.2 so that it can take advantage of the 4GB of ram. 

All the table spaces are using Innodb. 

Problem: 
The process on APPDEV1 can write records to the box SQLDEV0 about eight times
faster than to SQLDEV1. We've looked over the my.cnf configurations and they
seem to be ok. In fact we adjusted the my.cnf file on SQLDEV1 so that it was
identical to SQLDEV0 but it did not help. The systems are running ~70-95%
cpu idle so CPU is not a bottleneck. In testing, raw disk I/O rates are
about 50% faster on SQLDEV1 than on SQLDEV0. We don't see a bottleneck on I/O. 

This is the only process using mysql on SQLDEV1. On SQLDEV0 it shares access
with several other programs but the box is not very busy. 


Thoughts? Comments? Criticism? 

Carl McNamee 
Systems Administrator/DBA 
Billing Concepts 
(210) 949-7282 






performance issues with large tables?

2001-10-03 Thread Aaron Brick

hello all,

i am interested in determining whether there are performance issues of which
i should be aware related to the usage of a table which takes up ones or
tens of gigabytes. i am planning to use an index of some kind.

i'd appreciate hearing about any experience which you'd relate, or
any theoretical considerations.

thanks,

aaron brick.


   /\
  |   Aaron Brick (415) 206 - 4685   |
  |   [EMAIL PROTECTED] Room 3501C, SFGH   |

  Programmer Analyst, Functional Genomics Core  
   Sandler Center for Basic Research in Asthma
   Lung Biology Center, Department of Medicine
San Francisco General Hospital
   University of California, San Francisco





Re: Performance issues.

2001-02-09 Thread Benjamin Pflugmann

Hi.

On Wed, Feb 07, 2001 at 03:01:28PM -0500, [EMAIL PROTECTED] wrote:
[...]
 We have one table with all the definitions, each on its own row.
 Then we have built off of that table another table that is only the distinct
 words, no definitions.
 
 This is because if a word has 10 definitions, it makes it hard to limit the
 number of results returned from the first table to 5 words, because we don't
 know how many definitions each word has.
 
 We have two columns that we check the search on: keyWord and vKeyWord.
 keyWord is basically the non-display keyword, without spaces and junk.  We
 could remove that from the searching, if it'd help.  Would that make much of
 a difference?

In your case, yes (see below).

 So first we do a:
 "SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE '$keyword%'
 OR vkeyword LIKE '$keyword%'"
 to get the number of entries they can page through.

The problem is that MySQL cannot (yet) use indexes well for OR
clauses. You can see this with

EXPLAIN SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE
'$keyword%' OR vkeyword LIKE '$keyword%';

If you can afford it (as you said above), just let keyword away and
test for vkeyword:

SELECT COUNT(*) AS totalWords FROM keywords WHERE vkeyword LIKE '$keyword%';

This should be quite fast (provided that there exists an index on
vkeyword).
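
If that index is missing, creating it is cheap compared to the scans you
are doing now (the index name is just an example; for a long column you can
index a prefix like vkeyword(20) instead):

CREATE INDEX idx_vkeyword ON keywords (vkeyword);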

 Then we do a:
 "SELECT vkeyWord FROM keywords WHERE keyword LIKE '$keyword%' OR vkeyword
 LIKE '$keyword%' LIMIT $startWordCount, 5"
 ($startWordCount depends on which page they are on)
 And build a list of the words we received.

You would have to rewrite this, too.

 Then we do a:
 "SELECT * FROM Random WHERE vKeyWord IN ($word1, $word2, $word3, $word4,
 $word5) ORDER BY ID"
 
 And *poof* we have all the definitions for 5 words, and the maximum number
 of words that there could be.
 
 Are we doing anything obviously wrong in this?

Not really. It's just that MySQL cannot handle the OR well.

If you really would need the two conditions, there are work-arounds
for that (e.g. creating a temporary table, ...), which I won't
elaborate on now.
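
One of them, as a quick untested sketch (each SELECT of a UNION gets to use
its own index; UNION needs MySQL 4.0 or later):

SELECT vkeyWord FROM keywords WHERE keyword LIKE '$keyword%'
UNION
SELECT vkeyWord FROM keywords WHERE vkeyword LIKE '$keyword%';

A plain UNION also removes the duplicate rows for you.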

 Is there a way to log all the sql calls?

Yes, there are two logs: an update log (contains only queries which
change the database content) and a general log (all queries). You
probably have to enable logging first. Have a look at the section
about server options in the manual.
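
For example, in my.cnf (option names as of 3.23; the general log grows very
fast, so switch it off again when you are done):

[mysqld]
log        = /var/log/mysql/queries.log
log-update = /var/log/mysql/updates.log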

Bye,

Benjamin.






No Key on LIKE% (was: Performance issues.)

2001-02-08 Thread Steve Ruby



Quentin Bennett wrote:
 
 Hi,
 
 For an indexed column, the index is used if the start of the string is used:
 
 LIKE 'a string of text%' may use an index
 LIKE '%any old string%' will not, since the start of the string is unknown.
 
 The index will only be used if the server decides that it will be quicker
 than a full table scan.
 
 Have you got the results of 'explain select ' to see if your index is
 actually being used.
 
 Regards
 
 Quentin


While we are (were) on the subject... any thoughts on why LIKE would not be
using an index in this case?

mysql> explain select * from _data where datatime like '12:00:%';
+-------+------+---------------+------+---------+------+---------+------------+
| table | type | possible_keys | key  | key_len | ref  | rows    | Extra      |
+-------+------+---------------+------+---------+------+---------+------------+
| _data | ALL  | dataTime      | NULL |    NULL | NULL | 5751070 | where used |
+-------+------+---------------+------+---------+------+---------+------------+
1 row in set (0.06 sec)

mysql> explain select * from _data where datatime like '12%';
+-------+------+---------------+------+---------+------+---------+------------+
| table | type | possible_keys | key  | key_len | ref  | rows    | Extra      |
+-------+------+---------------+------+---------+------+---------+------------+
| _data | ALL  | dataTime      | NULL |    NULL | NULL | 5751070 | where used |
+-------+------+---------------+------+---------+------+---------+------------+
1 row in set (0.00 sec)


There is a key on dataTime and there are only 94 unique values for datatime...

So why, across 5.7 million rows, doesn't it use the key?  I have the same
issue on a datadatetime column which has 191,000 unique values, and it
doesn't use the index either.
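
I guess I could rewrite it as a range, which should let the optimizer use
the key (untested):

select * from _data where datatime >= '12:00:00' and datatime < '12:01:00';

But I'd still like to understand why the LIKE version doesn't use it.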





RE: Performance issues.

2001-02-07 Thread Ryan Hadley

Maybe we're doing something really wrong in the way we do each lookup.

We have one table with all the definitions, each on its own row.
Then we have built off of that table another table that is only the distinct
words, no definitions.

This is because if a word has 10 definitions, it makes it hard to limit the
number of results returned from the first table to 5 words, because we don't
know how many definitions each word has.

We have two columns that we check the search on: keyWord and vKeyWord.
keyWord is basically the non-display keyword, without spaces and junk.  We
could remove that from the searching, if it'd help.  Would that make much of
a difference?

So first we do a:
"SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE '$keyword%'
OR vkeyword LIKE '$keyword%'"
to get the number of entries they can page through.

Then we do a:
"SELECT vkeyWord FROM keywords WHERE keyword LIKE '$keyword%' OR vkeyword
LIKE '$keyword%' LIMIT $startWordCount, 5"
($startWordCount depends on which page they are on)
And build a list of the words we received.

Then we do a:
"SELECT * FROM Random WHERE vKeyWord IN ($word1, $word2, $word3, $word4,
$word5) ORDER BY ID"

And *poof* we have all the definitions for 5 words, and the maximum number
of words that there could be.

Are we doing anything obviously wrong in this?

Is there a way to log all the sql calls?

-Original Message-
From: Jason Terry [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 07, 2001 11:01 AM
To: Jim Beigel; [EMAIL PROTECTED]
Subject: Re: Performance issues.


Cool, I like this info. (especially key_buffer_size stuff)

However, I was running my numbers against what you said would be good.   And
this is what I came up with...

The Key_reads/Key_read_requests ratio = 0.002 (much less than you suggest, so
that is good... I think)

However, this one worries me a bit...
Key_writes/Key_write_requests = 0.087 (way lower than 1)

Does this mean that I am sending WAY too many unneeded UPDATE requests?
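
For reference, I'm reading the counters straight out of SHOW STATUS and
dividing Key_reads by Key_read_requests (and likewise for the writes) by
hand:

SHOW STATUS LIKE 'Key_read%';
SHOW STATUS LIKE 'Key_write%';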

- Original Message -
From: "Jim Beigel" [EMAIL PROTECTED]
To: "Ryan Hadley" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, February 07, 2001 6:32 AM
Subject: RE: Performance issues.


 Ryan:

 1.) ...WHERE word LIKE '%word%' will never be fast.

 2.) Since you said you have indexed the field, if you can limit your
 searches to ...WHERE word LIKE 'word%', then you might want to look at
 indexing on a subset (prefix) of the field if it's a big one (a sketch
 follows after point 4).

 3.) You will get better performance if you perform routine maintenance on
 your tables. Try running OPTIMIZE TABLE on them. Or, if you can bring down
 mysqld, try the myisamchk (-a -S) equivalents.

 4.) Lastly, in addition to the other comments, I'd take a look at changing
 some of the parameters in my.cnf. Here's a pertinent snip from the manual:

 join_buffer_size
 ----------------
 The size of the buffer that is used for full joins (joins that do not use
 indexes). The buffer is allocated one time for each full join between two
 tables. Increase this value to get a faster full join when adding indexes is
 not possible. (Normally the best way to get fast joins is to add indexes.)

 key_buffer_size
 ---------------
 Index blocks are buffered and are shared by all threads. key_buffer_size is
 the size of the buffer used for index blocks. Increase this to get better
 index handling (for all reads and multiple writes) to as much as you can
 afford; 64M on a 256M machine that mainly runs MySQL is quite common. If
 you, however, make this too big (more than 50% of your total memory?) your
 system may start to page and become REALLY slow. Remember that because MySQL
 does not cache data reads, you will have to leave some room for the OS
 filesystem cache. You can check the performance of the key buffer by doing
 show status and examining the variables Key_read_requests, Key_reads,
 Key_write_requests, and Key_writes. The Key_reads/Key_read_requests ratio
 should normally be < 0.01. The Key_writes/Key_write_requests ratio is
 usually near 1 if you are using mostly updates/deletes but may be much
 smaller if you tend to do updates that affect many rows at the same time or
 if you are using delay_key_write. See section 7.28 SHOW Syntax (Get
 Information About Tables, Columns,...). To get even more speed when writing
 many rows at the same time, use LOCK TABLES. See section 7.32 LOCK
 TABLES/UNLOCK TABLES Syntax.
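
 To make points 2.) and 3.) concrete, roughly (untested; table and column
 names assumed from your description, and 20 characters is just a starting
 point):

 ALTER TABLE keywords ADD INDEX idx_keyword (keyWord(20));
 OPTIMIZE TABLE keywords;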




 Jim Beigel
 Director of Software Development
 Alabanza Corporation
 [EMAIL PROTECTED]
 740-282-2971 x.5205

  -Original Message-
  From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
  Sent: Tuesday, February 06, 2001 6:09 PM
  To: [EMAIL PROTECTED]
  Subject: RE: Performance issues.
 
 
  Thanks for the quick response.
 
  The response time is slow... and the mysqld processes are what is
  hogging up
  the system.
 
  We do have indexes on the fields, but from what I understand,
  when you use a
  "LIKE" statement, it rarely uses an index.

RE: Performance issues.

2001-02-06 Thread Ryan Hadley

Woah!  800,000 visits a day!?  Wow...  try 80,000.

-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:25 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


I'm not much of a db admin so I really don't know how to get better
performance out of our database... but it seriously needs some speeding up.

We have this huge dictionary... It's 272235 rows.  We have to be able to
search through it.

We run about 800,000 visits a day.

Right now we're doing a "WHERE keyWord LIKE 'word%'"... but the "LIKE"'s are
killing the machine.  We had to upgrade our memory to 512M.  That helped a
bit, now our machine doesn't just die.  But it's still way too slow.  The CPU
is maxing out and we're hitting like 10-15% idle during slow periods and
0.0% idle during rush periods.

What can we do?  Besides of course firing me and getting a real db admin. :)






RE: Performance issues.

2001-02-06 Thread Ryan Hadley

Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salomi%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE, that if your response time to this query is good, there could be
something
other than MySQL running on your machine that is sucking your CPU dry.
You might be able to spot it by running 'top' 'ps -ef' or whatever
command is
available for your machine.

Cheers,

Kent Hoover








RE: Performance issues.

2001-02-06 Thread Quentin Bennett

Hi,

For an indexed column, the index is used if the start of the string is used:

LIKE 'a string of text%' may use an index
LIKE '%any old string%' will not, since the start of the string is unknown.

The index will only be used if the server decides that it will be quicker
than a full table scan.

Have you got the results of 'explain select ' to see if your index is
actually being used.

Regards

Quentin
-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 February 2001 12:09
To: [EMAIL PROTECTED]
Subject: RE: Performance issues.


Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salomi%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE, that if your response time to this query is good, there could be
something
other than MySQL running on your machine that is sucking your CPU dry.
You might be able to spot it by running 'top' 'ps -ef' or whatever
command is
available for your machine.

Cheers,

Kent Hoover








RE: Performance issues.

2001-02-06 Thread Ryan Hadley

I haven't had a chance to do so yet.

But, we offer 4 kinds of searches:
'$word%'
'%$word%'
'%$word'
and
'$word'

So some searches still won't use indexes.
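
One trick I've seen mentioned for the '%$word' case (untested sketch; the
rkeyWord column is made up): keep a reversed copy of each keyword, so a
suffix search becomes an indexable prefix search:

ALTER TABLE keywords ADD rkeyWord VARCHAR(255);
UPDATE keywords SET rkeyWord = REVERSE(keyWord);
ALTER TABLE keywords ADD INDEX idx_rkey (rkeyWord(20));

SELECT * FROM keywords WHERE rkeyWord LIKE CONCAT(REVERSE('$word'), '%');

That still leaves '%$word%' as a full scan, though.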

-Original Message-
From: Quentin Bennett [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 7:26 PM
To: 'Ryan Hadley'; [EMAIL PROTECTED]
Subject: RE: Performance issues.


Hi,

For an indexed column, the index is used if the start of the string is used:

LIKE 'a string of text%' may use an index
LIKE '%any old string%' will not, since the start of the string is unknown.

The index will only be used if the server decides that it will be quicker
than a full table scan.

Have you got the results of 'explain select ' to see if your index is
actually being used.

Regards

Quentin
-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 February 2001 12:09
To: [EMAIL PROTECTED]
Subject: RE: Performance issues.


Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salomi%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE, that if your response time to this query is good, there could be
something
other than MySQL running on your machine that is sucking your CPU dry.
You might be able to spot it by running 'top' 'ps -ef' or whatever
command is
available for your machine.

Cheers,

Kent Hoover








Re: Performance issues.

2001-02-06 Thread --==[bMan]==--

"Only" 272235???  I enter on average about 75,000 to 80,000 records a day 
(and some times, I break 100,000 records a day.  I do monthly rotations so 
it's easy to calculate how big my table gets).  Granted, I don't know what 
your table structure is but mine is very simple.  All I do is run imports 
every morning of CACI.txt file generated by EchoScope (network traffic 
monitoring tool).  I have about 15 fields per records (as far as I can 
remember) and most of them are indexed.  I wish only, that I could only do 
unique index on more than just ID field but due to the uniqueness of what I'm 
doing, I have to live with regular indexes.  Some of the fields are longer 
than 100 characters so I made sure that my indexes are no longer than 20 
characters.  I was playing a lot before I got performance I have right now.  
One thing I've noticed was that if I indexed more than 20 characters (some 
times, I was crazy trying to index all 255 characters), performance was 
actually dropping down drastically.

Now, a few things I want to share:

1.  I use my database as a back-end for my PERL scripts and web interface for 
easy data querying by end users.  Having said that, I could not really afford 
long delays between queries and data display to the browser.

2.  In my queries I use =, LIKE and used to use REGEXP within MySQL.  Out of 
these three, REGEXP was the WORST in performance (and rightly so).  I decided 
to drop it altogether.  An example:

I created a keyword list so clients would not have to type individual words 
to find what they want.  So rather than type: red, yellow, green, gray, blue 
and so on, they would only type "colors" in the query field.  It was my task, 
then, to go and fetch all those.  It's, of course, a simple example but you
can imagine the implications (sex sites, e.g.).  Initially, I used REGEXP and
it took FOREVER to return a query.  It took on average 5-7 minutes to return
a query to a browser searching through a 1,000,000+ row database.  It was totally
unacceptable so I had to re-write it.  I decided to write a routine that
would simply build the query for me.  So, if I had 30 words I am interested
in, my routine would build a query like:

AND (field LIKE '%something%' OR field LIKE '%something1%' OR field LIKE
'%something2%') AND NOT (field LIKE '%thisthing%' OR field LIKE
'%thisthing1%') and so on (it can get quite long).  As you can see, I use
'%word%' and still get great performance out of it.  After re-writing my
code, a query of that type dropped to about 60 seconds running over
1,000,000+ records.  It's a sevenfold+ improvement over MySQL's builtin
REGEXP.

Doing '=' returns a query in less than a second, or a second or so, on the
server.  Very rarely is it more than a few seconds.  A 'LIKE "something%"'
query is also very fast, and even 'LIKE "%something%"' is not much slower.
The longest it took for any of my queries to run was 90 seconds.
Overall, I am very pleased with the way things go.  I was comparing how MS 
SQL would hold up against MySQL.  I tried to import 500,000 records from an
ASCII file to MS SQL and it took half a day only to completely die (I guess
the machine ran out of resources).  That was done on a comparable machine
running Windows NT 4.0.  The same task on MySQL took me between 1 and 5
minutes (if the table was indexed).  Running some tests against commercial
databases (in my case, Progress.  I don't know if anybody knows it here),
MySQL also came up a
winner.  It was about 10 times faster in read queries.

I guess, what I am trying to say is that it's up to you to optimize it as 
much as possible.  Run tests and see what's best for you.

P.S.  If you let people run queries like 's%', expect long delays.  It's only
natural.  I always tell my users that if they want to speed up their queries,
they should type in as much as possible.  Unless it's absolutely necessary,
avoid queries of that type.

Now the hardware (don't laugh, please):

IBM Pentium II, 450MHz with 324Mb of RAM (so I have less than you do) :-)  
One thing that really bugs me, though, is the disks.  On that particular 
machine they are VERY VERY slow.  It's a simple off the assembly line machine 
so I guess I can't really complain.  If I switched them to something better,
I'm sure I would get even better performance.



On Tuesday 06 February 2001 17:24, Ryan Hadley wrote:
-  I'm not much of a db admin so I really don't know how to get better
-  performance out of our database... but it seriously needs some speeding up.
-
-  We have this huge dictionary... It's 272235 rows.  We have to be able to
-  search through it.
-
-  We run about 800,000 visits a day.
-
-  Right now we're doing a "WHERE keyWord LIKE 'word%'"... but the "LIKE"'s are
-  killing the machine.  We had to upgrade our memory to 512M.  That helped a
-  bit, now our machine doesn't just die.  But it's still way too slow.  The CPU
-  is maxing out and we're hitting like 10-15% idle during slow periods and
-  0.0% idle