Re: ~performance issues~

2006-04-11 Thread Ravi Prasad LR
Yes, that is the reason for the slowness.

Thanks,
Ravi
On Tuesday 11 April 2006 10:51, Mohammed Abdul Azeem wrote:
 Hi Ravi,

 Since the sync'ing is done to disk with sync_binlog=1, the update
 queries on that server are slower than on the server with
 sync_binlog=0, right?

 Thanks,
 Abdul.


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



Re: ~performance issues~

2006-04-11 Thread Mohammed Abdul Azeem
Thanks Ravi

On Tue, 2006-04-11 at 11:41 +0530, Ravi Prasad LR wrote:
 Yes, that is the reason for the slowness.
 
 Thanks,
 Ravi



Re: ~performance issues~

2006-04-10 Thread Ravi Prasad LR
Hi Abdul, 
   When sync_binlog is set to 1, the server fsyncs the binary log to 
disk after every single write to the binary log, but not in the case of 
sync_binlog=0.
From the MySQL manual:
[snip]
 If the value of this variable is positive, the MySQL server synchronizes its 
binary log to disk (fdatasync()) after every sync_binlog writes to this 
binary log. Note that there is one write to the binary log per statement if 
in autocommit mode, and otherwise one write per transaction. The default 
value is 0 which does no sync'ing to disk. A value of 1 is the safest choice, 
because in case of crash you lose at most one statement/transaction from the 
binary log; but it is also the slowest choice (unless the disk has a 
battery-backed cache, which makes sync'ing very fast). This variable was 
added in MySQL 4.1.3.
[/snip]
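The trade-off is easy to try out. The following is only a sketch: it assumes a MySQL 4.1.3+ server with binary logging enabled, and that your server allows changing the variable at runtime; otherwise put the setting under [mysqld] in my.cnf and restart.

```sql
-- Check the current setting
SHOW VARIABLES LIKE 'sync_binlog';

-- Safest: fsync the binary log after every write
-- (slow unless the disk has a battery-backed write cache)
SET GLOBAL sync_binlog = 1;

-- Fastest: never fsync; leave flushing to the operating system
SET GLOBAL sync_binlog = 0;
```

The equivalent my.cnf line is simply `sync_binlog=1`.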

--Ravi


On Tuesday 11 April 2006 10:22, Mohammed Abdul Azeem wrote:
 Hi,

 I have a master/slave setup (replication enabled) for MySQL in two
 different geographic locations (one master/slave setup in each
 location). In one location I have configured sync_binlog=1; the
 other location has not.

 My problem is: when I run similar update processes on both master
 servers, the server with sync_binlog=1 is much slower at completing
 the update queries than the machine with sync_binlog=0.

 Is that the cause of the slow performance?

 Thanks in advance,
 Abdul.

 

 




Re: ~performance issues~

2006-04-10 Thread Mohammed Abdul Azeem
Hi Ravi,

Since the sync'ing is done to disk with sync_binlog=1, the update
queries on that server are slower than on the server with sync_binlog=0,
right?

Thanks,
Abdul.

On Tue, 2006-04-11 at 10:51 +0530, Ravi Prasad LR wrote:
 Hi Abdul, 
When sync_binlog is set to 1, the server fsyncs the binary log to 
 disk after every single write to the binary log, but not in the case of 
 sync_binlog=0.





Re: Performance issues when deleting and reading on large table

2005-04-24 Thread Jigal van Hemert
From: Almar van Pel

 After some time (sometimes a week, sometimes a month) it appears that the
 index of the table gets stuck. It tries to read from the table but gets no
 response. This causes the connection queue to fill up and the load on the
 system to increase dramatically. In other words, unless I do an OPTIMIZE
 TABLE, the system hangs. Most of the time you see that the index is about
 20 MB off. When I do CHECK TABLE (before optimizing) there are no errors.

 Is there any way to see this problem coming, so I can outrun it? (Without
 having to schedule OPTIMIZE, which = downtime, every week..)

You should run OPTIMIZE TABLE regularly (once a week or so) in some cases:
http://dev.mysql.com/doc/mysql/en/optimize-table.html

It's probably a case of not having the cardinality of the indexes right,
and thus making wrong decisions for queries.

 Trying to get the previous table clean, I created some jobs deleting old
 records. When I delete a lot of records in one job (roughly 10,000 to
 15,000 records), the system also nearly hangs. The load again increases
 dramatically. I tried every trick in the book, but cannot understand why
 this action is so heavy for the system.

Deleting a lot of records will have an impact on the indexes, so it's quite
a job. The inserts/updates/deletes will also block the table for reading in
the case of MyISAM.

Such a large table in a high-concurrency situation (many writes and many
reads) can be a job for InnoDB tables.
They seem slow for small tables, but tend to keep the same speed for large
tables, while MyISAM will probably get slower the bigger the table is under
these circumstances.
If you can use the PRIMARY index in a query and keep the PRIMARY index as
short as possible, InnoDB can be a very fast table handler.

Depending on the size of the resulting record sets, your system must have
enough memory to handle it. Otherwise a lot of temporary tables will end up
on disk (slow) and also indexes cannot be loaded in memory (slow).

Running large databases is sometimes a bit of a challenge; finding the right
queries, setting up the right index(es), etc.

Regards, Jigal.





RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
Hi all,
Note that indexes are good for SELECTs, but very bad for massive
INSERTs, UPDATEs, and DELETEs.

If you want to do a massive delete with a cron job, it's better to:
 * select the rows to delete (using indexes)
 * drop the indexes
 * delete the rows (already marked)
 * recreate the indexes

Another way, if you want to delete a big percentage of your table, is to
copy the surviving records, drop the table, and recreate it with those
records. Then recreate the indexes.

I assume that you're not in a heavily transactional situation, and are
probably using MyISAM storage. If not, SHOW PROCESSLIST may help you track
whether internal temporary tables are being used, etc.


Mathias




RE: Performance issues when deleting and reading on large table

2005-04-24 Thread Almar van Pel

Hi Jigal, Mathias,

Thanks for the time you took to reply to my issues!

I would like to clear up some things. 

 It's probably a case of not having the cardinality of indexes right and
thus making wrong decisions for queries.
- Currently there is not a single query in the application that does not use
the correct index. We only have key reads. Which would mean that MySQL is
creating these incorrect indexes?

 Depending on the size of the resulting record sets, your system must have
enough memory to handle it. Otherwise a lot of temporary tables will end up
on disk (slow) and also indexes cannot be loaded in memory (slow).
- The system runs with a key buffer of 382 MB, which is most of the time not
100% full. The number of created temporary tables is very low. 

 Deleting a lot of records will have an impact on the indexes, so it's quite
a job. The inserts/updates/deletes will also block the table for reading in
case of MyISAM.
- During deletion of records from the table there is no user interaction.
The only person manipulating the table/database is me. That's the reason why
I'm finding this 'strange'. 

Changing to InnoDB would be a great risk, I think. Maybe we should think this
over again, but the way the system is configured right now should in my
opinion be sufficient. 

Mathias, what do you mean by:

 If you want to do a massive delete with a cron, it's better to :
 * select the rows to delete (using indexes)
 * delete indexes
 * delete rows (already marked)
 * recreate indexes

I don't really understand how you 'mark' the records for deletion before
deleting the indexes. However, I'm very interested.

Regards,

Almar van Pel






RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
If you have no active transactions and want a cron-driven delete, an example
is:
* create table tmp as select * from your_table where ... - here the indexes
are used
* drop the indexes
* delete from your_table where ...
* insert into your_table select * from tmp
* recreate the indexes on your_table.
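One concrete reading of the steps above, as a sketch. All table, column, and index names are invented, and it assumes MyISAM with no concurrent writers:

```sql
-- Keep a safety copy of the rows that should survive
CREATE TABLE tmp SELECT * FROM your_table WHERE created >= '2005-01-01';

-- Drop secondary indexes so the delete does no index maintenance
ALTER TABLE your_table DROP INDEX idx_created;

-- Bulk-delete the old rows
DELETE FROM your_table WHERE created < '2005-01-01';

-- Rebuild the index once, then discard the safety copy
ALTER TABLE your_table ADD INDEX idx_created (created);
DROP TABLE tmp;
```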

You must test it to ensure that index creation is not slow when you have
a lot of indexes.

You can also disable constraints when deleting and optimize your table
at the end of the deletion.

In MyISAM storage, since an UPDATE, INSERT, or DELETE locks the whole
table, there is a big transactional problem. InnoDB offers row locking,
but you seem to have a problem using it. Unfortunately!

To simulate a transaction, you must split your queries. I remember working
on a load problem that took days to finish (or didn't finish at all)
because the load operation was combined with a lot of SELECTs to verify
the data.

My solution was to do a lot of SELECTs (using indexes), spool the results
to files, delete the rows, and load the data back from the files.
It took 1.5 hours to finish 650 MB of data with all the checking
operations.
 

Mathias

-Original Message-
From: Almar van Pel [mailto:[EMAIL PROTECTED] 
Sent: dimanche 24 avril 2005 18:44
To: mysql@lists.mysql.com
Cc: 'Jigal van Hemert'; 'mathias fatene'
Subject: RE: Performance issues when deleting and reading on large table





Re: Performance issues when deleting and reading on large table

2005-04-24 Thread Jigal van Hemert
  It's probably a case of not having the cardinality of indexes right and
  thus making wrong decisions for queries.
 - Currently there is not a single query in the application that does not
 use the correct index. We only have key-reads. Which would mean that MySQL
 is creating these incorrect indexes?

The indexes are not necessarily incorrect, but MySQL also keeps a property
called 'cardinality' for each index. It is an estimate of the number of
different items in the index. An index on 'gender' may have a cardinality of
only two or three (male/female(/unknown)) for example. I've noticed that the
cardinality on MyISAM tables can be very wrong and will be updated to a
correct value after an OPTIMIZE TABLE. InnoDB seems to estimate the
cardinality constantly.
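A sketch of how to inspect and refresh those estimates (the table name is invented; availability of these statements varies slightly by MySQL version):

```sql
-- The Cardinality column is the optimizer's estimate of distinct key values
SHOW INDEX FROM your_table;

-- Re-estimate the key distribution (cheaper than a full OPTIMIZE)
ANALYZE TABLE your_table;

-- Defragment the data file and sort/rebuild the indexes
OPTIMIZE TABLE your_table;
```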

  Deleting a lot of records will have an impact on the indexes, so it's
  quite a job. The inserts/updates/deletes will also block the table for
  reading in case of MyISAM.
 - During deletion of records from the table there is no user interaction.
 The only person manipulating the table/database is me. That's the reason
 why I'm finding this 'strange'.
It will still be a massive operation on indexes. If you have many indexes
the task will be even harder...

 Changing to InnoDB would be a great risk, I think. Maybe we should think
 this over again, but the way the system is configured right now should in
 my opinion be sufficient.

It's not a risk, but it may take a while to complete (rebuilding the
tables). Anyway, you should test it on a separate database or even a
different server. You may also need to redesign the index(es).

Regards, Jigal.





RE: Performance issues when deleting and reading on large table

2005-04-24 Thread mathias fatene
 An index on 'gender' may have a cardinality of
 only two or three (male/female(/unknown)) for example.

Never put a B-tree index on such columns!
Oracle (and DB2 and other RDBMSs) has bitmap indexes, which work fine for
such data. Look at BIN(myset+0) in
http://dev.mysql.com/tech-resources/articles/mysql-set-datatype.html.

Massive loads are better done without indexes, which are only good for
selects.

Mathias

-Original Message-
From: Jigal van Hemert [mailto:[EMAIL PROTECTED] 
Sent: dimanche 24 avril 2005 19:46
To: Almar van Pel; mysql@lists.mysql.com
Cc: 'mathias fatene'
Subject: Re: Performance issues when deleting and reading on large table




Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
On Tue, Jun 22, 2004 at 01:34:39PM -0400, Aram Mirzadeh wrote:
 
 We have an internal SNMP monitoring system that is monitoring about
 10,000 devices.  Each device is pinged then pulled for about an
 average of 25-30 elements.  Each of the ping results and elements
 are then stored in text file, then another system picks them up
 (NFS) and inserts them into a MyISAM (3.23.54) database.  The data
 is kept for 13 weeks.
 
 The database system is a Xeon 4 way, 12GB of ram with a striped raid
 array dedicated to the database files and its indexes and such.
 
 Every 5 minutes another process goes through the last set of inserts
 and compares them for any threshold breaches, so the entire last set
 of data is looked at.
 
 We're falling behind on the inserts because the system can't seem to
 handle the amount of inserts, the front end that generates the web
 pages based on the previous records is dogging down.
 
 I have read the regular optimizations papers and have done as much
 as I felt safe, are there any huge database optimization papers?
 Anything I should be looking at?

I'd consider bulking up the INSERTs, performing multi-row INSERTs
rather than doing them one by one.  That can speed things up quite a
bit in my experience.
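A multi-row INSERT looks like the following (table and column names are invented for illustration); one statement carrying N rows pays the parsing, locking, and key-flush overhead once instead of N times:

```sql
INSERT INTO poll_results (device_id, polled_at, element, value) VALUES
  (101, '2004-06-28 14:00:00', 'ifInOctets', 123456),
  (102, '2004-06-28 14:00:00', 'ifInOctets', 654321),
  (103, '2004-06-28 14:00:00', 'sysUpTime',  998877);
```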

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/




Re: Performance issues

2004-06-28 Thread SGreen

Have you thought about using Merge tables?

If you have a sliding 5-minute monitoring window that you need to query
frequently, you could create a smaller MERGE table holding six minutes'
worth of data, composed of six tables of one minute's data each.

At the end of each minute, you create a new table, change the merge
definition, then archive the old minute (the one that just left the
monitoring window) into a larger static table. Your indexes will be small
(only 1 minutes worth of data). The tables you need to query are smaller
(just 6 minutes worth) and you still keep all of your historical data.  You
could even hold off archiving the old tables until you have some free
time if you needed to.

You could also run tiers of tables. One weekly Merge table containing 7
daily tables. The most recent daily table could be a merge table of up
to 24 hourly tables. The most recent Hourly table could have the
results of archiving off your old minutes for the current hour. I
didn't see anything about NOT nesting MERGE tables, but I would assume it
could cause some serious headaches if you went overboard doing it.

You probably want to review: http://dev.mysql.com/doc/mysql/en/MERGE.html
and: http://dev.mysql.com/doc/mysql/en/MERGE_table_problems.html
for details. (especially read about MERGE tables using lots of file
handles!!!)

I am sure if you thought about it you could break down your storage into
something more manageable than I described.
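A sketch of the sliding one-minute window (all names are invented; the underlying MyISAM tables must have identical structure, and INSERT_METHOD requires MySQL 4.0+ — on 3.23 insert into the newest underlying table directly):

```sql
-- Six identical one-minute MyISAM tables: min1 .. min6
CREATE TABLE min1 (
  ts     DATETIME NOT NULL,
  device INT      NOT NULL,
  value  INT      NOT NULL,
  INDEX (ts)
) TYPE=MyISAM;
-- ... create min2 through min6 the same way ...

-- One MERGE table spanning the window; new rows go to the last table
CREATE TABLE win (
  ts     DATETIME NOT NULL,
  device INT      NOT NULL,
  value  INT      NOT NULL,
  INDEX (ts)
) TYPE=MERGE UNION=(min1,min2,min3,min4,min5,min6) INSERT_METHOD=LAST;

-- Each minute: create min7, slide the window, then archive min1
ALTER TABLE win UNION=(min2,min3,min4,min5,min6,min7);
```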

Best Wishes,
Shawn Green
Database Administrator
Unimin Corporation - Spruce Pine



Jeremy Zawodny [EMAIL PROTECTED] wrote on 06/28/2004 02:24 PM
To: Aram Mirzadeh [EMAIL PROTECTED]
cc: [EMAIL PROTECTED]
Subject: Re: Performance issues




Re: Performance issues

2004-06-28 Thread Andrew Pattison
By default MySQL flushes keys to disk with every INSERT, hence the
performance degradation when performing several single INSERTs one after the
other. The following extract from the MySQL documentation hints at one way
of changing this on a per-table basis:

* Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
index updates faster because they are not flushed to disk until the table is
closed. The downside is that if something kills the server while such a
table is open, you should ensure that they are okay by running the server
with the --myisam-recover option, or by running myisamchk before restarting
the server. (However, even in this case, you should not lose anything by
using DELAY_KEY_WRITE, because the key information can always be generated
from the data rows.)

There is also a way of getting MySQL to do lazy writing of indexes on a
global basis but I couldn't find a quick reference to that.
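Per table, it looks like the following sketch (the table name is invented). The global variant is the --delay-key-write server option; in either case run the server with --myisam-recover so a crash is repaired automatically:

```sql
-- New MyISAM table with delayed key writes
CREATE TABLE samples (
  ts     DATETIME NOT NULL,
  device INT      NOT NULL,
  value  INT      NOT NULL,
  INDEX (device)
) TYPE=MyISAM DELAY_KEY_WRITE=1;

-- Or retrofit an existing table
ALTER TABLE samples DELAY_KEY_WRITE=1;
```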

Cheers

Andrew.

- Original Message - 
From: Jeremy Zawodny [EMAIL PROTECTED]
To: Aram Mirzadeh [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, June 28, 2004 7:24 PM
Subject: Re: Performance issues





Re: Performance issues

2004-06-28 Thread Jeremy Zawodny
On Mon, Jun 28, 2004 at 09:21:04PM +0100, Andrew Pattison wrote:
 By default MySQL flushes keys to disk with every INSERT, hence the
 performance degradation when performing several single INSERTs one after
 the other. The following extract from the MySQL documentation hints at one
 way of changing this on a per-table basis:
 
 Declaring a MyISAM table with the DELAY_KEY_WRITE=1 table option makes
 index updates faster because they are not flushed to disk until the table is
 closed. The downside is that if something kills the server while such a
 table is open, you should ensure that they are okay by running the server
 with the --myisam-recover option, or by running myisamchk before restarting
 the server. (However, even in this case, you should not lose anything by
 using DELAY_KEY_WRITE, because the key information can always be generated
 from the data rows.)
 
 There is also a way of getting MySQL to do lazy writing of indexes on a
 global basis but I couldn't find a quick reference to that.

Delayed Key Writes:

  http://dev.mysql.com/doc/mysql/en/CREATE_TABLE.html

Search that page for delay and you'll find it.
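
A sketch of both ways to set it (hypothetical table; note the old TYPE= syntax, since this thread predates ENGINE=):

```sql
-- At creation time:
CREATE TABLE results (
  device_id INT NOT NULL,
  element   VARCHAR(64) NOT NULL,
  value     BIGINT,
  INDEX (device_id, element)
) TYPE=MyISAM DELAY_KEY_WRITE=1;

-- Or on an existing MyISAM table:
ALTER TABLE results DELAY_KEY_WRITE=1;
```

As the manual extract above warns, run the server with --myisam-recover if you use this.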

Jeremy
-- 
Jeremy D. Zawodny |  Perl, Web, MySQL, Linux Magazine, Yahoo!
[EMAIL PROTECTED]  |  http://jeremy.zawodny.com/

[book] High Performance MySQL -- http://highperformancemysql.com/




Re: Performance issues

2004-06-22 Thread Sergio Salvi
Hi!

Can you give more details on the problematic inserts you're doing (table 
structure, indexes and insert command) ?

Also, do you believe your queries would benefit from MySQL's query cache? 
Maybe it's worth upgrading to version 4 and using this feature, even if 
you allocate just a small amount of memory for it.
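
If you do upgrade to 4.x, enabling the query cache is just a couple of my.cnf lines (the sizes here are placeholders, not recommendations):

```ini
[mysqld]
query_cache_type = 1      # cache every cacheable SELECT
query_cache_size = 16M    # even a modest cache helps repeated queries
```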

[]s,
Sergio Salvi.

On Tue, 22 Jun 2004, Aram Mirzadeh wrote:

 
 
 We have an internal SNMP monitoring system that is monitoring about 
 10,000 devices.  Each device is pinged and then polled for an average 
 of 25-30 elements.  Each of the ping results and elements is then 
 stored in a text file, then another system picks them up (NFS) and inserts 
 them into a MyISAM (3.23.54) database.  The data is kept for 13 weeks.
 
 The database system is a Xeon 4 way, 12GB of ram with a striped raid 
 array dedicated to the database files and its indexes and such.
 
 Every 5 minutes another process goes through the last set of inserts and 
 compares them for any threshold breaches, so the entire last set of data 
 is looked at.
 
 We're falling behind on the inserts because the system can't seem to 
 handle the volume of inserts, and the front end that generates the web pages 
 based on the previous records is bogging down.
 
 I have read the regular optimization papers and have done as much as I 
 felt safe; are there any papers on optimizing very large databases?  
 Anything else I should be looking at?
 
 Here are the relevant my.cnf entries:
 
 set-variable= key_buffer=256M
 set-variable= max_allowed_packet=1M
 set-variable= table_cache=256
 set-variable= sort_buffer=1M
 set-variable= record_buffer=1M
 set-variable= myisam_sort_buffer_size=64M
 set-variable= thread_cache=8
 set-variable= thread_concurrency=8
 
 [mysql]
 no-auto-rehash
 
 [isamchk]
 set-variable= key_buffer=128M
 set-variable= sort_buffer=128M
 set-variable= read_buffer=2M
 set-variable= write_buffer=2M
 
 [myisamchk]
 set-variable= key_buffer=128M
 set-variable= sort_buffer=128M
 set-variable= read_buffer=2M
 set-variable= write_buffer=2M
 
 And here is my top output:
 
 MySQL on 1.2.3.4 (3.23.54) up 2+06:36:05 [13:10:01]
   Queries: 191.5M  qps: 1022 Slow:   296.0 Se/In/Up/De(%): 
 22/10/62/00
   qps now:  147 Slow qps: 0.0  Threads:9 (   2/   7) 
 50/00/00/00
   Key Efficiency: 96.7%  Bps in/out:  4.0k/ 1.6k   Now in/out:  6.2k/767.7k
 
 Any suggestions would be greatly appreciated.
 
 Aram
 
 








Re: Performance issues

2004-06-22 Thread mos
At 12:34 PM 6/22/2004, you wrote:

We have an internal SNMP monitoring system that is monitoring about 10,000 
devices.  Each device is pinged and then polled for an average of 25-30 
elements.  Each of the ping results and elements is then stored in a text 
file, then another system picks them up (NFS) and inserts them into a 
MyISAM (3.23.54) database.  The data is kept for 13 weeks.

The database system is a Xeon 4 way, 12GB of ram with a striped raid array 
dedicated to the database files and its indexes and such.

Every 5 minutes another process goes through the last set of inserts and 
compares them for any threshold breaches, so the entire last set of data 
is looked at.

We're falling behind on the inserts because the system can't seem to 
handle the volume of inserts, and the front end that generates the web pages 
based on the previous records is bogging down.

Have you tried LOAD DATA INFILE? It is for loading data from a text file 
into a table and is much faster than using INSERT statements. For 
example, I can load 1 million rows of x(30) into a MyISAM table in 15 
seconds on a P4 2.4GHz machine. You can use either IGNORE or REPLACE to 
handle duplicate indexes.
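
Since the collector already writes text files, something along these lines could replace the per-row INSERTs (the path, delimiters, and column list are assumptions about the file format):

```sql
LOAD DATA INFILE '/var/spool/snmp/results.txt'
  REPLACE                          -- or IGNORE, to handle duplicate keys
  INTO TABLE results
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
  (device_id, element, value, ts);
```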

Mike


Re: Performance issues between two servers

2002-05-06 Thread Brent Baisley

Have you checked the network connection? You should be running at
full-duplex so you don't get collisions. But more importantly, make sure you
aren't getting errors on the line. A poor crimp or pinched wire could really
slow things down, especially if you try running at 100Mbps.
Try copying a single large file between APPDEV1/SQLDEV0 and APPDEV1/SQLDEV1.
See what the speed difference is.

 Background: 
 We have a process that runs on a server (APPDEV1) that writes records to a
 mysql server (SQLDEV0). We attempted to move the databases from SQLDEV0 to
 SQLDEV1 and have run into some sort of performance bottleneck. The server
 SQLDEV0 is a Compaq server with ~2GB of RAM and two processors. The server
 SQLDEV1 is also a Compaq server. It has 4GB of RAM and two processors that
 are a bit faster than the ones in SQLDEV0.
 
 One big difference between SQLDEV0 and SQLDEV1 is the version of RedHat.
 SQLDEV0 is running RedHat 7.2. SQLDEV1 is running an enterprise version of
 RedHat 7.2 so that it can take advantage of the 4GB of ram.
 
 All the table spaces are using Innodb.
 
 Problem: 
 The process on APPDEV1 can write records to the box SQLDEV0 about eight times
 faster than to SQLDEV1. We've looked over the my.cnf configurations and they
 seem to be ok. In fact we adjusted the my.cnf file on SQLDEV1 so that it was
 identical to SQLDEV0 but it did not help. The systems are running ~70-95%
 cpu idle so CPU is not a bottleneck. In testing, raw disk I/O rates are
 about 50% faster on SQLDEV1 than on SQLDEV0. We don't see a bottleneck on I/O.
 
 This is the only process using mysql on SQLDEV1. On SQLDEV0 it shares access
 with several other programs but the box is not very busy.
 
 Thoughts? Comments? Criticism?
 
 Carl McNamee 
 Systems Administrator/DBA
 Billing Concepts 
 (210) 949-7282 
 
 -
 Before posting, please check:
  http://www.mysql.com/manual.php   (the manual)
  http://lists.mysql.com/   (the list archive)
 
 To request this thread, e-mail [EMAIL PROTECTED]
 To unsubscribe, e-mail [EMAIL PROTECTED]
 Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php
 

-- 
Brent Baisley
Systems Architect
Landover Associates, Inc.
Search  Advisory Services for Advanced Technology Environments
p: 212.759.6400/800.759.0577






Re: Performance issues.

2001-02-09 Thread Benjamin Pflugmann

Hi.

On Wed, Feb 07, 2001 at 03:01:28PM -0500, [EMAIL PROTECTED] wrote:
[...]
 We have one table with all the definitions, each on its own row.
 Then we have built off of that table another table containing only the
 distinct words, no definitions.
 
 This is because if a word has 10 definitions, it makes it hard to limit the
 number of results returned from the first table to 5 words, because we don't
 know how many definitions each word has.
 
 We have two columns that we search on: keyWord and vKeyWord.
 keyWord is basically the non-display keyword, without spaces and junk.  We
 could remove that from the searching, if it'd help.  Would that make much of
 a difference?

In your case, yes (see below).

 So first we do a:
 "SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE '$keyword%'
 OR vkeyword LIKE '$keyword%'"
 to get the number of entries they can page through.

The problem is that MySQL cannot (yet) use indexes well for OR
clauses. You can see this with

EXPLAIN SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE
'$keyword%' OR vkeyword LIKE '$keyword%';

If you can afford it (as you said above), just leave keyword out and
test only vkeyword:

SELECT COUNT(*) AS totalWords FROM keywords WHERE vkeyword LIKE '$keyword%';

This should be quite fast (provided that there exists an index on
vkeyword).
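
Concretely, something like this (assuming the index doesn't already exist; 'someword%' stands in for the user's search term):

```sql
-- Index vkeyword if it isn't already:
ALTER TABLE keywords ADD INDEX (vkeyword);

-- Verify with EXPLAIN: you want to see type "range" using the vkeyword
-- index, not a full table scan (type ALL).
EXPLAIN SELECT COUNT(*) AS totalWords
FROM keywords WHERE vkeyword LIKE 'someword%';
```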

 Then we do a:
 "SELECT vkeyWord FROM keywords WHERE keyword LIKE '$keyword%' OR vkeyword
 LIKE '$keyword%' LIMIT $startWordCount, 5"
 ($startWordCount depends on which page they are on)
 And build a list of the words we received.

You would have to rewrite this, too.

 Then we do a:
 "SELECT * FROM Random WHERE vKeyWord IN ($word1, $word2, $word3, $word4,
 $word5) ORDER BY ID"
 
 And *poof* we have all the definitions for 5 words, and the maximum number
 of words that there could be.
 
 Are we doing anything obviously wrong in this?

Not really. It's just that MySQL cannot handle the OR well.

If you really would need the two conditions, there are work-arounds
for that (e.g. creating a temporary table, ...), which I won't
elaborate on now.

 Is there a way to log all the sql calls?

Yes, there are two logs: an update log (contains only queries which
change the database content) and a general log (all queries). You
probably have to enable logging first. Have a look at the section
about server options in the manual.
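
For example, in my.cnf (the paths are placeholders; option names as in the 3.23/4.0 manual):

```ini
[mysqld]
log        = /var/log/mysql/queries.log   # general log: every query
log-update = /var/log/mysql/updates       # update log: only data-changing queries
```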

Bye,

Benjamin.






RE: Performance issues.

2001-02-07 Thread Ryan Hadley

Maybe we're doing something really wrong in the way we do each look up.

We have one table with all the definitions, each on its own row.
Then we have built off of that table another table containing only the
distinct words, no definitions.

This is because if a word has 10 definitions, it makes it hard to limit the
number of results returned from the first table to 5 words, because we don't
know how many definitions each word has.

We have two columns that we search on: keyWord and vKeyWord.
keyWord is basically the non-display keyword, without spaces and junk.  We
could remove that from the searching, if it'd help.  Would that make much of
a difference?

So first we do a:
"SELECT COUNT(*) AS totalWords FROM keywords WHERE keyword LIKE '$keyword%'
OR vkeyword LIKE '$keyword%'"
to get the number of entries they can page through.

Then we do a:
"SELECT vkeyWord FROM keywords WHERE keyword LIKE '$keyword%' OR vkeyword
LIKE '$keyword%' LIMIT $startWordCount, 5"
($startWordCount depends on which page they are on)
And build a list of the words we received.

Then we do a:
"SELECT * FROM Random WHERE vKeyWord IN ($word1, $word2, $word3, $word4,
$word5) ORDER BY ID"

And *poof* we have all the definitions for 5 words, and the maximum number
of words that there could be.

Are we doing anything obviously wrong in this?

Is there a way to log all the sql calls?

-Original Message-
From: Jason Terry [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, February 07, 2001 11:01 AM
To: Jim Beigel; [EMAIL PROTECTED]
Subject: Re: Performance issues.


Cool, I like this info. (especially key_buffer_size stuff)

However, I was running my numbers against what you said would be good.   And
this is what I came up with...

The Key_reads/Key_read_request = 0.002 (much less than you suggest so that
is good... I think)

However this one worries me a bit...
Key_write/Key_write_requests  = 0.087 (way lower than 1)

Does this mean that I am sending WAY too many unneeded UPDATE requests?

- Original Message -
From: "Jim Beigel" [EMAIL PROTECTED]
To: "Ryan Hadley" [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Wednesday, February 07, 2001 6:32 AM
Subject: RE: Performance issues.


 Ryan:

 1.) ...WHERE word LIKE '%word%' will never be fast.

 2.) Since you said you have indexed the field, if you can limit your
 searches to ...WHERE word LIKE 'word%', then you might want to look at
 indexing on a subset of the field if it's a big one.

 3.) You will get better performance if you perform routine maintenance on
 your tables. Try running OPTIMIZE TABLE regularly. Or, if you can bring down
 mysqld, try the myisamchk (-a -S) equivalents.

 4.) Lastly, in addition to the other comments, I'd take a look at changing
 some of the parameters in my.cnf. Here's a pertinent snip from the manual:

 join_buffer_size
 ----------------
 The size of the buffer that is used for full joins (joins that do not use
 indexes). The buffer is allocated one time for each full join between two
 tables. Increase this value to get a faster full join when adding indexes is
 not possible. (Normally the best way to get fast joins is to add indexes.)

 key_buffer_size
 ---------------
 Index blocks are buffered and are shared by all threads. key_buffer_size is
 the size of the buffer used for index blocks. Increase this to get better
 index handling (for all reads and multiple writes) to as much as you can
 afford; 64M on a 256M machine that mainly runs MySQL is quite common. If
 you, however, make this too big (more than 50% of your total memory?) your
 system may start to page and become REALLY slow. Remember that because MySQL
 does not cache data reads, you will have to leave some room for the OS
 filesystem cache. You can check the performance of the key buffer by doing
 SHOW STATUS and examining the variables Key_read_requests, Key_reads,
 Key_write_requests, and Key_writes. The Key_reads/Key_read_requests ratio
 should normally be < 0.01. The Key_writes/Key_write_requests ratio is usually
 near 1 if you are using mostly updates/deletes but may be much smaller if you
 tend to do updates that affect many rows at the same time or if you are using
 delay_key_write. See section 7.28 SHOW Syntax (Get Information About Tables,
 Columns,...). To get even more speed when writing many rows at the same
 time, use LOCK TABLES. See section 7.32 LOCK TABLES/UNLOCK TABLES Syntax.




 Jim Beigel
 Director of Software Development
 Alabanza Corporation
 [EMAIL PROTECTED]
 740-282-2971 x.5205

  -Original Message-
  From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
  Sent: Tuesday, February 06, 2001 6:09 PM
  To: [EMAIL PROTECTED]
  Subject: RE: Performance issues.
 
 
  Thanks for the quick response.
 
  The response time is slow... and the mysqld processes are what is
  hogging up
  the system.
 
  We do have indexes on the fields, but from what I understand,
  when you use a
  "LIKE" statement, it rarely uses an index.

RE: Performance issues.

2001-02-06 Thread Ryan Hadley

Woah!  800,000 visits a day!?  Wow...  try 80,000.

-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:25 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


I'm not much of a db admin so I really don't know how to get better
performance out of our database... but it seriously needs some speeding up.

We have this huge dictionary... It's 272235 rows.  We have to be able to
search through it.

We run about 800,000 visits a day.

Right now we're doing a "WHERE keyWord LIKE 'word%'"... but the "LIKE"'s are
killing the machine.  We had to upgrade our memory to 512M.  That helped a
bit; now our machine doesn't just die.  But it's still way too slow.  The CPU
is maxing out and we're hitting like 10-15% idle during slow periods and
0.0% idle during rush periods.

What can we do?  Besides of course firing me and getting a real db admin. :)








RE: Performance issues.

2001-02-06 Thread Ryan Hadley

Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salami%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE: if your response time for this query is good, there could be
something other than MySQL running on your machine that is sucking your
CPU dry.  You might be able to spot it by running 'top', 'ps -ef', or
whatever command is available for your machine.

Cheers,

Kent Hoover










RE: Performance issues.

2001-02-06 Thread Quentin Bennett

Hi,

For an indexed column, the index can be used when the start of the string is known:

LIKE 'a string of text%' may use an index
LIKE '%any old string%' will not, since the start of the string is unknown.

The index will only be used if the server decides that it will be quicker
than a full table scan.

Have you looked at the output of 'EXPLAIN SELECT ...' to see whether your
index is actually being used?
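
A quick way to see the difference for yourself (hypothetical table and search term):

```sql
-- Prefix pattern: EXPLAIN should show type "range" using the keyWord index.
EXPLAIN SELECT vkeyWord FROM keywords WHERE keyWord LIKE 'sal%';

-- Leading wildcard: EXPLAIN shows type "ALL", i.e. a full table scan.
EXPLAIN SELECT vkeyWord FROM keywords WHERE keyWord LIKE '%sal%';
```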

Regards

Quentin
-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 February 2001 12:09
To: [EMAIL PROTECTED]
Subject: RE: Performance issues.


Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salami%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE: if your response time for this query is good, there could be
something other than MySQL running on your machine that is sucking your
CPU dry.  You might be able to spot it by running 'top', 'ps -ef', or
whatever command is available for your machine.

Cheers,

Kent Hoover







The information contained in this email is privileged and confidential
and intended for the addressee only. If you are not the intended 
recipient, you are asked to respect that confidentiality and not 
disclose, copy or make use of its contents. If received in error 
you are asked to destroy this email and contact the sender immediately. 
Your assistance is appreciated.





RE: Performance issues.

2001-02-06 Thread Ryan Hadley

I haven't had a chance to do so yet.

But, we offer 4 kinds of searches:
'$word%'
'%$word%'
'%$word'
and
'$word'

So some searches still won't use indexes.

-Original Message-
From: Quentin Bennett [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 7:26 PM
To: 'Ryan Hadley'; [EMAIL PROTECTED]
Subject: RE: Performance issues.


Hi,

For an indexed column, the index can be used when the start of the string is known:

LIKE 'a string of text%' may use an index
LIKE '%any old string%' will not, since the start of the string is unknown.

The index will only be used if the server decides that it will be quicker
than a full table scan.

Have you looked at the output of 'EXPLAIN SELECT ...' to see whether your
index is actually being used?

Regards

Quentin
-Original Message-
From: Ryan Hadley [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 7 February 2001 12:09
To: [EMAIL PROTECTED]
Subject: RE: Performance issues.


Thanks for the quick response.

The response time is slow... and the mysqld processes are what is hogging up
the system.

We do have indexes on the fields, but from what I understand, when you use a
"LIKE" statement, it rarely uses an index.

-Ryan

-Original Message-
From: Kent Hoover [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, February 06, 2001 5:42 PM
To: [EMAIL PROTECTED]
Subject: Performance issues.


Ryan:

If your response time for this query is slow, it is likely that an INDEX
will help you. (Read about CREATE INDEX in the MySQL manual.)

If you don't already have an INDEX on the keyWord column, create one.

If you can induce your customer/users to type more characters, that
would help.
WHERE keyWord LIKE 'salami%'  is much better for you than
WHERE keyWord LIKE 's%'.

NOTE: if your response time for this query is good, there could be
something other than MySQL running on your machine that is sucking your
CPU dry.  You might be able to spot it by running 'top', 'ps -ef', or
whatever command is available for your machine.

Cheers,

Kent Hoover














Re: Performance issues.

2001-02-06 Thread --==[bMan]==--

"Only" 272235???  I enter on average about 75,000 to 80,000 records a day 
(and sometimes I break 100,000 records a day.  I do monthly rotations, so 
it's easy to calculate how big my table gets).  Granted, I don't know what 
your table structure is but mine is very simple.  All I do is run imports 
every morning of CACI.txt file generated by EchoScope (network traffic 
monitoring tool).  I have about 15 fields per record (as far as I can 
remember) and most of them are indexed.  I only wish I could have a unique 
index on more than just the ID field, but due to the uniqueness of what I'm 
doing, I have to live with regular indexes.  Some of the fields are longer 
than 100 characters, so I made sure that my indexes are no longer than 20 
characters.  I played around a lot before I got the performance I have now.  
One thing I've noticed was that if I indexed more than 20 characters (at 
times I was crazy enough to try indexing all 255 characters), performance 
actually dropped drastically.
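
In index terms, that 20-character cap is just a prefix length (the column and table names here are invented):

```sql
-- Index only the first 20 characters; indexing the full 255 makes the
-- key file much larger and slows every insert.
ALTER TABLE traffic ADD INDEX (src_host(20));
```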

Now, things I want to share with:

1.  I use my database as a back-end for my Perl scripts and a web interface 
for easy data querying by end users.  Having said that, I cannot really afford 
long delays between a query and the data being displayed in the browser.

2.  In my queries I use =, LIKE and used to use REGEXP within MySQL.  Out of 
these three, REGEXP was the WORST in performance (and rightly so).  I decided 
to drop it altogether.  An example:

I created a keyword list so clients would not have to type individual words 
to find what they want.  So rather than type: red, yellow, green, gray, blue 
and so on, they would only type "colors" in the query field.  It was my task, 
then, to go and fetch all those.  It's, of course a simple example but you 
can imagine the implications (e.g. sex sites).  Initially, I used REGEXP and 
it took FOREVER to return a query: on average 5-7 minutes to return a query 
to a browser searching through a 1,000,000+ record database.  That was 
totally unacceptable so I had to re-write it.  I decided to write a routine 
that would simply build the query for me.  So, if I had 30 words I was 
interested in, my routine would build a query like:

AND (field LIKE '%something%' OR field LIKE '%something1%' OR field LIKE 
'%something2%') AND NOT (field LIKE '%thisthing%' OR field LIKE '%thisthing1%') 
and so on (it can get quite long).  As you can see, I use '%word%' and still 
get great performance out of it.  After re-writing my code, a query of that 
type dropped to about 60 seconds running over 1,000,000+ records.  That's a 
sevenfold+ improvement over MySQL's built-in REGEXP.

Doing '=' returns a query in less than a second to a second on the 
server.  Very rarely is it more than a few seconds.  A 'LIKE "something%"' 
query is also very fast, and even 'LIKE "%something%"' is not much slower.  
The longest any of my queries took to run was 90 seconds.  
Overall, I am very pleased with the way things go.  I compared how MS 
SQL would hold up against MySQL: I tried to import 500,000 records from an 
ASCII file into MS SQL and it took half a day only to die completely (I 
guess the machine ran out of resources).  That was done on a comparable 
machine running Windows NT 4.0.  The same task on MySQL took me between 1 
and 5 minutes (if the table was indexed).  Running some tests against 
commercial databases (in my case Progress; I don't know if anybody here 
knows it), MySQL also came up a winner.  It was about 10 times faster in 
read queries.

I guess, what I am trying to say is that it's up to you to optimize it as 
much as possible.  Run tests and see what's best for you.

P.S.  If you let people run queries like 's%', expect long delays.  It's only 
natural.  I always tell my users that if they want to speed up their queries, 
they should type in as much as possible.  Avoid queries of that type unless 
absolutely necessary.

Now the hardware (don't laugh, please):

IBM Pentium II, 450MHz with 324MB of RAM (so I have less than you do) :-)  
One thing that really bugs me, though, is the disks.  On that particular 
machine they are VERY VERY slow.  It's a simple off-the-assembly-line machine 
so I guess I can't really complain.  If I switched them to something better, 
I'm sure I would get even better performance.



On Tuesday 06 February 2001 17:24, Ryan Hadley wrote:
-  I'm not much of a db admin so I really don't know how to get better
-  performance out of our database... but it seriously needs some speeding up.
-
-  We have this huge dictionary... It's 272235 rows.  We have to be able to
-  search through it.
-
-  We run about 800,000 visits a day.
-
-  Right now we're doing a "WHERE keyWord LIKE 'word%'"... but the "LIKE"'s
-  are killing the machine.  We had to upgrade our memory to 512M.  That
-  helped a bit; now our machine doesn't just die.  But it's still way too
-  slow.  The CPU is maxing out and we're hitting like 10-15% idle during
-  slow periods and 0.0% idle