Re: Specific benchmarking tool

2009-11-24 Thread ewen fortune
Johan,

Yes, there are built-in parsers for different formats; for example, I
was using the general log:
mk-log-player --split Thread_id --type genlog

(genlog was added the other day and is only in trunk so far)

http://www.maatkit.org/doc/mk-log-player.html

--type

type: string; group: Split

The type of log to --split (default slowlog). The permitted types are

binlog

Split a binary log file.
slowlog

Split a log file in any variation of MySQL slow-log format.
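For illustration only (this is not mk-log-player's code): splitting a general log by Thread_id amounts to grouping each connection's statements so the sessions can be replayed independently. A minimal Python sketch, assuming 5.x general-log lines of the form "<time>  <id> <command>\t<argument>":

```python
import re
from collections import defaultdict

# Hypothetical miniature of what mk-log-player --split Thread_id does:
# group general-log entries by connection (thread) id. The timestamp
# prefix is optional because the genlog only prints it when it changes.
ENTRY = re.compile(r'^(?:\d{6}\s+[\d:]+)?\s*(\d+)\s+(\w+)\s+(.*)$')

def split_by_thread(lines):
    sessions = defaultdict(list)
    for line in lines:
        m = ENTRY.match(line)
        if m:
            tid, command, arg = m.groups()
            if command == 'Query':          # keep only replayable statements
                sessions[int(tid)].append(arg)
    return sessions

log = [
    "091124 14:00:01\t    3 Connect\troot@localhost on test",
    "      3 Query\tSELECT * FROM t1",
    "091124 14:00:02\t    4 Query\tSELECT * FROM t2",
    "      3 Query\tSELECT 1",
]
sessions = split_by_thread(log)   # {3: [...two queries...], 4: [...one...]}
```

Each resulting list could then be written to its own session file and replayed concurrently, which is what the real tool does.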

Cheers,

Ewen

On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman vegiv...@tuxera.be wrote:
 Ewen,

 Do you need a specific log format or setting? I'm debugging the tool, and
 it uses ;\n# as the record separator, which is not at all consistent with
 the log format I get out of the mysql log. Does it perchance try to parse
 zero-execution-time slowlogs instead of the regular log?


 On Sat, Nov 14, 2009 at 1:23 AM, Johan De Meersman vegiv...@tuxera.be
 wrote:

 Hmm, I got segfaults. I'll check after the weekend.

 On 11/13/09, ewen fortune ewen.fort...@gmail.com wrote:
  Johan,
 
  What does? mk-log-player? I just used it to split and play back an 8G log,
  no problem.
 
 
  Ewen
 
  On Fri, Nov 13, 2009 at 6:20 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
  It seems to have a problem with multi-gigabyte files :-D
 
  On Fri, Nov 13, 2009 at 5:35 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
 
  Ooo, shiny ! Thanks, mate :-)
 
  On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune ewen.fort...@gmail.com
  wrote:
 
  Johan,
 
  The very latest version of mk-log-player can do that.
  If you get the version from trunk:
 
  wget http://www.maatkit.org/trunk/mk-log-player
 
  mk-log-player --split Thread_id --type genlog
 
  Cheers,
 
  Ewen
 
  On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman
  vegiv...@tuxera.be
  wrote:
   Hey all,
  
   I'm looking for a Mysql benchmarking/stresstesting tool that can
   generate a
   workload based on standard Mysql full query log files. The idea is
   to
   verify
   performance of real production loads on various database setups.
  
   Does anyone know of such a tool, free or paying ?
  
   Thx,
   Johan
  
 
 
 
 



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Specific benchmarking tool

2009-11-24 Thread Johan De Meersman
Yeah, I figured that out in the meantime :-) I was putting the log type
right after --split, and the damn thing doesn't think to throw an
'unknown field' error :-)

It's working now, thanks a lot !

On Tue, Nov 24, 2009 at 8:27 PM, ewen fortune ewen.fort...@gmail.comwrote:

 Johan,

 Yes, there are built in parsers for different formats, for example I
 was using the general log.
 mk-log-player --split Thread_id --type genlog

 (genlog was added the other day and is only in trunk so far)

 http://www.maatkit.org/doc/mk-log-player.html

 --type

type: string; group: Split

The type of log to --split (default slowlog). The permitted types are

binlog

Split a binary log file.
slowlog

Split a log file in any variation of MySQL slow-log format.

 Cheers,

 Ewen

 On Tue, Nov 24, 2009 at 2:41 PM, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Ewen,
 
  Do you need a specific log format or setting ? I'm debugging the tool,
 and
  it uses ;\n# as record separator, which is entirely not consistent with
  the log format I get out of the mysql log. Does it perchance try to parse
  zero-execution-time slowlogs instead of the regular log ?
 
 
  On Sat, Nov 14, 2009 at 1:23 AM, Johan De Meersman vegiv...@tuxera.be
  wrote:
 
  Hmm, I got segfaults. I'll check after the weekend.
 
  On 11/13/09, ewen fortune ewen.fort...@gmail.com wrote:
   Johan,
  
   What does? mk-log-player? - I just used it to split and play back 8G,
   no problem.
  
  
   Ewen
  
   On Fri, Nov 13, 2009 at 6:20 PM, Johan De Meersman 
 vegiv...@tuxera.be
   wrote:
   It seems to have a problem with multi-gigabyte files :-D
  
   On Fri, Nov 13, 2009 at 5:35 PM, Johan De Meersman 
 vegiv...@tuxera.be
   wrote:
  
   Ooo, shiny ! Thanks, mate :-)
  
   On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune 
 ewen.fort...@gmail.com
   wrote:
  
   Johan,
  
   The very latest version of mk-log-player can do that.
   If you get the version from trunk:
  
   wget http://www.maatkit.org/trunk/mk-log-player
  
   mk-log-player --split Thread_id --type genlog
  
   Cheers,
  
   Ewen
  
   On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman
   vegiv...@tuxera.be
   wrote:
Hey all,
   
I'm looking for a Mysql benchmarking/stresstesting tool that can
generate a
workload based on standard Mysql full query log files. The idea
 is
to
verify
performance of real production loads on various database setups.
   
Does anyone know of such a tool, free or paying ?
   
Thx,
Johan
   
  
  
  
  
 
 





Specific benchmarking tool

2009-11-13 Thread Johan De Meersman
Hey all,

I'm looking for a MySQL benchmarking/stress-testing tool that can generate a
workload based on standard MySQL full query log files. The idea is to verify
the performance of real production loads on various database setups.

Does anyone know of such a tool, free or paid?

Thx,
Johan


Re: Specific benchmarking tool

2009-11-13 Thread Walter Heck - OlinData.com
Take a look at mysqlslap: http://dev.mysql.com/doc/refman/5.1/en/mysqlslap.html

Walter

On Fri, Nov 13, 2009 at 22:33, Johan De Meersman vegiv...@tuxera.be wrote:
 Hey all,

 I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
 workload based on standard Mysql full query log files. The idea is to verify
 performance of real production loads on various database setups.

 Does anyone know of such a tool, free or paying ?

 Thx,
 Johan





Re: Specific benchmarking tool

2009-11-13 Thread ewen fortune
Johan,

The very latest version of mk-log-player can do that.
If you get the version from trunk:

wget http://www.maatkit.org/trunk/mk-log-player

mk-log-player --split Thread_id --type genlog

Cheers,

Ewen

On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman vegiv...@tuxera.be wrote:
 Hey all,

 I'm looking for a Mysql benchmarking/stresstesting tool that can generate a
 workload based on standard Mysql full query log files. The idea is to verify
 performance of real production loads on various database setups.

 Does anyone know of such a tool, free or paying ?

 Thx,
 Johan





Re: Specific benchmarking tool

2009-11-13 Thread Johan De Meersman
I did :-) Maybe I missed it, but I didn't see any option that suggests
mysqlslap can easily replay a query log?

On Fri, Nov 13, 2009 at 4:50 PM, Walter Heck - OlinData.com 
li...@olindata.com wrote:

 take a look at mysqlslap:
 http://dev.mysql.com/doc/refman/5.1/en/mysqlslap.html

 Walter

 On Fri, Nov 13, 2009 at 22:33, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Hey all,
 
  I'm looking for a Mysql benchmarking/stresstesting tool that can generate
 a
  workload based on standard Mysql full query log files. The idea is to
 verify
  performance of real production loads on various database setups.
 
  Does anyone know of such a tool, free or paying ?
 
  Thx,
  Johan
 





Re: Specific benchmarking tool

2009-11-13 Thread Johan De Meersman
Ooo, shiny ! Thanks, mate :-)

On Fri, Nov 13, 2009 at 4:56 PM, ewen fortune ewen.fort...@gmail.comwrote:

 Johan,

 The very latest version of mk-log-player can do that.
 If you get the version from trunk:

 wget http://www.maatkit.org/trunk/mk-log-player

 mk-log-player --split Thread_id --type genlog

 Cheers,

 Ewen

 On Fri, Nov 13, 2009 at 4:33 PM, Johan De Meersman vegiv...@tuxera.be
 wrote:
  Hey all,
 
  I'm looking for a Mysql benchmarking/stresstesting tool that can generate
 a
  workload based on standard Mysql full query log files. The idea is to
 verify
  performance of real production loads on various database setups.
 
  Does anyone know of such a tool, free or paying ?
 
  Thx,
  Johan
 



Re: MySQL Benchmarking

2007-03-19 Thread Clyde Lewis

Alex,

Thanks a bunch for the insight and for providing the links to the
benchmarking tools below. Unfortunately, the business is requiring
that each database live in its own instance, so it sounds like
moving in the direction of having multiple servers and spreading the
data around would be the best idea.


Again, thanks.
CL

At 06:39 PM 3/15/2007, Alex Greg wrote:

On 3/14/07, Clyde Lewis [EMAIL PROTECTED] wrote:

System Configuration: Sun Microsystems  sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 65536 Megabytes
CPU: 12 @ 1200 MHz

I'm looking for a tool that will allow us to determine the max number
of databases that can run in a single instance of MySQL on a pretty
beefy server (spec above).

In total we will have about ~40 MySQL
instances running on this server. In each instance of MySQL, there will
be between 30-60 individual databases supporting an OLTP
application. I know that there are no known internal limits in MySQL
on the number of databases that can be created, but I
would like to get my hands on a tool that can simulate that number of
databases and identify where we would potentially run into
performance issues.


As I mentioned above, your performance issues are going to come not
from the number of databases, but from (primarily) how well-designed
your database tables and queries are, and (secondly) how you configure
the mysql server(s).

One important factor to bear in mind is that with 40 separate MySQL
instances on the single 64GB server, you will have a maximum 1.6GB of
RAM per instance (excluding memory used by the O/S and other
applications). This will have to be divided up between the various
memory buffers (key_buffer, innodb_buffer_pool, etc.) allocated by
each mysql process, so you might want to reconsider if you really need
to run 40 separate mysql processes, or whether all the databases can
live in the same MySQL instance and thus probably make better use of
the available RAM.
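The 1.6GB figure is just the box's RAM divided evenly across the planned instances; a quick sanity check of that arithmetic (illustrative only):

```python
# RAM budget per MySQL instance if the 64GB Sun Fire E2900 were split
# evenly across all 40 planned instances (ignoring OS and other overhead).
total_ram_gb = 65536 / 1024   # 65536 MB from the spec above
instances = 40
per_instance_gb = total_ram_gb / instances
print(per_instance_gb)        # 1.6
```

Every memory buffer of every instance (key_buffer, innodb_buffer_pool, per-connection buffers) has to fit inside that 1.6GB slice, which is the point of the paragraph above.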

With regards to stress-testing and benchmarking, two popular tools for
benchmarking MySQL servers are:

Super Smack: http://vegan.net/tony/supersmack/
Sysbench: http://sysbench.sourceforge.net/


We need to determine whether to have multiple
servers to support the ~40 instances or have all ~40 instances on the
same machine. Any help of ideas would be greatly appreciated with
this decision.


I would be inclined to have separate machines, rather than put
everything on one huge server. By spreading the data around, you are
reducing the risk if the one mega-machine were to become unavailable,
and also reducing resource contention (on the disks, CPU, RAM etc.).


-- Alex


***
Clyde Lewis
Database Administrator
General Parts, Inc.






Re: MySQL Benchmarking

2007-03-15 Thread Alex Greg

On 3/14/07, Clyde Lewis [EMAIL PROTECTED] wrote:

System Configuration: Sun Microsystems  sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 65536 Megabytes
CPU: 12 @ 1200 MHz

I'm looking for a tool that will allow us to determine the max number
of databases that can run in a single instance of MySQL on a pretty
beefy server (spec above).

In total we will have about ~40 MySQL
instances running on this server. In each instance of MySQL, there will
be between 30-60 individual databases supporting an OLTP
application. I know that there are no known internal limits in MySQL
on the number of databases that can be created, but I
would like to get my hands on a tool that can simulate that number of
databases and identify where we would potentially run into
performance issues.


As I mentioned above, your performance issues are going to come not
from the number of databases, but from (primarily) how well-designed
your database tables and queries are, and (secondly) how you configure
the mysql server(s).

One important factor to bear in mind is that with 40 separate MySQL
instances on the single 64GB server, you will have a maximum 1.6GB of
RAM per instance (excluding memory used by the O/S and other
applications). This will have to be divided up between the various
memory buffers (key_buffer, innodb_buffer_pool, etc.) allocated by
each mysql process, so you might want to reconsider if you really need
to run 40 separate mysql processes, or whether all the databases can
live in the same MySQL instance and thus probably make better use of
the available RAM.

With regards to stress-testing and benchmarking, two popular tools for
benchmarking MySQL servers are:

Super Smack: http://vegan.net/tony/supersmack/
Sysbench: http://sysbench.sourceforge.net/


We need to determine whether to have multiple
servers to support the ~40 instances or have all ~40 instances on the
same machine. Any help of ideas would be greatly appreciated with
this decision.


I would be inclined to have separate machines, rather than put
everything on one huge server. By spreading the data around, you are
reducing the risk if the one mega-machine were to become unavailable,
and also reducing resource contention (on the disks, CPU, RAM etc.).


-- Alex




MySQL Benchmarking

2007-03-14 Thread Clyde Lewis

Guys,

System Configuration: Sun Microsystems  sun4u Sun Fire E2900
System clock frequency: 150 MHZ
Memory size: 65536 Megabytes
CPU: 12 @ 1200 MHz

I'm looking for a tool that will allow us to determine the max number
of databases that can run in a single instance of MySQL on a pretty
beefy server (spec above). In total we will have about ~40 MySQL
instances running on this server. In each instance of MySQL, there will
be between 30-60 individual databases supporting an OLTP
application. I know that there are no known internal limits in MySQL
on the number of databases that can be created, but I would like to
get my hands on a tool that can simulate that number of databases and
identify where we would potentially run into performance issues. We
need to determine whether to have multiple servers to support the ~40
instances or have all ~40 instances on the same machine. Any help or
ideas would be greatly appreciated with this decision.


Thanks in advance,

***
Clyde Lewis
Database Administrator
General Parts, Inc.






Benchmarking GUI tool

2006-07-09 Thread Michael Louie Loria
Hello,

Does anybody know a Benchmarking GUI tool for MySQL under windows?


Thanks,

Mic





Re: Benchmarking

2006-05-25 Thread Jay Pipes

Dan Trainor wrote:
I'm curious as to what you guys use for benchmarking nowadays. I'd like
to benchmark the performance of an InnoDB database on a fancy new server,
compared to an old degraded one.


Hi Dan!

I use SysBench for most things, MyBench (from Jeremy Zawodny) for a few
things, as well as ApacheBench (ab) and supersmack (really customizable),
and I have used httperf in the past.


For general MySQL benchmarking, you can always run the MySQL benchmark
suite (included in source distributions) on one machine and then on the
other, and see the differences that way.


Cheers,

--
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster




Benchmarking

2006-05-24 Thread Dan Trainor

Hi -

It's been a short while since I've seen any discussion on this subject, 
and I'm wondering what's happened in this arena since then.


I'm curious as to what you guys use for benchmarking nowadays. I'd like
to benchmark the performance of an InnoDB database on a fancy new server,
compared to an old, degraded one.


Thanks!
-dant




Benchmarking/optimization of MySQL

2004-03-02 Thread Bostjan Skufca (at) domenca.com
Hello, 

for the last few days I've been running benchmarks from the sql-bench
directory and tuning server parameters, and I have a few questions.

Firstly I would like to note that benchmarks were run on two different but 
similar machines:

Machine ONE:
Dual Xeon 2.4 533MHz FSB
4GB RAM
SCSI raid 10 (controller from Adaptec)
Reiserfs
Linux 2.4.25-grsec
MySQL 3.23.58
/etc/my.cnf is almost empty, server mostly uses defaults for given version
This one is running Apache also but was tested when very lightly loaded 
(5req/s, 5queries/s)

Machine TWO:
Dual Xeon 2.4 400MHz FSB
2GB RAM
SCSI raid 1 (controller from Adaptec)
Reiserfs
Linux 2.4.25-grsec
MySQL 4.0.18
/etc/my.cnf is gracious, giving the server enough resources - I guess
This one is actually a mail server but is running MySQL for testing and 
comparison purposes.

Both machines return similar results when doing hdparm on MySQL's datadir
disks (+/- 2MB for disk reads):
 Timing buffer-cache reads:   128 MB in  0.24 seconds =533.33 MB/sec
 Timing buffered disk reads:  64 MB in  1.37 seconds = 46.72 MB/sec
(Does somebody also think this is not enough?)

Running bonnie++ on both machines also produced very similar results
(results not included in this message).

Load on the machines was not noticeable at the time of benchmarking, but
machine ONE is generally considered more loaded than machine TWO.


My questions have arisen from observations that in some results the older
version of MySQL, on the more loaded machine, was quite a bit faster than
the newer one.

Running:
./test-alter-table --host=localhost --user=test --password=test 
--database=test --socket=/tmp/mysql.sock --server=MySQL --random --threads=10

gave the following results:
Test name               ONE  TWO
--------------------------------
insert (1000)             0    1
alter_table_add (100)     6   14
create_index (8)          1    2
drop_index (8)            2    3
alter_table_drop (91)     6   13
Total time               15   33

After repeating the tests for some time I believe these values are for
real. So - is there any explanation why the newer version alters tables
slower than the older one?



Running:
./test-create --host=localhost --user=test --password=test --database=test 
--socket=/tmp/mysql.sock --server=MySQL --random --threads=10

gave the following results:
Test name                          ONE  TWO
-------------------------------------------
create_MANY_tables (1)              12   72
select_group_when_MANY_tables (1)    7    7
drop_table_when_MANY_tables (1)      3    3
create+drop (1)                     13   59
create_key+drop (1)                 14   54
Total time                          49  195

Now these are what I call a drastic difference.
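To put a number on "drastic": recomputing the totals from the test-create table above shows the newer server is roughly 4x slower on this create-heavy suite (a recomputation of the figures already posted, not a new measurement):

```python
# Per-test seconds from the test-create table above
# (ONE = MySQL 3.23.58, TWO = MySQL 4.0.18).
one = {"create_MANY_tables": 12, "select_group_when_MANY_tables": 7,
       "drop_table_when_MANY_tables": 3, "create+drop": 13,
       "create_key+drop": 14}
two = {"create_MANY_tables": 72, "select_group_when_MANY_tables": 7,
       "drop_table_when_MANY_tables": 3, "create+drop": 59,
       "create_key+drop": 54}

total_one = sum(one.values())   # 49, matching the table's Total time
total_two = sum(two.values())   # 195
slowdown = total_two / total_one
```

The pure-SELECT and DROP tests are identical on both machines; all of the difference sits in the table-creation tests.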



There were also some differences in the test-insert set of tests, but there
machine TWO compensated for some of its loss with its query cache, so total
times were 1336 (ONE) vs. 1084 (TWO). But it had one of the most irritating
results:
select_column+column (10)  12  20
Why is the older version that much faster on such a simple query?

Also note that when I installed MySQL 3.23.58 on machine TWO with exactly
the same options as on machine ONE, the results were almost identical -
meaning the hardware has no noticeable impact whatsoever.


Does anyone know where these (and other) differences come from?


PS: I would be very pleased if I could see hardware descriptions / my.cnf /
sql-bench results from you, to see if I am on the right track and how much
headroom I still have. (Currently my run-all-tests script finishes in just
over 1500 seconds on server TWO. Details I will post tomorrow, as this
message is already way too long and it is 4 o'clock here and I can already
see my bed in front of me although it is still 15 km away :).



Best regards,

Bostjan Skufca
system administrator

Domenca d.o.o. 
Phone: +386 4 5835444
Fax: +386 4 5831999
http://www.domenca.com





Re: Benchmarking/optimization of MySQL

2004-03-02 Thread Peter Zaitsev
On Tue, 2004-03-02 at 19:00, Bostjan Skufca (at) domenca.com wrote:

Bostjan,

At first I shall mention that you have set up your experiment in the
hardest way to comment on. You have different hardware, different MySQL
versions and different MySQL settings. Normally you would want to change
only one of them at a time, to be able to analyze the difference better.

Side load also does not really benefit result accuracy; however, if it
is really light, making several runs and taking the best result might be
reasonable.


Most of your results can be explained by the fsync() on .frm creation
added in MySQL 4.0.18, which slows down CREATE/ALTER commands a bit but
is not critical for production.

I would also recommend that you get the updated sql-bench version from
the public mysql-bench BitKeeper tree; it has more benchmarks available.


I have, however, no good explanation for your select col+col query.
What about the results with the same MySQL options? (Note that you need
to supply them, as some defaults were changed in MySQL 4.0; run SHOW
VARIABLES to find them out.)



 Hello, 
 
 for the last few days I've been running benchmarks from sql-bench directory 
 and tunning server parameters and I have few questions. 
 
 Firstly I would like to note that benchmarks were run on two different but 
 similar machines:
 
 Machine ONE:
 Dual Xeon 2.4 533MHz FSB
 4GB RAM
 SCSI raid 10 (controller from Adaptec)
 Reiserfs
 Linux 2.4.25-grsec
 MySQL 3.23.58
 /etc/my.cnf is almost empty, server mostly uses defaults for given version
 This one is running Apache also but was tested when very lightly loaded 
 (5req/s, 5queries/s)
 
 Machine TWO:
 Dual Xeon 2.4 400MHz FSB
 2GB RAM
 SCSI raid 1 (controller from Adaptec)
 Reiserfs
 Linux 2.4.25-grsec
 MySQL 4.0.18
 /etc/my.cnf is gracious, giving server enough resources - i guess
 This one is actually a mail server but is running MySQL for testing and 
 comparison purposes.
 
 Both machines return similar results when doing hdparm on MySQLs' datadir 
 disks (+/-2Mb for disk reads):
  Timing buffer-cache reads:   128 MB in  0.24 seconds =533.33 MB/sec
  Timing buffered disk reads:  64 MB in  1.37 seconds = 46.72 MB/sec
 (Does somebody also think this is not enogh?)
 
 Running bonnie++ on machines also resulted in very similar results (results 
 not included in this message).
 
 Load on machines was not noticeable at the time of benchmarking but machine 
 ONE is generally considered more loaded than machine TWO.
 
 
 My questions have arisen from observations that in some results the older 
 version of MySQL on more loaded machine was quite faster that the newer 
 one.
 
 Running:
 ./test-alter-table --host=localhost --user=test --password=test 
 --database=test --socket=/tmp/mysql.sock --server=MySQL --random --threads=10
 
 gave following results:
 Test name               ONE  TWO
 --------------------------------
 insert (1000)             0    1
 alter_table_add (100)     6   14
 create_index (8)          1    2
 drop_index (8)            2    3
 alter_table_drop (91)     6   13
 Total time               15   33
 
 After repeting tests for some time I believed these values are for real. So 
 - is there any explanation why newer version alters table slower than older 
 one?
 
 
 
 Running:
 ./test-create --host=localhost --user=test --password=test --database=test 
 --socket=/tmp/mysql.sock --server=MySQL --random --threads=10
 
 gave following results:
 Test name                          ONE  TWO
 -------------------------------------------
 create_MANY_tables (1)              12   72
 select_group_when_MANY_tables (1)    7    7
 drop_table_when_MANY_tables (1)      3    3
 create+drop (1)                     13   59
 create_key+drop (1)                 14   54
 Total time                          49  195
 
 Now these are what I call drastical difference.
 
 
 
 There were also some differences in test-insert set of tests but there 
 machine TWO compensated some of it's loss with it's query cache so Total 
 times were 1336(ONE) vs. 1084(TWO). But it had one most of iritating 
 results:
 select_column+column (10)  12  20
 Why is older version that faster in such a simple query?
 
 Also note that when I installed MySQL 3.23.58 to machine TWO with exactly same 
 options as it is installed on machine ONE the results were almost identical - 
 meaning hardware has no noticable impact whatsoever.
 
 
 Does anyone know where these (and other) differences come from?
 
 
 PS: I would be very pleased if I could see hardware

Re: Benchmarking/optimization of MySQL

2004-03-02 Thread Sasha Pachev
Bostjan Skufca (at) domenca.com wrote:
Hello, 

for the last few days I've been running benchmarks from sql-bench directory 
and tunning server parameters and I have few questions. 

Firstly I would like to note that benchmarks were run on two different but 
similar machines:

Machine ONE:
Dual Xeon 2.4 533MHz FSB
4GB RAM
SCSI raid 10 (controller from Adaptec)
Reiserfs
Linux 2.4.25-grsec
MySQL 3.23.58
/etc/my.cnf is almost empty, server mostly uses defaults for given version
This one is running Apache also but was tested when very lightly loaded 
(5req/s, 5queries/s)

Machine TWO:
Dual Xeon 2.4 400MHz FSB
2GB RAM
SCSI raid 1 (controller from Adaptec)
Reiserfs
Linux 2.4.25-grsec
MySQL 4.0.18
/etc/my.cnf is gracious, giving server enough resources - i guess
This one is actually a mail server but is running MySQL for testing and 
comparison purposes.

Both machines return similar results when doing hdparm on MySQLs' datadir 
disks (+/-2Mb for disk reads):
 Timing buffer-cache reads:   128 MB in  0.24 seconds =533.33 MB/sec
 Timing buffered disk reads:  64 MB in  1.37 seconds = 46.72 MB/sec
(Does somebody also think this is not enogh?)

Running bonnie++ on machines also resulted in very similar results (results 
not included in this message).

Load on machines was not noticeable at the time of benchmarking but machine 
ONE is generally considered more loaded than machine TWO.

My questions have arisen from observations that in some results the older 
version of MySQL on more loaded machine was quite faster that the newer 
one.
Machine ONE has a faster memory bus - that is a factor, although not one
that accounts for a 2-fold performance difference - I would expect the
difference to be no more than 5%. A gracious my.cnf often does harm. Try
using the same one in both configurations. And a newer version is not
necessarily faster in every way - it just has more features, and is
usually faster on some queries that have been a serious issue in the past
version.

--
Sasha Pachev
Create online surveys at http://www.surveyz.com/


Re: how-to: benchmarking and query analysis?

2003-10-14 Thread Gabriel Ricard
I've just finished reading through most of the MySQL Enterprise  
Solutions book  by Alexander Pachev and I think you might want to take  
a look at it. There is a section that deals with testing and MySQL  
benchmarking tools.

These tools are available in the mysql/sql-bench (if your MySQL was  
configured --with-bench):

mysql-test-run is the standard test and is what MySQL AB uses for  
regression testing

crash-me is, well, what it sounds like. You can use it to find the
limits of your MySQL installation.

run-all-tests is a comprehensive single threaded benchmark suite

You can also download a multithreaded benchmark called mysqlsyseval  
from www.wiley.com/compbooks/pachev, which was written by Alexander.  
All in all, I highly recommend you get that book. I've only had it for  
a few days and it's already become an invaluable reference.

- Gabriel

On Monday, October 13, 2003, at 07:07  AM, Hanno Fietz wrote:

Hello,

I would like to examine with as much detail as possible the following
aspects of my queries:
- How's the total execution time distributed between disk reads / writes
and cpu / memory usage?
- How's the total execution time distributed between index search and
actually reading data?
- Among all the queries in my application, which queries / operations
are the most load-intensive ones?

If possible, I'd like to log my results for some time, while the system
is in use. So far, I've used EXPLAIN, the manual section on  
optimisation
and the OS load statistics ('top') as well as some simple scripts of my
own for a rough estimation of what I'm doing. I've come a long way
compared to my first steps but I'll be damned if I can't get it faster
still, since my results are not yet what I believe MySQL can do. I'm
about to start learning how to use the benchmark suite and I would be
happy about any suggestions and instructions on how to perform a
detailed analysis of what MySQL is doing when my queries are processed.
Are there any tools? Would it be a good idea to write my own
benchmarking program? Any good suggestions for that?

Thanks,
Hanno
------------

Hanno Fietz

dezem GmbH
Lohmeyerstr. 9
10587 Berlin

Tel.: 030 / 34 70 50 22
Fax: 030 / 34 70 50 21
www.dezem.de
------------









how-to: benchmarking and query analysis?

2003-10-13 Thread Hanno Fietz
Hello,

I would like to examine with as much detail as possible the following
aspects of my queries:

- How’s the total execution time distributed between disk reads / writes
and cpu / memory usage?
- How’s the total execution time distributed between index search and
actually reading data?
- Among all the queries in my application, which queries / operations
are the most load-intensive ones?

If possible, I'd like to log my results for some time, while the system
is in use. So far, I've used EXPLAIN, the manual section on optimisation
and the OS load statistics ('top') as well as some simple scripts of my
own for a rough estimation of what I'm doing. I've come a long way
compared to my first steps but I'll be damned if I can't get it faster
still, since my results are not yet what I believe MySQL can do. I'm
about to start learning how to use the benchmark suite and I would be
happy about any suggestions and instructions on how to perform a
detailed analysis of what MySQL is doing when my queries are processed.
Are there any tools? Would it be a good idea to write my own
benchmarking program? Any good suggestions for that?
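One simple starting point for the "which queries are most load-intensive" question is to time each statement at the application layer. A hypothetical sketch (the execute callable stands in for a real driver call such as a DBI/cursor execute; none of these names come from the thread):

```python
import time

timings = []  # (sql, seconds) pairs collected while the system is in use

def timed(execute, sql):
    # Wrap any statement execution and record its wall-clock time.
    start = time.perf_counter()
    result = execute(sql)
    timings.append((sql, time.perf_counter() - start))
    return result

# Stand-in for a real driver call; simulates a 10 ms query.
def fake_execute(sql):
    time.sleep(0.01)
    return sql.upper()

timed(fake_execute, "select 1")
worst = max(timings, key=lambda t: t[1])  # most expensive statement so far
```

Logging the timings list over a day and sorting it gives a crude but useful ranking of which operations dominate the load; it does not, however, separate disk time from CPU time, which still needs server-side tools.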

Thanks,
Hanno



Hanno Fietz
 
dezem GmbH
Lohmeyerstr. 9
10587 Berlin
 
Tel.: 030 / 34 70 50 22
Fax: 030 / 34 70 50 21
www.dezem.de








Benchmarking

2003-07-08 Thread mixo
How can I benchmark the performance of MySQL with the following setup:
 Perl 5.8.0 (perl-DBI, perl-DBI-Mysql)
 mysql-3.23.54a-11
 apache-2.0.40-21
 mod_perl-1.99_07-5
I want to compare the performance of MySQL against that of Pg using my
own data.
And, how can I resolve:
  DBD::mysql::st execute failed: The table 'Attachments' is full at 
/usr/lib/perl5/site_perl/5.8.0/DBIx/SearchBuilder/Handle.pm

My configuration is as follows:

+++/etc/my.cnf+=
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#InnoDB
innodb_data_file_path = rt3ibdata/ibdata1:2000M;
innodb_data_home_dir = /var/lib/mysql
[mysql.server]
user=mysql
basedir=/var/lib
[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
+++/etc/my.cnf+=
I have an additional 11Gig partition which is not in use.



Re: Benchmarking

2003-07-08 Thread Heikki Tuuri
Mixo,

you have to add another InnoDB data file. Also adjust other InnoDB
parameters in my.cnf to get good performance.

http://www.innodb.com/ibman.html#InnoDB_start


An advanced my.cnf example. Suppose you have a Linux computer with 2 GB RAM
and three 60 GB hard disks (at directory paths `/', `/dr2' and `/dr3').
Below is an example of possible configuration parameters in my.cnf for
InnoDB.

Note that InnoDB does not create directories: you have to create them
yourself. Use the Unix or MS-DOS mkdir command to create the data and log
group home directories.

[mysqld]
# You can write your other MySQL server options here
# ...
innodb_data_home_dir =
#Data files must be able to
#hold your data and indexes
innodb_data_file_path = /ibdata/ibdata1:2000M;/dr2/ibdata/ibdata2:2000M:autoextend
#Set buffer pool size to
#50 - 80 % of your computer's
#memory, but make sure on Linux
#x86 total memory usage is
#less than 2 GB
set-variable = innodb_buffer_pool_size=1G
set-variable = innodb_additional_mem_pool_size=20M
innodb_log_group_home_dir = /dr3/iblogs
#.._log_arch_dir must be the same
#as .._log_group_home_dir; starting
#from 4.0.6, you can omit it
#innodb_log_arch_dir = /dr3/iblogs
set-variable = innodb_log_files_in_group=3
#Set the log file size to about
#15 % of the buffer pool size
set-variable = innodb_log_file_size=150M
set-variable = innodb_log_buffer_size=8M
#Set ..flush_log_at_trx_commit to
#0 if you can afford losing
#some last transactions
innodb_flush_log_at_trx_commit=1
set-variable = innodb_lock_wait_timeout=50
#innodb_flush_method=fdatasync
#set-variable = innodb_thread_concurrency=5

Note that we have placed the two data files on different disks. InnoDB will
fill the tablespace formed by the data files from bottom up. In some cases
it will improve the performance of the database if all data is not placed on
the same physical disk. Putting log files on a different disk from data is
very often beneficial for performance. You can also use raw disk partitions
(raw devices) as data files. In some Unixes they speed up i/o. See section
12.1 about how to specify them in my.cnf.


http://www.innodb.com/ibman.html#Adding_and_removing


5 Adding and removing InnoDB data and log files

To add a new data file to the tablespace you have to shut down your MySQL
database, edit the my.cnf file, adding a new file to innodb_data_file_path,
and then start MySQL again.

If your last data file was defined with the keyword autoextend, then the
procedure to edit my.cnf is the following. You have to look at the size of the
last data file, round the size downward to the closest multiple of 1024 *
1024 bytes (= 1 MB), and specify the rounded size explicitly in
innodb_data_file_path. Then you can add another data file. Remember that
only the last data file in the innodb_data_file_path can be specified as
auto-extending.

An example: We assume you had just one auto-extending data file ibdata1
first, and that file grew to 988 MB. Below is a possible line after adding
another auto-extending data file.

innodb_data_home_dir =
innodb_data_file_path = /ibdata/ibdata1:988M;/disk2/ibdata2:50M:autoextend
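The rounding-down step can be sketched in shell; the byte count here is a hypothetical file size as reported by ls -l:

```shell
# Round a data file's size in bytes down to whole megabytes, as the
# excerpt above requires before freezing an autoextending data file.
BYTES=1036261784            # hypothetical size of ibdata1 in bytes
MB=$(( BYTES / (1024 * 1024) ))
echo "${MB}M"               # the value to write into innodb_data_file_path
```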

Currently you cannot remove a data file from InnoDB. To decrease the size of
your database you have to use mysqldump to dump all your tables, create a
new database, and import your tables to the new database.

If you want to change the number or the size of your InnoDB log files, you
have to shut down MySQL and
make sure that it shuts down without errors. Then copy the old log files
into a safe place just in case something went wrong in the shutdown and you
will need them to recover the database. Then delete the old log files from
the log file directory, edit my.cnf, and start MySQL again. InnoDB will tell
you at the startup that it is creating new log files.
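The log-resize procedure above can be sketched as shell commands (directory paths follow the earlier my.cnf example and are illustrative):

```
# 1. Shut down MySQL cleanly and verify it reports no errors
mysqladmin shutdown
# 2. Keep the old logs in a safe place in case recovery is needed
cp /dr3/iblogs/ib_logfile* /backup/iblogs/
# 3. Remove the old logs, then edit innodb_log_file_size and
#    innodb_log_files_in_group in my.cnf
rm /dr3/iblogs/ib_logfile*
# 4. Restart; InnoDB reports at startup that it is creating new logs
safe_mysqld &
```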


Best regards,

Heikki Tuuri
Innobase Oy
http://www.innodb.com
Transactions, foreign keys, and a hot backup tool for MySQL
Order MySQL technical support from https://order.mysql.com/


- Original Message - 
From: mixo [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Tuesday, July 08, 2003 11:21 AM
Subject: Benchmarking


 How can I benchmark the performance of Mysql with the following setup:
   Perl 5.8.0 (perl-DBI, perl-DBI-Mysql)
   mysql-3.23.54a-11
   apache-2.0.40-21
   mod_perl-1.99_07-5

 I want to compare the performance of Mysql against that of Pg using my
 own data

question on mysql benchmarking

2002-02-13 Thread P Zhao

Hi,

When I run perl run-all-tests --server=mysql --cmp=mysql,pg,solid
--user=test --password=test --log in the sql-bench directory, I
encountered the following error messages:

   Can't locate DBI.pm in @INC (@INC contains:
/usr/lib/perl5/5.6.0/ia64-linux /usr/lib/perl5/5.6.0
/usr/lib/perl5/site_perl/5.6.0/ia64-linux /usr/lib/perl5/site_perl/5.6.0
/usr/lib/perl5/site_perl .) at run-all-tests line 36.
BEGIN failed--compilation aborted at run-all-tests line 36.

I am very new to mysql. I cannot figure out the reason easily. Is
there any problem with my perl?
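The "Can't locate DBI.pm" error above means the Perl DBI module is missing from @INC; the usual fix is installing it (and the MySQL driver) from CPAN, which needs network access and build tools:

```
perl -MCPAN -e 'install DBI'
perl -MCPAN -e 'install DBD::mysql'
```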
Another question: I am also using another compiler (ORC
from Intel) to compile MySQL. ORC borrows the frontend of gcc, but it
does not accept compiler options such as -fno-implicit-templates
-fno-exceptions -fno-rtti. I don't know the meaning of these flags. Can
you explain them to me? Can I eliminate these options and have MySQL still
run well?

Thanks.
Peng


-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




benchmarking pgsql with mysql benchmark suite

2002-01-24 Thread jon-david schlough

hi.

I'm trying to benchmark pgsql living in cygwin on a 2K box with mysql
installed normally on Windows. I get:

C:\mysql\bench> perl run-all-tests --host=PAVILION --server=Pg --user=n
  --password=x --log --comment 2x Pentium II 400mz, 256M, under
vmware
  Got error: 'connectDBStart() -- socket() failed: errno=2
  No such file or directory

My question is: is there a way to set an absolute path to the pgsql server in
run or in run-all-tests so it finds the pg server?

Or would it be easier to simply re-install mysql in the emulated unix
environment of cygwin and try again?

thanks in advance!

jd






RE: Benchmarking MyISAM, InnoDB, and Oracle: a problem with InnoDB

2002-01-23 Thread Weaver, Walt

Sounds good. Thanks for the info, Heikki.

--Walt

-Original Message-
From: Heikki Tuuri [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 23, 2002 11:13 AM
To: [EMAIL PROTECTED]
Subject: Re: Benchmarking MyISAM, InnoDB, and Oracle: a problem with
InnoDB


Walt,

this is probably a performance bug in Linux. I have seen the same phenomenon
on our 2-way Xeon, Linux-2.4.4 when there are just big SELECT queries
running concurrently.

My hypothesis is that the semaphore reservations I added to 3.23.44 cause
some strange phenomenon inside the Linux kernel. Funny, but the problem did
not appear when there were also inserts present. It seems to be some kind of
a resonance problem.

I will change .48 so that there are fewer semaphore operations in it. That
will probably fix the problem; at least it fixed a similar problem in
some other concurrent SELECTs.

You could try running MySQL-Max-3.23.43 to check if the strange phenomenon
happens in it too.

Regards,

Heikki
..
I'm finally finishing up my benchmarking, comparing MyISAM, Oracle, and
InnoDB, and am having a problem with InnoDB.
The final testing was load testing; we set up a number of perl scripts to
simulate massive inserts, subquery inserts, selects, and updates to three
tables. A master perl script runs all of the other scripts that connect and
perform the activity on the tables. The benchmarking is being done on a VA
Linux 2230 with 2gb of memory and two disk drives (about 30gb each)
available. We're running Linux kernel 2.4.17, MySQL 3.23.47, Oracle 8.1.7.2.
Doing inserts only, we can load 95,000 rows into one table in about 1 minute
on MyISAM and around 2 minutes for both InnoDB and Oracle.
When we run everything -- inserts, selects, updates -- at once, Oracle
finishes the test in about 12 minutes, MyISAM finishes in about 26 minutes,
but InnoDB just keeps running, and running, and running. In one test we let
InnoDB run for over 24 hours and it was still going. MySQL doesn't hang; the
daemons are cranking like crazy the whole time. There winds up being 19
MySQL processes that keep running.
I'm thinking it's a configuration issue, but I'm not sure what I'm doing
wrong. It seems to be the selects that are taking so long. Doing a show
processlist, each of the processes looks like this:

| Id | User   | Host  | db  | Command | Time | State| Info
| 12 | oracle | localhost | test_innodb | Query   | 817  | Copying to tmp
table | SELECT  q.key_val, q.timestamp_val, q.range_val,
q.rand_val,d.text_val
FROMbench_data d, bench |

And output from part of the InnoDB Monitor looks like this:

MySQL thread id 12, query id 137166 localhost oracle Copying to tmp table
SELECT  q.key_val, q.timestamp_val, q.range_val, q.rand_val, d.text_val
FROMbench_data d, bench

Trx read view will not see trx with id = 0 618839, sees  0 618839
I've copied the my.cnf file; it's at the end of the email. At this point I'm
not sure what's wrong. We're hoping to migrate from MyISAM to InnoDB, but
our production environment is a web-hosting-based one that can get very busy,
and a combination of inserts, updates, and selects is common.
If anyone has any ideas about configuring InnoDB for this benchmarking I'm
all ears. I'm also more than happy to provide output from InnoDB Monitor,
the perl scripts, etc.

Thanks,

--

Walt Weaver
 Bozeman, Montana







Benchmarking

2001-12-24 Thread Joel Wickard

Hello,
I've looked around on mysql.com, and through the directories of my mysql
install I'm looking for information on benchmarking my mysql database, but
I'm not interested in seeing how it performs against other databases, I'm
interested in testing how my designs will perform when scaled.  If anyone
can give me any pointers it would be great.







Re: Benchmarking

2001-12-24 Thread Michael Brunson

On Mon, 24 Dec 2001 12:37:21 -0800, Joel Wickard used a
few recycled electrons to form:
| Hello,
| I've looked around on mysql.com, and through the directories of my mysql
| install I'm looking for information on benchmarking my mysql database, but
| I'm not interested in seeing how it performs against other databases, I'm
| interested in testing how my designs will perform when scaled.  If anyone
| can give me any pointers it would be great.

Your database performance is greatly dependent upon the
design. If your design is good, it should run great.
There are a lot of resources about good database
design.

Here are a few for you.
http://databases.about.com/cs/specificproducts/

The mysql docs have some tuning tips also.











RE: Benchmarking

2001-11-19 Thread Venu

Hi, 

 -Original Message-
 From: Rachman M.H [mailto:[EMAIL PROTECTED]]
 Sent: Sunday, November 18, 2001 11:02 PM
 To: [EMAIL PROTECTED]
 Subject: Benchmarking
 
 
 Dear all,
 
 I've been trying to benchmark MySQL, SQL Server 7, and M$ Access 97.
 But SQL Server 7 and M$ Access win when connecting and opening a
 recordset using ADO, even though I'm using MyODBC with TCP/IP connections.
 
 If I use cursorlocation=serverside with adOpenDynamic and
 adLockOptimistic,
 MyODBC still loses.

Interesting. When I did some research on this,
I got MyODBC as the best result in most cases.

Is it possible for you to pass the following info,
so that we can also cross-check:

1. Test scenarios
2. Comparison analysis for various tools/applications
3. MyODBC version and the DLL type (debug/share version)
4. MySQL version

 
 How can I speed up connecting and opening a recordset with MySQL?

It also depends upon your SELECT query and how the table is structured.
You can find more information on this in the manual ( 5. MySQL 
optimisation).

Regards, venu
-- 
For technical support contracts, go to https://order.mysql.com
   __  ___ ___   __
  /  |/  /_ __/ __/ __ \/ /  Mr. Venu [EMAIL PROTECTED]
 / /|_/ / // /\ \/ /_/ / /__ MySQL AB, Developer
/_/  /_/\_, /___/\___\_\___/ California, USA
   ___/ www.mysql.com






Benchmarking

2001-11-18 Thread Rachman M.H

Dear all,

I've been trying to benchmark MySQL, SQL Server 7, and M$ Access 97.
But SQL Server 7 and M$ Access win when connecting and opening a
recordset using ADO, even though I'm using MyODBC with TCP/IP connections.

If I use cursorlocation=serverside with adOpenDynamic and adLockOptimistic,
MyODBC still loses.

How can I speed up connecting and opening a recordset with MySQL?

FYI, MySQL is installed on an NT machine with 128 MB RAM.

I hope somebody can reply to me.

Thanks.






Benchmarking Tools for MYSQL

2001-11-11 Thread steve smith

Does anyone know of any well-known Benchmarking tools for MYSQL database?

Thanks
S.M.






Re: Benchmarking

2001-08-16 Thread Jeremy Zawodny

On Mon, Aug 13, 2001 at 05:54:35PM +0100, Tadej Guzej wrote:

 How do I benchmark 2 queries that return the same results without having
 MySQL read from its cache?

The only certain way is to restart the server between the queries and
do what you can to flush the OS cache, too, if you're concerned about
it influencing the results.
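A sketch of that restart-and-flush procedure on Linux; note the drop_caches interface only exists on later kernels (2.6.16+), so on older systems the usual workaround is reading a large unrelated file to evict the page cache:

```
# Restart the server so MySQL's own caches start cold
mysqladmin shutdown && safe_mysqld &
# Drop the OS page cache (root only; Linux 2.6.16+)
sync
echo 3 > /proc/sys/vm/drop_caches
```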

 What is the max. size of the index file that fits into 512M memory,
 so that MySQL doesn't have to read the index file from disk?

It's probably rather close to 512MB, right?
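The setting behind the second question is the index (key) cache, key_buffer in 3.23-era syntax; a my.cnf sketch with an illustrative size for a 512 MB machine:

```
[mysqld]
# MyISAM index cache; leave headroom for the OS and other buffers
set-variable = key_buffer=256M
```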

Jeremy
-- 
Jeremy D. Zawodny, [EMAIL PROTECTED]
Technical Yahoo - Yahoo Finance
Desk: (408) 349-7878   Fax: (408) 349-5454   Cell: (408) 685-5936

MySQL 3.23.29: up 60 days, processed 575,280,080 queries (109/sec. avg)





Benchmarking

2001-08-13 Thread Tadej Guzej

How do I benchmark 2 queries that return the same results
without having MySQL read from its cache?

Example:
If I run the first query it takes 2 seconds; when I run the query
again, it takes 0.05 seconds.


And

What is the max. size of the index file that fits into 512M memory, so that
MySQL doesn't have to read the index file from disk?
Can someone explain the relevant settings?


Thanks,
Tadej



Re: linux innobase benchmarking and BSD problem

2001-03-31 Thread Heikki Tuuri

Thank you Dan!

I do not have access to a FreeBSD computer during this weekend but
your stack prints already tell the origin of the problem.

I have implemented my own mutexes so that I can use
an assembler instruction for the atomic test-and-set operation
needed in a mutex. But for now I have done the test-and-set
with pthread_mutex_trylock: it provides an atomic operation
on an OS mutex which I can use in place of test-and-set.

It seems that if a thread does not acquire a mutex with the
first try, then something goes wrong and the thread is left in a loop.
The stack prints show that FreeBSD uses a spin wait also in the
case a trylock fails. This may be associated with the problem.
It would be more logical for FreeBSD to return with a failure
code if trylock fails.

A fix would be to replace pthread_mutex_trylock with
the XCHG instruction to implement test-and-set. But that would not
work on non-Intel FreeBSD platforms.

I will dig into the FreeBSD documentation and try to find a
solution from there.

Best regards,

Heikki

At 03:52 PM 3/30/01 -0600, you wrote:
In the last episode (Mar 30), Heikki Tuuri said:
 The FreeBSD bug is known. I will run tests on our FreeBSD machine in
 the next few days. Obviously there is something wrong with the
 FreeBSD port. Was it so that it hung and used 100 % of CPU? That has
 been reported also from Italy.

I have a similar problem, on FreeBSD 5 (i.e. -current).  I can insert
records one at a time with no problem, but if I try to update more than
~250 records at a time, it hangs, consuming 100% cpu.  gdb'ing a corefile of
the process, it looks like a mutex/spinlock problem of some sort. 
Deleting records dies if I delete between 100 and 150 records in one
go.  Does innobase create a mutex for each record processed?  Maybe
there's a limit on 256 held mutices per thread on FreeBSD or something.

-- mysqld hung on "insert into temp (value) select ip from iptable limit 300":
(gdb) thread apply all bt

Thread 1 (process 764):
#0  0x28288163 in _get_curthread ()
at /usr/src/lib/libc_r/uthread/uthread_kern.c:1145
#1  0x28280064 in _spinlock_debug (lck=0xbfaa9ecc, 
fname=0x28280138
"\203”€\020\205””\017\2055\213\205–\213U\b\211B\004\213E\f\
211B\b\213E\020\211B\f\215‘•–[^_\211ˆž]”œ$FreeBSD:
src/lib/libc_r/arch/i386/_atomic_lock.S,v 1.3 1999/08/28 00:03:01 peter Exp
$", lineno=149551536)
at /usr/src/lib/libc_r/uthread/uthread_spinlock.c:83
#2  0x282854d6 in mutex_trylock_common (mutex=0xbfaa9ecc)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:311
#3  0x28285712 in __pthread_mutex_trylock (mutex=0x8ea3090)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:441
#4  0x8193d4b in mutex_spin_wait (mutex=0x8ea308c) at ../include/os0sync.ic:38
#5  0x8126ead in srv_master_thread (arg=0x0) at ../include/sync0sync.ic:220
#6  0x2827f18c in _thread_start ()
at /usr/src/lib/libc_r/uthread/uthread_create.c:326
#7  0x0 in ?? ()
(gdb) 


-- mysqld hung on "delete from temp limit 150":
(gdb) info threads;
* 1 process 26111  0x28361b54 in gettimeofday () from /usr/lib/libc.so.5
(gdb) where
#0  0x28361b54 in gettimeofday () from /usr/lib/libc.so.5
#1  0x28280949 in _thread_sig_handler (sig=0, info=0x828a660, ucp=0x282808d1)
at /usr/src/lib/libc_r/uthread/uthread_sig.c:93
#2  0xbfbfffac in ?? ()
#3  0x28287ffb in _thread_kern_sig_defer ()
at /usr/src/lib/libc_r/uthread/uthread_kern.c:1049
#4  0x282854bf in mutex_trylock_common (mutex=0x0)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:308
#5  0x28285712 in __pthread_mutex_trylock (mutex=0x8ea3210)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:441
#6  0x8193d4b in mutex_spin_wait (mutex=0x8ea320c) at ../include/os0sync.ic:38
#7  0x8165a84 in buf_page_get_gen (space=0, offset=6, rw_latch=2, guess=0x0,
mode=10, mtr=0xbfaa95d0) at ../include/sync0sync.ic:220
#8  0x81576d9 in trx_purge_truncate_rseg_history (rseg=0x8ebd10c,
limit_trx_no={high = 0, low = 7946}, limit_undo_no={high = 0, low = 0})
at ../include/trx0rseg.ic:25
#9  0x8157bdd in trx_purge_truncate_history () at trx0purge.c:545
#10 0x81589c7 in trx_purge_fetch_next_rec (roll_ptr=0xbfaa9ee4,
cell=0x8ec016c, heap=0x8ebfe0c) at trx0purge.c:564
#11 0x813a2b6 in row_purge (node=0x8ec0134, thr=0x8ec00d4) at row0purge.c:481
#12 0x813a4fe in row_purge_step (thr=0x8ec00d4) at row0purge.c:548
#13 0x8129aa2 in que_run_threads (thr=0x8ec00d4) at que0que.c:1223
#14 0x8158f95 in trx_purge () at trx0purge.c:1050
#15 0x8126fb5 in srv_master_thread (arg=0x0) at srv0srv.c:1901
#16 0x2827f18c in _thread_start ()
at /usr/src/lib/libc_r/uthread/uthread_create.c:326
#17 0x0 in ?? ()
(gdb)


-- 
   Dan Nelson
   [EMAIL PROTECTED]




Re: linux benchmarking test and BSD problems

2001-03-30 Thread Heikki Tuuri

Hi Seung!

Yes, you have to do some tuning to get good performance from a transactional
database. You should not use autocommit=1, because then the database
has to physically write the log segment to disk after each individual
insert. Try to insert your data in a single transaction, with only a single
commit after the inserts, or a few commits along the way.

I understand you measured several times, doing DELETE FROM TABLE in between.
Innobase uses some CPU to physically remove the delete marked records in
a purge operation. DROP TABLE and TRUNCATE TABLE (in MySQL-4.0) are faster.

I will also add a small optimization to auto_increment columns.
Currently it executes SELECT MAX (KEY) FROM TABLE before each insert,
which wastes some CPU and is not necessary. It is better to cache the latest
inserted key value in main memory, in the data dictionary.
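Heikki's batching advice looks like this in SQL; the table and values are illustrative:

```
-- One commit for the whole batch instead of one per row
SET AUTOCOMMIT=0;
INSERT INTO grades (no, name, grade) VALUES (1, 'aaa', 90);
INSERT INTO grades (no, name, grade) VALUES (2, 'bbb', 85);
-- ... remaining rows ...
COMMIT;
```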

The FreeBSD bug is known. I will run tests on our FreeBSD machine
in the next few days. Obviously there is something wrong with the
FreeBSD port. Was it so that it hung and used 100 % of CPU? That
has been reported also from Italy.

Best regards,

Heikki

At 06:11 PM 3/30/01 +0900, you wrote:
Hi Heikki, I have run a few tests for innobase tables on Linux and BSD

1. It seems that the 'innobase' table was slower than the 'bdb' table for the
'insert' command, which is surprising (or not? o_O)

2. It may be due to the fact that I don't know much about optimizing the
'innobase' table.  Do you know how to boost the performance of 'bdb' or
'innobase' tables?  Please let me know...:(

3. I mainly tested the innobase table with Linux, but I actually want to
run mysql with 'bdb' or 'innobase' tables on FreeBSD.

4. However, mysql just hung when I tried to insert the 1 rows of data,
the same data used for the Linux test, using 'mysql < data.sql'.  I used
'my-large.cnf' and changed the settings as follows:

innobase_buffer_pool_size=400M
innobase_additional_mem_pool_size=20M

FreeBSD setting
4.2 release #1
pentium 550, 512 RAM, mysql-3.23.35

compiled with ./configure --
1.
--prefix=/usr/local/mysql 
--with-charset=euc_kr 
--with-low-memory 
--without-debug 
--without-readline 
--with-mysqld-ldflags=-all-static
--with-mit-threads=no
--with-client-ldflags=-all-static 
--with-innobase
--with-bdb

2. 
--with-charset=euc_kr 

--with-innobase
--with-berkeley-db 



Linux setting
Redhat 6.2, kernel 2.2.14-5.0, pentium550, 128 RAM, mysql-3.23.35
compiled with ./configure --with-charset=euc_kr --with-berkeley-db
--with-innobase


Other setting
1. The data used had three columns, 'no', 'name', 'grade', with 'no' as the
primary index and auto_increment, and 'grade' as a key. There were 1 rows,
and I inserted the data using 'shell> mysql < data.sql'.
2. Whenever I re-inserted the data, I simply did 'delete from table where no >
0' instead of dropping the table.
3. I used 'my-medium' as 'my.cnf'

'mysql insert' benchmarking results (all times are in seconds).
1. with key, autocommit=1, flush_log_at_trx=1, all on, basic setting

myisam

real 5.61
real 5.10
real 5.08

innobase

real 68.57
real 69.27
real 83.26
real 100.87
real 99.99
real 102.31
real 108.82
real 114.23
real 124.23
real 94.42
real 98.20

bdb

real 56.83
real 43.49
real 39.76
real 54.77
real 48.75

2. with key, autocommit=1, flush_log_at_trx=1, bdb off, basic setting

myisam

real 4.48
real 3.01
real 3.08
real 3.08
real 3.04

innobase

real 70.04
real 99.04

with 
innobase_buffer_pool_size=100M
innobase_additional_mem_pool_size=10M

real 76.60
real 80.97
real 81.52
real 107.47

3. with key, autocommit=1, flush_log_at_trx=1, bdb on

bdb

real 53.14
real 54.29


Re: linux innobase benchmarking and BSD problem

2001-03-30 Thread Dan Nelson

In the last episode (Mar 30), Heikki Tuuri said:
 The FreeBSD bug is known. I will run tests on our FreeBSD machine in
 the next few days. Obviously there is something wrong with the
 FreeBSD port. Was it so that it hung and used 100 % of CPU? That has
 been reported also from Italy.

I have a similar problem, on FreeBSD 5 (i.e. -current).  I can insert
records one at a time with no problem, but if I try to update more than
~250 records at a time, it hangs, consuming 100% CPU.  gdb'ing a corefile of
the process, it looks like a mutex/spinlock problem of some sort.
Deleting records dies if I delete between 100 and 150 records in one
go.  Does innobase create a mutex for each record processed?  Maybe
there's a limit of 256 held mutexes per thread on FreeBSD or something.

-- mysqld hung on "insert into temp (value) select ip from iptable limit 300":
(gdb) thread apply all bt

Thread 1 (process 764):
#0  0x28288163 in _get_curthread ()
at /usr/src/lib/libc_r/uthread/uthread_kern.c:1145
#1  0x28280064 in _spinlock_debug (lck=0xbfaa9ecc, 
fname=0x28280138 
"\203─\020\205└\017\2055   \213\205ⁿ■  \213U\b\211B\004\213E\f\211B\b\213E\020\211B\f\215Ñ╪■  [^_\211∞]├$FreeBSD:
 src/lib/libc_r/arch/i386/_atomic_lock.S,v 1.3 1999/08/28 00:03:01 peter Exp $", 
lineno=149551536)
at /usr/src/lib/libc_r/uthread/uthread_spinlock.c:83
#2  0x282854d6 in mutex_trylock_common (mutex=0xbfaa9ecc)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:311
#3  0x28285712 in __pthread_mutex_trylock (mutex=0x8ea3090)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:441
#4  0x8193d4b in mutex_spin_wait (mutex=0x8ea308c) at ../include/os0sync.ic:38
#5  0x8126ead in srv_master_thread (arg=0x0) at ../include/sync0sync.ic:220
#6  0x2827f18c in _thread_start ()
at /usr/src/lib/libc_r/uthread/uthread_create.c:326
#7  0x0 in ?? ()
(gdb) 


-- mysqld hung on "delete from temp limit 150":
(gdb) info threads;
* 1 process 26111  0x28361b54 in gettimeofday () from /usr/lib/libc.so.5
(gdb) where
#0  0x28361b54 in gettimeofday () from /usr/lib/libc.so.5
#1  0x28280949 in _thread_sig_handler (sig=0, info=0x828a660, ucp=0x282808d1)
at /usr/src/lib/libc_r/uthread/uthread_sig.c:93
#2  0xbfbfffac in ?? ()
#3  0x28287ffb in _thread_kern_sig_defer ()
at /usr/src/lib/libc_r/uthread/uthread_kern.c:1049
#4  0x282854bf in mutex_trylock_common (mutex=0x0)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:308
#5  0x28285712 in __pthread_mutex_trylock (mutex=0x8ea3210)
at /usr/src/lib/libc_r/uthread/uthread_mutex.c:441
#6  0x8193d4b in mutex_spin_wait (mutex=0x8ea320c) at ../include/os0sync.ic:38
#7  0x8165a84 in buf_page_get_gen (space=0, offset=6, rw_latch=2, guess=0x0,
mode=10, mtr=0xbfaa95d0) at ../include/sync0sync.ic:220
#8  0x81576d9 in trx_purge_truncate_rseg_history (rseg=0x8ebd10c,
limit_trx_no={high = 0, low = 7946}, limit_undo_no={high = 0, low = 0})
at ../include/trx0rseg.ic:25
#9  0x8157bdd in trx_purge_truncate_history () at trx0purge.c:545
#10 0x81589c7 in trx_purge_fetch_next_rec (roll_ptr=0xbfaa9ee4,
cell=0x8ec016c, heap=0x8ebfe0c) at trx0purge.c:564
#11 0x813a2b6 in row_purge (node=0x8ec0134, thr=0x8ec00d4) at row0purge.c:481
#12 0x813a4fe in row_purge_step (thr=0x8ec00d4) at row0purge.c:548
#13 0x8129aa2 in que_run_threads (thr=0x8ec00d4) at que0que.c:1223
#14 0x8158f95 in trx_purge () at trx0purge.c:1050
#15 0x8126fb5 in srv_master_thread (arg=0x0) at srv0srv.c:1901
#16 0x2827f18c in _thread_start ()
at /usr/src/lib/libc_r/uthread/uthread_create.c:326
#17 0x0 in ?? ()
(gdb)


-- 
Dan Nelson
[EMAIL PROTECTED]

-
Before posting, please check:
   http://www.mysql.com/manual.php   (the manual)
   http://lists.mysql.com/   (the list archive)

To request this thread, e-mail [EMAIL PROTECTED]
To unsubscribe, e-mail [EMAIL PROTECTED]
Trouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php




linux innobase benchmarking and BSD problem

2001-03-29 Thread Seung Yoo

Hi everybody, I have run a few tests on innobase tables on Linux and BSD.

1. It seems that the 'innobase' table was slower than the 'bdb' table for the 'insert'
command, which is surprising (or not? o_O)

2. It may be due to the fact that I don't know much about optimizing the 'innobase'
table.  Does anybody know how to boost the performance of 'bdb' or 'innobase' tables?
Please let me know... :(

3. I mainly tested the innobase table on Linux, but I actually want to run mysql
with 'bdb' or 'innobase' tables on FreeBSD.

4. However, mysql just hung when I tried to insert the 1 rows of data, the same
data used for the Linux test, using 'mysql < data.sql'.  I used 'my-large.cnf' and changed
the settings as follows.  Does anybody know what the problem is?

innobase_buffer_pool_size=400M
innobase_additional_mem_pool_size=20M

FreeBSD setting
4.2 release #1
pentium 550, 512 RAM, mysql-3.23.35
compiled with ./configure using the following flags:

--prefix=/usr/local/mysql 
--with-charset=euc_kr 
--with-low-memory 
--without-debug 
--without-readline 
--with-mysqld-ldflags=-all-static
--with-mit-threads=no
--with-client-ldflags=-all-static 



Linux setting
Redhat 6.2, kernel 2.2.14-5.0, pentium550, 128 RAM, mysql-3.23.35
compiled with ./configure --with-charset=euc_kr --with-berkeley-db --with-innobase


Other setting
1. The data used had three columns, 'no', 'name', 'grade', with 'no' as the primary
index (auto_increment) and 'grade' as a key. There were 1 rows, and I inserted
the data using 'shell> mysql < data.sql'.
2. Whenever I re-inserted the data, I simply did 'delete from table where no > 0'
instead of dropping the table.
3. I used 'my-medium' as 'my.cnf'.
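[Editor's note: the insert workload described above can be regenerated with a short script. The column layout ('no' auto_increment primary key, 'name', 'grade' keyed) is from the post; the table name 'grades' and the row count are assumptions, since the post's row count is garbled.]

```python
# Sketch: generate a data.sql like the one described above.
# Table name 'grades' and the row count are assumptions.
import random

def make_data_sql(path, rows=100):
    with open(path, "w") as f:
        for i in range(rows):
            name = "name%05d" % i
            grade = random.randint(0, 100)
            f.write("INSERT INTO grades (name, grade) "
                    "VALUES ('%s', %d);\n" % (name, grade))

make_data_sql("data.sql")
with open("data.sql") as f:
    print(sum(1 for _ in f))   # 100
```

The file is then replayed with `mysql dbname < data.sql` and timed from the shell, which is where the 'real' figures below come from.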

'mysql insert' benchmarking results (all times are in seconds).
1. with key, autocommit=1, flush_log_at_trx=1, all on, basic setting

myisam

real 5.61
real 5.10
real 5.08

innobase

real 68.57
real 69.27
real 83.26
real 100.87
real 99.99
real 102.31
real 108.82
real 114.23
real 124.23
real 94.42
real 98.20

bdb

real 56.83
real 43.49
real 39.76
real 54.77
real 48.75

2. with key, autocommit=1, flush_log_at_trx=1, bdb off, basic setting

myisam

real 4.48
real 3.01
real 3.08
real 3.08
real 3.04

innobase

real 70.04
real 99.04

with 
innobase_buffer_pool_size=100M
innobase_additional_mem_pool_size=10M

real 76.60
real 80.97
real 81.52
real 107.47

3. with key, autocommit=1, flush_log_at_trx=1, bdb on

bdb

real 53.14
real 54.29



Re[2]: Benchmarking innobase tables

2001-03-19 Thread Peter Zaitsev

Hello Heikki,

Monday, March 19, 2001, 4:40:30 PM, you wrote:


Also, the problem with innobase_flush_log_at_trx_commit=0 should be that
there is no guarantee the last transaction committed will be in its
place if the power is lost.  Also, I don't know whether it is possible in
this case for the database to be corrupted, as some transactions may modify
the database but are not in a logfile (let's ask Heikki about this).

HT The database does not get corrupted even if you use
HT innobase_flush_logs_at_trx_commit=0 and it crashes: Innobase always writes
HT the appropriate log segment to disk before writing a modified database
HT page to disk. In this sense the log on disk is always 'ahead' of the disk
HT image of the database. But, of course, you may lose the updates of the
HT latest transactions in a crash, if the database has not yet written the
HT relevant log segment to disk.


OK. The only question is: in this case, can only the last transactions be
lost, and can a transaction only be lost completely (never partially)?

I'm speaking about the situation where, on one connection, I have
transactions 1, 2, 3, 4 as a sequence - can it happen that the changes made
by transaction 4 take effect while those of transaction 3 do not?
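
[Editor's note: the invariant Heikki states above - the log on disk is always 'ahead' of the data pages - can be sketched in a few lines. This is an illustrative model, not InnoDB internals; all names are made up.]

```python
class WalStore:
    """Toy write-ahead log: log records are flushed before data pages."""
    def __init__(self):
        self.log_disk = []   # log records that reached disk
        self.log_buf = []    # log records still in the in-memory buffer
        self.pages = {}      # data pages on disk

    def flush_log(self):
        self.log_disk += self.log_buf
        self.log_buf = []

    def commit(self, key, value, flush=True):
        self.log_buf.append((key, value))
        if flush:                  # trx_commit=1: fsync the log per commit
            self.flush_log()

    def write_page(self, key, value):
        self.flush_log()           # the invariant: the log goes to disk first
        self.pages[key] = value

    def crash_recover(self):
        # The buffer is lost in a crash; replaying the flushed log gives a
        # consistent state: recent commits may be missing, never corrupt.
        return dict(self.log_disk)

db = WalStore()
db.commit("a", 1)                # durable immediately
db.commit("b", 2, flush=False)   # like trx_commit=0: may be lost
print(db.crash_recover())        # {'a': 1}
```

Under this rule a crash can drop the unflushed tail of recent commits, but no page ever reaches disk without its log record, which is why the database ends up behind rather than corrupt.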





-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Re: Benchmarking innobase tables

2001-03-18 Thread Christian Jaeger

At 20:43 Uhr -0600 17.3.2001, Dan Nelson wrote:
In the last episode (Mar 17), Christian Jaeger said:
  innobase table:
  autocommit=0, rollback after each insert:     59 insert+rollback/sec.
  autocommit=0, one rollback at the end:      2926 inserts/sec.
  autocommit=0, one commit at the end:        2763 inserts/sec.
  autocommit=1:                                 34 inserts/sec.

  In the last case I can hear the head from the hard disk vibrating, it
  seems that innobase synches each commit through to the disk oxide.
  I'm sure innobase isn't the fastest database in the world if this is
  true for everyone. Why could this be the case for me?

If you are going to be committing on every record, you'll want your
tablespace and logfile directories on separate disks to avoid
thrashing.  If you only have one disk and don't care if you lose the
last few transactions if your system crashes, try setting
innobase_flush_log_at_trx_commit=0 in my.cnf.

Wow, thanks. With innobase_flush_log_at_trx_commit=0, the benchmark now shows:

autocommit=0, rollback after each insert:   1587 inserts+rollbacks/sec
autocommit=1:   2764 inserts/sec.

That's even faster than myisam (2487 inserts/sec today)!!!
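
[Editor's note: a minimal my.cnf fragment for the setting Dan suggested might look like this; the [mysqld] section placement is assumed from the standard sample files, and the variable keeps the era's 'innobase_' spelling used in this thread.]

```ini
[mysqld]
# Trade durability for speed: do not fsync the log at every commit.
# A crash can lose the most recent commits, but not corrupt the tables.
innobase_flush_log_at_trx_commit=0
```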

ChristianJ

--
   Dan Nelson
   [EMAIL PROTECTED]






Re[2]: Benchmarking innobase tables

2001-03-18 Thread Peter Zaitsev

Hello Christian,

Sunday, March 18, 2001, 12:22:44 PM, you wrote:


If you are going to be committing on every record, you'll want your
tablespace and logfile directories on separate disks to avoid
thrashing.  If you only have one disk and don't care if you lose the
last few transactions if your system crashes, try setting
innobase_flush_log_at_trx_commit=0 in my.cnf.

CJ Wow, thanks. With innobase_flush_log_at_trx_commit=0, the benchmark now shows:

CJ autocommit=0, rollback after each insert:   1587 inserts+rollbacks/sec
CJ autocommit=1:   2764 inserts/sec.

CJ That's even faster than myisam (2487 inserts/sec today)!!!

In this case you should compare it to myisam created with
delay_key_write=1; the size of the key_buffer also matters.

Also, the problem with innobase_flush_log_at_trx_commit=0 should be that
there is no guarantee the last transaction committed will be in its
place if the power is lost.  Also, I don't know whether it is possible in
this case for the database to be corrupted, as some transactions may modify
the database but are not in a logfile (let's ask Heikki about this).



-- 
Best regards,
 Peter    mailto:[EMAIL PROTECTED]







Benchmarking innobase tables

2001-03-17 Thread Christian Jaeger

Hello

I've compiled mysql-3.23.35 with innobase support - it runs much 
better than BDB for me - and ran a simple benchmark with the 
following script:

use DBI;
my $DB = DBI->connect("dbi:mysql:innobase", "chris", shift) or die;
$DB->{RaiseError} = 1;
$DB->do("drop table if exists speedtest");
$DB->do("create table speedtest (a int not null primary key, b int 
not null) type=innobase");
$DB->do("set autocommit=0"); # or =1
my $ins = $DB->prepare("insert into speedtest values(?,?)");
my $done = 0;
foreach (0..1000) {
    eval {
        $ins->execute(int(rand(1000)), int(rand(10)));
    };
    if ($@) { warn $@ } else { $done++ }
}
# $DB->do("commit"); # uncommented for some tests
print "have inserted $done entries\n";

On a lightly loaded powermac G3 running linuxppc I get the following results:

myisam table:   2000 inserts/sec.

innobase table:
autocommit=0, rollback after each insert:     59 insert+rollback/sec.
autocommit=0, one rollback at the end:      2926 inserts/sec.
autocommit=0, one commit at the end:        2763 inserts/sec.
autocommit=1:                                 34 inserts/sec.

In the last case I can hear the head of the hard disk vibrating; it 
seems that innobase syncs each commit through to the disk oxide. 
I'm sure innobase isn't the fastest database in the world if this is 
true for everyone. Why could this be the case for me?
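
[Editor's note: what Christian hears is the per-commit flush - with autocommit=1 every commit forces a physical write before returning. The cost difference can be sketched outside MySQL entirely with plain file I/O; nothing here is InnoDB-specific.]

```python
# Sketch: fsync after every record vs. one fsync at the end.
import os
import tempfile
import time

def write_rows(n, fsync_each):
    """Append n small records; optionally fsync after each one."""
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        for i in range(n):
            os.write(fd, b"row %d\n" % i)
            if fsync_each:
                os.fsync(fd)   # like autocommit=1: wait for the platter
        os.fsync(fd)           # final flush, like one COMMIT at the end
        return time.perf_counter() - t0
    finally:
        os.close(fd)
        os.unlink(path)

per_commit = write_rows(200, fsync_each=True)
batched = write_rows(200, fsync_each=False)
print("fsync per row: %.3fs   one fsync: %.3fs" % (per_commit, batched))
```

On a real disk the fsync-per-row variant is dramatically slower; on some filesystems or write-caching controllers the gap shrinks, which is why the numbers vary so much by machine.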

Some system info:
LinuxPPC June 1999, Kernel 2.2.17-0.6.1,
glibc-2.1.3-0j
gcc-2.95.3-2f
Innobase data is written to an IDE harddisk.

Cheers
Christian Jaeger





MYSQL BENCHMARKING PROBLEMS

2001-02-06 Thread Teddy A Jasin

Hi,
My website is running on MySQL 3.21, and it has so many records that it sometimes
stops running and I have to restart mysqld.
My question is: how do I go about benchmarking my site and the MySQL server?

TIA

Teddy