RE: mysterious speedup after doing FLUSH TABLES

2006-07-12 Thread Moritz Möller
Hi,

that variable is not set, so it should be the default value (2 * number of
processors I believe).

To what value should I set it?

-----Original Message-----
From: Gabriel PREDA [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 12, 2006 10:02 AM
To: Moritz Möller
Subject: Re: mysterious speedup after doing FLUSH TABLES

What value do you have for innodb_thread_concurrency?

There is a bug with that on 64-bit machines!

-- 
Gabriel PREDA
Senior Web Developer





RE: mysterious speedup after doing FLUSH TABLES

2006-07-12 Thread Moritz Möller
I will try that.

But the symptoms are a little bit different - the server works fine for 3-4
hours, but when it gets slow, neither CPU nor disk I/O reach the limit.

As far as I know a cache/memory bottleneck should drive the CPU to 100%,
like you have observed.

In our case I guess it's a locking issue; I will grab the output of SHOW
INNODB STATUS the next time it happens.

Moritz


-----Original Message-----
From: Gabriel PREDA [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 12, 2006 11:22 AM
To: Moritz Möller
Subject: Re: mysterious speedup after doing FLUSH TABLES

See http://lists.mysql.com/mysql/199572

As low as you can afford...
The same thing happened here... MySQL would work fine for, let's say,
some hours, then it would begin struggling... then the CPU load would
increase and the machine would work like a bull in a Spanish-style
bullfight...

We have it now at 2 and the highest load we've seen is 35%... at 900
simultaneous connections...
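
A sketch of how we set it (the right value depends on your workload, and 2
is just what we use; if your server does not accept the variable at runtime,
put it in my.cnf and restart instead):

    -- check the current value, then lower it
    SHOW VARIABLES LIKE 'innodb_thread_concurrency';
    SET GLOBAL innodb_thread_concurrency = 2;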

Since the introduction of memcached, the workload on the MySQL server has
dropped significantly... now 900 simultaneous connections is enough to
keep the website at its best!

-- 
Gabriel PREDA
Senior Web Developer





RE: mysterious speedup after doing FLUSH TABLES

2006-07-12 Thread Moritz Möller
Hi,

table_cache is 8 on our systems. A quick glance at the manual tells me to
increase that value (Opened_tables is 2680462406)... I will try that.
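
A minimal sketch (512 is only an illustrative value; if your server does not
accept the setting at runtime, put table_cache in my.cnf and restart):

    -- raise the table cache, then watch whether Opened_tables stops climbing
    SET GLOBAL table_cache = 512;
    SHOW STATUS LIKE 'Opened_tables';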

Thanks ;)


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, July 12, 2006 12:32 PM
To: Moritz Möller
Cc: 'Gabriel PREDA'; mysql@lists.mysql.com
Subject: RE: mysterious speedup after doing FLUSH TABLES

Hi,

What about your table_cache setting?
Is SHOW STATUS reporting a high number for Opened_tables?
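
For example (a steadily climbing Opened_tables, with Open_tables stuck at the
cache size, usually means table_cache is too small):

    SHOW VARIABLES LIKE 'table_cache';
    -- matches both Open_tables and Opened_tables
    SHOW STATUS LIKE 'Open%tables';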

---
Regards,
  Jocelyn Fournier
  www.mesdiscussions.net

> I will try that.
>
> But the symptoms are a little bit different - the server works fine for
> 3-4 hours, but when it gets slow, neither CPU nor disk I/O reach the
> limit.
>
> As far as I know a cache/memory bottleneck should drive the CPU to 100%,
> like you have observed.
>
> In our case I guess it's a locking issue; I will grab the output of SHOW
> INNODB STATUS the next time it happens.
>
> Moritz
>
>
> -----Original Message-----
> From: Gabriel PREDA [mailto:[EMAIL PROTECTED]
> Sent: Wednesday, July 12, 2006 11:22 AM
> To: Moritz Möller
> Subject: Re: mysterious speedup after doing FLUSH TABLES
>
> See http://lists.mysql.com/mysql/199572
>
> As low as you can afford...
> The same thing happened here... MySQL would work fine for, let's say,
> some hours, then it would begin struggling... then the CPU load would
> increase and the machine would work like a bull in a Spanish-style
> bullfight...
>
> We have it now at 2 and the highest load we've seen is 35%... at 900
> simultaneous connections...
>
> Since the introduction of memcached, the workload on the MySQL server
> has dropped significantly... now 900 simultaneous connections is enough
> to keep the website at its best!
>
> --
> Gabriel PREDA
> Senior Web Developer



mysterious speedup after doing FLUSH TABLES

2006-07-11 Thread Moritz Möller
Hello list,

we use MySQL 4.1.13 / InnoDB on Xeon em64t running Debian, kernel 2.6.8.

After a while, queries run slower and slower, although the CPU is 70% idle
and disk I/O is nearly zero. A simple SELECT by primary key takes up to one
second. After a FLUSH TABLES, the speed is back up again.

I also noticed that 4.1.15 seems to run slower than 4.1.13, but that might
be subjective.

As far as I know, FLUSH TABLES closes and reopens the tables and clears the
query cache. The query cache is enabled, but it does not seem to cause the
slowdown, as I can observe the same effect without the query cache.

So it looks as if some buffer runs full, or a lock stays blocked, or
something like that. The queries running do not hang in any specific stage
(such as 'waiting for lock').
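
For reference, the snapshots worth grabbing the next time it happens (all
stock 4.1 commands):

    SHOW FULL PROCESSLIST;            -- what stage each query is in
    SHOW STATUS LIKE 'Open%tables';   -- table cache churn
    SHOW INNODB STATUS;               -- lock waits and buffer pool state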

Even stranger: we have 4 servers of that type (hardware, software, and
configuration all identical) and two of them need a FLUSH TABLES every now
and then, while the other two (which have more load, 5000 q/s compared to
1500) are performing without problems.

Well, if anyone has an idea what causes the slowdown, and/or why a FLUSH
TABLES helps, I'd be glad to hear from you ;)

Thanks,

Moritz






RE: MySQL 5 problem after upgrade

2006-05-27 Thread Moritz Möller
Hi,

I've noticed similar behaviour when we upgraded to MySQL 5.0.

The query optimizer seems to have been optimized a little too much :-\



-----Original Message-----
From: Szymon Kosok [mailto:[EMAIL PROTECTED] 
Sent: Saturday, May 27, 2006 5:42 PM
Cc: mysql@lists.mysql.com
Subject: Re: MySQL 5 problem after upgrade

Hello again,

> SELECT shout_avatar, shout_own, timestamp, tekst, imie, pseudo, wiek,
> City, Country, plec, wlosy_kol, gra_gwiazdy.id, gra_gm.gid AS gm FROM
> gra_shoutbox INNER JOIN (gra_gwiazdy) ON (gra_gwiazdy.id =
> gra_shoutbox.gid) INNER JOIN (swiat_miasta) ON (swiat_miasta.CityId =
> gra_gwiazdy.miastoid) INNER JOIN (swiat_panstwa) ON
> (swiat_panstwa.CountryId = gra_gwiazdy.countryid) INNER JOIN (gra_gm) ON
> (gra_gm.gid = gra_gwiazdy.id) WHERE gra_gwiazdy.id != '1310' ORDER BY
> gra_shoutbox.id DESC LIMIT 0, 10;

It was a problem with the query. MySQL uses a filesort, and that was the
cause. I deleted the WHERE clause and now execution time is 100 times
faster (about 3 seconds with the WHERE clause, 0.03 seconds without ;-)).
On the other hand, it's strange that there wasn't any problem in 4.1.
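
For reference, prefixing the quoted query with EXPLAIN makes the filesort
visible (look for "Using filesort" in the Extra column):

    EXPLAIN
    SELECT shout_avatar, shout_own, timestamp, tekst, imie, pseudo, wiek,
           City, Country, plec, wlosy_kol, gra_gwiazdy.id, gra_gm.gid AS gm
    FROM gra_shoutbox
    INNER JOIN (gra_gwiazdy) ON (gra_gwiazdy.id = gra_shoutbox.gid)
    INNER JOIN (swiat_miasta) ON (swiat_miasta.CityId = gra_gwiazdy.miastoid)
    INNER JOIN (swiat_panstwa) ON (swiat_panstwa.CountryId = gra_gwiazdy.countryid)
    INNER JOIN (gra_gm) ON (gra_gm.gid = gra_gwiazdy.id)
    WHERE gra_gwiazdy.id != '1310'
    ORDER BY gra_shoutbox.id DESC LIMIT 0, 10;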






RE: mysql performance / ndb 5.1 performance

2006-05-25 Thread Moritz Möller
Hi Jay!

Writes make up about 17% of the queries, and we average 4000 queries/sec.
The query cache hit ratio is about 45% (the QC seems to be a topic worth
spending some time on... 45% does not look so good).

We already tested replication, changing the database logic in the scripts to
use the slaves until the first update operation, then use the master only
(so the SELECT-after-UPDATE operations succeed).

The main problem was that the slaves got out of sync after large updates
(LOAD DATA INFILE, for example), which we solved by doing large updates in
small chunks (UPDATE ... LIMIT n, repeated while rows_matched = n), but
still, every 2-3 days the slaves got completely out of sync (duplicate key
and other errors) :(
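
The chunking pattern, as a sketch (table and column names are made up for
illustration; on 4.1 the affected-row count has to be read on the client
side, e.g. via mysql_affected_rows(), since ROW_COUNT() only arrived in 5.0):

    -- repeat while the statement keeps matching exactly 10000 rows;
    -- once fewer rows match, the chunked update is done
    UPDATE scores SET season_points = 0 WHERE season = 2005 LIMIT 10000;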

Moritz

-----Original Message-----
From: Jay Pipes [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 25, 2006 6:37 PM
To: Moritz Möller
Cc: 'Dan Trainor'; mysql@lists.mysql.com
Subject: Re: mysql performance / ndb 5.1 performance

Moritz Möller wrote:
> Hi Dan,
>
> there are about 2GB free, so the net size would still be 32 GB.
>
> The queries are really optimized; 99.9% of all queries can be satisfied
> without table scans.
>
> Well, I guess I have to give NDB a chance; I hope it will help. The only
> alternative I can think of is to cluster the database at the application
> level (use server userID % numServers), which would be a [insert favourite
> non-swear-word here] lot of work ;)

Hi Moritz!

There is an alternative solution to NDB Cluster, and that would be to
set up a replicated environment, and have commodity-hardware slaves
supply the bulk of the SELECT operations, with the 8-core machine used
as the master replication database.

Your application server or web server would have to point SELECTs to the
slaves for reporting purposes, and do writes to the master only.  This
is a cheap way to get scale-out performance from commodity hardware, and
it is pretty customizable as far as the replication layout you would
want...

For instance, you could have your application server direct a certain 
category of queries to one slave, and another category to another slave, 
depending on traffic conditions.

BTW, how many requests/sec are you averaging, and also, what is the ratio
of reads to writes in your database?  You can get both answers from the
SHOW STATUS counters.
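
For example (the counters are cumulative since startup, so divide by Uptime
for per-second rates):

    SHOW STATUS LIKE 'Com_select';   -- reads
    SHOW STATUS LIKE 'Com_insert';   -- writes, together with the next two
    SHOW STATUS LIKE 'Com_update';
    SHOW STATUS LIKE 'Com_delete';
    SHOW STATUS LIKE 'Uptime';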

Cheers,

Jay

 
> -----Original Message-----
> From: Dan Trainor [mailto:[EMAIL PROTECTED]
> Sent: Thursday, May 25, 2006 1:41 AM
> To: Moritz Möller; mysql@lists.mysql.com
> Subject: Re: mysql performance
>
> Moritz Möller wrote:
> > Hi list,
> >
> > we're running some large high-traffic MySQL servers, and are currently
> > reaching the limit of our machines.
> >
> > We're using MySQL 4.1 / InnoDB on Debian; ibdata is about 35GB. Hardware
> > is a quad dual-core Xeon with 8 GB RAM. Disk I/O is nearly zero; the
> > limiting factor is CPU.
> > The queries run very fast (I seldom see a process running longer than a
> > second), but there are too many of them, I guess.
> >
> > As far as I know, NDB keeps the whole database in memory, so with
> > indices and some memory in reserve, we'd need ~48GB (3x16 or something)
> > in total for NDB :(
> >
> > Does someone know other solutions to this? Is NDB the only storage
> > engine supporting clustering?
> >
> > Thanks in advance,
> >
> > Moritz
>
> Hi -
>
> That's quite a large database.  I, too, have been dealing with what I
> thought was a large database for this new project.  Being 2G, it hardly
> compares to your database size.
>
> Keep in mind, however, that a 36G ibdata file does not necessarily mean
> that you are using 36G to store data.  The InnoDB documentation on the
> MySQL site explains ways to compact these files, possibly shrinking the
> size of ibdata files.  Another way to get a better idea of how much data
> you're actually using is the 'SHOW TABLE STATUS' query from within
> MySQL.  Take the 'InnoDB free:' item under the 'Comment:' column, and
> subtract this from the total size of the ibdata file(s).  This will give
> you a more accurate representation of how much of that ibdata file
> you're actually using.  I think.  (Someone mind correcting me if I'm way
> off here?)
>
> NDB may not be your solution.  Even though disk-based storage is
> included with NDB in 5.1 and beyond, I'm not too sure how this will
> affect the speed of your operations.  I suppose it's worth a try,
> however.
>
> Please take this advice with a grain of salt, as InnoDB is still quite
> new to me as well.  Other things I've found to speed up large databases
> are to properly create indexes and to test them with EXPLAIN.  This
> alone has let me speed up our operations by as much as 30% in most
> cases.
>
> Thanks
> -dant
 
 

-- 
Jay Pipes
Community Relations Manager, North America, MySQL Inc.
Roaming North America, based in Columbus, Ohio
email: [EMAIL PROTECTED]    mob: +1 614 406 1267

Are You MySQL Certified? http://www.mysql.com/certification
Got Cluster? http://www.mysql.com/cluster


mysql performance

2006-05-24 Thread Moritz Möller
Hi list,

we're running some large high-traffic mysql servers, and are currently
reaching the limit of our machines.

We're using MySQL 4.1 / InnoDB on Debian; ibdata is about 35GB. Hardware is
a quad dual-core Xeon with 8 GB RAM. Disk I/O is nearly zero; the limiting
factor is CPU.
The queries run very fast (I seldom see a process running longer than a
second), but there are too many of them, I guess.

As far as I know, NDB keeps the whole database in memory, so with indices
and some memory in reserve, we'd need ~48GB (3x16 or something) in total
for NDB :(

Does someone know other solutions to this? Is NDB the only storage engine
supporting clustering?

Thanks in advance,

Moritz



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



RE: mysql performance / ndb 5.1 performance

2006-05-24 Thread Moritz Möller
Hi Dan,

there are about 2GB free, so the net size would still be 32 GB.

The queries are really optimized; 99.9% of all queries can be satisfied
without table scans.

Well, I guess I have to give NDB a chance; I hope it will help. The only
alternative I can think of is to cluster the database at the application
level (use server userID % numServers), which would be a [insert favourite
non-swear-word here] lot of work ;)

Moritz


-----Original Message-----
From: Dan Trainor [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 25, 2006 1:41 AM
To: Moritz Möller; mysql@lists.mysql.com
Subject: Re: mysql performance

Moritz Möller wrote:
> Hi list,
>
> we're running some large high-traffic MySQL servers, and are currently
> reaching the limit of our machines.
>
> We're using MySQL 4.1 / InnoDB on Debian; ibdata is about 35GB. Hardware
> is a quad dual-core Xeon with 8 GB RAM. Disk I/O is nearly zero; the
> limiting factor is CPU.
> The queries run very fast (I seldom see a process running longer than a
> second), but there are too many of them, I guess.
>
> As far as I know, NDB keeps the whole database in memory, so with
> indices and some memory in reserve, we'd need ~48GB (3x16 or something)
> in total for NDB :(
>
> Does someone know other solutions to this? Is NDB the only storage
> engine supporting clustering?
>
> Thanks in advance,
>
> Moritz

Hi -

That's quite a large database.  I, too, have been dealing with what I 
thought was a large database for this new project.  Being 2G, it hardly 
compares to your database size.

Keep in mind, however, that a 36G ibdata file does not necessarily mean 
that you are using 36G to store data.  The InnoDB documentation on the 
MySQL site explains ways to compact these files, possibly shrinking the 
size of ibdata files.  Another way to get a better idea of how much data 
you're actually using is the 'SHOW TABLE STATUS' query from within 
MySQL.  Take the 'InnoDB free:' item under the 'Comment:' column, and 
subtract this from the total size of the ibdata file(s).  This will give 
you a more accurate representation of how much of that ibdata file 
you're actually using.  I think.  (Someone mind correcting me if I'm way 
off here?)
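
For example (the table name is hypothetical; on 4.1 the free space in the
shared tablespace is reported in the Comment column):

    SHOW TABLE STATUS LIKE 'my_table'\G
    -- look for a line such as:  Comment: InnoDB free: 2048 kB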

NDB may not be your solution.  Even though disk-based storage is 
included with NDB in 5.1 and beyond, I'm not too sure how this will 
affect the speed of your operations.  I suppose it's worth a try, however.

Please take this advice with a grain of salt, as InnoDB is still quite 
new to me as well.  Other things I've found to speed up large databases 
are to properly create indexes and to test them with EXPLAIN.  This 
alone has let me speed up our operations by as much as 30% in most cases.

Thanks
-dant

