The application is supposed to disregard any failure to acquire the
lock. The return value isn't even checked inside the application. There
is no loop in which we attempt to acquire the lock. The only intention
is to wait for a maximum of 10 seconds for the lock to be released, and
if that fails, simply move on as if nothing happened. However, on these
occasions the server doesn't return anything at all after the 10-second
timeout, as it normally does (either 1 or 0). Under normal operation we
would expect to get the lock (1) either immediately or after no more
than a few seconds.
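To make the intended pattern concrete, here is a minimal sketch. It uses an in-process Python analogue of GET_LOCK/RELEASE_LOCK (a `threading.Lock` per name), since the semantics we rely on are the same: one attempt, a bounded wait, and the result deliberately ignored. The function names and the in-process lock table are illustrative, not our actual code.

```python
import threading

# In-process stand-in for MySQL's named-lock functions, used only to
# illustrate the intended "wait, then move on" pattern. Against a real
# server this would be:  SELECT GET_LOCK('lock_name', 10);
_named_locks = {}

def get_lock(name, timeout):
    """Return 1 if the lock is acquired within `timeout` seconds, else 0."""
    lock = _named_locks.setdefault(name, threading.Lock())
    return 1 if lock.acquire(timeout=timeout) else 0

def release_lock(name):
    lock = _named_locks.get(name)
    if lock is not None and lock.locked():
        lock.release()

# The application's intent: wait at most 10 seconds, then proceed
# regardless of the result -- no retry loop, no error handling.
got = get_lock('lock_name', 10)   # result deliberately ignored
# ... handle the request ...
release_lock('lock_name')
```

The point is that a failed acquire is terminal for the request: nothing re-queues behind the lock.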

On Fri, 2007-08-31 at 12:51 -0400, Michael Dykman wrote:
> At a glance, from your description, it sounds like your application
> servers are hitting an occasional deadlock when multiple processes in
> your various app servers start contending for the lock. You say
> that, in normal usage, you expect the lock to last around 10 seconds:
> what is your application programmed to do when it attempts to acquire
> the lock and fails?  If it is programmed to retry, either immediately
> or with a back-off interval, rather than gracefully accepting failure
> on occasion, then the behaviour you are observing is exactly what I
> would expect.
> 
> On 8/31/07, Niklas Westerberg <[EMAIL PROTECTED]> wrote:
> > Hello,
> > We seem to have a problem with the usage of GET_LOCK on our mysql
> > server. Is there anyone who has experienced similar behavior, or could
> > provide some insight into what is going on?
> >
> > /Niklas
> >
> > Symptoms:
> > * mysqld CPU-usage is 100%
> > * Queries of the type GET_LOCK('lock_name', 10); seem to abound in the
> > mysql process list. These remain in the list for far longer than the
> > expected 10 seconds. On one occasion the number of queries exceeded 600,
> > all of which had been active for between 400 and 600 seconds. As it
> > happens, a GET_LOCK query is the first one executed by our web
> > application on each request. There were also some RELEASE_LOCK queries
> > in the list.
> >
> > * The number of queries hanging in the list happened to exactly match
> > the maximum number of concurrent requests from the web servers.
> >
> > * The queries remain for a time in the process list even after the web
> > servers (apache/php) have been taken down.
> >
> > * The database seems to exhibit a slow decline in performance between
> > its latest restart and a full stop. This has not yet been thoroughly
> > investigated, however.
> >
> > * Accessing the server through the CLI still works and regular queries
> > return as expected.
> >
> > Occurrence:
> > * Intermittent, sometimes weeks apart, sometimes once a day for a few
> > days in a row.
> >
> > * There is no apparent correlation with the load of the machine.
> >
> > Remedy until now:
> > Restarting mysqld and then the Apache processes, which otherwise
> > sometimes start to die of segmentation faults.
> >
> > Configuration:
> > Server:
> > 2 dual-core processors (i.e. a theoretical maximum CPU usage of 400%)
> > Tested on Mysql 5.0.44 and 5.0.38
> > Both the MyISAM and the InnoDB engines are in use (mostly InnoDB).
> > InnoDB uses a raw partition residing on a software raid level 10
> > consisting of 10 disks.
> >
> > Clients:
> > Apache 2.2.4 (mpm-prefork)
> > PDO 1.0.3
> > PDO-MYSQL 1.0.2
> > PHP 5.2.1
> >
> >
> > --
> > MySQL General Mailing List
> > For list archives: http://lists.mysql.com/mysql
> > To unsubscribe:    http://lists.mysql.com/[EMAIL PROTECTED]
> >
> >
> 
> 

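For contrast, the retry pattern Michael warns about above can be sketched next to the single-attempt pattern. This is purely illustrative (the `get_lock` callable stands in for a GET_LOCK round-trip to the server); the names are hypothetical:

```python
import time

def acquire_with_retries(get_lock, name, timeout=10, retries=5, backoff=1.0):
    """The pattern Michael warns about: each failed attempt re-queues the
    client behind the same lock, so under contention N clients can keep
    the server busy far longer than a single timeout."""
    for attempt in range(retries):
        if get_lock(name, timeout) == 1:
            return True
        time.sleep(backoff * attempt)  # back off between attempts
    return False

def acquire_once(get_lock, name, timeout=10):
    """What our application actually does: one attempt, then move on."""
    return get_lock(name, timeout) == 1
```

With 600 clients each issuing one attempt, the queue drains within one timeout; with retries, the same 600 clients can occupy the process list indefinitely, which matches the symptom only if retries were happening, and in our case they are not.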
