Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-14 Thread Rod Taylor
> To forestall this scenario, I'm thinking of introducing backoff into the > sleep intervals --- that is, after first failure to get the spinlock, > sleep 10 msec; after the second, sleep 20 msec, then 40, etc, with a > maximum sleep time of maybe a second. The number of iterations would be > reduced…
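
A minimal sketch of the capped doubling quoted above, under stated assumptions: try_lock() is a hypothetical stand-in for the platform test-and-set, and all constants are illustrative rather than PostgreSQL's actual s_lock.c values.

    /*
     * Capped exponential backoff for a contended spinlock (sketch).
     * try_lock() and the constants are illustrative only.
     */
    #include <stdbool.h>
    #include <unistd.h>

    #define DELAY_MIN_MS  10      /* first sleep: 10 msec */
    #define DELAY_MAX_MS  1000    /* cap sleeps at about one second */
    #define MAX_SLEEPS    100     /* then the caller reports a "stuck spinlock" */

    static volatile int lock_word = 0;

    static bool
    try_lock(void)
    {
        /* stand-in for the platform test-and-set; true means we got the lock */
        return __sync_lock_test_and_set(&lock_word, 1) == 0;
    }

    static bool
    acquire_with_backoff(void)
    {
        int delay_ms = DELAY_MIN_MS;

        for (int sleeps = 0; sleeps < MAX_SLEEPS; sleeps++)
        {
            if (try_lock())
                return true;

            usleep((useconds_t) delay_ms * 1000);

            delay_ms *= 2;                      /* 10, 20, 40, ... msec */
            if (delay_ms > DELAY_MAX_MS)
                delay_ms = DELAY_MAX_MS;
        }
        return false;                           /* give up: "stuck spinlock" */
    }

Doubling with a hard cap bounds the total time spent before giving up; the rest of the thread discusses adding a random element so contending backends do not all wake at the same instants.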

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-14 Thread Tom Lane
"Mendola Gaetano" <[EMAIL PROTECTED]> writes: > What about use the same algorithm used in ethernet when a collision is > detected? Random backoff is what Rod suggested, but I don't care for the ethernet method in detail, because it allows for only a fairly small number of retries before giving up.

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-14 Thread Christopher Kings-Lynne
> To forestall this scenario, I'm thinking of introducing backoff into the > sleep intervals --- that is, after first failure to get the spinlock, > sleep 10 msec; after the second, sleep 20 msec, then 40, etc, with a > maximum sleep time of maybe a second. The number of iterations would be > reduced…

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-14 Thread Mike Mascari
Tom Lane wrote: > I've been thinking about Ludwig Lim's recent report of a "stuck > spinlock" failure on a heavily loaded machine. Although I originally > found this hard to believe, there is a scenario which makes it > plausible. Suppose that we have a bunch of recently-started backends > as well as…

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-14 Thread Rod Taylor
On Tue, 2003-08-05 at 18:19, Tom Lane wrote: > Rod Taylor <[EMAIL PROTECTED]> writes: > > After the first few sleeps should it add a random() element to the delay > > time? > > Hmm, that's a thought --- but how big a random element? > > Fooling with the original idea, I'm having trouble with getting…

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-09 Thread Mendola Gaetano
From: "Tom Lane" <[EMAIL PROTECTED]> > To forestall this scenario, I'm thinking of introducing backoff into the > sleep intervals --- that is, after first failure to get the spinlock, > sleep 10 msec; after the second, sleep 20 msec, then 40, etc, with a > maximum sleep time of maybe a second. Th

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-06 Thread Tom Lane
I said:
> The random component should already help to scatter the wakeups pretty
> well, so I'm thinking about just
>     if (oldtime > 1 sec)
>         time = 10msec
>     else
>         time = oldtime + oldtime * rand()
> ie random growth of a maximum of 2x per try, and reset to minimum…
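
A small sketch of the rule quoted above, assuming drand48() as the random source and illustrative names; each failed attempt grows the delay by a random factor in [1, 2), and anything past one second snaps back to the 10 msec minimum.

    /*
     * Randomized delay growth, at most 2x per try, reset past one second
     * (sketch; not the actual s_lock.c code).
     */
    #include <stdlib.h>

    #define MIN_DELAY_USEC   10000      /* 10 msec  */
    #define MAX_DELAY_USEC   1000000    /* 1 second */

    static long
    next_delay(long cur_delay_usec)
    {
        if (cur_delay_usec > MAX_DELAY_USEC)
            return MIN_DELAY_USEC;              /* reset to the minimum */

        /* grow by a random factor in [1, 2): at most 2x per try */
        return cur_delay_usec + (long) (cur_delay_usec * drand48());
    }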

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-05 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes: > After the first few sleeps should it add a random() element to the delay > time? Hmm, that's a thought --- but how big a random element? Fooling with the original idea, I'm having trouble with getting both plausible backoff and a reasonable number of attempts…

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-05 Thread Tom Lane
Rod Taylor <[EMAIL PROTECTED]> writes:
> How about (round to nearest 10msec):
>     time = oldtime + oldtime / 2 + oldtime * rand()
>     while (time > 1 second)
>         time = time - 0.80sec
> This would stagger the wakeup times, and ensure a larger number of
> retries -- but the times should be la…
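
A sketch of the variant quoted above, under the same assumptions (drand48() as the random source, hypothetical names, delays kept in seconds for clarity): growth by a factor in [1.5, 2.5) per failure, with values over one second folded back down by repeated 0.8 sec subtraction so the wakeups stay staggered instead of piling up at the cap.

    /*
     * Staggered backoff: grow by 1.5x-2.5x per try, fold overflow back
     * below one second (sketch only).
     */
    #include <stdlib.h>

    static double
    next_delay_sec(double oldtime)
    {
        /* grow by a random factor in [1.5, 2.5) per failed attempt */
        double time = oldtime + oldtime / 2.0 + oldtime * drand48();

        /* fold anything over one second back down, staggering the wakeups */
        while (time > 1.0)
            time -= 0.80;

        /* round to the nearest 10 msec */
        return (double) ((long) (time * 100.0 + 0.5)) / 100.0;
    }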

Re: [HACKERS] Adjustment of spinlock sleep delays

2003-08-05 Thread Tom Lane
Mike Mascari <[EMAIL PROTECTED]> writes: > Should there be any correlation between the manner by which the > backoff occurs and the number of active backends? If we could guess how many are contending for the same spinlock, maybe we could use that info ... but I don't see a reasonably cheap way to…

[HACKERS] Adjustment of spinlock sleep delays

2003-08-05 Thread Tom Lane
I've been thinking about Ludwig Lim's recent report of a "stuck spinlock" failure on a heavily loaded machine. Although I originally found this hard to believe, there is a scenario which makes it plausible. Suppose that we have a bunch of recently-started backends as well as one or more that have…