On Tue, Mar 09, 2021 at 09:04:43AM +0100, Maciej Zdeb wrote:
> Hi,
> 
> After applying the patch, the issue did not occur; however, I'm still not
> sure it is fixed. Unfortunately I don't have a reliable way to trigger it.

OK. If it's related, it's very possible that some of the issues we've
identified there recently are responsible for at least part of the
problem. In short, if the CPUs are too fair, some contention can last
a long time because two steps are required to complete such operations.
We've added calls to cpu_relax() in 2.4 and some are already queued in
2.3. After a while, some of them should also be backported to 2.2, as
they significantly improved the situation with many threads.
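
To illustrate the pattern (just a rough sketch, not the actual haproxy
code): when a thread loses a compare-and-swap race, a pause-style
instruction gives the winner a chance to finish its two-step update
instead of both threads hammering the same cache line at full speed:

#include <stdatomic.h>

static inline void cpu_relax(void)
{
#if defined(__x86_64__) || defined(__i386__)
        __asm__ volatile("pause");
#else
        __asm__ volatile("" ::: "memory"); /* fallback: compiler barrier only */
#endif
}

/* spin until the slot atomically flips from 0 to 1 */
static void take_slot(atomic_int *slot)
{
        int expected = 0;

        while (!atomic_compare_exchange_weak(slot, &expected, 1)) {
                expected = 0; /* a failed CAS overwrote it */
                cpu_relax();  /* let the winner complete its update */
        }
}

The real code is more involved, of course, but that's the general idea.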

> On Fri, Mar 5, 2021 at 10:07 PM Willy Tarreau <w...@1wt.eu> wrote:
> 
> > Note, before 2.4, only one thread can execute Lua scripts at a time,
> > with the others waiting behind it, and if the Lua load is heavy, maybe
> > this can happen (but I've never experienced it yet, and the preemption
> > interval is short enough not to cause issues in theory).
> >
> I'm not sure if it's related, but during every reload, for a couple of
> seconds all 12 threads on the OLD pid use 100% CPU, then the cores return
> to normal usage one after another, and finally the old haproxy process
> exits. I have no idea why it behaves like that.

It could be related but it's hard to tell. It is also possible that
for some reason the old threads constantly believe they have something
to do, for example a health check that doesn't get stopped and that
keeps ringing.
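
Something along these lines (purely illustrative, not actual haproxy
code): a task that always re-arms itself with an immediate wakeup forces
the polling loop to use a zero timeout, so the thread never sleeps:

#include <poll.h>
#include <stdbool.h>
#include <stddef.h>

static bool check_stopped; /* BUG: nothing ever sets this on reload */

/* poll timeout in ms: -1 = sleep until an event, 0 = wake up now */
static int next_timeout(void)
{
        return check_stopped ? -1 : 0;
}

int main(void)
{
        /* the old process spins here forever: on every iteration the
         * "check" asks to be woken up immediately, so poll() never
         * blocks and the thread pegs a core at 100% CPU.
         */
        while (!check_stopped)
                poll(NULL, 0, next_timeout());
        return 0;
}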

> > Maciej, if this happens often,
> > would you be interested in running one machine on 2.4-dev11?
> 
> It is a very rare issue and of course it occurs only in the production
> environment. :(

Obviously!

> I'm very willing to test the 2.4 version, especially with
> that tasty Lua optimization for multiple threads, but I can't do it in
> production until it's stable.

This makes sense. We'll try to issue 2.3 with some thread fixes this
week; maybe that will be a workable intermediate step for you.

Cheers,
Willy
