On Mon, Jul 22, 2013 at 06:07:03PM +0200, Mark Janssen wrote:
> The setup has been running for a few days now (still with nbproc 1) and is
> performing admirably.
> I'd say this issue is fixed/resolved.

Excellent, thank you very much for your feedback Mark. I'm merging the
patches then.

Best
On Fri, Jul 19, 2013 at 9:58 PM, Mark Janssen maniac...@gmail.com wrote:

I've applied the patches and am running with the new version now. I'll let
it run overnight (with nbproc back at 1...).
I'll probably switch back to nbproc 1 in the morning, as traffic starts
ramping up.

Mark
On Fri, Jul 19, 2013 at 09:58:19PM +0200, Mark Janssen wrote:
> I've applied the patches and am running with the new version now. I'll let
> it run overnight (with nbproc back at 1...).

Thank you Mark, then I'm [...]

On Thu, Jul 18, 2013 at 10:42 PM, Willy Tarreau w...@1wt.eu wrote:
Hi Mark,

OK I could reproduce, debug and fix. It was a tough one, really...
More a problem of internal semantics than anything else, so I had
to test several possibilities and study their impacts and the corner
cases. In the end we got something that's fixed and better :-)
The issue was mostly [...]
On Wed, Jul 17, 2013 at 08:16:18PM +0200, Lukas Tribus wrote:
> Hi Willy,
> This explains why this only happens for short durations (at most the
> duration of a client timeout).
> Good to hear you pinpointed this.

What is important to know is that, CPU usage aside, there is no loss of
information nor service. Connections are correctly handled [...]
Hi Mark,

On Sat, Jul 13, 2013 at 11:35:56AM +0200, Mark Janssen wrote:
> On Fri, Jul 12, 2013 at 5:57 PM, Lukas Tribus luky...@hotmail.com wrote:
> > epoll_wait(0, {}, 200, 0) = 0
> > (repeated 10-15 times)
> >
> > A few questions:
> > Can you reproduce this without the health checks?
>
> I've tried with 'noepoll' and 'nosplice'... and this seems to have solved
> the cpu spikes... though the base load is now a lot higher, due to using
> poll instead of epoll.
> Next I tried with noepoll, but with splice enabled. This resulted in the
> same higher base load, but still the occasional peak [...]
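For context, 'noepoll', 'nosplice', and 'nbproc' are all keywords of haproxy's
global section, so the workarounds Mark tested look like the sketch below
(the section contents and comments are illustrative, not Mark's actual
configuration):

```
# haproxy.cfg (sketch) -- the tuning keywords discussed in this thread
global
    nbproc 1      # number of worker processes (Mark's final setup uses 1)
    noepoll       # disable epoll, fall back to poll (raises base CPU load)
    nosplice      # disable splice()-based zero-copy TCP forwarding
```

Disabling epoll trades the spikes for a permanently higher baseline, which is
why it is useful as a diagnostic step rather than a fix.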
On Fri, Jul 12, 2013 at 5:30 PM, Tomas Pospisek t...@sourcepole.ch wrote:

On 11.07.2013 11:45, Mark Janssen wrote:
> I did see large amounts of sequential epoll_wait calls in the processes
> with 100% cpu load, and not with the other processes.
>
>     epoll_wait(0, {}, 200, 0) = 0
On Fri, Jul 12, 2013 at 5:57 PM, Lukas Tribus luky...@hotmail.com wrote:
> A few questions:
> Can you reproduce this without the health checks?

I haven't tried yet... and can't really test this currently.

> Do you have the [...]
On 11.07.2013 11:45, Mark Janssen wrote:

Hi list...

I've noticed that the HAProxy processes occasionally jump to 100% cpu
load, while the load before and after these peaks is only 3-5%, and the
traffic is also the same as outside of these cpu-peaks.
I saw a thread about this earlier (april/may), which concluded [...]