> >
> > However, I agree that 100 is less than 200 and the queue will overflow
> > if ab generates requests faster than the app processes them. Sure, ab
> > fires things quite fast.
> >
> > I oversimplified my production case. In the presented case it's 1.5ms
> > between accept() calls. In production requests I see 600ms gaps between
> > accept() calls. This is what brought me here to the list.
> >
> > Probably I have to dig more into the production straces...
> >
> > What do you think? Ideas?
> >

ab and wrk are tools for measuring throughput, not for emulating a
realistic workload. They open 200 connections at once (within a very
short time range) if you specify a concurrency of 200.
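
For illustration, typical invocations look roughly like this (the URL and
numbers are placeholders, not from your report):

  ab -n 10000 -c 200 http://127.0.0.1:8080/
  wrk -t 4 -c 200 -d 30s http://127.0.0.1:8080/

With -c 200 both tools keep 200 requests in flight the whole run: as soon
as one response comes back the next request goes out, with no pacing.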

I think a constant-rate load testing tool (e.g. vegeta, wrk2) may help you.
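
As a rough sketch, a constant-rate run could look like this (again, the
URL, rate and duration are made-up values):

  echo "GET http://127.0.0.1:8080/" | vegeta attack -rate=100 -duration=30s | vegeta report
  wrk -t 2 -c 100 -d 30s -R 100 --latency http://127.0.0.1:8080/   # wrk2 build

These tools fix the request rate instead of the number of in-flight
requests, which is usually closer to how production traffic arrives.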

And if you want to emulate production requests, you must use other load
simulation tools (e.g. Locust).
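
For example, a minimal Locust file could look roughly like this (the path
and wait times below are placeholders, not your real traffic profile):

    # locustfile.py -- minimal sketch with a made-up endpoint and think time
    from locust import HttpUser, task, between

    class ProductionLikeUser(HttpUser):
        # each simulated user pauses 0.5-2s between requests
        wait_time = between(0.5, 2)

        @task
        def index(self):
            self.client.get("/")

Run it with something like "locust -f locustfile.py --host http://127.0.0.1:8080"
and tune the user count and wait_time until the accept() pattern looks like
your production straces.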
