Adding threads should allow connection setup (socket creation, accept, and
initial malloc of data structures) to run in parallel with connection
processing (socket read/write, TLS overhead, AMQP encode/decode, your
application on_message callback).

The epoll proactor scales better with additional threads than the libuv
implementation.  If you are seeing no benefit from extra threads, the libuv
proactor is worth trying.

On Fri, Jun 3, 2022 at 2:38 AM Fredrik Hallenberg <megahal...@gmail.com>
wrote:

> Yes, the fd limit is already raised a lot. Increasing the backlog has
> improved performance and more file descriptors are in use, but I still feel
> connection times are too long. Is there anything else to tune in the
> proactor? Should I try the libuv proactor instead of epoll?
> I have tried multiple threads in the past but did not notice any
> difference; perhaps it is worth trying again with the current backlog
> setting?
>
>
> On Thu, Jun 2, 2022 at 5:11 PM Cliff Jansen <cliffjan...@gmail.com> wrote:
>
> > Please try raising your fd limit too. Perhaps doubling it or more.
> >
> > I would also try running your proton::container with more threads, say 4
> > and then 16, and see if that makes a difference.  It shouldn't if your
> > processing within Proton is as minimal as you describe.  However, if
> > there is lengthy lock contention as you pass work out and then back into
> > Proton, that may introduce delays.
> >
> > On Thu, Jun 2, 2022 at 7:43 AM Fredrik Hallenberg <megahal...@gmail.com>
> > wrote:
> >
> > > I have done some experiments raising the backlog value, and it is
> > > possibly a bit better; I have to test it more. Even if it works I would
> > > of course like to avoid having to rely on a patched qpid. Also, maybe
> > > some internal queues or similar should be modified to handle this?
> > >
> > > I have not seen transport errors in the clients, but this may be
> > > because reconnection is enabled. I am unsure about what the reconnection
> > > feature actually does; I have never seen an on_connection_open where
> > > connection.reconnection() returns true.
> > > Perhaps it is only useful when a connection is established and then
> > > lost?
> > >
> > > On Thu, Jun 2, 2022 at 1:44 PM Ted Ross <tr...@redhat.com> wrote:
> > >
> > > > On Thu, Jun 2, 2022 at 9:06 AM Fredrik Hallenberg <megahal...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi, my application tends to get a lot of short-lived incoming
> > > > > connections. Messages are very short sync messages that usually can
> > > > > be responded to with very little processing on the server side. It
> > > > > works fine, but I feel that the performance is a bit lacking when
> > > > > many connections happen at the same time, and I would like advice on
> > > > > how to improve it. I am using qpid proton c++ 0.37 with the epoll
> > > > > proactor.
> > > > > My current design uses a single thread for the listener, but it will
> > > > > immediately push incoming messages in on_message to a queue that is
> > > > > handled elsewhere. I can see that clients have to wait for a long
> > > > > time (up to a minute) until they get a response, but I don't believe
> > > > > there is an issue on my end, as I will quickly deal with any client
> > > > > messages as soon as they show up. Rather, the issue seems to be that
> > > > > messages are not pushed into the queue quickly enough.
> > > > > I have noticed that pn_proactor_listen is hardcoded to use a backlog
> > > > > of 16 in the default container implementation. This seems low, but I
> > > > > am not sure if it is correct to change it.
> > > > > Any advice appreciated. My goal is that a client should never need
> > > > > to wait more than a few seconds for a response even under reasonably
> > > > > high load, maybe a few hundred connections per second.
> > > > >
> > > >
> > > > I would try increasing the backlog.  16 seems low to me as well.  Do
> > > > you know if any of your clients are re-trying the connection setup
> > > > because they overran the server's backlog?
> > > >
> > > > -Ted
> > > >
> > >
> >
>
