Hi Steve,
Thanks for the response.
Sorry, yes - I remembered after I posted that there is a thread pool of
(number of cores) + 1 threads. What I meant (in my own mind at least, if
not in what I actually wrote :-D ) was that the threading is *related* to
connections (as opposed to per connection, D'oh!!). That is to say, if
you have one connection with multiple sessions it will be bound to a
single thread in the pool, so if you want concurrency on qpidd you need
to establish multiple connections (which isn't necessarily something
you'd expect if you were used to JMS, where the session is the unit of
concurrency on the client side).
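To make that concrete, here's a rough JMS-style sketch of what I mean
(illustrative only - the ConnectionFactory and queue name are assumed to
come from the usual JNDI wiring):

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;

    // Sketch: qpidd binds each connection to a single thread in its
    // pool, so for broker-side parallelism start one connection per
    // worker rather than many sessions on one connection.
    public class WorkerConnections {
        static void startWorkers(ConnectionFactory factory, String queue, int n)
                throws JMSException {
            for (int i = 0; i < n; i++) {
                Connection c = factory.createConnection();  // one connection per worker
                Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer mc = s.createConsumer(s.createQueue(queue));
                mc.setMessageListener(m -> { /* handle message */ });
                c.start();  // begin delivery on this connection
            }
        }
    }

Whereas in JMS you'd typically create one connection and n sessions and
expect concurrent delivery across the sessions.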
On my thread saturation point, what I really meant was that as the number
of cores goes up the lock contention problem tends to get
disproportionately worse. Say, 24 connections running flat out on a
four-core box is one thing, but I'm wondering about the same 24
connections on a 24-core box, and at what point lock contention becomes
the bottleneck?
As I say, I think ActiveMQ Apollo started out because of the scaling
limitations of the original ActiveMQ and evolved to a reactor-based
threading model built on hawt-dispatch (a Java implementation of Grand
Central Dispatch, https://en.wikipedia.org/wiki/Grand_Central_Dispatch) -
in other words a task-parallel threading model rather than a shared-state
threading model. Given that the Proton reactor stuff seems to be, well, a
reaction :-[ to this problem space, it got me curious about the Qpid
brokers.
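For anyone unfamiliar, the task-parallel idea looks roughly like this
(hawt-dispatch, API details from memory so treat as approximate):

    import org.fusesource.hawtdispatch.Dispatch;
    import org.fusesource.hawtdispatch.DispatchQueue;

    // Sketch: state is owned by a serial dispatch queue; tasks on that
    // queue run one at a time on a small fixed thread pool, so the
    // state needs no locks - contrast with many threads sharing state
    // behind mutexes.
    public class TaskParallelSketch {
        public static void main(String[] args) {
            DispatchQueue queue = Dispatch.createQueue("queue-state");
            Runnable task = () -> {
                // mutate the state owned by this queue here; no other
                // task on this serial queue runs concurrently with us
            };
            queue.execute(task);
        }
    }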
F.
On 12/07/15 00:28, Steve Huston wrote:
Hi Frase,
You’ve done a good job of guessing the limits - I’ll let Gordon educate us on
the rest.
The threading model, though - definitely not per connection. There’s a pool of
threads that handle I/O - by default there are (number of cores) + 1, but
you can change that via a command line option.
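For example (option name from memory - check qpidd --help):

    qpidd --worker-threads 8    # override the default of cores + 1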
The I/O model is proactive (as opposed to reactive), and those threads
service whatever handles get I/O completions. Yes, at some point it gets
saturated, but saturation is driven by the amount of I/O going on
concurrently, not by the sheer number of connections.
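If a picture helps, the proactor shape looks something like this -
illustrative Java NIO.2 only, not qpidd’s actual internals (qpidd is C++):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousServerSocketChannel;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;

    // Sketch: operations are started asynchronously and a pooled thread
    // runs the completion handler when each one finishes, so threads are
    // consumed by completed I/O, not by idle connections.
    public class ProactorSketch {
        public static void main(String[] args) throws Exception {
            final AsynchronousServerSocketChannel server =
                    AsynchronousServerSocketChannel.open()
                            .bind(new InetSocketAddress(5672));
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                public void completed(AsynchronousSocketChannel channel, Void a) {
                    server.accept(null, this);  // queue up the next accept
                    ByteBuffer buf = ByteBuffer.allocate(4096);
                    channel.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                        public void completed(Integer bytesRead, ByteBuffer b) {
                            // a pool thread lands here only when data has arrived
                        }
                        public void failed(Throwable t, ByteBuffer b) { /* handle error */ }
                    });
                }
                public void failed(Throwable t, Void a) { /* handle error */ }
            });
            Thread.sleep(Long.MAX_VALUE);  // keep the demo process alive
        }
    }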
So go for it - let us know how high you (well, the number of clients) can get
:-)
-Steve
On Jul 11, 2015, at 12:52 PM, Fraser Adams <fraser.ad...@blueyonder.co.uk>
wrote:
Hey all,
Suppose I have a queue on qpidd. I know that I can have multiple consumer clients
subscribing to that queue node - I've used that several times as a means of
scaling out consumers, so if I have "n" consumers each one receives
roughly 1/n of the messages published onto the queue.
I've never really pushed the limits of this and have tended to have only
10-20 consumers, but it got me wondering:
a) What's the maximum theoretical limit for the number of consumers on such a
shared resource?
b) What's the maximum practical limit? I'm not sure how the qpidd threading
model works (I *think* it's per connection), so I'm wondering at what point
we get into a position where the "thundering herd" lock contention problem
kicks in.
I know I should probably just stand up a test and give it a whirl, but figured
I'd ask first :-)
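(If/when I do, I'd guess something along these lines would be the quick
way to try it - qpid-send/qpid-receive option names from memory, so
check --help:

    # start 24 competing consumers on the same queue, then publish
    # and watch how the messages get split between them
    for i in $(seq 1 24); do
        qpid-receive -b localhost:5672 -a my-queue -f &
    done
    qpid-send -b localhost:5672 -a my-queue -m 100000
)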
I *think* that the answer to a) is that the theoretical limit is the maximum
number of link handles, which is a 32-bit unsigned int, so 4294967295
(i.e. 2^32 - 1) - though I suspect something else would get a bit sad well
before that limit is reached.
On point b), has anyone (most likely Gordon) explored the scaling limits of
qpidd? Obviously when Qpid started out servers tended to have somewhere between
one and four cores, but these days Moore's law tends to manifest as an
increasing number of cores, so I'm curious how qpidd scales. I'm
thinking of things like ActiveMQ Apollo, which as I understand it uses
hawt-dispatch (which I believe is the Java equivalent of Apple's Grand Central
Dispatch pattern), and I know that with Proton there has been a lot of work
migrating to a more reactive pattern, which makes me wonder whether that's an
acknowledgement of potential scaling limits in qpidd at present? If there are
known limits, is there a roadmap to change the threading model? I'm mostly just
curious at the moment, but I guess it's a question that's going to crop up more
and more.
Cheers,
Frase