On Nov 11, 2015 9:32 AM, "jahlborn" <jahlb...@gmail.com> wrote:
>
> First, thank you so much for your detailed answers!  I have a few more
> questions inline.
>
> > > * NetworkConnector
> > > ** "dynamicOnly" - I've seen a couple of places mention enabling this,
> > >    and some indication that it helps with scaling in a network of
> > >    brokers (e.g. [3]).  The description in [1] also makes it sound like
> > >    something I would want to enable.  However, the value defaults to
> > >    false, which seems to indicate that there is a downside to enabling
> > >    it.  Why wouldn't I want to enable this?
> > >
> >
> > One major difference here is that with this setting enabled, messages
> > will stay on the broker to which they were first produced until the
> > consumer reconnects, while with it disabled (the default) they will be
> > forwarded toward the broker to which the durable subscriber was last
> > connected (or at least, the last one in the route to the consumer, if
> > the broker to which the consumer was actually connected has gone down
> > since then).  There are at least three disadvantages to enabling it: 1)
> > if the producer connects to an embedded broker, then those messages go
> > offline when the producer goes offline and aren't available when the
> > consumer reconnects, 2) it only takes filling one broker's store before
> > Producer Flow Control throttles the producer (whereas with the default
> > setting you have to fill every broker along the last route to the
> > consumer before PFC kicks in), and 3) if you have a high-latency network
> > link in the route from producer to consumer, you delay traversing it
> > until the consumer reconnects, which means the consumer may experience
> > more latency than it otherwise would.  So as with so many of these
> > settings, the best configuration for you will depend on your situation.
>
> I'm a little confused by 3).  Is that the behavior if this feature is
> enabled (dynamicOnly: true) or disabled (dynamicOnly: false)?

All three describe the behavior when dynamicOnly: true, and are the result
of the producer's broker not forwarding messages until there is a connected
consumer (as opposed to forwarding messages in the direction that consumer
was last seen).
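
For reference, here's a rough, untested sketch of where that flag lives
when the bridge is configured programmatically on an embedded broker (the
broker names and URIs below are made up):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class DynamicOnlyExample {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("brokerA");             // hypothetical name
            broker.addConnector("tcp://0.0.0.0:61616");  // client/peer transport

            // Static bridge to the other brokers in the network (hypothetical hosts).
            NetworkConnector nc = broker.addNetworkConnector(
                    "static:(tcp://brokerB:61616,tcp://brokerC:61616)");

            // false (the default): the bridge forwards for networked durable
            // subscriptions as soon as it starts; true: it waits for an active
            // consumer, so messages sit on this broker until the subscriber
            // reconnects.
            nc.setDynamicOnly(false);

            broker.start();
        }
    }

The same dynamicOnly attribute goes on the <networkConnector> element if
you configure the bridge in activemq.xml instead.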

> > Also, default values are often the default because that's what they
> > were when they were first introduced (to avoid breaking legacy
> > configurations), not necessarily because that's the setting that's
> > recommended for all users.  Default values do get changed when the old
> > value is clearly not appropriate and the benefits of a change outweigh
> > the inconvenience to legacy users, but when there's not a clear
> > preference they usually get left alone, which is a little confusing to
> > new users.
>
> Yep, definitely understand that, which is one of the complications: you
> don't always know whether a default is the default because it's the best
> option or because it's a backwards-compatibility choice.
>
> > > ** "networkTTL", "messageTTL", "consumerTTL" - until recently, we kept
> > > these
> > >    at the defaults (1).  However, we recently realized that we can end
> > up
> > > with
> > >    stuck messages with these settings.  I've seen a couple of places
> > which
> > >    recommend setting "networkTTL" to the number of brokers in the
> > network
> > >    (e.g. [2]), or at least something > 1.  However, the recommendation
> > for
> > >    "consumerTTL" on [1] is that this value should be 1 in a mesh
network
> > > (and
> > >    setting the "networkTTL" will set the "consumerTTL" as well).
> > >    Additionally, [2] seems to imply that enabling
> > >    "suppressDuplicateQueueSubscriptions" acts like "networkTTL" is 1
for
> > > proxy
> > >    messages (unsure what this means?).  We ended up setting only the
> > >    "messageTTL" and this seemed to solve our immediate problem.
Unsure
> > if
> > > it
> > >    will cause other problems...?
> > >
> >
> > In a mesh (all brokers connected to each other), you only need a
> > consumerTTL of 1, because you can get the advisory message to every
> > other broker in one hop.  But in that same mesh, there's no guarantee
> > that a single hop will get you to the broker where the consumer is,
> > because the consumer might jump to another node in the mesh before
> > consuming the message, which would then require another forward.  So in
> > a mesh with decreaseNetworkConsumerPriority you may need a
> > messageTTL/networkTTL of 1 + [MAX # FORWARDS] or greater, where
> > [MAX # FORWARDS] is the worst-case number of jumps a consumer might make
> > between the time a message is produced and the time it is consumed.  In
> > your case you've chosen 9999, so that allows 9998 consumer jumps, which
> > should be more than adequate.
>
> Any idea why the "network of brokers" documentation [1] has the
> recommendation for "consumerTTL" of "keep to 1 in a mesh"?

With a pure mesh all brokers are directly connected, so an advisory message
only has to travel one hop from the broker producing it (the one to which
the consumer is connected) to reach every other broker, so you don't need
more than 1.  I'm not aware of any disadvantages to using a value larger
than 1 in a pure mesh, so I believe the recommendation is worded a little
too strongly, but maybe there really is a problem you avoid that way and I
just don't know about it.
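
If it helps to see it all in one place, here's a rough sketch (untested,
with hypothetical broker names and illustrative values) of those TTL
settings on the same kind of programmatic network connector for a full
mesh:

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class MeshTtlExample {
        public static void main(String[] args) throws Exception {
            BrokerService broker = new BrokerService();
            broker.setBrokerName("brokerA");             // hypothetical name
            broker.addConnector("tcp://0.0.0.0:61616");

            NetworkConnector nc = broker.addNetworkConnector(
                    "static:(tcp://brokerB:61616,tcp://brokerC:61616)");

            nc.setDecreaseNetworkConsumerPriority(true); // prefer local consumers
            nc.setConsumerTTL(1);    // advisories need only one hop in a full mesh
            nc.setMessageTTL(9999);  // let messages follow a consumer that hops brokers
            // Note: setNetworkTTL(n) sets BOTH messageTTL and consumerTTL to n,
            // so set the two individually if you want the values to differ.

            broker.start();
        }
    }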

> > > ** "prefetchSize" - defaults to 1000, but I see recommendations that
it
> > > should
> > >    be 1 for network connectors (e.g. [3]).  I think that in our
initial
> > >    testing i saw bad things happen with this setting and got more even
> > load
> > >    balancing by lowering it to 1.
> > >
> >
> > As I mentioned above, setting a small prefetch size is important for
> > load balancing; if you allow a huge backlog of messages to buffer up for
> > one consumer, the other consumers can't work on them even if they're
> > sitting around idle.  I'd pick a value like 1, 3, 5, 10, etc.; something
> > small relative to the number of messages you're likely to have pending
> > at any one time.  (But note that the prefetch buffer can improve
> > performance if you have messages that take a variable amount of time to
> > process and sometimes the amount of time to process them is lower than
> > the amount of time to transfer them between your brokers or from the
> > broker to the consumer, such as with a high-latency network link.  This
> > doesn't sound like your situation, but it's yet another case where the
> > right setting depends on your situation.)
>
> When you say "smaller prefetch buffer sizes", I assume you mean for _all_
> consumers, not just the network connectors?

I was primarily talking about the clients here; brokers generally can
either process a message immediately (dispatch it, persist it, etc.) or
will kick in Producer Flow Control (if you have it enabled), so usually
there's not much point fiddling with their prefetch buffer size.
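
To make that concrete on the client side, the prefetch is a setting on the
connection factory (or per destination); a minimal sketch, with a
placeholder broker URL:

    import javax.jms.Connection;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class SmallPrefetchExample {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; the interesting part is the prefetch policy.
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("failover:(tcp://brokerA:61616)");
            // Small prefetch so one consumer doesn't buffer a big backlog
            // while the others sit idle.
            factory.getPrefetchPolicy().setQueuePrefetch(1);

            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Alternative: override per destination instead of on the factory:
            //   session.createQueue("MY.QUEUE?consumer.prefetchSize=1");
        }
    }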

> Our product runs the ActiveMQ brokers embedded within our application (so
> consumers are in the same JVM as the brokers).  In this case, does the
> consumer prefetch size make much of a difference in terms of raw speed of
> consumption (ignoring the load balancing issue for a moment)?

No, I would expect to see no performance gain from prefetching in that
setup.  I'd expect to see gains in the complete opposite situation: a
consumer connected over a satellite link or an intercontinental fiber
link.  But in your scenario, the transfer is essentially free, so there's
nothing to gain.

> > > [1] http://activemq.apache.org/networks-of-brokers.html
> > > [2] https://issues.jboss.org/browse/MB-471
> > > [3] http://www.javabeat.net/deploying-activemq-for-large-numbers-of-concurrent-applications/