On Fri, Oct 30, 2015 at 6:29 PM, Kevin Burton <bur...@spinn3r.com> wrote:

> Sorry for the delay in replying; I was dealing with a family issue that I
> needed to prioritize...
>
> On Wed, Oct 21, 2015 at 6:52 AM, Tim Bain <tb...@alumni.duke.edu> wrote:
>
> > Right off the top, can't you use INDIVIDUAL_ACK here, rather than
> > committing transactions?  That seems like the ideal mode to let you
> choose
> > which messages to ack without having to ack all the ones up to a certain
> > point.
> >
> >
> I thought about that.  We had moved to transacted sessions to avoid
> over-indexing, because our tasks create more messages and this way I can
> commit them all as one unit.
>
> But maybe I can just live with the "at least once" semantics: if the
> messages aren't combined in a transaction, a message may simply get
> executed more than once.  But there might be a failure scenario where we
> execute the second message hundreds of times, which a transaction would
> have avoided.
>

I think there's a limit on how many redelivery attempts you're willing to
make before the message is sent to the DLQ, which I think would cover most
scenarios where that would happen in the wild.  (You could always construct
an arbitrarily bad failure case, but the odds of actually seeing it in the
real world get vanishingly small as it gets uglier.)
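
For reference, something like the following is roughly what I had in mind.
It's only a sketch (the broker URL, queue name, redelivery cap of 6, and the
process() handler are placeholders): ActiveMQ's INDIVIDUAL_ACKNOWLEDGE mode
plus a bounded RedeliveryPolicy, so a poison message ends up on the DLQ
instead of being retried forever.

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckExample {

    public static void main(String[] args) throws Exception {
        // Broker URL and queue name are placeholders.
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");

        // Cap redelivery attempts; once exceeded, the message is routed
        // to the DLQ instead of being redelivered forever.
        factory.getRedeliveryPolicy().setMaximumRedeliveries(6);

        Connection connection = factory.createConnection();
        connection.start();

        // Non-transacted session using ActiveMQ's per-message ack mode.
        Session session =
            connection.createSession(false, ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer =
            session.createConsumer(session.createQueue("tasks"));

        Message message = consumer.receive();
        try {
            process(message);        // hypothetical task handler
            message.acknowledge();   // acks only this one message
        } catch (Exception e) {
            session.recover();       // unacked message becomes eligible for redelivery
        } finally {
            connection.close();
        }
    }

    private static void process(Message message) {
        // placeholder for the actual task execution
    }
}

Unlike CLIENT_ACKNOWLEDGE, where acknowledge() acks every message delivered on
the session up to that point, INDIVIDUAL_ACKNOWLEDGE acks just the one message
you call it on.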


> > Also, I'm curious about how a 30-second message with a prefetch size of 1
> > results in a 5-minute latency; why isn't that 2 * 30 seconds = 1 minute?
> >
> >
> It's because I have one connection per thread per server.
>
> So if we have 10 servers, each thread has ten sessions.  And if prefetch is
> 1, that means I prefetch 10 total messages.  If each message takes 30
> seconds to execute, that thread will take a while to handle all ten.
> This leads to significant latency.
>

If I'm understanding correctly, you've got a single client consuming one
message at a time from N brokers that are presumably not networked (otherwise
why would you connect to more than one of them)?  Why?  (Among other things,
why not just network the brokers and simplify your use case?)
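
To illustrate what I mean by networking them, here's a bare-bones sketch
(broker names, hosts, and ports are placeholders, and you'd normally do this
in activemq.xml rather than in Java) of a broker that bridges to a peer:

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.network.NetworkConnector;

public class NetworkedBroker {

    public static void main(String[] args) throws Exception {
        // Broker name, bind address, and peer host are placeholders.
        BrokerService broker = new BrokerService();
        broker.setBrokerName("broker-a");
        broker.addConnector("tcp://0.0.0.0:61616");

        // Bridge to a peer broker; duplex means the one connection
        // forwards messages in both directions.
        NetworkConnector bridge =
            broker.addNetworkConnector("static:(tcp://broker-b:61616)");
        bridge.setDuplex(true);

        broker.start();
    }
}

With the bridge in place, messages get forwarded on demand to whichever broker
your consumer is attached to, so you wouldn't need a session per server just
to drain them all.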


> I pushed some code last week to instrument this, and our average latency
> right now is 3-5 minutes between prefetching a message and servicing it.
>
> Fortunately there's a timestamp added on prefetch so I can just take the
> current time that I am executing the message/task and then subtract the
> prefetch time to compute the latency.
>
> Kevin
>
> --
>
> We’re hiring if you know of any awesome Java Devops or Linux Operations
> Engineers!
>
> Founder/CEO Spinn3r.com
> Location: *San Francisco, CA*
> blog: http://burtonator.wordpress.com
> … or check out my Google+ profile
> <https://plus.google.com/102718274791889610666/posts>
>
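
P.S. For what it's worth, the latency computation you describe really is just
a subtraction; a trivial sketch (the property name is hypothetical, since it
depends on how your instrumentation stamps the message at prefetch time):

import javax.jms.JMSException;
import javax.jms.Message;

public class PrefetchLatency {

    // Property name is hypothetical; it depends on how the instrumentation
    // stamps the message when it is prefetched.
    private static final String PREFETCH_TS = "prefetchTimestamp";

    /** Milliseconds between the prefetch and the moment the task starts running. */
    public static long latencyMillis(Message message) throws JMSException {
        long prefetchTime = message.getLongProperty(PREFETCH_TS);
        return System.currentTimeMillis() - prefetchTime;
    }
}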
