On 25 March 2013 23:05, Fraser Adams <[email protected]> wrote:
> On 25/03/13 20:38, Rob Godfrey wrote:
>
>> We should definitely try to address the cases where queue arguments
>> simply have different names. Things like ring queues are a little odd.
>> In the C++ broker they make a lot of sense because of the design of the
>> persistent store in use.
>>
> I'm not sure that it's just because of the design of the persistent store
> where ring queues make sense. I'm responsible for a large federated
> topology of C++ brokers and we use ring queues pretty much everywhere
> (without persistence).
>

Sorry - I should have been clearer... The implementation of persistence in
the C++ broker pretty much forced them to define ring queues. That's not to
say that there aren't non-persistent use cases for them. The subtleties are
about what happens when consumption from the queue is not strictly FIFO.
The C++ persistent queue implementation literally is a ring. Thus if you
have non-FIFO consumption you can get "ring" overwriting even when the
number of unconsumed messages in the queue is less than the ring size.

The Java Broker doesn't have that sort of implementation. The in-memory
queues are just (fancy) linked lists... and the disk storage is just a
transaction log. As such it'd actually be quite hard to mimic exactly the
behaviour of the C++ queue. If, OTOH, the actual requirement is just to
bound the unconsumed messages in a queue, that could certainly be
implemented as a policy.

> We need a bounded buffer architecture, so we, erm, rejected the reject
> policy; we are running at very high throughput, so we want to avoid the
> performance hit of persistence; and our producers are real-time
> components, so we certainly don't want to use flow control to "push back"
> on the producers.
>
> It might be a fairly "specialised" set of requirements, but I've
> certainly found ring queues to be really useful in our system and would
> have balked (ha ha) had I not had the option of using them.
>
>> Because the design of the store in the Java Broker is so different it
>> makes less sense to implement them in the same way... but having some
>> sort of size-bounded queue would make sense... and we should aim to
>> provide a single name for this so they can be created consistently
>> across the two implementations.
>>
> The impression that I got about the Java broker queues is that it wasn't
> possible to specify a queue size per se, but rather to implement
> "bounded" behaviour by the use of flow control, so that when the number
> of messages/bytes hits a given limit the producer gets throttled. The
> C++ broker has added flow control from, I think, 0.12 - though I think
> the parameters for that are in messages, whereas the Java broker limits
> are in bytes (aaaarrrrggghhh).
>

Yeah - this is driven by the fact that the user requirement we were meeting
was about not running out of memory, so bytes makes more sense as the value
to limit. Adding a count limit would not be too hard. As above, also
implementing some sort of discard policy based on bounding the number of
unconsumed messages in a queue is not that hard either. It just wouldn't be
quite the same as the C++ ring queue.

> None of this is really insurmountable, but getting the little things
> right will, I think, ultimately make a big difference.
>
> From a user perspective some of this comes across rather as "black
> magic"; the Confluence wiki page that you put together describing some of
> the differences isn't exactly easy to find (I found it by accident and
> bookmarked it :-)).
>

Yeah - we *really* need to sort our wiki and docs.
Having people such as yourself who can point this out to us certainly
helps :-)

> There's a bit of a common theme in my rambles, isn't there :-)
>

Understood... Obviously any time you can spare to help us try to make
things better is much appreciated :-)

Cheers,
Rob

> Cheers,
>
> Frase
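
To make the contrast discussed above a little more concrete, here is a
minimal, purely illustrative Java sketch. It is not the code of either
broker, and every class and method name in it is invented for the example.
It only shows the basic difference in how the bound is enforced - a literal
fixed-size ring where an enqueue overwrites whatever slot the write index
has wrapped round to, versus a list-based queue bounded by a discard-oldest
policy of the sort suggested for the Java Broker - and it does not model
the non-FIFO-consumption subtlety mentioned for the C++ ring.

import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch only; not the implementation of either broker.
public class BoundedQueueSketch {

    /** Fixed-size circular buffer: the write index wraps and overwrites. */
    static final class Ring {
        private final String[] slots;
        private int writeIndex = 0;

        Ring(int capacity) { this.slots = new String[capacity]; }

        /** Always succeeds; whatever occupied the target slot is lost. */
        void enqueue(String message) {
            slots[writeIndex] = message;
            writeIndex = (writeIndex + 1) % slots.length;
        }

        String slot(int i) { return slots[i]; }
    }

    /** List-style queue with a "discard oldest" bounding policy. */
    static final class DiscardOldestQueue {
        private final Deque<String> entries = new ArrayDeque<>();
        private final int maxCount;

        DiscardOldestQueue(int maxCount) { this.maxCount = maxCount; }

        void enqueue(String message) {
            entries.addLast(message);
            while (entries.size() > maxCount) {
                entries.removeFirst();  // evict the oldest unconsumed entry
            }
        }

        String dequeue() { return entries.pollFirst(); }

        int depth() { return entries.size(); }
    }

    public static void main(String[] args) {
        Ring ring = new Ring(3);
        for (int i = 1; i <= 5; i++) {
            ring.enqueue("m" + i);      // m4 and m5 overwrite m1 and m2
        }
        System.out.println("ring slot 0 now holds: " + ring.slot(0)); // m4

        DiscardOldestQueue q = new DiscardOldestQueue(3);
        for (int i = 1; i <= 5; i++) {
            q.enqueue("m" + i);         // m1 and m2 are dropped from the head
        }
        System.out.println("bounded queue depth: " + q.depth());     // 3
        System.out.println("oldest remaining: " + q.dequeue());      // m3
    }
}

The difference in where the bound bites is the point: the ring's overwrite
is positional, while the discard-oldest policy only ever evicts from the
head of the list, so it simply caps the count of unconsumed messages.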
