Given that Marc's producers will be sending non-persistent messages, wouldn't a shared (as opposed to pure) master/slave configuration provide redundancy at the broker level, and do so with no extra overhead? My thinking was that if the master were to fail, its clients could fail over to the slave and thus not burden the other master brokers with the extra message load.
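For the client side, here's a rough sketch of what I have in mind (the hostnames, port, and queue name are just placeholders, not anything from Marc's setup): the failover: transport reconnects a client to the slave when the master drops, and the producer sends NON_PERSISTENT to match the no-disk requirement.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverProducer {
    public static void main(String[] args) throws Exception {
        // The failover: transport lists the master first; randomize=false
        // keeps clients on the master until it actually goes away.
        ConnectionFactory factory = new ActiveMQConnectionFactory(
            "failover:(tcp://master-host:61616,tcp://slave-host:61616)?randomize=false");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            session.createProducer(session.createQueue("firehose"));
        // Non-persistent delivery: no disk write per message on the broker.
        producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

        producer.send(session.createTextMessage("payload of ~100 bytes"));
        connection.close();
    }
}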
Thanks,
Joe

Marc Zampetti wrote:
>
> All,
>
> I'm considering ActiveMQ for an application with very high expected
> message rates, on the order of 6 - 10 million messages per minute. All of
> these messages are fairly small, around 100 bytes or less, but they will
> be very regular, with a large burst of additional messages (around 20
> million extra) once an hour. Obviously, I'm looking at a fairly large
> Network of Brokers. I don't expect, nor do I need, persistent messages on
> disk, nor do I want guaranteed delivery, though it would be nice. :-)
> Does anyone have any idea if this is even possible with AMQ?
>
> There are a few portions of the application that need to receive a subset
> of the message stream, and other portions that will simply process the
> entire stream. For the components that need a subset, I need some way to
> route the appropriate messages to them. While still only a subset, this
> could be 1 million+ messages per minute, and I'm looking for an efficient
> way to decide whether or not to route a message. Each of these 6 million
> messages is unique, with a unique identifier, so I would need an
> id-to-queue mapping table in order to perform the routing. At 1 million+
> entries, my concern is that the table itself can get pretty large, and
> that some of the more "normal" routing things that Camel might help with
> won't be that helpful.
>
> Anyone have any ideas or best practices?
>
> Marc
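On the routing question: here's a rough sketch of the id-to-queue lookup as a Camel route, assuming the unique id travels in a message header (the "msgId" header name, queue names, and in-memory map below are my placeholders, not anything from Marc's setup):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.camel.builder.RouteBuilder;

public class IdRoutingRoute extends RouteBuilder {
    // Hypothetical id-to-queue mapping table; at 1M+ entries this map
    // is the sizing concern Marc raises.
    private static final Map<String, String> ID_TO_QUEUE = new ConcurrentHashMap<>();

    @Override
    public void configure() {
        from("activemq:queue:firehose")
            .process(exchange -> {
                String id = exchange.getIn().getHeader("msgId", String.class);
                // Messages with no mapping fall through to a default queue.
                String dest = ID_TO_QUEUE.getOrDefault(id, "activemq:queue:unrouted");
                exchange.getIn().setHeader("targetQueue", dest);
            })
            .recipientList(header("targetQueue"));
    }
}

A plain JMS selector doesn't fit well here, since the decision is a lookup against a million-entry table rather than a match on a handful of header values; that's why the lookup sits in a processor ahead of the recipient list. The map itself is likely to fit in heap at 1M+ short-string entries, but it still has to be populated and kept in sync from somewhere.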