James,

You are right about the HA comment I originally made. I was referring to
the fact that I'm not looking for persistent messages. But I am concerned
about what happens when a broker fails, and about being able to recover
from that quickly, even if that means losing messages.
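
For what it's worth, the quick-recovery side of this is what I was hoping
the client's failover transport would cover. A minimal sketch, with
hypothetical broker hostnames:

    // The failover: transport reconnects the client to a surviving broker
    // automatically when one fails; in-flight non-persistent messages can
    // still be lost, which is acceptable here.
    ConnectionFactory factory = new ActiveMQConnectionFactory(
        "failover:(tcp://broker1:61616,tcp://broker2:61616)");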

I understand the point about the need for more information. Here is what I
can say so far.

There will be around 30 - 50 processes producing messages, each one
producing the same type of messages. Think of it as a farm of processes
doing the same basic work, spread out to handle processing load,
scalability, fault tolerance, etc. These processes know nothing about what
happens to these messages once they send them.

On the consuming side, there are multiple farms of processes, each farm
doing a particular type of processing on some subset of the messages. There
is nothing in the message, other than a unique id, that can be used to
determine which messages should be sent to each farm. Each of these farms
would have anywhere from 2 - 30 processes, depending upon the functionality
and load requirements. 

For the AMQ topologies, I was thinking of a Virtual Topic that all of the
publishers send messages to, with queues behind it that feed some sort of
routing layer, which in turn sends each message to the queue serving the
relevant consumer farm. Roughly, I picture the router looking something
like the sketch below.
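
This is only a rough sketch; the destination names (VirtualTopic.Events,
Consumer.Router.VirtualTopic.Events, the FARM.* queues), the "id" message
property, and the broker URL are all placeholders, and the real id-to-farm
table would be loaded from the routing list (potentially a million-plus
entries):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class RouterConsumer {
        public static void main(String[] args) throws JMSException {
            // Id-to-farm routing table; in the real system this would be
            // loaded from the routing list.
            Map<String, String> idToFarmQueue = new ConcurrentHashMap<>();
            idToFarmQueue.put("id-42", "FARM.SPECIAL");

            ConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616)");
            Connection connection = factory.createConnection();
            connection.start();
            Session session =
                connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Each consumer farm (including this router) gets its own
            // queue behind the virtual topic the producers publish to.
            Queue in =
                session.createQueue("Consumer.Router.VirtualTopic.Events");
            MessageConsumer consumer = session.createConsumer(in);

            // Anonymous producer, so each message can be forwarded to
            // whichever farm queue its id maps to; nothing hits the disk.
            MessageProducer producer = session.createProducer(null);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

            while (true) {
                Message msg = consumer.receive();
                String farmQueue =
                    idToFarmQueue.get(msg.getStringProperty("id"));
                if (farmQueue != null) {
                    producer.send(session.createQueue(farmQueue), msg);
                }
            }
        }
    }

Multiple instances of this router could sit on the same queue to spread the
load, since the queue behind the virtual topic load-balances across its
consumers.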

Marc


James.Strachan wrote:
> 
> BTW I thought you said previously that you were not that concerned
> with persistence / HA?
> 
> Before we can really know the right ActiveMQ architecture to solve
> your problem we need a few more details such as how many producers,
> consumers, do you want queues or topics and exactly what the traffic
> shape is (out of the 200K messages per second, can we partition them
> etc).
> 
> If you have a single consumer processing 10 million messages/minute on
> a single destination (say 200K/second) you'll struggle a bit on a
> blade/PC to reach that throughput with a single broker - 30Kish is the
> more usual rate for a single consumer on a topic; similarly if all
> this is on a single destination then a network of brokers won't help
> in the slightest (in fact it's a really bad idea :).
> 
> On 05/12/2007, Marc Zampetti <[EMAIL PROTECTED]> wrote:
>>
>> Joe,
>>
>> Thanks for the suggestion. But in this case, it won't work since the
>> routing criteria are much too fine-grained. Basically, each of the 6
>> million messages will have a unique id. Then some subset, say 1 million
>> messages, will need to be routed to a process for special processing.
>> Basically, what I have is a list of those 1 million ids that I need to
>> look for. I'm thinking I will need to write a simple consumer that acts
>> as the router, reading in the routing list and consuming the messages,
>> determining if they match, and then publishing them on the final
>> destination. My concern is scalability and HA in that situation. I think
>> using an embedded broker might help, doing something similar to what
>> Camel is trying to do.
>>
>> One of my concerns here is the likelihood that I might overwhelm a
>> broker. I only have a small number of publishers, so I'm concerned with
>> too many publishers ending up on the same broker. I'm also concerned
>> with the sheer number of messages that would have to be forwarded to the
>> other brokers in the network in this situation.
>>
>> Marc
>>
>>
>> ttmdev wrote:
>> >
>> > Re the second part of your post. If Camel is not an option, then what
>> > about a composite queue in combination with selectors? For example, in
>> > the snippet below, Q.FOO gets a subset of the message stream being
>> > sent to Q.BLAST, while Q.BAR gets the entire stream.
>> >
>> > <compositeQueue name="Q.BLAST">
>> >   <forwardTo>
>> >     <filteredDestination selector="color='blue'" queue="Q.FOO" />
>> >     <queue physicalName="Q.BAR" />
>> >   </forwardTo>
>> > </compositeQueue>
>> >
>> > Hope this helps - Joe
>> >
>> >
>> > Marc Zampetti wrote:
>> >>
>> >> All,
>> >>
>> >> I'm considering ActiveMQ for an application with very high expected
>> >> message rates, at 6 - 10 million messages per minute. All of these
>> >> messages are fairly small, on the order of 100 bytes or less, but
>> >> they will be very regular, with a large burst of additional messages
>> >> (around 20 million extra) once an hour. Obviously, I'm looking at a
>> >> fairly large Network of Brokers. I don't expect, nor do I need,
>> >> persistent messages on disk, nor do I want guaranteed delivery,
>> >> though it would be nice. :-) Does anyone have any idea if this is
>> >> even possible with AMQ?
>> >>
>> >> There are a few portions of the application that need to receive a
>> >> subset of the message stream, and other portions that will simply
>> >> process the entire stream. For those components that need to get a
>> >> subset, I need to have some way to route the appropriate messages to
>> >> the components. While still only a subset, this could still be 1
>> >> million+ messages per minute, and I'm looking for an efficient way
>> >> to decide when to route a message or not. Each of these 6 million
>> >> messages has a unique identifier, so I would need an id-to-queue
>> >> mapping table in order to perform the routing. At 1 million+, my
>> >> concern is that the table itself can get pretty large, and that some
>> >> of the more "normal" routing things that Camel might help with won't
>> >> be that helpful.
>> >>
>> >> Anyone have any ideas or best practices?
>> >>
>> >> Marc
>> >>
>> >
>> >
>>
> 
> 
> -- 
> James
> -------
> http://macstrac.blogspot.com/
> 
> Open Source Integration
> http://open.iona.com
> 
> 
