Thanks Robbie!

On Fri, Jan 29, 2010 at 3:12 PM, Robbie Gemmell <[email protected]> wrote:
> It changed in r823149, citing QPID-1440 as an offshoot from QPID-1289
>
> Robbie
>
>> -----Original Message-----
>> From: Rajith Attapattu [mailto:[email protected]]
>> Sent: 29 January 2010 19:57
>> To: [email protected]
>> Subject: Re: all subscribers not equal?
>>
>> Robbie,
>>
>> By any chance do you remember the JIRA/commit related to the change of
>> the prefetch from 5000 to 500?
>> We should, if possible, put this into the release notes. Some folks who
>> are using this in production may rely on it and may see a minor drop in
>> performance, so the release notes should help here.
>>
>> Regards,
>>
>> Rajith
>>
>> On Fri, Jan 29, 2010 at 12:29 PM, Robbie Gemmell
>> <[email protected]> wrote:
>> > Is this possibly related to the prefetch buffer? It may not be the
>> > full cause but could be playing a part. The 0.5 Java client prefetches
>> > 5000 messages IIRC, with that reduced to 500 for 0.6.
>> >
>> > You could try lowering the buffer size following the info on the lower
>> > half of this page:
>> > http://qpid.apache.org/use-priority-queues.html
>> >
>> > Note that reducing the prefetch will lower peak performance, but in
>> > this case it seems you have your own buffers in place, which will
>> > already be restricting individual client throughput to below peak
>> > anyway.
>> >
>> > Robbie
>> >
>> > On 29 Jan 2010, at 15:54, mARK bLOORE <[email protected]> wrote:
>> >
>> >> Thanks for the reply, Rob.
>> >>
>> >> That doesn't suggest a reason for the behaviour I saw. I got a
>> >> similar thing today: one of the subscribers filled its buffer, and so
>> >> stopped taking messages as fast as possible. The message backlog
>> >> soared to tens of thousands, even though the other two subscribers'
>> >> buffers were not full, so that they could have taken the load.
>> >> I increased the first subscriber's buffer, and it quickly absorbed
>> >> the backlog, while the others continued to take messages only as the
>> >> publisher added them.
>> >>
>> >> This is a little different from what I saw yesterday, but it again
>> >> suggests that messages get dedicated to a single subscriber upon
>> >> publication, rather than being available to all. One thing the two
>> >> cases have in common is that the first subscriber is on the same box
>> >> as the broker, and the other two are on a different box. They are
>> >> EC2 instances, so the pipe between them is very wide.
>> >>
>> >> I really can't take the time to try to create a test case. This is a
>> >> large script running on real-time data.
>> >>
>> >> On Wed, Jan 27, 2010 at 4:22 PM, Robert Godfrey
>> >> <[email protected]> wrote:
>> >>>
>> >>> Hi Mark,
>> >>>
>> >>> I can give an outline of how the Java broker distributes messages
>> >>> between subscriptions... I'm not familiar with anything the Python
>> >>> client may do...
>> >>>
>> >>> In general, when there is no backlog in the queue, the Java broker
>> >>> will round-robin between subscriptions which have available credit.
>> >>>
>> >>> If a subscription runs out of credit then it is marked as suspended.
>> >>> When such a subscription gets credit again, or when a new
>> >>> subscription is added to the queue, the queue will attempt to send
>> >>> as many messages as it can to this new (or unsuspended)
>> >>> subscription. (To be absolutely accurate, it attempts to send up to
>> >>> 10 messages, then yields, but schedules another attempt to send 10
>> >>> messages, and so on.)
>> >>>
>> >>> I'm not sure the above totally explains what you are seeing;
>> >>> certainly I don't see why the subscriptions should fail to keep up
>> >>> with the publisher (until you kill the first subscriber) in the way
>> >>> you describe.
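[The credit and suspension policy Rob outlines above can be illustrated with a toy simulation. This is not Qpid broker code; all names here are invented, and "credit" stands in for the prefetch window. It shows how a subscriber with a large remaining window can end up absorbing a backlog that suspended subscribers never see:]

```python
# Toy sketch of round-robin delivery over subscriptions with credit.
# A subscription with zero credit is "suspended" and receives nothing
# until its credit is restored; the others share the queue round-robin.
from collections import deque

class Subscription:
    def __init__(self, name, credit):
        self.name = name
        self.credit = credit      # remaining prefetch window
        self.received = []

    @property
    def suspended(self):
        return self.credit == 0

def deliver(queue, subs):
    """Drain the queue round-robin across subscriptions with credit."""
    active = [s for s in subs if not s.suspended]
    i = 0
    while queue and active:
        sub = active[i % len(active)]
        sub.received.append(queue.popleft())
        sub.credit -= 1
        if sub.suspended:
            active.remove(sub)    # out of credit: stop delivering to it
        else:
            i += 1
    # anything still queued waits until some subscription regains credit

queue = deque(range(10))
subs = [Subscription("a", credit=3), Subscription("b", credit=100)]
deliver(queue, subs)
# "a" stops after its 3 credits; "b" absorbs everything else
```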
>> >>> If you could provide a simple example that shows the same
>> >>> behaviour, that would be fantastic... In the meantime I will have a
>> >>> more thorough dive into the broker code to see if I can spot
>> >>> anything obvious.
>> >>>
>> >>> Hope this helps,
>> >>> Rob
>> >>>
>> >>> 2010/1/27 mARK bLOORE <[email protected]>:
>> >>>>
>> >>>> I am using the Java broker 0.5 and the Python client. I have a
>> >>>> queue with one publisher and (usually) three subscribers.
>> >>>> The publisher sends messages at a fairly constant rate.
>> >>>> The subscribers take messages in a single thread at an unlimited
>> >>>> rate, put them in a buffer, and ack the messages immediately. If
>> >>>> the buffer fills then they block before the ack.
>> >>>>
>> >>>> Normally all the subscribers get messages at about the same rate,
>> >>>> and the queue's message count is mostly zero. If a subscriber
>> >>>> starts to block then the message count may rise, and when it gets
>> >>>> to the tens of thousands I reduce the publication rate. But an odd
>> >>>> situation appears: the blocked subscriber takes messages into its
>> >>>> buffer as fast as it takes them out, but the other two subscribers
>> >>>> get messages at only one third of the rate that the publisher is
>> >>>> adding them. That first subscriber is running many times faster.
>> >>>>
>> >>>> If I add a fourth subscriber it gets messages at one quarter the
>> >>>> publication rate, and the other two start getting messages at that
>> >>>> rate too. If I kill the fourth subscriber the rates return to what
>> >>>> they were.
>> >>>>
>> >>>> If I kill the first subscriber then the other two start taking
>> >>>> messages very fast, and the backlog in the queue quickly
>> >>>> disappears.
>> >>>>
>> >>>> It seems as if the broker has earmarked the backlogged messages
>> >>>> for that one subscriber, and won't deliver them to any other
>> >>>> unless that one goes away. Could that be the case?
>> >>>> Note that a subscriber may have at most one unacked message.
>> >>>>
>> >>>> I'm afraid I can't abstract any reasonable amount of code to
>> >>>> display. I'm not sure that would help anyway.
>> >>>>
>> >>>> --
>> >>>> mARK bLOORE <[email protected]>
>> >>>>
>> >>>> ---------------------------------------------------------------------
>> >>>> Apache Qpid - AMQP Messaging Implementation
>> >>>> Project: http://qpid.apache.org
>> >>>> Use/Interact: mailto:[email protected]
>> >>
>> >> --
>> >> mARK bLOORE <[email protected]>
>>
>> --
>> Regards,
>>
>> Rajith Attapattu
>> Red Hat
>> http://rajith.2rlabs.com/
--
Regards,

Rajith Attapattu
Red Hat
http://rajith.2rlabs.com/
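[For readers hitting the prefetch issue discussed above: on the old Qpid Java client the window can be lowered per connection with the `maxprefetch` connection URL option, as described on the page Robbie links. A sketch of a `jndi.properties` entry, with placeholder host, credentials, and client id:]

```
# Hypothetical jndi.properties fragment: lower the Java client prefetch
# (default 5000 in 0.5, 500 in 0.6) to 100 for this connection.
# Host, port, user, and client id below are placeholders.
connectionfactory.qpidConnectionfactory = amqp://guest:guest@clientid/test?brokerlist='tcp://localhost:5672'&maxprefetch='100'
```

[The Python client used in this thread manages its flow-control credit separately, so this setting only affects Java consumers such as those hitting the 5000-message default mentioned by Robbie.]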
