For the record, this problem was solved by using jms4r rather than the JSparrow wrapper for JMS. The jms4r Ruby gem is a much thinner layer over JMS and allowed me to reuse a single consumer rather than creating a new one for each request.
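For anyone hitting the same issue, the difference between the two patterns can be sketched in plain Ruby with a stub broker that only counts consumer creations. All names here are illustrative; this is not the Sparrow or jms4r API, just a sketch of why per-request consumers exhaust broker memory while a reused consumer does not:

```ruby
# A stand-in for the broker that only records how many consumers
# a client pattern asks it to create. Illustrative names only.
class StubBroker
  attr_reader :consumers_created

  def initialize
    @consumers_created = 0
  end

  def create_consumer
    @consumers_created += 1
    :consumer
  end
end

# Sparrow-style: a fresh consumer for every receive call, so
# broker-side state grows with the number of requests.
def receive_per_request(broker, requests)
  requests.times { broker.create_consumer }
end

# jms4r-style fix: create the consumer once and reuse it for
# every request, so broker-side state stays constant.
def receive_with_reuse(broker, requests)
  consumer = broker.create_consumer
  requests.times { consumer } # the same consumer serves each request
end

leaky  = StubBroker.new
reused = StubBroker.new
receive_per_request(leaky, 1_000)
receive_with_reuse(reused, 1_000)
puts leaky.consumers_created  # 1000
puts reused.consumers_created # 1
```

With a real broker each counted consumer also drags a connection and a session along with it, which is where the 262k-per-session cost described below comes from.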
Thanks for the help, Jesse

> Hi Jesse,
>
> I had a quick look at the heap, and the cause of the heap exhaustion
> comes down to the open connections. You have 3957 connections, which
> you can see with MAT using OQL:
>
>   SELECT * from org.apache.qpid.server.transport.ServerConnection
>
> Each connection reserves a minimum of 262k of memory in its session:
>
>   SELECT * from org.apache.qpid.server.transport.ServerSession
>
> This is for the AMQP 0-10 layer to record the protocol commands.
>
> You say you are using Sparrow in your client; is that in conjunction
> with the Qpid Ruby client?
>
> As you point out, there are no messages in the broker. I shall have to
> take a further look at Sparrow to understand why you have so many open
> connections. Are you able to post/share your Ruby application? It
> would be good to have a working application to run to analyse the
> broker's operation.
>
> The 0-10 implementation is new in the Java broker, so I would not be
> surprised if some of the housekeeping (counts, etc.) is not 100%
> correctly wired up. I certainly know the additions to make the 0-10
> items appear in the JMX console were added fairly recently; IIRC the
> connection objects still do not show up.
>
> I suspect that you may be seeing an accounting issue, especially with
> the ActiveConsumerCount, as the main queue that you are using says it
> has over 10,000 active consumers... though you only have 3957 0-10
> consumers. Searching for all instances of the SubscriptionLogSubject,
> which is linked to every subscriber, shows only 7912 in the system:
>
>   SELECT * from org.apache.qpid.server.logging.subjects.SubscriptionLogSubject
>
> The LogSubject also shows that you have had over 14,000 subscribers,
> as the first LogSubject has id sub:14,084; the id is incremented for
> every new subscriber. This suggests that you are closing your
> subscriptions; it may just be that they are not being closed fast
> enough.
>
> Can you explain a bit more about all the connected clients? You
> mentioned Synapse. Whilst the heap suggested there were 7912
> subscribers, I could only find the 3957 0-10 clients; there were no
> 0-9/0-8 subscribers.
>
> If you are running 0.6, it would be great if you could verify that
> status logging is enabled (the setting should be at the bottom of the
> broker's config.xml). It is enabled by default, so it should still be
> on. If you could then make that log available to us, I'll see if we
> can find out where your connections are coming from. That, coupled
> with your application code, should let me find out where your
> connections are not closing.
>
> Hope that helps
>
> Martin
>
> On 9 February 2010 14:00, Jesse W. Hathaway <[email protected]> wrote:
> > Martin,
> >
> > Did you happen to have a moment to take a look at the dump I sent
> > over? I am still struggling to find the cause of the out-of-memory
> > problem; any help would be greatly appreciated.
> >
> > thanks, Jesse
> >
> >> > Hi Jesse,
> >> > Sorry for not picking this up on qpid-users. Please see embedded
> >> > responses.
> >>
> >> no problem, thanks for the reply
> >>
> >> > > 1. What does the ActiveConsumerCount represent?
> >> >
> >> > The number of connected consumers that have available space in
> >> > their prefetch buffer to receive messages.
> >>
> >> Why would ActiveConsumerCount continually increase while
> >> ConsumerCount stays steady?
> >>
> >> > > 2. Is it possible that the increasing ActiveConsumerCount is
> >> > > causing the broker to exhaust its memory?
> >> >
> >> > Without knowing more about what you are doing it is difficult to
> >> > say, but certainly the broker cannot service an infinite number
> >> > of consumers.
> >>
> >> > > 3. What might be the reason my JRuby JMS process is causing
> >> > > this value to increase?
> >> >
> >> > I am not familiar with JRuby JMS (I'll try to take a look at the
> >> > weekend).
> >> > Tools like Spring will by default create a new session and
> >> > consumer for each message received. If JRuby is doing something
> >> > similar and you are sending a lot of messages, then I would
> >> > expect the behaviour you are experiencing, with a large number
> >> > of active consumers.
> >>
> >> I am using Sparrow, http://github.com/leandrosilva/sparrow/
> >>
> >> Here is the function from Sparrow I am using:
> >>
> >> class Receiver < Base
> >>   def receive_message(criteria_for_receiving = {:timeout => DEFAULT_RECEIVER_TIMEOUT, :selector => ''}, &message_handler)
> >>     # Create a connection, a session, and a consumer for any kind of message
> >>     connection = @connection_factory.create_connection
> >>     session = connection.create_session(false, Session::AUTO_ACKNOWLEDGE)
> >>     consumer = session.create_consumer(@destination, criteria_for_receiving[:selector])
> >>
> >>     # Prepare the connection to receive messages
> >>     connection.start
> >>
> >>     # Start receiving messages
> >>     timeout = criteria_for_receiving[:timeout] || DEFAULT_RECEIVER_TIMEOUT
> >>
> >>     while (received_message = consumer.receive(timeout))
> >>       # Include the message-type module, useful for the message_handler
> >>       class << received_message
> >>         include MessageType
> >>       end
> >>
> >>       # Delegate message handling to the given block
> >>       message_handler.call(received_message)
> >>     end
> >>
> >>     # Close the connection
> >>     connection.close
> >>   end
> >> end
> >>
> >> I suspected that I was leaking connections, since it appears
> >> Sparrow creates a new connection each time I call receive_message,
> >> but from what I have read, `connection.close` should perform all
> >> the necessary cleanup.
> >>
> >> > Are you using topics by chance? If you are, then every one of
> >> > those consumers will be receiving a copy of each sent message,
> >> > which will greatly contribute to your OOM problems.
> >>
> >> no, these are direct messages
> >>
> >> > If you have a heap dump of the broker to hand, I shall put up
> >> > some details of how you can interrogate the heap to understand
> >> > what has happened.
> >>
> >> Here is a heap dump: http://mbuki-mvuki.org/java_pid15443.hprof.bz2
> >> I tried analysing it with the Eclipse Memory Analyzer, but my
> >> knowledge of Java and Qpid was too nascent to really figure out
> >> the cause.
> >>
> >> thanks for the help, Jesse
> >>
> >> ---------------------------------------------------------------------
> >> Apache Qpid - AMQP Messaging Implementation
> >> Project: http://qpid.apache.org
> >> Use/Interact: mailto:[email protected]
> >
> > --
> > There is no expedient to which man will not
> > resort to avoid the real labor of thinking.
> >   - Sir Joshua Reynolds
>
> --
> Martin Ritchie
