Re: Getting a "Cannot publish to a deleted Destination" but eventually works after a couple of retries

2014-07-17 Thread tikboy
Apologies for not giving an immediate response. It seems that our problem was solved by increasing the open files limit on both the CentOS and Ubuntu hosts running Glassfish, Fuse and ActiveMQ. But to answer your question, yes, we are using a broker network comprising 4 Ubuntu nodes. 1 has a
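
A minimal sketch of how the JVM hosting the broker can report how close it is to the open files limit, assuming a HotSpot/OpenJDK JVM on Linux; the class name and output format are illustrative, not taken from the thread:

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdUsageCheck {
        public static void main(String[] args) {
            // On HotSpot/OpenJDK on Unix-like systems the OS MXBean exposes
            // file descriptor counters for the current JVM process.
            UnixOperatingSystemMXBean os =
                    (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
            long open = os.getOpenFileDescriptorCount();
            long max = os.getMaxFileDescriptorCount();
            System.out.printf("open file descriptors: %d of %d (%.0f%% used)%n",
                    open, max, 100.0 * open / max);
        }
    }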

Getting a "Cannot publish to a deleted Destination" but eventually works after a couple of retries

2014-06-25 Thread tikboy
Hi, We have an application that runs on Glassfish 3.1. It uses Camel to publish a request/reply JMS message to an AMQ broker. The message is then consumed by a Camel JMS listener running on Fuse 7.0. This setup works most of the time, but sometimes, when the Camel route on Fuse is sending the
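
A minimal sketch of the request/reply routes described here, using the Camel Java DSL. The queue name and endpoint options are assumptions, and both routes are shown in one RouteBuilder only for illustration; in the setup above, the first route runs on Glassfish and the second on Fuse:

    import org.apache.camel.ExchangePattern;
    import org.apache.camel.builder.RouteBuilder;

    public class RequestReplyRoutes extends RouteBuilder {
        @Override
        public void configure() throws Exception {
            // Producer side (Glassfish): InOut makes Camel set JMSReplyTo
            // and wait for the reply, up to the request timeout.
            from("direct:submitRequest")
                .to(ExchangePattern.InOut, "activemq:queue:requests?requestTimeout=20000");

            // Consumer side (Fuse): because the incoming exchange is InOut,
            // Camel sends the transformed body back to the reply destination.
            from("activemq:queue:requests")
                .transform(simple("Processed: ${body}"));
        }
    }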

Re: memoryLimit behavior (5.5.1)

2014-03-26 Thread tikboy
It worked! It never occurred to me that the cursor memory high water mark could be set to a value higher than 100. The producers would only receive the exception if the messages they're sending are persistent, but that would be a simple change on their end. At least now my broker would be able to
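
A minimal sketch of the setting this reply refers to, expressed with the broker-side Java API (an activemq.xml policyEntry would be the more usual place for it); the queue name and the exact value above 100 are assumptions:

    import org.apache.activemq.broker.region.policy.PolicyEntry;

    public class CursorHighWaterMarkFix {
        public static void main(String[] args) {
            PolicyEntry entry = new PolicyEntry();
            entry.setQueue("my.limited.queue");       // assumed queue name
            entry.setMemoryLimit(20 * 1024 * 1024);   // 20 MB, matching the original post below
            entry.setProducerFlowControl(false);      // PFC off, matching the original post below
            entry.setCursorMemoryHighWaterMark(110);  // set above 100, per this reply
            // The entry is attached to the broker's destination policy as in the
            // sketch following the original "memoryLimit behavior" post further down.
        }
    }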

Re: memoryLimit behavior (5.5.1)

2014-03-25 Thread tikboy
Actually, what I would like is that when the memoryLimit for a certain queue is reached (even if it is set to 70%), the data is not offloaded to any store (memory, KahaDB, temp) and the producers receive an exception so they can handle it themselves. What is currently
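
A sketch of how a producer could handle such an exception once the broker is configured to raise one (for example via sendFailIfNoSpace on the systemUsage, mentioned here as an assumption rather than something confirmed in this thread); the broker URL and queue name are illustrative:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.ResourceAllocationException;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ProducerWithLimitHandling {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker URL
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer =
                    session.createProducer(session.createQueue("my.limited.queue")); // assumed name
            try {
                // Persistent messages are sent synchronously by default, so a
                // broker-side refusal surfaces here as an exception.
                producer.send(session.createTextMessage("payload"));
            } catch (ResourceAllocationException e) {
                System.err.println("Destination is full, handle or retry here: " + e.getMessage());
            } finally {
                connection.close();
            }
        }
    }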

memoryLimit behavior (5.5.1)

2014-03-20 Thread tikboy
Hi, I've configured a policy for my broker with a 20 MB memory limit for a certain queue and producer flow control (PFC) turned off. I was expecting that when the memory limit is reached, the client would be unable to publish messages to that queue and would receive exceptions while sending a message. But
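
A minimal sketch of the policy described here, using the embedded-broker Java API rather than activemq.xml; the queue name and connector address are assumptions:

    import java.util.Arrays;
    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.broker.region.policy.PolicyEntry;
    import org.apache.activemq.broker.region.policy.PolicyMap;

    public class TwentyMbQueuePolicy {
        public static void main(String[] args) throws Exception {
            PolicyEntry entry = new PolicyEntry();
            entry.setQueue("my.limited.queue");      // assumed queue name
            entry.setMemoryLimit(20 * 1024 * 1024);  // 20 MB, as in the post
            entry.setProducerFlowControl(false);     // PFC turned off, as in the post

            PolicyMap policyMap = new PolicyMap();
            policyMap.setPolicyEntries(Arrays.asList(entry));

            BrokerService broker = new BrokerService();
            broker.setDestinationPolicy(policyMap);
            broker.addConnector("tcp://0.0.0.0:61616");
            broker.start();
        }
    }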