Re: Upgrade from 5.9 to 5.10

2014-08-29 Thread lacigas

Hi
I'm a bit puzzled that I don't get any replies. Is this because
- it's a stupid question
- nobody has experienced this before
- nobody knows the answer

Surely I'm not the only one who has ever done an upgrade on a production system
and had to preserve the state of the queues.

Thanks for your time.
Regards
Laci





Re: Request / Reply in Network of Brokers

2014-05-16 Thread lacigas
We have Network TTL=10 and decreaseNetworkConsumerPriority not set (so 
it defaults to false)

So I'm going to try with decreaseNetworkConsumerPriority = true.
Thanks for your help.
Regards
Laci

On 08.05.2014 05:17, artnaseef [via ActiveMQ] wrote:
> Is this with the default Network TTL setting of 1?  Sounds like that's 
> the case and that  decreaseNetworkConsumerPriority is false (also the 
> default).
>
> So, even though the consumer is only on broker A, and a person can easily 
> see that it makes no sense to move the messages to broker B, in an ActiveMQ 
> demand-forwarded network of brokers, brokers B and C will both create 
> network consumers so that consumers anywhere in the network can consume 
> the messages, and so that the consumer on broker A can receive messages 
> produced on the other brokers. That is what causes messages to move from 
> broker A to broker B. With the default round-robin delivery of messages, 
> exactly half will go to broker A's immediate consumer, and the other half 
> will go to broker B.
>
> Broker B will not return those messages to Broker A, nor forward them 
> on to Broker C, due to exhaustion of the Network TTL of 1.
>
> Your best bet is to use decreaseNetworkConsumerPriority=true, which 
> never sends messages to another broker when there is a local 
> consumer.
>






Re: Request / Reply in Network of Brokers

2014-05-13 Thread lacigas
Setting decreaseNetworkConsumerPriority = true solved the problem.
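For anyone who finds this thread later, here is a minimal sketch of where the
attribute goes in the broker's activemq.xml (the connector name, the broker URI
and the TTL value are just placeholders, not our actual configuration):

<!-- sketch only: name, uri and networkTTL are illustrative placeholders -->
<networkConnectors>
  <networkConnector name="A-to-B"
                    uri="static:(tcp://brokerB:61616)"
                    networkTTL="10"
                    decreaseNetworkConsumerPriority="true"/>
</networkConnectors>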
Thanks again.

Regards
Laci

On 08.05.2014 07:42, Laci Gaspar wrote:
> We have Network TTL=10 and decreaseNetworkConsumerPriority not set (so 
> it defaults to false)
>
> So I'm going to try with decreaseNetworkConsumerPriority = true.
> Thanks for your help.
> Regards
> Laci
>
> On 08.05.2014 05:17, artnaseef [via ActiveMQ] wrote:
>> Is this with the default Network TTL setting of 1?  Sounds like 
>> that's the case and that  decreaseNetworkConsumerPriority is false 
>> (also the default).
>>
>> So, even though the consumer is only on broker A, and a person can 
>> easily see that it makes no sense to move the messages to broker B, in 
>> an ActiveMQ demand-forwarded network of brokers, brokers B and C will 
>> both create network consumers so that consumers anywhere in the network 
>> can consume the messages, and so that the consumer on broker A can 
>> receive messages produced on the other brokers. That is what causes 
>> messages to move from broker A to broker B. With the default round-robin 
>> delivery of messages, exactly half will go to broker A's immediate 
>> consumer, and the other half will go to broker B.
>>
>> Broker B will not return those messages to Broker A, nor forward them 
>> on to Broker C, due to exhaustion of the Network TTL of 1.
>>
>> Your best bet is to use decreaseNetworkConsumerPriority=true, which 
>> never sends messages to another broker when there is a local 
>> consumer.
>>
>






Request / Reply in Network of Brokers

2014-05-11 Thread lacigas
Hi
We have a network of brokers something like this:

A <---> B <---> C <---> mesh

We are producing some messages with a request/reply pattern on broker A, and
those messages are supposed to be consumed on A as well.

Now sometimes only half of the messages are consumed on A and the rest are
"lost" somewhere.
Because it is always half of the messages, I suspected that they are being sent
to B etc., and when I turn on DEBUG logging on B I see loads of log entries like
the one below.

I have read this post:
http://activemq.2283324.n4.nabble.com/Request-Response-Model-Not-Working-in-Network-of-Brokers-Please-Help-td4658071.html
and this blog post:
http://blog.garytully.com/2012/07/activemq-broker-networks-think-demand.html

What I don't understand is that there is a consumer for those messages only
on A and nowhere else.
Why are the messages sent to another broker?

We use a selector like this:
"JMSType like 'bla.checkInsurance%25'"

Any ideas?
Regards
Laci





2014-05-07 16:08:59,532 | DEBUG | msp-zh-i Ignoring sub from VIINTAPP02,
already routed through this broker once: ConsumerInfo {commandId = 930,
responseRequired = false, consumerId =
VIINTAPP02->msp-zh-i-49180-1399376355769-11:1:1:1177, destination =
queue://prax.delivery.replyto, prefetchSize = 1000,
maximumPendingMessageLimit = 0, browser = false, dispatchAsync = true,
selector =
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-1'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-101'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-103'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-105'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-107'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-109'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-11'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-111'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-113'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-115'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-117'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-119'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-121'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-123'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-125'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-127'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-13'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-15'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-17'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-19'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-21'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-23'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-25'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-27'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-29'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-3'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-31'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-33'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-35'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-37'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-39'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-41'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-43'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-45'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-47'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-49'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-5'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-51'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-53'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-55'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-57'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-59'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-61'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-63'
OR
JMSCorrelationID='ID-cux-cxin-tst0-curabill-ops-ch-59632-1399450208729-42-65'
OR
JMSCorrelationID='

Re: Loads of TIME_WAIT connections

2013-05-02 Thread lacigas
You are right, the problem is caused by the client.
The expiry timeout for the PooledConnectionFactory was set to 1 ms by default,
so every connection that was not used went into the TIME_WAIT state after 1 ms.
I have increased this number and now I have a nice low number of TIME_WAIT
connections.
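In case it is useful to someone else, here is a rough sketch of a pooled
connection factory with a longer expiry timeout (the 30000 ms value and the
broker URL are example values, not our exact settings):

<!-- sketch: expiryTimeout of 30000 ms and the broker URL are example values -->
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory">
  <property name="connectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>
  </property>
  <property name="expiryTimeout" value="30000"/>
</bean>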

thanks
Laci





Loads of TIME_WAIT connections

2013-05-01 Thread lacigas
Sorry if this has been asked before; I have seen several posts, but none of
them helped.

I'm running AMQ 5.8 on 64-bit Linux, with Camel routes generated by Talend ESB.

As a test I have one pooled connection factory (default values) and one JMS
consumer. The JMS consumer is configured with cacheLevelName=CACHE_CONSUMER.
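Roughly, the setup looks like the following sketch (the queue name, broker URL
and bean ids are placeholders; the real configuration is generated by Talend
ESB):

<!-- sketch: queue name, broker URL and bean ids are placeholders -->
<bean id="pooledConnectionFactory"
      class="org.apache.activemq.pool.PooledConnectionFactory">
  <property name="connectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
    </bean>
  </property>
</bean>

<bean id="amq" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="connectionFactory" ref="pooledConnectionFactory"/>
</bean>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="amq:queue:test.in?cacheLevelName=CACHE_CONSUMER"/>
    <to uri="log:test.consumer"/>
  </route>
</camelContext>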

With this simple setup alone, my Camel route produces ~60 TIME_WAIT
connections.
Our system consists of several routes with multiple JMS components, and when
they are all running we have between 2000 and 3000 TIME_WAIT connections, which
seems to kill our server.

What I don't know is whether this behaviour (loads of TIME_WAIT connections) is
normal and, if not, how to solve it.

Thanks for your help.

Laci


