[ANNOUNCE] Apache.NMS.ActiveMQ v1.7.2 Released

2016-04-08 Thread Timothy Bish
The Apache ActiveMQ team is pleased to announce the release of 
Apache NMS.ActiveMQ v1.7.2. This release contains a few important fixes, 
including a memory-leak fix; users are urged to upgrade from previous 
releases.


The Wiki Page for this release can be found here:
http://activemq.apache.org/nms/apachenmsactivemq-v172.html

The list of issues fixed in this release is available here:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12311201&version=12332993

--
Tim Bish
twitter: @tabish121
blog: http://timbish.blogspot.com/



RE: MasterSlave config ActiveMQ

2016-04-08 Thread Natarajan, Rajeswari
Thanks Tim for the detailed email.  When I do a failover I see the exception 
below in our app logs.

org.apache.activemq.transport.failover.FailoverTransport  Transport 
(tcp://hdp132.lab1.ariba.com:61616) failed, attempting to automatically 
reconnect
java.io.EOFException: null
	at java.io.DataInputStream.readInt(DataInputStream.java:392)
	at org.apache.activemq.openwire.OpenWireFormat.unmarshal(OpenWireFormat.java:267)
	at org.apache.activemq.transport.tcp.TcpTransport.readCommand(TcpTransport.java:240)
	at org.apache.activemq.transport.tcp.TcpTransport.doRun(TcpTransport.java:232)
	at org.apache.activemq.transport.tcp.TcpTransport.run(TcpTransport.java:215)

The message redelivery limit is the default in the ActiveMQ config, so it is 
6 attempts.


I just send one message to the queue, receive it, and don't ack it, then do a 
failover. There is a thread which listens for messages on the queue; I don't 
see any exception there, so I'm not sure where the transport exception is 
thrown from.
In the produce send, receive and ack code the JMS
Is there a way to track that a failover happened? Maybe some sort of 
listener, so that the session can be recovered.
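(Something like org.apache.activemq.transport.TransportListener, maybe? A rough, untested sketch of the kind of hook I mean; the broker hosts here are placeholders:)

```java
import java.io.IOException;

import org.apache.activemq.ActiveMQConnection;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.transport.TransportListener;

public class FailoverTracker {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://host1:61616,tcp://host2:61616)"); // placeholder hosts
        ActiveMQConnection connection = (ActiveMQConnection) factory.createConnection();

        // The failover transport invokes these callbacks when the broker
        // connection is lost and when it is re-established.
        connection.addTransportListener(new TransportListener() {
            public void onCommand(Object command) {
                // called for every inbound command; usually left empty
            }
            public void onException(IOException error) {
                System.out.println("Transport error: " + error);
            }
            public void transportInterupted() { // note: spelled this way in the API
                System.out.println("Connection to broker lost, failover in progress");
            }
            public void transportResumed() {
                System.out.println("Reconnected; the session could be recovered here");
            }
        });
        connection.start();
    }
}
```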

Regards,
Rajeswari

-Original Message-
From: tbai...@gmail.com [mailto:tbai...@gmail.com] On Behalf Of Tim Bain
Sent: Thursday, April 07, 2016 8:53 PM
To: ActiveMQ Users 
Subject: Re: MasterSlave config ActiveMQ

I apologize, I misunderstood what you meant by "I made sure the messages
will not be acknowledged."

When either the broker or a consumer goes down while a consumer is actively
consuming a message, the broker will consider that a failed delivery and try
to redeliver the message.  The maximumRedeliveries property of the
Redelivery Policy (http://activemq.apache.org/redelivery-policy.html)
controls how many times the message will be redelivered before being put
into the DLQ.  The default value is 6; is this the value you're using, or
have you explicitly set a different value?  With the default value of 6,
your consumer should fail over to the new master, the message should be
redelivered (because this is redelivery attempt #1 and that's less than 6),
and your consumer should start processing the message for the second time.
As long as that doesn't throw any exceptions or fail in any other way (have
you confirmed this?), the second redelivery attempt should eventually
result in a successful processing of the message.
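As an aside, the redelivery policy lives on the client's connection factory rather than in the broker's activemq.xml; a Spring XML sketch (the bean id and hosts are illustrative, and the values shown are just the defaults):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL"
            value="failover:(tcp://host1:61616,tcp://host2:61616)"/>
  <property name="redeliveryPolicy">
    <bean class="org.apache.activemq.RedeliveryPolicy">
      <!-- default is 6; -1 means retry forever -->
      <property name="maximumRedeliveries" value="6"/>
      <property name="initialRedeliveryDelay" value="1000"/>
    </bean>
  </property>
</bean>
```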

However, it sounds like that's not happening for you, and
http://stackoverflow.com/questions/8576821/cant-get-activemq-to-resend-my-messages#comment35122357_8576821
seems to indicate that when using INDIVIDUAL_ACKNOWLEDGE mode, if you don't
close the Connection or recover() the Session, the messages will be
considered duplicates and ignored by the consumer.  I haven't worked with
INDIVIDUAL_ACKNOWLEDGE mode myself; can anyone else on the list shed more
light on whether that comment is accurate?
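If that comment is right, I'd expect the consumer-side pattern to look roughly like this (an untested sketch on my part; the hosts and queue name are made up):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQSession;

public class IndividualAckConsumer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://host1:61616,tcp://host2:61616)");
        Connection connection = factory.createConnection();
        connection.start();

        // INDIVIDUAL_ACKNOWLEDGE acks one message at a time, instead of
        // acking everything up to that message (CLIENT_ACKNOWLEDGE semantics).
        Session session = connection.createSession(false,
                ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("TEST.QUEUE"));

        Message message = consumer.receive(5000);
        if (message != null) {
            // ... process; ack only once the external condition is satisfied
            message.acknowledge();
        }

        // Per the Stack Overflow comment, after a failover unacked messages
        // may be treated as duplicates unless the session is recovered:
        session.recover();
    }
}
```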

If so, it would sound like clients who use INDIVIDUAL_ACKNOWLEDGE mode
shouldn't use the failover transport, because they need the connection to
die so they can recreate it.  If that were true, I'd expect we'd have a
warning somewhere on the wiki saying that those two features aren't
compatible with one another, but I haven't seen one in my searching this
evening...

Tim

On Thu, Apr 7, 2016 at 12:02 PM, Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> I am trying to understand; sorry if I sound as though I don't believe you
> or am being skeptical. I have an application with the broker URL defined as
> below (with a shared file system):
>
> failover:(tcp://:61616,tcp://:61616)
>
> When I fail over, I look at the web console and see one consumer on the
> failover host, as the application has a consumer that received a message but
> did not acknowledge it (delaying it on purpose for a use case) during
> failover. In our use case, there will be messages sent to the queue and we
> do some processing, but we do not acknowledge the messages unless a certain
> condition is satisfied, which might take some time. So when a failover
> happens within that time, will all such messages be moved to the DLQ?
> Is there any way to have them in the same state as before on the failover
> host?
>
> Thank you,
> Rajeswari
>
> -Original Message-
> From: tbai...@gmail.com [mailto:tbai...@gmail.com] On Behalf Of Tim Bain
> Sent: Thursday, April 07, 2016 5:41 AM
> To: ActiveMQ Users 
> Subject: RE: MasterSlave config ActiveMQ
>
> Quoting from what I originally wrote you: "or if there's a consumer
> on the failover host that tries and fails to consume them."  Consuming a
> message but not acking it == failing to consume it.  The failover
> functionality works, despite your apparent skepticism; retry your test with
> no consumers if you don't believe me.
>
> Tim
> On Apr 7, 2016 5:22 AM, "Natarajan, Rajeswari" <
> rajeswari.natara...@sap.com>
> wrote:
>
> > FYI using non-transactional session with
> > ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE

Re: ActiveMQ with KahaDB as persistent store becomes very slow (almost unresponsive) after creating large no (25000+) of Topics

2016-04-08 Thread Shobhana
Hi Tim,

I said indexing was the point of contention after seeing that the thread
"ActiveMQ NIO Worker 169" was still working in
org.apache.activemq.store.kahadb.MessageDatabase.updateIndex even after more
than 3.5 minutes.

These are full thread dumps. I guess the lock (read lock) is held by threads
"ActiveMQ NIO Worker 169" and "ActiveMQ NIO Worker 171". Since the read lock
is already held by other threads, the thread "ActiveMQ Broker[localhost]
Scheduler" is waiting to acquire write lock. Since there is already a thread
waiting to acquire write lock, other threads which are waiting to acquire
read lock are still waiting.

What could be the reason for updateIndex not completing even after 3.5
minutes?
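In case it's relevant, would tuning the index-related kahaDB attributes make any difference? E.g. (the values below are illustrative, not our current config):

```xml
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb"
          indexCacheSize="10000"
          indexWriteBatchSize="1000"
          enableIndexWriteAsync="true"/>
</persistenceAdapter>
```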



--
View this message in context: 
http://activemq.2283324.n4.nabble.com/ActiveMQ-with-KahaDB-as-persistent-store-becomes-very-slow-almost-unresponsive-after-creating-large-s-tp4709985p4710533.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.


Re: Pooled connection factory does not work

2016-04-08 Thread Michele
Hi Tim,

exactly... According to http://camel.apache.org/jms.html, section Concurrent
Consuming, we can configure the concurrentConsumers option in one of the
following ways:
On the JmsComponent,
On the endpoint URI, or
By invoking setConcurrentConsumers() directly on the JmsEndpoint.
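To be concrete, here is roughly what I mean by the first two of those (bean ids and endpoint names are illustrative):

```xml
<!-- 1. On the JmsComponent (ActiveMQComponent extends it) -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="connectionFactory" ref="pooledConnectionFactory"/>
  <property name="concurrentConsumers" value="10"/>
</bean>

<!-- 2. On the endpoint URI -->
<route>
  <from uri="activemq:queue:incomingTickets?concurrentConsumers=10"/>
  <to uri="log:consumed"/>
</route>
```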

However, I tried adding concurrentConsumers=10 as an option on the endpoint
URI, but still only one consumer works.

I also tried with prefetchSize = 1 but with errors... I'm investigating.

Any suggestions?

Thanks in advance.

Michele




Re: Pooled connection factory does not work

2016-04-08 Thread Quinn Stevenson
Sorry Everyone - I missed that when reading the configs.

Thanks for pointing that out Tim - the configuration does look right.

> On Apr 7, 2016, at 10:00 PM, Tim Bain  wrote:
> 
> Although I've only used the string-based syntax (e.g.
> from("jms:MyQueue?concurrentConsumers=10") ) to specify the number of
> concurrent consumers when I've used Camel, I assumed that <property
> name="concurrentConsumers" value="10"/> specified in the XML DSL would have
> the same effect.
> 
> On Thu, Apr 7, 2016 at 9:21 AM, Quinn Stevenson > wrote:
> 
>> Looking at the Camel route, I only see one consumer on the queue - am I
>> missing something?
>> 
>> 
>> 
>>> On Apr 7, 2016, at 4:17 AM, Michele 
>> wrote:
>>> 
>>> Hi Tim,
>>> 
>>> sorry, but I'm a bit confused.
>>> 
>>> My use case is to create a Camel route that is capable of:
>>> 
>>> 1. Reading a large number of lines from a file (approx. 5)
>>> 2. Splitting, processing and storing every line in AMQ (simple hash map)
>>> 3. Retrieving messages from the queue and invoking, with a POST, a REST
>>> service that is a bottleneck (I need to process the messages stored in AMQ
>>> slowly so as not to overload the REST service interface).
>>> 
>>> 
>>> 
>>> <from
>>> uri="activemq:queue:incomingTickets?destination.consumer.prefetchSize=0"
>>> />
>>> 5
>>> <to uri="jetty:
>>> http://host/rs/v1.0/ticket?jettyHttpBindingRef=CustomJettyHttpBinding"
>>> />
>>> 
>>> 
>>> I don't understand well how the Producer and Consumer work on the broker.
>>> My idea is that the Producer puts messages in the Queue and the Consumer
>>> pops messages from the queue to dispatch. Is that right?
>>> So, I configured  a pooled connection factory to handle efficiently
>>> connections, sessions, producers and consumers.
>>> As you can see in the attached picture pooled-connection.png
>>> <http://activemq.2283324.n4.nabble.com/file/n4710461/pooled-connection.png>,
>>> why is there only one consumer working on the Queue?
>>> 
>>> I hope I was clear.
>>> 
>>> Thanks a lot again
>>> 
>>> Kind greetings
>>> 
>>> Michele
>>> 
>>> 
>>> 
>> 
>>