Need Unix command to purge and delete queues and topics

2018-07-17 Thread Shital24
Hi Team,

Can you please share a Unix command to purge and delete queues, and also a
command to delete topics from ActiveMQ? We need these commands to create a
script that purges and deletes ActiveMQ queues and topics; currently we are
doing this from the web console URLs "http://<>:8161/admin/queues.jsp" and "http://<>:8161/admin/topics.jsp"
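
One scriptable alternative to clicking through the web console is to drive
the broker's JMX MBeans (the distribution's "activemq" command-line script
and the Jolokia REST endpoint are other routes worth checking in the docs
for your version). Below is a rough, hedged sketch in Java; the JMX URL
and the broker name "localhost" are assumed defaults and need adjusting
for your installation.

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.BrokerViewMBean;
import org.apache.activemq.broker.jmx.QueueViewMBean;
import org.apache.activemq.broker.jmx.TopicViewMBean;

public class PurgeAndDeleteDestinations {
    public static void main(String[] args) throws Exception {
        // Assumed defaults -- adjust the JMX URL and broker name to your installation.
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName brokerObjectName =
                    new ObjectName("org.apache.activemq:type=Broker,brokerName=localhost");
            BrokerViewMBean broker = MBeanServerInvocationHandler
                    .newProxyInstance(mbsc, brokerObjectName, BrokerViewMBean.class, true);

            // Purge every queue, then remove it from the broker.
            for (ObjectName queueObjectName : broker.getQueues()) {
                QueueViewMBean queue = MBeanServerInvocationHandler
                        .newProxyInstance(mbsc, queueObjectName, QueueViewMBean.class, true);
                queue.purge();
                broker.removeQueue(queue.getName());
            }

            // Topics have nothing to purge; removing them drops the destination.
            // You may want to skip the ActiveMQ.Advisory.* topics here.
            for (ObjectName topicObjectName : broker.getTopics()) {
                TopicViewMBean topic = MBeanServerInvocationHandler
                        .newProxyInstance(mbsc, topicObjectName, TopicViewMBean.class, true);
                broker.removeTopic(topic.getName());
            }
        } finally {
            connector.close();
        }
    }
}

Compiled against the activemq-broker jar (for the MBean interfaces) and run
from a shell wrapper or cron entry, this should give the same effect as the
Purge/Delete buttons in the console.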

Thanks in advance.

Thanks,
Shital





Re: Artemis Failover tests

2018-07-17 Thread udayansahu
Thanks, it solved the problem...

-- Udayan Sahu





RE: Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-17 Thread Udayan Sahu
It is a simple HA subsystem with a simple ask: in a replicated state system, it
should start from the last committed state…

 

Step1: Master (M1) & Standby (S1) alive

Step2: Producer sends 10 messages -> M1 receives them and replicates them to S1

Step3: Kill Master (M1) -> S1 becomes the new Master

Step4: Producer sends 10 messages -> S1 receives them, but they are not replicated as
M1 is down

Step5: Kill Standby (S1)

Step6: Start Master (M1)

Step7: Start Standby (S1) (it syncs with Master (M1), discarding its internal
state)

This is wrong. M1 should sync with S1 since S1 represents the current state of 
the queue.

 

How can we protect the Step 4 messages from being lost? We are using a transacted
session and calling commit to make sure messages are persisted.
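
Not from the thread itself, just a hedged sketch of what the suggestion in
the reply quoted below (retries on the send plus duplicate detection) could
look like on the producer side, assuming a JMS 2.0 client against Artemis;
the queue name, retry count and backoff are illustrative. Setting the
_AMQ_DUPL_ID property gives the broker a stable duplicate-detection ID, so
retrying a commit that actually reached the broker does not enqueue the
message twice.

import java.util.UUID;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class RetryingProducer {

    // Send one message in its own transaction, retrying the whole unit on failure.
    // The _AMQ_DUPL_ID header enables Artemis duplicate detection, so a retry of a
    // commit that actually succeeded on the broker is silently dropped, not re-queued.
    static void sendWithRetry(ConnectionFactory factory, String queueName,
                              String text, int maxAttempts) throws Exception {
        String duplicateId = UUID.randomUUID().toString(); // stable across retries
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try (Connection connection = factory.createConnection()) {
                connection.start();
                try (Session session = connection.createSession(true, Session.SESSION_TRANSACTED)) {
                    Queue queue = session.createQueue(queueName);
                    MessageProducer producer = session.createProducer(queue);
                    TextMessage message = session.createTextMessage(text);
                    message.setStringProperty("_AMQ_DUPL_ID", duplicateId);
                    producer.send(message);
                    session.commit();            // only now is the message persisted
                    return;
                }
            } catch (Exception e) {
                last = e;                        // broker died mid-commit; try again
                if (attempt < maxAttempts) {
                    Thread.sleep(1000L * attempt);  // crude backoff before the retry
                }
            }
        }
        throw last;                              // all attempts failed
    }
}

This only covers losing an in-flight send during a failure; it does not
change which journal the restarted pair treats as authoritative on failback.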

 

--- Udayan Sahu

 

 

From: Clebert Suconic [mailto:clebert.suco...@gmail.com] 
Sent: Tuesday, July 17, 2018 2:50 PM
To: users@activemq.apache.org
Cc: Udayan Sahu 
Subject: Re: Potential message loss seen with HA topology in Artemis 2.6.2 on 
failback

 

HA is about preserving the journals between failures. 

 

When you read and send messages you may still have a failure during the 
read.  I would need to understand what you do in case of a failure with your 
consumer and producer.  

 

Retries on send and duplicate detection are key for your case.  

 

You could also play with XA and a transaction manager.  

 


-- 

Clebert Suconic


Re: Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-17 Thread Clebert Suconic
HA is about preserving the journals between failures.

When you read and send messages you may still have a failure during the
read.  I would need to understand what you do in case of a failure with
your consumer and producer.

Retries on send and duplicate detection are key for your case.

You could also play with XA and a transaction manager.

--
Clebert Suconic


Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-17 Thread Neha Sareen
Hi,

 

We are setting up a cluster of 6 brokers using Artemis 2.6.2.

 

The cluster has 3 groups.

- Each group has one master, and one slave broker pair.

- The HA uses replication.

- Each master broker configuration has the flag 'check-for-live-server' set to 
true.

- Each slave broker configuration has the flag 'allow-failback' set to true.

- We use static connectors for allowing cluster topology discovery.

- Each broker's static connector list includes the connectors to the other 5 
servers in the cluster.

- Each broker declares its acceptor.

- Each broker exports its own connector information via the  'connector-ref' 
configuration element.

- The acceptor and the connector URLs for each broker are identical with 
respect to the host and port information.

 

We have a standalone test application that creates producers and consumers to 
write and receive messages, respectively, using a transacted JMS session.
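
For context, a simplified, illustrative sketch of the consumer side (not the
actual test code, and the names are made up): with a transacted session a
message is only removed from the broker once the session commits.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

public class TransactedConsumer {

    // Drain messages one transaction at a time; a message is only removed from
    // the broker when session.commit() succeeds, so an uncommitted receive is
    // redelivered after a failure.
    static void consume(ConnectionFactory factory, String queueName) throws Exception {
        try (Connection connection = factory.createConnection()) {
            connection.start();
            try (Session session = connection.createSession(true, Session.SESSION_TRANSACTED)) {
                Queue queue = session.createQueue(queueName);
                MessageConsumer consumer = session.createConsumer(queue);
                Message message;
                while ((message = consumer.receive(2000)) != null) {
                    // process the message here, then acknowledge by committing
                    session.commit();
                }
            }
        }
    }
}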

 

We are trying to execute an automatic failover test case followed by failback 
as follows:

Test Case - 1

Step1: Master & Standby alive

Step2: Producer sends messages, say 9 messages

Step3: Kill Master

Step4: Producer sends messages, say another 9 messages

Step5: Kill Standby

Step6: Start Master

Step7: Start Standby.

What we see is that the standby syncs with the Master, discarding its internal 
state, and we are able to consume only 9 messages, leading to a loss of 9 messages.

 

 

Test Case - 2

Step1: Master & Standby alive

Step2: Producer sends messages

Step3: Kill Master

Step4: Producer sends messages

Step5: Kill Standby

Step6: Start Standby (it waits for the Master)

Step7: Start Master (Question: does it wait for the slave?)

Step8: Consume messages

 

Can someone provide any insights here regarding the potential message loss?

Also, is there an alternative topology we could use here to get around this 
issue?

 

Thanks

Neha

 

 


Retry queue message delivery to another node after timeout or connection failure

2018-07-17 Thread Scott Van Wart
I can't for the life of me figure out how to get ActiveMQ Artemis to try
another node for message delivery.  I have a domain-managed WildFly cluster
with 3 nodes:

WildFly 12 (Artemis 1.5.5) and JDK 1.8.0_131 running on Ubuntu 16.04.  I
started with the defaults.  I deployed an EAR with a single MDB that
listens to a durable queue.  I then connect to a node and send a test
message every 250ms.  I can see the messages appearing round-robin on all
nodes (and JMSDeliveryMode is PERSISTENT).

The MDB is configured with dups-ok-acknowledge.  I changed some of the
cluster-connection settings from the defaults that ship with WildFly:

check-period=500
connection-ttl=1000
reconnect-attempts=2
call-timeout=1000
call-failover-timeout=500
use-duplicate-detection=false

Other relevant settings:

retry-interval=500
retry-interval-multiplier=1.5
initial-connect-attempts=-1
message-load-balancing=ON_DEMAND
notification-interval=1000
notification-attempts=2

While the test is going on, I unplug the network cable to one of the
nodes.  The other two nodes fail their 3rd node connection in about a
second and start distributing the messages across only the 2 remaining
nodes, which is fine.  But I "lose" about 2-3 messages during this time to
the failed node.  I can leave that failed node unplugged for as long as I
want.  I can even plug the failed node back in and it won't retransmit
these 2-3 messages.  Finally, I restart all the nodes and the 2-3 "lost"
messages are then transmitted (much later) and only to the failed node.

What I really want is for ActiveMQ to quickly retry delivery to another
node.  So if it attempts delivery and the message isn't acknowledged for
750-1000ms, try another node.  I can handle duplicates just fine. Am I
going about this the right way?

Thanks,
  Scott


Re: Logging DLQ activity

2018-07-17 Thread Tom Hall
If you are willing to do some light development, you can put in some specific 
code to do just that logging, or you could enable the logging plugin, but you 
will get more than what you want.

Take a look at http://activemq.apache.org/developing-plugins.html 

The logging plugin code is at: 
https://github.com/apache/activemq/blob/master/activemq-broker/src/main/java/org/apache/activemq/broker/util/LoggingBrokerPlugin.java
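
As a rough illustration of the "specific code" option, a broker plugin along
these lines could log each DLQ delivery. This is only a sketch: the exact
sendToDeadLetterQueue signature differs between 5.x versions, so check the
Broker interface for the release you run, and the class/logger names here
are made up.

import org.apache.activemq.broker.BrokerPluginSupport;
import org.apache.activemq.broker.ConnectionContext;
import org.apache.activemq.broker.region.MessageReference;
import org.apache.activemq.broker.region.Subscription;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs every message the broker routes to a dead letter queue.
// Register it as a Spring bean under the <plugins> element in activemq.xml.
public class DlqLoggingPlugin extends BrokerPluginSupport {

    private static final Logger LOG = LoggerFactory.getLogger(DlqLoggingPlugin.class);

    @Override
    public boolean sendToDeadLetterQueue(ConnectionContext context,
                                         MessageReference messageReference,
                                         Subscription subscription,
                                         Throwable poisonCause) {
        // Log the message id and original destination, plus the poison cause if any.
        LOG.warn("Message {} from {} sent to DLQ",
                messageReference.getMessageId(),
                messageReference.getMessage().getDestination(),
                poisonCause);
        return super.sendToDeadLetterQueue(context, messageReference, subscription, poisonCause);
    }
}

With something like this in place, the ELK filter only needs to match the
plugin's log lines.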
 


Thanks,
Tom


> On Jul 17, 2018, at 1:20 AM, Frizz  wrote:
> 
> I'd like to get notified each time a message ends up in a DLQ.
> 
> My preferred solution would be to use the existing ELK stack and write a
> filter in Kibana.
> (have the AMQ logfiles sent via Logstash to Kibana)
> 
> Question is: how can I configure AMQ to write DLQ-related messages to the
> log files? What messages do I have to look for?



Logging DLQ activity

2018-07-17 Thread Frizz
I'd like to get notified each time a message ends up in a DLQ.

My preferred solution would be to use the existing ELK stack and write a
filter in Kibana (have the AMQ log files sent via Logstash to Kibana).

Question is: how can I configure AMQ to write DLQ-related messages to the
log files? What messages do I have to look for?