Re: Using AWS EFS as the shared file system for master/slave broker pair

2018-07-18 Thread Tom Hall
There are a number of Docker images for ActiveMQ; you will need to follow the 
documentation for the image you are using.  
Make sure your Docker image is running the latest version of ActiveMQ.
You will need to change the data directory to live under the EFS mount, 
and you will also need to configure a pluggable storage locker:
https://cwiki.apache.org/confluence/display/ACTIVEMQ/Pluggable+storage+lockers 

https://activemq.apache.org/pluggable-storage-lockers.html
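For reference, a sketch of what the persistence adapter could look like with EFS mounted at /mnt/efs (the mount path and directory names here are assumptions for illustration, not the poster's actual setup):

```xml
<persistenceAdapter>
  <kahaDB directory="/mnt/efs/activemq/kahadb" lockKeepAlivePeriod="5000">
    <locker>
      <!-- shared-file-locker polls a lock file on the shared EFS volume;
           the slave blocks here until the master releases the lock -->
      <shared-file-locker lockAcquireSleepInterval="10000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>
```

Both brokers would point at the same directory on the EFS mount; whichever broker acquires the lock first becomes the master.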

-Tom

> On Jul 18, 2018, at 3:47 PM, avmpt  wrote:
> 
> I would like to use EFS as the shared file system when I set up a
> master/slave broker pair. What changes would I need to make to the
> persistence adapter to use the mounted EFS path?
> 
> When I have
> 
>
> 
> 
> Would I need to change it to the path where EFS is mounted? Are there any
> additional changes required? My ActiveMQ brokers are both Docker containers,
> so I'm not sure exactly how this would affect my setup. Thanks!
> 
> 
> 
> 
> --
> Sent from: http://activemq.2283324.n4.nabble.com/ActiveMQ-User-f2341805.html



Using AWS EFS as the shared file system for master/slave broker pair

2018-07-18 Thread avmpt
I would like to use EFS as the shared file system when I set up a
master/slave broker pair. What changes would I need to make to the
persistence adapter to use the mounted EFS path?

When I have




Would I need to change it to the path where EFS is mounted? Are there any
additional changes required? My ActiveMQ brokers are both Docker containers,
so I'm not sure exactly how this would affect my setup. Thanks!






RE: Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-18 Thread Udayan Sahu
I also thought that bringing up the slave before the master would solve the problem, 
but it didn't ...

The slave waits with the message 
"AMQ221109: Apache ActiveMQ Artemis Backup Server version 2.6.2 [null] started, 
waiting live to fail before it gets active"

As soon as the master is started, it says:

AMQ221024: Backup server 
ActiveMQServerImpl::serverUUID=e0c8c135-8834-11e8-a326-0a002714 is 
synchronized with live-server.
AMQ221031: backup announced


As we want fail-back functionality, we have used the following in the slave

 
0

I have a strong feeling that this may be messing it up; please confirm.
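For reference, a typical slave replication ha-policy with failback enabled looks like the sketch below (a broker.xml fragment illustrating the usual elements; it is not a reconstruction of the stripped config above):

```xml
<ha-policy>
   <replication>
      <slave>
         <!-- when the original master comes back, this slave stops
              being live and returns to its backup role -->
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>
```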

Thanks

--- Udayan Sahu

-Original Message-
From: Clebert Suconic [mailto:clebert.suco...@gmail.com] 
Sent: Wednesday, July 18, 2018 6:28 AM
To: Udayan Sahu 
Cc: users@activemq.apache.org
Subject: Re: Potential message loss seen with HA topology in Artemis 2.6.2 on 
failback

You could have another passive backup that would take over when M1 is killed and 
could become the backup.

But if the node is alone and you killed it, you need to start it first.


Re: Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-18 Thread Clebert Suconic
You could have another passive backup that would take over when M1 is
killed and could become the backup.

But if the node is alone and you killed it, you need to start it first.




-- 
Clebert Suconic


Re: Potential message loss seen with HA topology in Artemis 2.6.2 on failback

2018-07-18 Thread Clebert Suconic
At the moment you have to start the server that was live most recently first.

I know there's a task to compare the age of the journals before
synchronizing, but it's not done yet.

On Tue, Jul 17, 2018 at 6:48 PM, Udayan Sahu  wrote:
> It's a simple HA subsystem, with a simple ask: in a replicated state system, it
> should start from the last committed state…
>
>
>
> Step1: Master (M1) & Standby (S1) alive
>
> Step2: Producer sends 10 messages -> M1 receives them and replicates them to S1
>
> Step3: Kill Master (M1) -> S1 becomes the new Master
>
> Step4: Producer sends 10 messages -> S1 receives the messages, but they are not
> replicated as M1 is down
>
> Step5: Kill Standby (S1)
>
> Step6: Start Master (M1)
>
> Step7: Start Standby (S1) (it syncs with Master (M1), discarding its internal
> state)
>
> This is wrong. M1 should sync with S1, since S1 represents the current state
> of the queue.
>
>
>
> How can we protect the Step 4 messages from being lost? We are using a
> transacted session and calling commit to make sure messages are persisted.
>
>
>
> --- Udayan Sahu
>
>
>
>
>
> From: Clebert Suconic [mailto:clebert.suco...@gmail.com]
> Sent: Tuesday, July 17, 2018 2:50 PM
> To: users@activemq.apache.org
> Cc: Udayan Sahu 
> Subject: Re: Potential message loss seen with HA topology in Artemis 2.6.2
> on failback
>
>
>
> HA is about preserving the journals between failures.
>
>
>
> When you read and send messages you may still have a failure during the
> reading. I would need to understand what you do in case of a failure with
> your consumer and producer.
>
>
>
> Retries on send and duplicate detection are key for your case.
>
>
>
> You could also play with XA and a transaction manager.
>
>
>
> On Tue, Jul 17, 2018 at 5:01 PM Neha Sareen  wrote:
>
> Hi,
>
>
>
> We are setting up a cluster of 6 brokers using Artemis 2.6.2.
>
>
>
> The cluster has 3 groups.
>
> - Each group has one master, and one slave broker pair.
>
> - The HA uses replication.
>
> - Each master broker configuration has the flag 'check-for-live-server' set
> to true.
>
> - Each slave broker configuration has the flag 'allow-failback' set to true.
>
> - We use static connectors for allowing cluster topology discovery.
>
> - Each broker's static connector list includes the connectors to the other 5
> servers in the cluster.
>
> - Each broker declares its acceptor.
>
> - Each broker exports its own connector information via the  'connector-ref'
> configuration element.
>
> The acceptor and the connector URLs for each broker are identical with
> respect to the host and port information.
>
>
>
> We have a standalone test application that creates producers and
>
> consumers to write messages and receive messages respectively using a
> transacted JMS session.
>
>
>
>> We are trying to execute an automatic failover test case followed by
>> failback as follows:
>
> Test Case 1
>
> Step1: Master & Standby Alive
>
> Step2: Producer sends, say, 9 messages
>
> Step3: Kill Master
>
> Step4: Producer sends, say, another 9 messages
>
> Step5: Kill Standby
>
> Step6: Start Master
>
> Step7: Start Standby.
>
> What we see is that the Standby syncs with the Master, discarding its internal
> state, and we are able to consume only 9 messages, leading to a loss of 9 messages.
>
>
>
>
>
> Test Case 2
>
> Step1: Master & Standby Alive
>
> Step2: Producer Send Message
>
> Step3: Kill Master
>
> Step4: Producer Send Message
>
> Step5: Kill Standby
>
> Step6: Start Standby (it waits for the Master)
>
> Step7: Start Master (Question: does it wait for the slave?)
>
> Step8: Consume Message
>
>
>
> Can someone provide any insights here regarding the potential message loss?
>
> Also are there alternatives to a different topology we may use here to get
> around this issue?
>
>
>
> Thanks
>
> Neha
>
>
>
> --
>
> Clebert Suconic



-- 
Clebert Suconic


Re: Artemis: Autocreate durable queues (via AMQP)

2018-07-18 Thread AndreSteenbergen
@Andreas:

I had the same issue. You can define the target capability ("queue").
Artemis will then create an anycast queue instead of a multicast address, so the
queue will hold messages until a consumer comes along. I have created a C# example:

public async Task TestHelloWorld()
{
    // Strange: this works using regular ActiveMQ and the AMQP test broker
    // from here: http://azure.github.io/amqpnetlite/articles/hello_amqp.html,
    // but without the "queue" capability it does not work in ActiveMQ Artemis.
    Address address = new Address("amqp://guest:guest@localhost:5672");
    Connection connection = await Connection.Factory.CreateAsync(address);
    Session session = new Session(connection);

    Message message = new Message("Hello AMQP");

    // Declaring the "queue" capability makes Artemis create an anycast queue.
    Target target = new Target
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }
    };

    SenderLink sender = new SenderLink(session, "sender-link", target, null);
    await sender.SendAsync(message);

    Source source = new Source
    {
        Address = "q1",
        Capabilities = new Symbol[] { new Symbol("queue") }
    };

    ReceiverLink receiver = new ReceiverLink(session, "receiver-link", source, null);
    message = await receiver.ReceiveAsync();
    receiver.Accept(message);

    await sender.CloseAsync();
    await receiver.CloseAsync();
    await session.CloseAsync();
    await connection.CloseAsync();
}





Re: Artemis: Autocreate durable queues (via AMQP)

2018-07-18 Thread AndreSteenbergen
> If a client doesn't explicitly (e.g. via the core API, by using a
> configured prefix, etc.) or implicitly (e.g. working with a JMS queue
> (i.e.
> anycast) or JMS topic (i.e. multicast)) specify a routing type then the
> broker uses the defaults (which is multicast for both addresses and
> queues)
> when auto-creating resources.  The defaults are set via the
> "default-address-routing-type" and "default-queue-routing-type" elements
> of
> "address-setting".  You can read more about this in the documentation [1].

Do you have an example configuration?
https://stackoverflow.com/questions/51389611/how-to-set-routing-type-activemq-artemis-from-client

I am trying to auto-create durable queues as well, and I can't seem to find
how to configure the broker correctly to allow sending messages before a
consumer attaches.
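A sketch of the broker.xml address-setting the quoted reply describes (the catch-all match "#" and the element values are assumptions to illustrate the idea; adjust the match to your own addresses):

```xml
<address-settings>
   <address-setting match="#">
      <!-- route auto-created addresses and queues as anycast so
           messages are queued for a later consumer -->
      <default-address-routing-type>ANYCAST</default-address-routing-type>
      <default-queue-routing-type>ANYCAST</default-queue-routing-type>
      <auto-create-addresses>true</auto-create-addresses>
      <auto-create-queues>true</auto-create-queues>
   </address-setting>
</address-settings>
```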



