>  I'd like to keep things in a bit better order and use just 1 acceptor
(= port) here, but still keep the default acceptor in place. But I'd like
to still rely on the client-side discovery.

Your third-party provider of the Artemis cluster needs to add two
cluster-connections in broker.xml: one to discover nodes at port 61616, the
other to discover nodes at port 61617. Then add a clusterConnection
parameter to each acceptor URI, referring to the cluster-connection that
uses the same port as the acceptor, something like:

> <acceptor name="artemis1">tcp://broker1:61617?protocols=CORE;clusterConnection=clusterConnection61617</acceptor>
> <acceptor name="artemis2">tcp://broker1:61616?protocols=CORE;clusterConnection=clusterConnection61616</acceptor>
>

I didn't try it, but it should work.
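For reference, the broker-side cluster-connection definitions those acceptors point at could look roughly like this (a sketch only; the connector names and the second node's hostname are illustrative, and the provider may use discovery groups instead of static connectors):

```xml
<!-- broker.xml sketch; names and hosts are illustrative -->
<connectors>
   <!-- local connectors, one per acceptor port -->
   <connector name="netty61616">tcp://broker1:61616</connector>
   <connector name="netty61617">tcp://broker1:61617</connector>
   <!-- connectors to another cluster node, one per port -->
   <connector name="broker2-61616">tcp://broker2:61616</connector>
   <connector name="broker2-61617">tcp://broker2:61617</connector>
</connectors>

<cluster-connections>
   <!-- the connector-ref is what gets advertised to clients in the cluster
        topology, so each cluster-connection advertises its own port -->
   <cluster-connection name="clusterConnection61616">
      <connector-ref>netty61616</connector-ref>
      <static-connectors>
         <connector-ref>broker2-61616</connector-ref>
      </static-connectors>
   </cluster-connection>
   <cluster-connection name="clusterConnection61617">
      <connector-ref>netty61617</connector-ref>
      <static-connectors>
         <connector-ref>broker2-61617</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
```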

> Can this be achieved somewhat transparently to the application - e.g.
configuration-wise at the application server / Artemis client level?

I don't think this can be achieved on the client side. It can be achieved
on the cluster side by setting message-load-balancing to ON_DEMAND and
specifying a redistribution-delay to enable message redistribution. This
way, messages are forwarded to the nodes in the cluster which do have
matching consumers.
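On the broker side, that combination could look roughly like this (a sketch; the cluster-connection name, connector-ref, and address match are illustrative):

```xml
<!-- broker.xml sketch; names are illustrative -->
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty</connector-ref>
      <!-- forward messages only to nodes that have matching consumers -->
      <message-load-balancing>ON_DEMAND</message-load-balancing>
   </cluster-connection>
</cluster-connections>

<address-settings>
   <address-setting match="#">
      <!-- redistribute messages that have no local consumer after 0 ms -->
      <redistribution-delay>0</redistribution-delay>
   </address-setting>
</address-settings>
```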


<[email protected]> wrote on Mon, Jul 22, 2019, 5:46 PM:

>
> Hello,
>
> we're testing the use of JBoss EAP 7.2 and its bundled Artemis resource
> adapter for connecting to the Artemis cluster which consists of 6 nodes
> total (3 master/slave pairs).
>
>
>
>
> The current config is based on:
>
> - https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.2/html/configuring_messaging/resource_adapters#using_jboss_amq_for_remote_jms_communication
>
> - https://developers.redhat.com/blog/2018/12/06/how-to-integrate-a-remote-red-hat-amq-7-cluster-on-red-hat-jboss-eap-7/
>
> - https://activemq.apache.org/components/artemis/documentation/latest/clusters.html
>
>
>
>
> And it looks as follows (a slightly reduced version for simplicity - there
> are actually all 6 outbound socket bindings and remote connectors,
> referring to all 6 cluster instances; this example shows just 1):
>
>
>
>
> JBoss EAP domain.xml
>
>
>
>
>     <profiles>
>         <profile name="TEST-full">
>             ...
>             <subsystem xmlns="urn:jboss:domain:messaging-activemq:4.0">
>                 <remote-connector name="TEST-artemis-remote-connector-a"
>                     socket-binding="TEST-artemis-remote-a"/>
>                 ...
>                 <pooled-connection-factory name="TEST-activemq-ra-remote"
>                     entries="java:/RemoteJmsXA java:jboss/RemoteJmsXA"
>                     connectors="TEST-artemis-remote-connector-a TEST-artemis-remote-connector-b TEST-artemis-remote-connector-c TEST-artemis-remote-connector-d TEST-artemis-remote-connector-e TEST-artemis-remote-connector-f"
>                     ha="true" user="..." password="..."
>                     min-pool-size="3" max-pool-size="15">
>                     <inbound-config rebalance-connections="true"/>
>                 </pooled-connection-factory>
>                 ...
>
>             <subsystem xmlns="urn:jboss:domain:naming:2.0">
>                 <bindings>
>                     <external-context name="java:global/TEST-artemis-remoteContext"
>                         module="org.apache.activemq.artemis"
>                         class="javax.naming.InitialContext">
>                         <environment>
>                             <property name="java.naming.factory.initial"
>                                 value="org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"/>
>                             <property name="java.naming.provider.url"
>                                 value="(tcp://host1:61617,tcp://host2:61617,tcp://host3:61617,tcp://host4:61617,tcp://host5:61617,tcp://host6:61617)?ha=true"/>
>                             <property name="queue.Q.Error" value="Q.Error"/>
>                             ...
>                         </environment>
>                     </external-context>
>                     <lookup name="java:/jms/queue/Q.Error"
>                         lookup="java:global/TEST-artemis-remoteContext/Q.Error"/>
>                     ...
>
>
>
>
>     <socket-binding-groups>
>         <socket-binding-group name="TEST-full-sockets" default-interface="public">
>             ...
>             <outbound-socket-binding name="TEST-artemis-remote-a">
>                 <remote-destination host="host1" port="61617"/>
>             </outbound-socket-binding>
>             ...
>
>
>
>
> Each Artemis instance has its default acceptor configured at port 61616
> and an additional one, specifically for JBoss EAP, configured at port
> 61617 with only the CORE protocol enabled and with the additional
> parameters:
>
> anycastPrefix=jms.queue.;multicastPrefix=jms.topic.
>
> as required by the JBoss EAP documentation. This leaves the default
> acceptor configuration intact, as it's used for other purposes (other
> message consumers and producers).
>
>
>
>
>
> Note: I think that the ha=true parameter in java.naming.provider.url is
> probably unnecessary.
>
>
>
>
>
> This seems to work, however:
>
>
>
>
> 1.
>
> While the initial connections are made to the acceptors at port 61617,
> the actual connections used for message transport seem to be made to the
> acceptors at port 61616, even though these aren't configured on the
> client side at all. I suppose this is due to the client-side discovery.
>
>
>
>
> Can the Artemis configuration be modified so that the discovery messages
> sent to the client also advertise the second acceptors at port 61617
> (e.g. to prefer certain acceptors, ideally those to which the initial
> connections were made)? I'd like to keep things in a bit better order and
> use just 1 acceptor (= port) here, but still keep the default acceptor in
> place. But I'd like to still rely on the client-side discovery.
>
>
>
>
>
> Note that the Artemis cluster isn't managed by us - it's an integration
> layer provided to us by another party. We're just one of its clients, and
> there are other clients connecting there which we don't want to affect.
>
>
>
>
>
>
>
> 2.
>
> Initially, only one message consumer instance is created (by the
> consuming application) per queue, and that leads to message consumption
> from only 1 Artemis node. I suppose that once we create at least 3
> consumer instances, they will make use of the client-side load balancing
> and all 3 master nodes will be utilized for message consumption. Correct?
>
>
>
>
> Can this be achieved somewhat transparently to the application - e.g.
> configuration-wise at the application server / Artemis client level? For
> example, some application servers allow MDBs to be configured in a
> specific way so that the application server ensures that the MDB is
> connected to each active messaging provider cluster node and can consume
> messages from there. No specific configuration in the application (which
> would have to reflect the current topology = count of active cluster
> nodes) is then needed. I can't find anything like this here, so it's
> probably not possible, right? E.g. we have to ensure we open at least as
> many sessions as there are active Artemis cluster nodes...
>
>
>
>
>
> Best regards,
>
> Petr
>
