[ 
https://issues.apache.org/jira/browse/ARTEMIS-4671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justin Bertram updated ARTEMIS-4671:
------------------------------------
    Description: 
I am experiencing an issue with ActiveMQ Artemis. I created an address named 
IN.ADDRESS with the MULTICAST routing type, along with two queues 
(IN.ADDRESS.Q1 and IN.ADDRESS.Q2), and set a filter expression on each queue 
using the CORE API. The problem occurs whether the two filters are identical or 
not. For example, when we send 2,000 messages to the address, I have verified 
that the messages are not being excluded by the filters, yet the two queues 
receive different numbers of messages, which suggests that some messages are 
lost during routing. If I remove the filter expression from one of the queues, 
the message counts are correct.

Here is the code I use to create the queues:
{code:java}
    public Queue outFss1Queue() {
        try {
            String filter = ConvertUtils.genFilter(staticProperties.getServiceCode(), SsConstant.XML_MSG_TYPE_FSS1);
            SimpleString str = new SimpleString(staticProperties.getFss1InQueue());
            ClientSession.QueueQuery query = session.queueQuery(str);
            QueueConfiguration queueConf = new QueueConfiguration(staticProperties.getFss1InQueue());
            queueConf.setAddress(staticProperties.getFss1InTopic());
            queueConf.setDurable(true);
            queueConf.setAutoCreated(true);
            queueConf.setAutoCreateAddress(true);
            queueConf.setRoutingType(RoutingType.MULTICAST);
            queueConf.setFilterString(filter);
            if (!query.isExists()) {
                queueConf.setEnabled(false);
                session.createQueue(queueConf);
                session.start();
            } else {
                ClientRequestor requestor = new ClientRequestor(session, "activemq.management");
                ClientMessage message = session.createMessage(false);
                ManagementHelper.putOperationInvocation(message, ResourceNames.BROKER, "updateQueue", queueConf.toJSON());
                session.start();
                ClientMessage response = requestor.request(message);
                Object resResult = ManagementHelper.getResult(response);
                requestor.close();
            }
        } catch (Exception e) {
            log.error("Init fss1 queue failed, error:", e);
            throw new RuntimeException("Init fss1 queue failed");
        }
        return new ActiveMQQueue(staticProperties.getFss1InQueue());
    }
{code}
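
For reference, here is a minimal sketch of how the messages are produced and the two queue counts compared afterwards. The broker URL and the admin/admin credentials are taken from broker.xml, the address and queue names are the ones above, and the payload is only a placeholder for the real IMF XML, so treat this as an illustration rather than the exact production code:
{code:java}
import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ClientMessage;
import org.apache.activemq.artemis.api.core.client.ClientProducer;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.client.ClientSessionFactory;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class FilterRoutingRepro {
    public static void main(String[] args) throws Exception {
        // Illustrative values only: URL/credentials from broker-215.xml,
        // address and queue names from this report, placeholder payload.
        String address = "IN.ADDRESS";
        String payload = "<IMFRoot><Data><PrimaryKey><FlightKey>"
                       + "<FlightDirection>D</FlightDirection>"
                       + "</FlightKey></PrimaryKey></Data></IMFRoot>";

        ServerLocator locator = ActiveMQClient.createServerLocator("tcp://192.168.201.215:61616");
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession("admin", "admin", false, true, true, false, 0);
        try {
            session.start();
            ClientProducer producer = session.createProducer(address);
            for (int i = 0; i < 2000; i++) {
                ClientMessage msg = session.createMessage(true);   // durable message
                msg.getBodyBuffer().writeString(payload);          // body that the XPath filter inspects
                producer.send(msg);
            }
            // Both queues are bound to the same multicast address with the same filter,
            // so their message counts are expected to be identical.
            long q1 = session.queueQuery(new SimpleString("IN.ADDRESS.Q1")).getMessageCount();
            long q2 = session.queueQuery(new SimpleString("IN.ADDRESS.Q2")).getMessageCount();
            System.out.println("Q1=" + q1 + ", Q2=" + q2);
        } finally {
            session.close();
            factory.close();
            locator.close();
        }
    }
}
{code}
After a run like this, IN.ADDRESS.Q1 and IN.ADDRESS.Q2 end up with different counts even though they share the same filter.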

Here is the filter expression on the queues. In this case, both queues use the 
same filter expression.
{noformat}
XPATH '/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]|/IMFRoot/SysInfo/OperationMode[text()="DEL"][/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]]'
{noformat}

Here is my [^broker-211.xml] configuration file. We have two servers in the 
Artemis cluster, so there are two broker.xml files; I have uploaded both as 
attachments.

Here is my [^broker-215.xml], reproduced inline below.

 
{code:xml}
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>master</name>

      <persistence-enabled>true</persistence-enabled>
      <thread-pool-max-size>80</thread-pool-max-size>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>ASYNCIO</journal-type>

      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>

      <!-- if you want to retain your journal uncomment this following configuration.
      This will allow your system to keep 7 days of your data, up to 10G. Tweak it accordingly to your use case and capacity.
      it is recommended to use a separate storage unit from the journal for performance considerations.
      <journal-retention-directory period="7" unit="DAYS" storage-limit="10G">data/retention</journal-retention-directory>
      You can also enable retention by using the argument journal-retention on the `artemis create` command -->

      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>

      <!--
       This value was determined through a calculation.
       Your system could perform 13.89 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 72000 nanoseconds.

       Note: If you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and journal-datasync=false.
      -->
      <journal-buffer-timeout>72000</journal-buffer-timeout>

      <!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       -->
      <journal-max-io>4096</journal-max-io>

      <!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        -->
      <!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
      <!-- <network-check-period>10000</network-check-period> -->
      <!-- <network-check-timeout>1000</network-check-timeout> -->
      <!-- this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6
           Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running -->
      <!-- <network-check-list>10.0.0.1</network-check-list> -->
      <!-- use this to customize the ping used for ipv4 addresses -->
      <!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
      <!-- use this to customize the ping used for ipv6 addresses -->
      <!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->

      <connectors>
            <!-- Connector used to be announced through cluster connections and notifications -->
            <connector name="artemis">tcp://192.168.201.215:61616</connector>
            <connector name="node0">tcp://192.168.201.211:61616</connector>
      </connectors>

      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      <page-sync-timeout>420000</page-sync-timeout>

      <!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory
            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.
            <global-max-size>100Mb</global-max-size>
      -->

      <acceptors>
         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate detection, set this to false
                                      as duplicate detection requires applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are considered large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to disable large mesasge control -->
         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more information. -->

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://192.168.201.215:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://192.168.201.215:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://192.168.201.215:61613?anycastPrefix=queue/;multicastPrefix=topic/;stompEnableMessageId=true;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=1024000</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://192.168.201.215:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://192.168.201.215:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
      </acceptors>

      <cluster-user>admin</cluster-user>
      <cluster-password>admin</cluster-password>

      <cluster-connections>
         <cluster-connection name="my-cluster">
            <connector-ref>artemis</connector-ref>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>0</max-hops>
            <static-connectors>
               <connector-ref>node0</connector-ref>
            </static-connectors>
         </cluster-connection>
      </cluster-connections>

      <ha-policy>
         <replication>
            <master>
               <vote-on-replication-failure>true</vote-on-replication-failure>
               <check-for-live-server>true</check-for-live-server>
            </master>
         </replication>
      </ha-policy>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <auto-delete-queues>false</auto-delete-queues>
            <auto-delete-addresses>false</auto-delete-addresses>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>
      </addresses>

      <broker-plugins>
         <broker-plugin class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>
   </core>
</configuration>
{code}
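
For completeness, here is a sketch of the management calls I can use to confirm that both queues actually ended up with the same filter string and to compare how many messages were added to each. It assumes the default "activemq.management" address from the configuration above, the queue names from this report, and an already started CORE session, so it is an illustration rather than production code:
{code:java}
import org.apache.activemq.artemis.api.core.client.ClientMessage;
import org.apache.activemq.artemis.api.core.client.ClientRequestor;
import org.apache.activemq.artemis.api.core.client.ClientSession;
import org.apache.activemq.artemis.api.core.management.ManagementHelper;
import org.apache.activemq.artemis.api.core.management.ResourceNames;

public class QueueFilterCheck {
    // Reads one management attribute of a queue; "session" is an already started core session.
    static Object queueAttribute(ClientSession session, String queue, String attribute) throws Exception {
        ClientRequestor requestor = new ClientRequestor(session, "activemq.management");
        try {
            ClientMessage m = session.createMessage(false);
            ManagementHelper.putAttribute(m, ResourceNames.QUEUE + queue, attribute);
            ClientMessage reply = requestor.request(m);
            return ManagementHelper.getResult(reply);
        } finally {
            requestor.close();
        }
    }

    // Prints the filter and the number of messages added for both queues under IN.ADDRESS.
    static void compare(ClientSession session) throws Exception {
        for (String q : new String[]{"IN.ADDRESS.Q1", "IN.ADDRESS.Q2"}) {
            System.out.println(q + " filter=" + queueAttribute(session, q, "filter")
                    + " messagesAdded=" + queueAttribute(session, q, "messagesAdded"));
        }
    }
}
{code}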
 

 

  was:
I am experiencing an issue with ActiveMQ Artemis. I created an address named 
IN.ADDRESS with the address mode type set to "multiple". We also created two 
queues (IN.ADDRESS.Q1 and IN.ADDRESS.Q2) and set the filter expression on the 
queues using the CORE API. Regardless of whether the filter is the same or not, 
for example, when we send 2000 messages to the address that do not meet the 
filter, I am sure the messages are not being filtered. However, the queues 
receive a different number of messages, which suggests that some messages may 
be lost in the routing process.If I remove the filter expression on one queue, 
the message count is correct.

Here is my code for create queue: 
{code:java}
    public Queue outFss1Queue() {
        try {
            String filter = 
ConvertUtils.genFilter(staticProperties.getServiceCode(), 
SsConstant.XML_MSG_TYPE_FSS1);
            SimpleString str = new 
SimpleString(staticProperties.getFss1InQueue());
            ClientSession.QueueQuery query = session.queueQuery(str);
            QueueConfiguration queueConf = new 
QueueConfiguration(staticProperties.getFss1InQueue());
            queueConf.setAddress(staticProperties.getFss1InTopic());
            queueConf.setDurable(true);
            queueConf.setAutoCreated(true);
            queueConf.setAutoCreateAddress(true);
            queueConf.setRoutingType(RoutingType.MULTICAST);
            queueConf.setFilterString(filter);
            if (!query.isExists()) {
                queueConf.setEnabled(false);
                session.createQueue(queueConf);
                session.start();
            } else {
                ClientRequestor requestor = new ClientRequestor(session, 
"activemq.management");
                ClientMessage message = session.createMessage(false);
                ManagementHelper.putOperationInvocation(message, 
ResourceNames.BROKER, "updateQueue", queueConf.toJSON());
                session.start();
                ClientMessage response = requestor.request(message);
                Object resResult = ManagementHelper.getResult(response);
                requestor.close();
            }
        } catch (Exception e) {
            log.error("Init fss1 queue failed, error:", e);
            throw new RuntimeException("Init fss1 queue failed");
        }
        return new ActiveMQQueue(staticProperties.getFss1InQueue());
    } {code}

Here is my filter expression on the queue. In this case, my queues have the 
same filter expression.
{noformat}
XPATH '/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]|/IMFRoot/SysInfo/OperationMode[text()="DEL"][/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]]'
{noformat}

Here is my  [^broker-211.xml]  configuration file. We have two servers in the 
Artemis cluster, so we have two broker.xml files. I have upload my 
configuration as an attachment.
-----------broker-211.xml-----------

 
{code:java}
<?xml version='1.0'?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at  
http://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or 
agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
--><configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
               xmlns:xi="http://www.w3.org/2001/XInclude";
               xsi:schemaLocation="urn:activemq 
/schema/artemis-configuration.xsd">   <core xmlns="urn:activemq:core" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
         xsi:schemaLocation="urn:activemq:core ">      <name>slave</name>
      <persistence-enabled>true</persistence-enabled>
      
      <thread-pool-max-size>80</thread-pool-max-size>      <!-- this could be 
ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>ASYNCIO</journal-type>      
<paging-directory>data/paging</paging-directory>      
<bindings-directory>data/bindings</bindings-directory>      
<journal-directory>data/journal</journal-directory>      
<large-messages-directory>data/large-messages</large-messages-directory>      
      <!-- if you want to retain your journal uncomment this following 
configuration.      This will allow your system to keep 7 days of your data, up 
to 10G. Tweak it accordingly to your use case and capacity.      it is 
recommended to use a separate storage unit from the journal for performance 
considerations.      <journal-retention-directory period="7" unit="DAYS" 
storage-limit="10G">data/retention</journal-retention-directory>      You can 
also enable retention by using the argument journal-retention on the `artemis 
create` command -->      <journal-datasync>true</journal-datasync>      
<journal-min-files>2</journal-min-files>      
<journal-pool-files>10</journal-pool-files>      
<journal-device-block-size>4096</journal-device-block-size>      
<journal-file-size>10M</journal-file-size>
      
      <!--
       This value was determined through a calculation.
       Your system could perform 16.67 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 59999 nanoseconds.       Note: If 
you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and 
journal-datasync=false.
      -->
      <journal-buffer-timeout>59999</journal-buffer-timeout>
      <!--
        When using ASYNCIO, this will determine the writing queue depth for 
libaio.
       -->
      <journal-max-io>4096</journal-max-io>
      <!--
        You can verify the network health of a particular NIC by specifying the 
<network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        -->      <!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> 
-->      <!-- <network-check-period>10000</network-check-period> -->
      <!-- <network-check-timeout>1000</network-check-timeout> -->      <!-- 
this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6           Warning: Make sure you understand 
your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially 
visible may defeat the purpose.
                    You can use a list of multiple IPs, and if any successful 
ping will make the server OK to continue running -->
      <!-- <network-check-list>10.0.0.1</network-check-list> -->      <!-- use 
this to customize the ping used for ipv4 addresses -->
      <!-- <network-check-ping-command>ping -c 1 -t %d 
%s</network-check-ping-command> -->      <!-- use this to customize the ping 
used for ipv6 addresses -->
      <!-- <network-check-ping6-command>ping6 -c 1 
%2$s</network-check-ping6-command> -->
      <connectors>
            <!-- Connector used to be announced through cluster connections and 
notifications -->
            <connector name="artemis">tcp://192.168.201.211:61616</connector>
            <connector name = "node0">tcp://192.168.201.215:61616</connector>
      </connectors>
      <!-- how often we are looking for how many bytes are being used on the 
disk in ms -->
      <disk-scan-period>5000</disk-scan-period>      <!-- once the disk hits 
this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>      <!-- should the broker detect 
dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>      
<critical-analyzer-timeout>120000</critical-analyzer-timeout>      
<critical-analyzer-check-period>60000</critical-analyzer-check-period>      
<critical-analyzer-policy>HALT</critical-analyzer-policy>      
      <page-sync-timeout>380000</page-sync-timeout>
            <!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in 
memory            The system will use half of the available memory (-Xmx) by 
default for the global-max-size.
            You may specify a different value here if you need to customize it 
to your needs.            <global-max-size>100Mb</global-max-size>      -->     
 <acceptors>         <!-- useEpoll means: it will use Netty epoll if you are on 
a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at 
amqpCredits at this low mark -->
         <!-- amqpDuplicateDetection: If you are not using duplicate detection, 
set this to false
                                      as duplicate detection requires 
applicationProperties to be parsed on the server. -->
         <!-- amqpMinLargeMessageSize: Determines how many bytes are considered 
large, so we start using files to hold their data.
                                       default: 102400, -1 would mean to 
disable large mesasge control -->         <!-- Note: If an acceptor needs to be 
compatible with HornetQ and/or Artemis 1.x clients add
                    "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to 
the acceptor url.
                    See https://issues.apache.org/jira/browse/ARTEMIS-1644 for 
more information. -->
         <!-- Acceptor for every supported protocol -->
         <acceptor 
name="artemis">tcp://192.168.201.211:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>
         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor 
name="amqp">tcp://192.168.201.211:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
         <!-- STOMP Acceptor. -->
         <acceptor 
name="stomp">tcp://192.168.201.211:61613?anycastPrefix=queue/;multicastPrefix=topic/;stompEnableMessageId=true;tcpSendBufferSize=10485760;tcpReceiveBufferSize=10485760;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=1024000</acceptor>
         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP 
for legacy HornetQ clients. -->
         <acceptor 
name="hornetq">tcp://192.168.201.211:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
         <!-- MQTT Acceptor -->
         <acceptor 
name="mqtt">tcp://192.168.201.211:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
      </acceptors>
      <cluster-user>admin</cluster-user>      
<cluster-password>admin</cluster-password>
      <cluster-connections>
         <cluster-connection name="my-cluster">
            <connector-ref>artemis</connector-ref>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>0</max-hops>
            <static-connectors>
               <connector-ref>node0</connector-ref>            
</static-connectors>
         </cluster-connection>
      </cluster-connections>
      <ha-policy>
         <replication>
            <slave/>
         </replication>
      </ha-policy>      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be 
auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            
<message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            
<message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
            <auto-delete-queues>false</auto-delete-queues>
            <auto-delete-addresses>false</auto-delete-addresses>
         </address-setting>
      </address-settings>      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>      </addresses>
      <broker-plugins>
         <broker-plugin 
class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>
            <property key="LOG_CONNECTION_EVENTS" value="true"/>
            <property key="LOG_SESSION_EVENTS" value="true"/>
            <property key="LOG_CONSUMER_EVENTS" value="true"/>
            <property key="LOG_DELIVERING_EVENTS" value="true"/>
            <property key="LOG_SENDING_EVENTS" value="true"/>
            <property key="LOG_INTERNAL_EVENTS" value="true"/>
         </broker-plugin>
      </broker-plugins>   </core>
</configuration> {code}
 

 

-----------broker-215.xml-----------

 
{code:java}
<?xml version='1.0'?><!--Licensed to the Apache Software Foundation (ASF) under 
oneor more contributor license agreements.  See the NOTICE filedistributed with 
this work for additional informationregarding copyright ownership.  The ASF 
licenses this fileto you under the Apache License, Version 2.0 (the"License"); 
you may not use this file except in compliancewith the License.  You may obtain 
a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,software distributed 
under the License is distributed on an"AS IS" BASIS, WITHOUT WARRANTIES OR 
CONDITIONS OF ANYKIND, either express or implied.  See the License for 
thespecific language governing permissions and limitationsunder the License.-->
<configuration xmlns="urn:activemq"               
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"               
xmlns:xi="http://www.w3.org/2001/XInclude"               
xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         
xsi:schemaLocation="urn:activemq:core ">
      <name>master</name>

      <persistence-enabled>true</persistence-enabled>
      <thread-pool-max-size>80</thread-pool-max-size>
      <!-- this could be ASYNCIO, MAPPED, NIO           ASYNCIO: Linux Libaio   
        MAPPED: mmap files           NIO: Plain Java Files       -->      
<journal-type>ASYNCIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
            <!-- if you want to retain your journal uncomment this following 
configuration.
      This will allow your system to keep 7 days of your data, up to 10G. Tweak 
it accordingly to your use case and capacity.
      it is recommended to use a separate storage unit from the journal for 
performance considerations.
      <journal-retention-directory period="7" unit="DAYS" 
storage-limit="10G">data/retention</journal-retention-directory>
      You can also enable retention by using the argument journal-retention on 
the `artemis create` command -->


      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-device-block-size>4096</journal-device-block-size>
      <journal-file-size>10M</journal-file-size>            <!--       This 
value was determined through a calculation.       Your system could perform 
13.89 writes per millisecond       on the current journal configuration.       
That translates as a sync write every 72000 nanoseconds.
       Note: If you specify 0 the system will perform writes directly to the 
disk.             We recommend this to be 0 if you are using journalType=MAPPED 
and journal-datasync=false.      -->      
<journal-buffer-timeout>72000</journal-buffer-timeout>

      <!--        When using ASYNCIO, this will determine the writing queue 
depth for libaio.       -->      <journal-max-io>4096</journal-max-io>      
<!--        You can verify the network health of a particular NIC by specifying 
the <network-check-NIC> element.         
<network-check-NIC>theNicName</network-check-NIC>        -->
      <!--        Use this to use an HTTP server to validate the network        
 <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
      <!-- <network-check-period>10000</network-check-period> -->      <!-- 
<network-check-timeout>1000</network-check-timeout> -->
      <!-- this is a comma separated list, no spaces, just DNS or IPs           
it should accept IPV6
           Warning: Make sure you understand your network topology as this is 
meant to validate if your network is valid.                    Using IPs that 
could eventually disappear or be partially visible may defeat the purpose.      
              You can use a list of multiple IPs, and if any successful ping 
will make the server OK to continue running -->      <!-- 
<network-check-list>10.0.0.1</network-check-list> -->
      <!-- use this to customize the ping used for ipv4 addresses -->      <!-- 
<network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> -->
      <!-- use this to customize the ping used for ipv6 addresses -->      <!-- 
<network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> -->

      <connectors>            <!-- Connector used to be announced through 
cluster connections and notifications -->            <connector 
name="artemis">tcp://192.168.201.215:61616</connector>            <connector 
name = "node0">tcp://192.168.201.211:61616</connector>      </connectors>

      <!-- how often we are looking for how many bytes are being used on the 
disk in ms -->      <disk-scan-period>5000</disk-scan-period>
      <!-- once the disk hits this limit the system will block, or close the 
connection in certain protocols           that won't support flow control. -->  
    <max-disk-usage>90</max-disk-usage>
      <!-- should the broker detect dead locks and other issues -->      
<critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
            <page-sync-timeout>420000</page-sync-timeout>

            <!-- the system will enter into page mode once you hit this limit.  
         This is an estimate in bytes of how much the messages are using in 
memory
            The system will use half of the available memory (-Xmx) by default 
for the global-max-size.            You may specify a different value here if 
you need to customize it to your needs.
            <global-max-size>100Mb</global-max-size>
      -->
      <acceptors>
         <!-- useEpoll means: it will use Netty epoll if you are on a system 
(Linux) that supports it -->         <!-- amqpCredits: The number of credits 
sent to AMQP producers -->         <!-- amqpLowCredits: The server will send 
the # credits specified at amqpCredits at this low mark -->         <!-- 
amqpDuplicateDetection: If you are not using duplicate detection, set this to 
false                                      as duplicate detection requires 
applicationProperties to be parsed on the server. -->         <!-- 
amqpMinLargeMessageSize: Determines how many bytes are considered large, so we 
start using files to hold their data.                                       
default: 102400, -1 would mean to disable large mesasge control -->
         <!-- Note: If an acceptor needs to be compatible with HornetQ and/or 
Artemis 1.x clients add                    
"anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.      
              See https://issues.apache.org/jira/browse/ARTEMIS-1644 for more 
information. -->

         <!-- Acceptor for every supported protocol -->         <acceptor 
name="artemis">tcp://192.168.201.215:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>
         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.--> 
        <acceptor 
name="amqp">tcp://192.168.201.215:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
         <!-- STOMP Acceptor. -->         <acceptor 
name="stomp">tcp://192.168.201.215:61613?anycastPrefix=queue/;multicastPrefix=topic/;stompEnableMessageId=true;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=1024000</acceptor>
         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP 
for legacy HornetQ clients. -->         <acceptor 
name="hornetq">tcp://192.168.201.215:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
         <!-- MQTT Acceptor -->         <acceptor 
name="mqtt">tcp://192.168.201.215:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
      </acceptors>

      <cluster-user>admin</cluster-user>
      <cluster-password>admin</cluster-password>      <cluster-connections>     
    <cluster-connection name="my-cluster">            
<connector-ref>artemis</connector-ref>            
<message-load-balancing>ON_DEMAND</message-load-balancing>            
<max-hops>0</max-hops>            <static-connectors>               
<connector-ref>node0</connector-ref>
            </static-connectors>         </cluster-connection>      
</cluster-connections>

      <ha-policy>         <replication>            <master>               
<vote-on-replication-failure>true</vote-on-replication-failure>               
<check-for-live-server>true</check-for-live-server>            </master>        
 </replication>      </ha-policy>
      <security-settings>         <security-setting match="#">            
<permission type="createNonDurableQueue" roles="amq"/>            <permission 
type="deleteNonDurableQueue" roles="amq"/>            <permission 
type="createDurableQueue" roles="amq"/>            <permission 
type="deleteDurableQueue" roles="amq"/>            <permission 
type="createAddress" roles="amq"/>            <permission type="deleteAddress" 
roles="amq"/>            <permission type="consume" roles="amq"/>            
<permission type="browse" roles="amq"/>            <permission type="send" 
roles="amq"/>            <!-- we need this otherwise ./artemis data imp 
wouldn't work -->            <permission type="manage" roles="amq"/>         
</security-setting>      </security-settings>
      <address-settings>         <!-- if you define auto-create on certain 
queues, management has to be auto-create -->         <address-setting 
match="activemq.management#">            
<dead-letter-address>DLQ</dead-letter-address>            
<expiry-address>ExpiryQueue</expiry-address>            
<redelivery-delay>0</redelivery-delay>            <!-- with -1 only the 
global-max-size is in use for limiting -->            
<max-size-bytes>-1</max-size-bytes>            
<message-counter-history-day-limit>10</message-counter-history-day-limit>       
     <address-full-policy>PAGE</address-full-policy>            
<auto-create-queues>true</auto-create-queues>            
<auto-create-addresses>true</auto-create-addresses>            
<auto-create-jms-queues>true</auto-create-jms-queues>            
<auto-create-jms-topics>true</auto-create-jms-topics>         
</address-setting>         <!--default for catch all-->         
<address-setting match="#">            
<dead-letter-address>DLQ</dead-letter-address>            
<expiry-address>ExpiryQueue</expiry-address>            
<redelivery-delay>0</redelivery-delay>            <!-- with -1 only the 
global-max-size is in use for limiting -->            
<max-size-bytes>-1</max-size-bytes>            
<message-counter-history-day-limit>10</message-counter-history-day-limit>       
     <address-full-policy>PAGE</address-full-policy>            
<auto-create-queues>true</auto-create-queues>            
<auto-create-addresses>true</auto-create-addresses>            
<auto-create-jms-queues>true</auto-create-jms-queues>            
<auto-create-jms-topics>true</auto-create-jms-topics>            
<auto-delete-queues>false</auto-delete-queues>            
<auto-delete-addresses>false</auto-delete-addresses>         </address-setting> 
     </address-settings>
      <addresses>         <address name="DLQ">            <anycast>             
  <queue name="DLQ" />            </anycast>         </address>         
<address name="ExpiryQueue">            <anycast>               <queue 
name="ExpiryQueue" />            </anycast>         </address>
      </addresses>

      <broker-plugins>         <broker-plugin 
class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
            <property key="LOG_ALL_EVENTS" value="true"/>            <property 
key="LOG_CONNECTION_EVENTS" value="true"/>            <property 
key="LOG_SESSION_EVENTS" value="true"/>            <property 
key="LOG_CONSUMER_EVENTS" value="true"/>            <property 
key="LOG_DELIVERING_EVENTS" value="true"/>            <property 
key="LOG_SENDING_EVENTS" value="true"/>            <property 
key="LOG_INTERNAL_EVENTS" value="true"/>         </broker-plugin>      
</broker-plugins>
   </core></configuration>
 {code}
 

 


> set the filter expression on the queues under the same address, queues 
> receive a different number of messages
> -------------------------------------------------------------------------------------------------------------
>
>                 Key: ARTEMIS-4671
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-4671
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 2.19.1
>         Environment: OS Version: 5.14.0-70.13.1.el9_0.x86_64
> JDK Version: 
> java version "1.8.0_281"
> Java(TM) SE Runtime Environment (build 1.8.0_281-b09)
> Java HotSpot(TM) 64-Bit Server VM (build 25.281-b09, mixed mode)
> Web Server: Apache Tomcat/9.0.35
> Artemis: 2.19.1
>  
>            Reporter: Ke Xu
>            Priority: Major
>         Attachments: broker-211.xml, broker-215.xml, 
> image-2024-03-06-09-48-41-501.png
>
>
> I am experiencing an issue with ActiveMQ Artemis. I created an address named 
> IN.ADDRESS with the address mode type set to "multiple". We also created two 
> queues (IN.ADDRESS.Q1 and IN.ADDRESS.Q2) and set the filter expression on the 
> queues using the CORE API. Regardless of whether the filter is the same or 
> not, for example, when we send 2000 messages to the address that do not meet 
> the filter, I am sure the messages are not being filtered. However, the 
> queues receive a different number of messages, which suggests that some 
> messages may be lost in the routing process.If I remove the filter expression 
> on one queue, the message count is correct.
> Here is my code for create queue: 
> {code:java}
>     public Queue outFss1Queue() {
>         try {
>             String filter = 
> ConvertUtils.genFilter(staticProperties.getServiceCode(), 
> SsConstant.XML_MSG_TYPE_FSS1);
>             SimpleString str = new 
> SimpleString(staticProperties.getFss1InQueue());
>             ClientSession.QueueQuery query = session.queueQuery(str);
>             QueueConfiguration queueConf = new 
> QueueConfiguration(staticProperties.getFss1InQueue());
>             queueConf.setAddress(staticProperties.getFss1InTopic());
>             queueConf.setDurable(true);
>             queueConf.setAutoCreated(true);
>             queueConf.setAutoCreateAddress(true);
>             queueConf.setRoutingType(RoutingType.MULTICAST);
>             queueConf.setFilterString(filter);
>             if (!query.isExists()) {
>                 queueConf.setEnabled(false);
>                 session.createQueue(queueConf);
>                 session.start();
>             } else {
>                 ClientRequestor requestor = new ClientRequestor(session, 
> "activemq.management");
>                 ClientMessage message = session.createMessage(false);
>                 ManagementHelper.putOperationInvocation(message, 
> ResourceNames.BROKER, "updateQueue", queueConf.toJSON());
>                 session.start();
>                 ClientMessage response = requestor.request(message);
>                 Object resResult = ManagementHelper.getResult(response);
>                 requestor.close();
>             }
>         } catch (Exception e) {
>             log.error("Init fss1 queue failed, error:", e);
>             throw new RuntimeException("Init fss1 queue failed");
>         }
>         return new ActiveMQQueue(staticProperties.getFss1InQueue());
>     } {code}
> Here is my filter expression on the queue. In this case, my queues have the 
> same filter expression.
> {noformat}
> XPATH '/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]|/IMFRoot/SysInfo/OperationMode[text()="DEL"][/IMFRoot/Data/PrimaryKey/FlightKey/FlightDirection[text() = "D" or @OldValue = "D"]]' {noformat}
> Here is my  [^broker-211.xml]  configuration file. We have two servers in the 
> Artemis cluster, so we have two broker.xml files. I have upload my 
> configuration as an attachment.
> Here is my  [^broker-215.xml] .
>  
> {code:java}
> <?xml version='1.0'?><!--Licensed to the Apache Software Foundation (ASF) 
> under oneor more contributor license agreements.  See the NOTICE 
> filedistributed with this work for additional informationregarding copyright 
> ownership.  The ASF licenses this fileto you under the Apache License, 
> Version 2.0 (the"License"); you may not use this file except in 
> compliancewith the License.  You may obtain a copy of the License at
>   http://www.apache.org/licenses/LICENSE-2.0
> Unless required by applicable law or agreed to in writing,software 
> distributed under the License is distributed on an"AS IS" BASIS, WITHOUT 
> WARRANTIES OR CONDITIONS OF ANYKIND, either express or implied.  See the 
> License for thespecific language governing permissions and limitationsunder 
> the License.-->
> <configuration xmlns="urn:activemq"               
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"               
> xmlns:xi="http://www.w3.org/2001/XInclude"               
> xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
>    <core xmlns="urn:activemq:core" 
> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         
> xsi:schemaLocation="urn:activemq:core ">
>       <name>master</name>
>       <persistence-enabled>true</persistence-enabled>
>       <thread-pool-max-size>80</thread-pool-max-size>
>       <!-- this could be ASYNCIO, MAPPED, NIO           ASYNCIO: Linux Libaio 
>           MAPPED: mmap files           NIO: Plain Java Files       -->      
> <journal-type>ASYNCIO</journal-type>
>       <paging-directory>data/paging</paging-directory>
>       <bindings-directory>data/bindings</bindings-directory>
>       <journal-directory>data/journal</journal-directory>
>       <large-messages-directory>data/large-messages</large-messages-directory>
>             <!-- if you want to retain your journal uncomment this following 
> configuration.
>       This will allow your system to keep 7 days of your data, up to 10G. 
> Tweak it accordingly to your use case and capacity.
>       it is recommended to use a separate storage unit from the journal for 
> performance considerations.
>       <journal-retention-directory period="7" unit="DAYS" 
> storage-limit="10G">data/retention</journal-retention-directory>
>       You can also enable retention by using the argument journal-retention 
> on the `artemis create` command -->
>       <journal-datasync>true</journal-datasync>
>       <journal-min-files>2</journal-min-files>
>       <journal-pool-files>10</journal-pool-files>
>       <journal-device-block-size>4096</journal-device-block-size>
>       <journal-file-size>10M</journal-file-size>            <!--       This 
> value was determined through a calculation.       Your system could perform 
> 13.89 writes per millisecond       on the current journal configuration.      
>  That translates as a sync write every 72000 nanoseconds.
>        Note: If you specify 0 the system will perform writes directly to the 
> disk.             We recommend this to be 0 if you are using 
> journalType=MAPPED and journal-datasync=false.      -->      
> <journal-buffer-timeout>72000</journal-buffer-timeout>
>       <!--        When using ASYNCIO, this will determine the writing queue 
> depth for libaio.       -->      <journal-max-io>4096</journal-max-io>      
> <!--        You can verify the network health of a particular NIC by 
> specifying the <network-check-NIC> element.         
> <network-check-NIC>theNicName</network-check-NIC>        -->
>       <!--        Use this to use an HTTP server to validate the network      
>    <network-check-URL-list>http://www.apache.org</network-check-URL-list> -->
>       <!-- <network-check-period>10000</network-check-period> -->      <!-- 
> <network-check-timeout>1000</network-check-timeout> -->
>       <!-- this is a comma separated list, no spaces, just DNS or IPs         
>   it should accept IPV6
>            Warning: Make sure you understand your network topology as this is 
> meant to validate if your network is valid.                    Using IPs that 
> could eventually disappear or be partially visible may defeat the purpose.    
>                 You can use a list of multiple IPs, and if any successful 
> ping will make the server OK to continue running -->      <!-- 
> <network-check-list>10.0.0.1</network-check-list> -->
>       <!-- use this to customize the ping used for ipv4 addresses -->      
> <!-- <network-check-ping-command>ping -c 1 -t %d 
> %s</network-check-ping-command> -->
>       <!-- use this to customize the ping used for ipv6 addresses -->      
> <!-- <network-check-ping6-command>ping6 -c 1 
> %2$s</network-check-ping6-command> -->
>       <connectors>            <!-- Connector used to be announced through 
> cluster connections and notifications -->            <connector 
> name="artemis">tcp://192.168.201.215:61616</connector>            <connector 
> name = "node0">tcp://192.168.201.211:61616</connector>      </connectors>
>       <!-- how often we are looking for how many bytes are being used on the 
> disk in ms -->      <disk-scan-period>5000</disk-scan-period>
>       <!-- once the disk hits this limit the system will block, or close the 
> connection in certain protocols           that won't support flow control. 
> -->      <max-disk-usage>90</max-disk-usage>
>       <!-- should the broker detect dead locks and other issues -->      
> <critical-analyzer>true</critical-analyzer>
>       <critical-analyzer-timeout>120000</critical-analyzer-timeout>
>       <critical-analyzer-check-period>60000</critical-analyzer-check-period>
>       <critical-analyzer-policy>HALT</critical-analyzer-policy>
>             <page-sync-timeout>420000</page-sync-timeout>
>             <!-- the system will enter into page mode once you hit this 
> limit.           This is an estimate in bytes of how much the messages are 
> using in memory
>             The system will use half of the available memory (-Xmx) by 
> default for the global-max-size.            You may specify a different value 
> here if you need to customize it to your needs.
>             <global-max-size>100Mb</global-max-size>
>       -->
>       <acceptors>
>          <!-- useEpoll means: it will use Netty epoll if you are on a system 
> (Linux) that supports it -->         <!-- amqpCredits: The number of credits 
> sent to AMQP producers -->         <!-- amqpLowCredits: The server will send 
> the # credits specified at amqpCredits at this low mark -->         <!-- 
> amqpDuplicateDetection: If you are not using duplicate detection, set this to 
> false                                      as duplicate detection requires 
> applicationProperties to be parsed on the server. -->         <!-- 
> amqpMinLargeMessageSize: Determines how many bytes are considered large, so 
> we start using files to hold their data.                                      
>  default: 102400, -1 would mean to disable large mesasge control -->
>          <!-- Note: If an acceptor needs to be compatible with HornetQ and/or 
> Artemis 1.x clients add                    
> "anycastPrefix=jms.queue.;multicastPrefix=jms.topic." to the acceptor url.    
>                 See https://issues.apache.org/jira/browse/ARTEMIS-1644 for 
> more information. -->
>          <!-- Acceptor for every supported protocol -->         <acceptor 
> name="artemis">tcp://192.168.201.215:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>
>          <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP 
> traffic.-->         <acceptor 
> name="amqp">tcp://192.168.201.215:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
>          <!-- STOMP Acceptor. -->         <acceptor 
> name="stomp">tcp://192.168.201.215:61613?anycastPrefix=queue/;multicastPrefix=topic/;stompEnableMessageId=true;tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true;stompMinLargeMessageSize=1024000</acceptor>
>          <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP 
> for legacy HornetQ clients. -->         <acceptor 
> name="hornetq">tcp://192.168.201.215:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
>          <!-- MQTT Acceptor -->         <acceptor 
> name="mqtt">tcp://192.168.201.215:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
>       </acceptors>
>       <cluster-user>admin</cluster-user>
>       <cluster-password>admin</cluster-password>      <cluster-connections>   
>       <cluster-connection name="my-cluster">            
> <connector-ref>artemis</connector-ref>            
> <message-load-balancing>ON_DEMAND</message-load-balancing>            
> <max-hops>0</max-hops>            <static-connectors>               
> <connector-ref>node0</connector-ref>
>             </static-connectors>         </cluster-connection>      
> </cluster-connections>
>       <ha-policy>         <replication>            <master>               
> <vote-on-replication-failure>true</vote-on-replication-failure>             
> <check-for-live-server>true</check-for-live-server>            </master>      
>    </replication>      </ha-policy>
>       <security-settings>         <security-setting match="#">            
> <permission type="createNonDurableQueue" roles="amq"/>            <permission 
> type="deleteNonDurableQueue" roles="amq"/>            <permission 
> type="createDurableQueue" roles="amq"/>            <permission 
> type="deleteDurableQueue" roles="amq"/>            <permission 
> type="createAddress" roles="amq"/>            <permission 
> type="deleteAddress" roles="amq"/>            <permission type="consume" 
> roles="amq"/>            <permission type="browse" roles="amq"/>            
> <permission type="send" roles="amq"/>            <!-- we need this otherwise 
> ./artemis data imp wouldn't work -->            <permission type="manage" 
> roles="amq"/>         </security-setting>      </security-settings>
>       <address-settings>         <!-- if you define auto-create on certain 
> queues, management has to be auto-create -->         <address-setting 
> match="activemq.management#">            
> <dead-letter-address>DLQ</dead-letter-address>            
> <expiry-address>ExpiryQueue</expiry-address>            
> <redelivery-delay>0</redelivery-delay>            <!-- with -1 only the 
> global-max-size is in use for limiting -->            
> <max-size-bytes>-1</max-size-bytes>            
> <message-counter-history-day-limit>10</message-counter-history-day-limit>     
>        <address-full-policy>PAGE</address-full-policy>            
> <auto-create-queues>true</auto-create-queues>            
> <auto-create-addresses>true</auto-create-addresses>            
> <auto-create-jms-queues>true</auto-create-jms-queues>            
> <auto-create-jms-topics>true</auto-create-jms-topics>         
> </address-setting>         <!--default for catch all-->         
> <address-setting match="#">            
> <dead-letter-address>DLQ</dead-letter-address>            
> <expiry-address>ExpiryQueue</expiry-address>            
> <redelivery-delay>0</redelivery-delay>            <!-- with -1 only the 
> global-max-size is in use for limiting -->            
> <max-size-bytes>-1</max-size-bytes>            
> <message-counter-history-day-limit>10</message-counter-history-day-limit>     
>        <address-full-policy>PAGE</address-full-policy>            
> <auto-create-queues>true</auto-create-queues>            
> <auto-create-addresses>true</auto-create-addresses>            
> <auto-create-jms-queues>true</auto-create-jms-queues>            
> <auto-create-jms-topics>true</auto-create-jms-topics>            
> <auto-delete-queues>false</auto-delete-queues>            
> <auto-delete-addresses>false</auto-delete-addresses>         
> </address-setting>      </address-settings>
>       <addresses>         <address name="DLQ">            <anycast>           
>     <queue name="DLQ" />            </anycast>         </address>         
> <address name="ExpiryQueue">            <anycast>               <queue 
> name="ExpiryQueue" />            </anycast>         </address>
>       </addresses>
>       <broker-plugins>         <broker-plugin 
> class-name="org.apache.activemq.artemis.core.server.plugin.impl.LoggingActiveMQServerPlugin">
>             <property key="LOG_ALL_EVENTS" value="true"/>            
> <property key="LOG_CONNECTION_EVENTS" value="true"/>            <property 
> key="LOG_SESSION_EVENTS" value="true"/>            <property 
> key="LOG_CONSUMER_EVENTS" value="true"/>            <property 
> key="LOG_DELIVERING_EVENTS" value="true"/>            <property 
> key="LOG_SENDING_EVENTS" value="true"/>            <property 
> key="LOG_INTERNAL_EVENTS" value="true"/>         </broker-plugin>      
> </broker-plugins>
>    </core></configuration>
>  {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
