[jira] [Created] (ARTEMIS-5062) ClusterConnectionControl has wrong annotation

2024-09-23 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-5062:
---

 Summary: ClusterConnectionControl has wrong annotation
 Key: ARTEMIS-5062
 URL: https://issues.apache.org/jira/browse/ARTEMIS-5062
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: management
Affects Versions: 2.37.0
Reporter: Howard Gao
Assignee: Howard Gao


The ClusterConnectionControl.getBridgeMetrics() method is annotated with 
@Attribute, but it should be annotated with @Operation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Updated] (ARTEMIS-4924) Proper handling of invalid messages in SNF queues

2024-08-22 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-4924:

Summary: Proper handling of invalid messages in SNF queues  (was: Do not 
allow sending messages directly to store-and-forward queues)

> Proper handling of invalid messages in SNF queues
> -
>
> Key: ARTEMIS-4924
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4924
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Clustering
>Affects Versions: 2.35.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Currently, if a poison message that is missing queue information ends up in 
> the SF queue, the broker logs an error like:
> AMQ222110: no queue IDs defined...
> but the message remains in the queue, with the result that the bridge 
> continuously reconnects, encounters the failure, and then evicts the bridge 
> consumer.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-5002) AMQP producer does not unblock when disk space is freed

2024-08-19 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-5002:
---

 Summary: AMQP producer does not unblock when disk space is freed
 Key: ARTEMIS-5002
 URL: https://issues.apache.org/jira/browse/ARTEMIS-5002
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP, Broker
Affects Versions: 2.37.0
Reporter: Howard Gao
Assignee: Howard Gao


Prerequisites:

address-full-policy=FAIL or DROP
The system storage usage should exceed max-disk-usage before the AMQP producer 
sends messages.
At this point, the producer threads are blocked, waiting for the storage 
utilization to decrease.
Once the disk usage comes back below max-disk-usage, the producer threads never 
unblock and stay in the sending state forever.
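
A minimal sketch, under assumptions, of the behaviour the bug report expects: producer threads blocked while the disk is over the limit should be released as soon as a later measurement drops below max-disk-usage. The class, field, and method names below are illustrative, not the broker's actual internals.

{code}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class DiskUsageGate {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition belowLimit = lock.newCondition();
    private final double maxDiskUsage;      // e.g. 0.90 for 90%
    private volatile double currentUsage;

    public DiskUsageGate(double maxDiskUsage) {
        this.maxDiskUsage = maxDiskUsage;
    }

    /** Called periodically by the disk monitor with a fresh measurement. */
    public void onMeasurement(double usage) {
        lock.lock();
        try {
            currentUsage = usage;
            if (usage < maxDiskUsage) {
                // Crucial step: wake up every producer thread that blocked
                // while the disk was over the limit.
                belowLimit.signalAll();
            }
        } finally {
            lock.unlock();
        }
    }

    /** Producer threads call this before sending; returns false on timeout. */
    public boolean awaitSpace(long timeout, TimeUnit unit) throws InterruptedException {
        lock.lock();
        try {
            long nanos = unit.toNanos(timeout);
            while (currentUsage >= maxDiskUsage) {
                if (nanos <= 0) {
                    return false;
                }
                nanos = belowLimit.awaitNanos(nanos);
            }
            return true;
        } finally {
            lock.unlock();
        }
    }
}
{code}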



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Commented] (ARTEMIS-4973) pageSizeBytes/pageLimitBytes combination can cause Address full

2024-08-05 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870986#comment-17870986
 ] 

Howard Gao commented on ARTEMIS-4973:
-

I think a reasonable improvement would be:

If pageSizeBytes > pageLimitBytes, the addressSettings should be rejected by 
the broker. The broker logs a warning and an exception giving the details, and 
the original settings are left unchanged. Sending more messages should still be 
allowed as long as there is disk space left for paging.
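
A minimal sketch of the proposed check, together with the integer arithmetic that makes the current behaviour surprising: with page-limit-bytes=1048576 and page-size=4048576, the estimated page count 1048576 / 4048576 truncates to 0, so the single page already on disk makes the address look full. The class and method names are illustrative, not the broker's actual API.

{code}
public class PageSettingsCheck {

    /** Proposed check: reject settings where a single page could not fit under the limit. */
    static void validate(long pageSizeBytes, long pageLimitBytes) {
        if (pageLimitBytes > 0 && pageSizeBytes > pageLimitBytes) {
            throw new IllegalArgumentException(
                "pageSizeBytes (" + pageSizeBytes + ") must not exceed pageLimitBytes (" + pageLimitBytes + ")");
        }
    }

    public static void main(String[] args) {
        long pageLimitBytes = 1048576L;   // from the reproducer
        long pageSizeBytes  = 4048576L;   // increased after a page already exists

        // Integer division truncates: 1048576 / 4048576 == 0 estimated pages,
        // so one existing page already exceeds the estimated maximum.
        long estimatedMaxPages = pageLimitBytes / pageSizeBytes;
        System.out.println("estimated max pages = " + estimatedMaxPages);

        validate(pageSizeBytes, pageLimitBytes);   // would throw with these values
    }
}
{code}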

> pageSizeBytes/pageLimitBytes combination can cause Address full
> ---
>
> Key: ARTEMIS-4973
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4973
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.36.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>
> There is an edge case where adjusting pageSizeBytes can cause "Address is 
> full" errors, even though the address is not full.
> Do we need to enforce that pageSizeBytes <= pageLimitBytes?
> Reproducer steps:
> Step 1: configure pageSizeBytes == pageLimitBytes == 1mb:
> $ cat my.broker.properties 
> addressSettings."FOO".pageSizeBytes=1048576
> addressSettings."FOO".pageLimitBytes=1048576
> addressSettings."FOO".maxSizeBytes=1048576
> addressSettings."FOO".pageFullMessagePolicy=FAIL
> addressConfigurations."FOO".routingTypes=MULTICAST
> addressConfigurations."FOO".queueConfigs."FOO".name=FOO
> addressConfigurations."FOO".queueConfigs."FOO".address=FOO
> addressConfigurations."FOO".queueConfigs."FOO".routingType=MULTICAST
> Step 2: run broker
> bin/artemis run --properties my.broker.properties
> Step 3: produce 15 messages
> $ bin/artemis producer --user admin --password admin --destination 
> topic://FOO --message-count 15 --message-size 10 --protocol amqp
> Step 4: observe paging started on the destination (but the page size is 
> 328kb, can hold more messages)
> INFO  [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on 
> address 'FOO'; size=1107003 bytes (11 messages); maxSize=1048576 bytes (-1 
> messages); globalSize=1107003 bytes (11 messages); globalMaxSize=1073741824 
> bytes (-1 messages);
> Step 5: stop broker, increase page size
>  cat my.broker.properties 
> addressSettings."FOO".pageSizeBytes=4048576
> ...
> Step 6: run broker, observe logs show paging warning
> 2024-06-25 15:23:47,135 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ224123: Address FOO has more pages than allowed. System currently has 1 
> pages, while the estimated max number of pages is 0 based on the 
> page-limit-bytes (1048576) / page-size (4048576)
> Step 7: try to produce a message, address full
> WARN  [org.apache.activemq.artemis.protocol.amqp.broker.AMQPSessionCallback] 
> AMQ229102: Address "FOO" is full.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-4973) pageSizeBytes/pageLimitBytes combination can cause Address full

2024-08-05 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4973:
---

 Summary: pageSizeBytes/pageLimitBytes combination can cause 
Address full
 Key: ARTEMIS-4973
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4973
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.36.0
Reporter: Howard Gao
Assignee: Howard Gao


There is an edge case where adjusting pageSizeBytes can cause "Address is full" 
errors, even though the address is not full.

Do we need to enforce that pageSizeBytes <= pageLimitBytes?

Reproducer steps:

Step 1: configure pageSizeBytes == pageLimitBytes == 1mb:

$ cat my.broker.properties 
addressSettings."FOO".pageSizeBytes=1048576
addressSettings."FOO".pageLimitBytes=1048576
addressSettings."FOO".maxSizeBytes=1048576
addressSettings."FOO".pageFullMessagePolicy=FAIL
addressConfigurations."FOO".routingTypes=MULTICAST
addressConfigurations."FOO".queueConfigs."FOO".name=FOO
addressConfigurations."FOO".queueConfigs."FOO".address=FOO
addressConfigurations."FOO".queueConfigs."FOO".routingType=MULTICAST
Step 2: run broker

bin/artemis run --properties my.broker.properties
Step 3: produce 15 messages

$ bin/artemis producer --user admin --password admin --destination topic://FOO 
--message-count 15 --message-size 10 --protocol amqp



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Updated] (ARTEMIS-4973) pageSizeBytes/pageLimitBytes combination can cause Address full

2024-08-05 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-4973:

Description: 
There is an edge case where adjusting pageSizeBytes can cause "Address is full" 
errors, even though the address is not full.

Do we need to enforce that pageSizeBytes <= pageLimitBytes?

Reproducer steps:

Step 1: configure pageSizeBytes == pageLimitBytes == 1mb:

$ cat my.broker.properties 
addressSettings."FOO".pageSizeBytes=1048576
addressSettings."FOO".pageLimitBytes=1048576
addressSettings."FOO".maxSizeBytes=1048576
addressSettings."FOO".pageFullMessagePolicy=FAIL
addressConfigurations."FOO".routingTypes=MULTICAST
addressConfigurations."FOO".queueConfigs."FOO".name=FOO
addressConfigurations."FOO".queueConfigs."FOO".address=FOO
addressConfigurations."FOO".queueConfigs."FOO".routingType=MULTICAST
Step 2: run broker

bin/artemis run --properties my.broker.properties
Step 3: produce 15 messages

$ bin/artemis producer --user admin --password admin --destination topic://FOO 
--message-count 15 --message-size 10 --protocol amqp

Step 4: observe paging started on the destination (but the page size is 328kb, 
can hold more messages)

INFO  [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on 
address 'FOO'; size=1107003 bytes (11 messages); maxSize=1048576 bytes (-1 
messages); globalSize=1107003 bytes (11 messages); globalMaxSize=1073741824 
bytes (-1 messages);
Step 5: stop broker, increase page size

 cat my.broker.properties 
addressSettings."FOO".pageSizeBytes=4048576
...
Step 6: run broker, observe logs show paging warning

2024-06-25 15:23:47,135 WARN  [org.apache.activemq.artemis.core.server] 
AMQ224123: Address FOO has more pages than allowed. System currently has 1 
pages, while the estimated max number of pages is 0 based on the 
page-limit-bytes (1048576) / page-size (4048576)
Step 7: try to produce a message, address full

WARN  [org.apache.activemq.artemis.protocol.amqp.broker.AMQPSessionCallback] 
AMQ229102: Address "FOO" is full.

  was:
There is an edge case where adjusting pageSizeBytes can cause "Address is full" 
errors, even though the address is not full.

Do we need to enforce that pageSizeBytes <= pageLimitBytes?

Reproducer steps:

Step 1: configure pageSizeBytes == pageLimitBytes == 1mb:

$ cat my.broker.properties 
addressSettings."FOO".pageSizeBytes=1048576
addressSettings."FOO".pageLimitBytes=1048576
addressSettings."FOO".maxSizeBytes=1048576
addressSettings."FOO".pageFullMessagePolicy=FAIL
addressConfigurations."FOO".routingTypes=MULTICAST
addressConfigurations."FOO".queueConfigs."FOO".name=FOO
addressConfigurations."FOO".queueConfigs."FOO".address=FOO
addressConfigurations."FOO".queueConfigs."FOO".routingType=MULTICAST
Step 2: run broker

bin/artemis run --properties my.broker.properties
Step 3: produce 15 messages

$ bin/artemis producer --user admin --password admin --destination topic://FOO 
--message-count 15 --message-size 10 --protocol amqp


> pageSizeBytes/pageLimitBytes combination can cause Address full
> ---
>
> Key: ARTEMIS-4973
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4973
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.36.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>
> There is an edge case where adjusting pageSizeBytes can cause "Address is 
> full" errors, even though the address is not full.
> Do we need to enforce that pageSizeBytes <= pageLimitBytes?
> Reproducer steps:
> Step 1: configure pageSizeBytes == pageLimitBytes == 1mb:
> $ cat my.broker.properties 
> addressSettings."FOO".pageSizeBytes=1048576
> addressSettings."FOO".pageLimitBytes=1048576
> addressSettings."FOO".maxSizeBytes=1048576
> addressSettings."FOO".pageFullMessagePolicy=FAIL
> addressConfigurations."FOO".routingTypes=MULTICAST
> addressConfigurations."FOO".queueConfigs."FOO".name=FOO
> addressConfigurations."FOO".queueConfigs."FOO".address=FOO
> addressConfigurations."FOO".queueConfigs."FOO".routingType=MULTICAST
> Step 2: run broker
> bin/artemis run --properties my.broker.properties
> Step 3: produce 15 messages
> $ bin/artemis producer --user admin --password admin --destination 
> topic://FOO --message-count 15 --message-size 10 --protocol amqp
> Step 4: observe paging started on the destination (but the page size is 
> 328kb, can hold more messages)
> INFO  [org.apache.activemq.artemis.core.server] AMQ222038: Starting paging on 
> address 'FOO'; size=1107003 bytes (11 messages); maxSize=1048576 bytes (-1 
> messages); globalSize=1107003 bytes (11 messages); globalMaxSize=1073741824 
> bytes (-1 messages);
> Step 5: stop broker, increase page size
>  cat my.broker.properties 
> addressSettings."FOO".pageSizeBytes=404

[jira] [Resolved] (ARTEMIS-4954) AddressControl.pause() can pause the snf queue

2024-07-29 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao resolved ARTEMIS-4954.
-
Resolution: Fixed

> AddressControl.pause() can pause the snf queue
> --
>
> Key: ARTEMIS-4954
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4954
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: management
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Configure 2 or more brokers in a load-balancing cluster 
> (message-load-balancing=ON_DEMAND for cluster connection, 
> redistribution-delay=0 for catch-all address).
> Start consumers (preferably ones with a throttled consume rate) on multiple 
> addresses on one broker.
> Start producers (with a higher production rate than the consumers) on the 
> opposite broker for the same addresses.
> Wait for a little bit of message backlog to occur.
> Pause one address on the broker where the producer is / was connected.
> We see a backlog develop in the SF queue and delivery to the consumer on the 
> other broker remains blocked, even after all the messages on the consumer 
> broker are consumed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-4959) moveMessages operation can move more messages than max messageCount

2024-07-29 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4959:
---

 Summary: moveMessages operation can move more messages than max 
messageCount
 Key: ARTEMIS-4959
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4959
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: management
Affects Versions: 2.35.0
Reporter: Howard Gao
Assignee: Howard Gao


When paging is in use and there are many messages in a queue, the following is observed:

$ bin/artemis producer --user admin --password admin --url 
tcp://localhost:61616 --message-count 1 --message-size 10240 --destination 
queue://FOO
...
$ curl -XPOST -H "Content-Type: application/json" -H "Origin: http://localhost" 
--user "admin:admin" -d 
'{"type":"exec","mbean":"org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,address=\"FOO\",subcomponent=queues,routing-type=\"anycast\",queue=\"FOO\"","operation":"moveMessages(int,java.lang.String,java.lang.String,boolean,int)","arguments":[500,"","DLQ",false,500]}'
...
 
{"request":{"mbean":"org.apache.activemq.artemis:address=\"FOO\",broker=\"0.0.0.0\",component=addresses,queue=\"FOO\",routing-type=\"anycast\",subcomponent=queues","arguments":[500,"","DLQ",false,500],"type":"exec","operation":"moveMessages(int,java.lang.String,java.lang.String,boolean,int)"},"value":8630,"timestamp":1718395680,"status":200}

Note that the messageCount for the moveMessages operation was 500, but in 
reality 8630 messages were moved (verified with queue stats). This does not 
seem to happen for queues with a low queue depth.
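
A minimal, generic sketch of the contract the operation is expected to honour: however deep the (possibly paged) queue is, stop once messageCount moves have been performed. The iterator and mover types are placeholders, not Artemis classes.

{code}
import java.util.Iterator;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class BoundedMover {

    /** Moves matching messages but never more than messageCount of them. */
    static <M> int moveMessages(Iterator<M> queue, Predicate<M> filter,
                                Consumer<M> moveToTarget, int messageCount) {
        int moved = 0;
        while (queue.hasNext() && moved < messageCount) {   // enforce the cap on every iteration
            M message = queue.next();
            if (filter.test(message)) {
                moveToTarget.accept(message);
                moved++;
            }
        }
        return moved;
    }

    public static void main(String[] args) {
        java.util.List<Integer> backlog = new java.util.ArrayList<>();
        for (int i = 0; i < 10_000; i++) backlog.add(i);     // deep backlog, like the paged FOO queue
        int moved = moveMessages(backlog.iterator(), m -> true, m -> { /* move to DLQ */ }, 500);
        System.out.println("moved = " + moved);              // prints 500, never 8630
    }
}
{code}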



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Updated] (ARTEMIS-4954) AddressControl.pause() can pause the snf queue

2024-07-25 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-4954:

Description: 
Configure 2 or more brokers in a load-balancing cluster 
(message-load-balancing=ON_DEMAND for cluster connection, redistribution-delay=0 
for catch-all address).

Start consumers (preferably ones with a throttled consume rate) on multiple 
addresses on one broker.

Start producers (with a higher production rate than the consumers) on the 
opposite broker for the same addresses.

Wait for a little bit of message backlog to occur.

Pause one address on the broker where the producer is / was connected.

We see a backlog develop in the SF queue and delivery to the consumer on the 
other broker remains blocked, even after all the messages on the consumer 
broker are consumed.
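
A minimal sketch of the behaviour the fix implies: when pausing every queue bound to an address, internal queues such as the cluster's store-and-forward (SNF) queue should be skipped so cross-node forwarding keeps flowing. The types below are simplified stand-ins, not the broker's binding classes.

{code}
import java.util.List;

public class AddressPause {

    interface Queue {
        String getName();
        boolean isInternalQueue();   // true for store-and-forward (SNF) queues
        void pause();
    }

    /** Pause user queues on an address while leaving internal SNF queues running. */
    static void pauseAddress(List<Queue> queuesBoundToAddress) {
        for (Queue queue : queuesBoundToAddress) {
            if (queue.isInternalQueue()) {
                // Skipping the SNF queue keeps cluster forwarding and redistribution alive.
                continue;
            }
            queue.pause();
        }
    }
}
{code}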

> AddressControl.pause() can pause the snf queue
> --
>
> Key: ARTEMIS-4954
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4954
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: management
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>
> Configure 2 or more brokers in a load-balancing cluster 
> (message-load-balancing=ON_DEMAND for cluster connection, 
> redistribution-delay=0 for catch-all address).
> Start consumers (preferably ones with a throttled consume rate) on multiple 
> addresses on one broker.
> Start producers (with a higher production rate than the consumers) on the 
> opposite broker for the same addresses.
> Wait for a little bit of message backlog to occur.
> Pause one address on the broker where the producer is / was connected.
> We see a backlog develop in the SF queue and delivery to the consumer on the 
> other broker remains blocked, even after all the messages on the consumer 
> broker are consumed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-4954) AddressControl.pause() can pause the snf queue

2024-07-25 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4954:
---

 Summary: AddressControl.pause() can pause the snf queue
 Key: ARTEMIS-4954
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4954
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: management
Reporter: Howard Gao
Assignee: Howard Gao






--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-4924) Do not allow sending messages directly to store-and-forward queues

2024-07-16 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4924:
---

 Summary: Do not allow sending messages directly to 
store-and-forward queues
 Key: ARTEMIS-4924
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4924
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Clustering
Affects Versions: 2.35.0
Reporter: Howard Gao
Assignee: Howard Gao


Currently, if a poison message that is missing queue information ends up in the 
SF queue, the broker logs an error like:

AMQ222110: no queue IDs defined...

but the message remains in the queue, with the result that the bridge 
continuously reconnects, encounters the failure, and then evicts the bridge 
consumer.
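
A minimal sketch of the handling the later summary change ("proper handling of invalid messages in SNF queues") points toward: instead of leaving a message with no queue-ID routing information in the SNF queue, where the bridge retries it forever, remove it and divert it somewhere terminal such as a dead-letter destination. The types are placeholders, not the bridge's real API.

{code}
public class SnfPoisonHandling {

    interface Message {
        Object getQueueIds();   // cluster routing info; null for a "poison" message
    }

    interface SnfQueue {
        void acknowledge(Message m);        // remove from the SNF queue
        void sendToDeadLetter(Message m);   // park it somewhere terminal
    }

    /** Returns true if the message can be forwarded, false if it was diverted. */
    static boolean handle(SnfQueue snfQueue, Message message) {
        if (message.getQueueIds() == null) {
            // AMQ222110 case: no queue IDs defined. Do not leave the message in place,
            // otherwise the bridge reconnects, fails, and evicts its consumer forever.
            snfQueue.sendToDeadLetter(message);
            snfQueue.acknowledge(message);
            return false;
        }
        return true;   // normal forwarding path
    }
}
{code}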



--
This message was sent by Atlassian Jira
(v8.20.10#820010)





[jira] [Created] (ARTEMIS-4576) ServerSessionImpl#updateProducerMetrics accesses large messages after they are routed

2024-01-18 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4576:
---

 Summary: ServerSessionImpl#updateProducerMetrics accesses large 
messages after they are routed
 Key: ARTEMIS-4576
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4576
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Howard Gao
Assignee: Howard Gao


The ServerSessionImpl#updateProducerMetrics method is called after a message is 
routed. If the message is a large message, it can be acked quickly, and its 
backing file will be closed before the method is called.
This bug causes random failures in the test 
org.apache.activemq.artemis.tests.integration.paging.MessagesExpiredPagingTest#testSendReceiveCORELarge
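
A minimal sketch of the ordering fix the description suggests: read whatever the metrics update needs (for a large message, its size, which is backed by a file) before routing, so that a fast ack closing the file afterwards cannot break the update. Names are illustrative, not the server's actual classes.

{code}
public class ProducerMetricsOrdering {

    interface Message {
        long getEncodeSize();   // for a large message this touches the backing file
    }

    static class ProducerMetrics {
        long messagesSent;
        long bytesSent;

        void record(long size) {
            messagesSent++;
            bytesSent += size;
        }
    }

    static void sendAndRecord(Message message, ProducerMetrics metrics, Runnable route) {
        // Capture the size first, while the large-message file is guaranteed to be open.
        long size = message.getEncodeSize();
        route.run();            // after routing, the message may be acked and its file closed
        metrics.record(size);   // use the captured value instead of touching the message again
    }
}
{code}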



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4570) filter not applied to all brokers in cluster

2024-01-16 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4570:
---

 Summary: filter not applied to all brokers in cluster
 Key: ARTEMIS-4570
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4570
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Clustering
Affects Versions: 2.31.2
Reporter: Howard Gao
Assignee: Howard Gao


When applying a new filter to a clustered queue, its remote binding doesn't get 
updated. The result is that when a message that doesn't match the filter is sent 
to the local queue, the filter works, i.e. the local consumer won't get the 
message. However, if there is a remote consumer, that message will be delivered 
to the remote consumer (ON_DEMAND policy).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (ARTEMIS-4189) NettyConnection.isSameTarget() should compare host names by IPs

2023-03-06 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-4189.
---
Resolution: Won't Fix

> NettyConnection.isSameTarget() should compare host names by IPs
> ---
>
> Key: ARTEMIS-4189
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4189
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> The method just compares the host names using string equality. However, when 
> one host name is in IP form (like "10.7.2.2"), it should try to resolve the 
> addresses of the host names and treat them as equal if one of the resolved IP 
> addresses matches. Otherwise it may return a wrong comparison result.
> Also, when comparing localhost, it should take care of the case where the host 
> is absent from the transport configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (ARTEMIS-4189) NettyConnection.isSameTarget() should compare host names by IPs

2023-02-28 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-4189:

Description: 
The method just compares the host names using string equality. However, when one 
host name is in IP form (like "10.7.2.2"), it should try to resolve the addresses 
of the host names and treat them as equal if one of the resolved IP addresses 
matches. Otherwise it may return a wrong comparison result.
Also, when comparing localhost, it should take care of the case where the host is 
absent from the transport configuration.


  was:
The method just compares the host names using string equality. However, when one 
host name is in IP form (like "10.7.2.2"), it should try to resolve the addresses 
of the host names and treat them as equal if one of the resolved IP addresses 
matches. Otherwise it may return a wrong comparison result.



> NettyConnection.isSameTarget() should compare host names by IPs
> ---
>
> Key: ARTEMIS-4189
> URL: https://issues.apache.org/jira/browse/ARTEMIS-4189
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.28.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>
> The method just compares the host names using string equality. However, when 
> one host name is in IP form (like "10.7.2.2"), it should try to resolve the 
> addresses of the host names and treat them as equal if one of the resolved IP 
> addresses matches. Otherwise it may return a wrong comparison result.
> Also, when comparing localhost, it should take care of the case where the host 
> is absent from the transport configuration.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-4189) NettyConnection.isSameTarget() should compare host names by IPs

2023-02-28 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-4189:
---

 Summary: NettyConnection.isSameTarget() should compare host names 
by IPs
 Key: ARTEMIS-4189
 URL: https://issues.apache.org/jira/browse/ARTEMIS-4189
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.28.0
Reporter: Howard Gao
Assignee: Howard Gao


The method just compares the host names using string equality. However, when one 
host name is in IP form (like "10.7.2.2"), it should try to resolve the addresses 
of the host names and treat them as equal if one of the resolved IP addresses 
matches. Otherwise it may return a wrong comparison result.
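
A minimal sketch of the comparison the summary asks for: when plain string equality fails, resolve both names and treat the targets as the same if any resolved address matches. InetAddress.getAllByName does the resolution; error handling is simplified, and the method name is illustrative.

{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class HostTargetComparison {

    /** True if the two host names refer to the same target, comparing by resolved IPs if needed. */
    static boolean isSameHost(String host1, String host2) {
        if (host1.equalsIgnoreCase(host2)) {
            return true;                      // fast path: identical names (e.g. both "10.7.2.2")
        }
        try {
            for (InetAddress a1 : InetAddress.getAllByName(host1)) {
                for (InetAddress a2 : InetAddress.getAllByName(host2)) {
                    if (a1.equals(a2)) {
                        return true;          // one name is an IP, the other resolves to it
                    }
                }
            }
        } catch (UnknownHostException e) {
            return false;                     // unresolvable names cannot be proven equal
        }
        return false;
    }
}
{code}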




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-3938) Adding logger-properties option to Artemis CLI create command

2022-08-14 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-3938:
---

 Summary: Adding logger-properties option to Artemis CLI create 
command
 Key: ARTEMIS-3938
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3938
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Configuration
Affects Versions: 2.24.0
Reporter: Howard Gao
Assignee: Howard Gao


This feature is to enable creating a broker instance with custom logging 
configuration from the CLI.
The command would look like:

artemis create broker0 --logger-properties 

The *logger properties file* is a Java properties file containing the broker's 
JBoss Logging properties, intended to update the default logging properties.

example of the properties file:

{noformat}
logger.handlers=CONSOLE
logger.org.apache.activemq.audit.base.handlers=CONSOLE
logger.org.apache.activemq.audit.base.level=WARN
logger.level=WARN
{noformat}

Once applied, these properties will be written into the logging.properties file 
of the created broker instance's config dir.
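
A minimal sketch, under assumptions, of the merge the option implies: load the instance's default logging.properties, overlay the user-supplied logger properties file, and write the result back into the instance's configuration directory. This is not the CLI's actual code; paths and the comment string are illustrative.

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class LoggerPropertiesMerge {

    static void apply(Path defaultLoggingProperties, Path userLoggerProperties) throws IOException {
        Properties merged = new Properties();
        try (InputStream in = Files.newInputStream(defaultLoggingProperties)) {
            merged.load(in);                          // start from the instance defaults
        }
        Properties overrides = new Properties();
        try (InputStream in = Files.newInputStream(userLoggerProperties)) {
            overrides.load(in);                       // e.g. logger.level=WARN, handlers, audit levels
        }
        merged.putAll(overrides);                     // user values win over defaults
        try (OutputStream out = Files.newOutputStream(defaultLoggingProperties)) {
            merged.store(out, "updated by --logger-properties");
        }
    }
}
{code}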




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (ARTEMIS-3392) Scale down would fail if target queue's id greater than max int

2021-07-20 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-3392:
---

 Summary: Scale down would fail if target queue's id greater than 
max int
 Key: ARTEMIS-3392
 URL: https://issues.apache.org/jira/browse/ARTEMIS-3392
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.17.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.18.0


In org.apache.activemq.artemis.core.server.impl.ScaleDownHandler#getQueueID the 
returned queue ID value is treated as an Integer while it is actually a long. 
This causes a problem when the queue ID (generated by the broker) is bigger than 
the int max value: the queue ID turns negative and no longer matches the real 
queue ID. That causes scale down to fail.
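
A minimal, self-contained illustration of the failure mode: a broker-generated queue ID above Integer.MAX_VALUE becomes negative once it is narrowed to int, so it can never match the real (long) ID.

{code}
public class QueueIdNarrowing {
    public static void main(String[] args) {
        long realQueueId = Integer.MAX_VALUE + 1L;   // 2147483648, a perfectly valid long ID

        // Treating the value as an Integer silently wraps it around.
        int narrowed = (int) realQueueId;            // -2147483648

        System.out.println("real ID  = " + realQueueId);
        System.out.println("narrowed = " + narrowed);
        System.out.println("match?   = " + (narrowed == realQueueId));   // false -> scale down fails
    }
}
{code}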



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (AMQ-8149) Create Docker Image

2021-02-21 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288157#comment-17288157
 ] 

Howard Gao edited comment on AMQ-8149 at 2/22/21, 3:44 AM:
---

Hi [~behm015],

There is an open source project that provides Artemis docker images that work on 
Kubernetes/OpenShift. It also provides operator support that comes with many 
options for configuring the broker for a container environment. I think you may 
be interested in taking a look.

Here is the link: 
https://github.com/artemiscloud

Howard



was (Author: gaohoward):
Hi [~behm015],

There is an open source project that provides Artemis docker images that work on 
Kubernetes/OpenShift. It also provides operator support that comes with many 
options for configuring the broker for a container environment. I think you may 
be interested in taking a look.

Howard


> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> [ ] jib or jkube mvn plugin
> [ ] Create a general container that supports most use cases (enable all 
> protocols on default ports, etc)
> [ ] Provide artifacts for users to build customized containers
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (AMQ-8149) Create Docker Image

2021-02-21 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/AMQ-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288157#comment-17288157
 ] 

Howard Gao commented on AMQ-8149:
-

Hi [~behm015],

There is an open source project that provides Artemis docker images that work on 
Kubernetes/OpenShift. It also provides operator support that comes with many 
options for configuring the broker for a container environment. I think you may 
be interested in taking a look.

Howard


> Create Docker Image
> ---
>
> Key: AMQ-8149
> URL: https://issues.apache.org/jira/browse/AMQ-8149
> Project: ActiveMQ
>  Issue Type: New Feature
>Affects Versions: 5.17.0
>Reporter: Matt Pavlovich
>Assignee: Matt Pavlovich
>Priority: Major
>
> Create an Apache ActiveMQ docker image
> Ideas:
> [ ] jib or jkube mvn plugin
> [ ] Create a general container that supports most use cases (enable all 
> protocols on default ports, etc)
> [ ] Provide artifacts for users to build customized containers
> Tasks:
> [Pending] Creation of Docker repository for ActiveMQ INFRA-21430
> [ ] Add activemq-docker module to 5.17.x
> [ ] Add dockerhub deployment to release process



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2959) Validate against cluster user name uniqueness

2020-10-26 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2959:
---

 Summary: Validate against cluster user name uniqueness
 Key: ARTEMIS-2959
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2959
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.15.0
Reporter: Howard Gao
 Fix For: 2.16.0


The broker has a special cluster user and cluster password, and when doing
authentication it always checks the cluster user first.
If a user configures a cluster user name that is the same as one of its
normal users, the broker always authenticates that normal user
against the cluster password, which is not right and will probably fail
authentication, as the cluster password is usually not the same as
the normal user's password.
To avoid such a case the broker should check the configuration
during startup and, if the cluster user name collides with a normal
user name, report it as an error.
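
A minimal sketch of the startup check being proposed, assuming the broker can enumerate the configured user names; the class, method, and user names are illustrative.

{code}
import java.util.Set;

public class ClusterUserCheck {

    /** Fail fast at startup if the cluster user collides with a normal user. */
    static void validate(String clusterUser, Set<String> configuredUserNames) {
        if (clusterUser != null && configuredUserNames.contains(clusterUser)) {
            throw new IllegalStateException(
                "cluster user '" + clusterUser + "' collides with a configured user name; "
                + "authentication for that user would always be checked against the cluster password");
        }
    }

    public static void main(String[] args) {
        validate("cluster-admin", Set.of("admin", "guest"));   // ok
        validate("admin", Set.of("admin", "guest"));           // throws at startup
    }
}
{code}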



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (ARTEMIS-2854) Non-durable subscribers may stop receiving after failover

2020-10-22 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao resolved ARTEMIS-2854.
-
Resolution: Fixed

> Non-durable subscribers may stop receiving after failover
> -
>
> Key: ARTEMIS-2854
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2854
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.16.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In a cluster scenario where non-durable subscribers fail over to a backup 
> while another live node is forwarding messages to them, there is a chance that 
> the live node keeps the old remote binding for the subscription, and messages 
> routed to that old remote binding will result in "binding not found".
> For example, suppose there are 2 live-backup pairs in the cluster: Live1 with 
> backup1, and Live2 with backup2. A non-durable subscriber connects to Live1, 
> and messages are sent to Live2 and then redistributed to the subscription on 
> Live1.
> Now Live1 crashes and backup1 becomes live. The subscriber fails over to 
> backup1.
> In the meantime Live2 reconnects to backup1 too. During this process Live2 
> didn't successfully remove the old remote binding for the subscription, and it 
> still points to the old temp queue's ID (which is gone with Live1, as it was a 
> temp queue).
> So the messages (after failover) are still routed to the old queue, which is 
> no longer there. The subscriber will sit idle without receiving new messages.
> The code concerned is:
> https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/cluster/impl/ClusterConnectionImpl.java#L1239
> The code doesn't take care of the case where the old remote binding is still 
> in the map and its key (clusterName) is the same as that of the new remote 
> binding (which references a new temp queue recreated on failover).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (ARTEMIS-2854) Non-durable subscribers may stop receiving after failover

2020-10-22 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2854.
---

> Non-durable subscribers may stop receiving after failover
> -
>
> Key: ARTEMIS-2854
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2854
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.14.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.16.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> In a cluster scenario where non-durable subscribers fail over to a backup 
> while another live node is forwarding messages to them, there is a chance that 
> the live node keeps the old remote binding for the subscription, and messages 
> routed to that old remote binding will result in "binding not found".
> For example, suppose there are 2 live-backup pairs in the cluster: Live1 with 
> backup1, and Live2 with backup2. A non-durable subscriber connects to Live1, 
> and messages are sent to Live2 and then redistributed to the subscription on 
> Live1.
> Now Live1 crashes and backup1 becomes live. The subscriber fails over to 
> backup1.
> In the meantime Live2 reconnects to backup1 too. During this process Live2 
> didn't successfully remove the old remote binding for the subscription, and it 
> still points to the old temp queue's ID (which is gone with Live1, as it was a 
> temp queue).
> So the messages (after failover) are still routed to the old queue, which is 
> no longer there. The subscriber will sit idle without receiving new messages.
> The code concerned is:
> https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/cluster/impl/ClusterConnectionImpl.java#L1239
> The code doesn't take care of the case where the old remote binding is still 
> in the map and its key (clusterName) is the same as that of the new remote 
> binding (which references a new temp queue recreated on failover).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2854) Non-durable subscribers may stop receiving after failover

2020-07-26 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2854:
---

 Summary: Non-durable subscribers may stop receiving after failover
 Key: ARTEMIS-2854
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2854
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.14.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.15.0


In a cluster scenario where non-durable subscribers fail over to a backup while 
another live node is forwarding messages to them, there is a chance that the 
live node keeps the old remote binding for the subscription, and messages routed 
to that old remote binding will result in "binding not found".

For example, suppose there are 2 live-backup pairs in the cluster: Live1 with 
backup1, and Live2 with backup2. A non-durable subscriber connects to Live1, and 
messages are sent to Live2 and then redistributed to the subscription on Live1.

Now Live1 crashes and backup1 becomes live. The subscriber fails over to backup1.
In the meantime Live2 reconnects to backup1 too. During this process Live2 
didn't successfully remove the old remote binding for the subscription, and it 
still points to the old temp queue's ID (which is gone with Live1, as it was a 
temp queue).
So the messages (after failover) are still routed to the old queue, which is no 
longer there. The subscriber will sit idle without receiving new messages.

The code concerned is:

https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/cluster/impl/ClusterConnectionImpl.java#L1239

The code doesn't take care of the case where the old remote binding is still in 
the map and its key (clusterName) is the same as that of the new remote binding 
(which references a new temp queue recreated on failover).
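
A minimal, generic sketch of the map behaviour at the heart of the bug: if a stale binding is still present under the same clusterName key, registering the new binding must replace it and clean the stale entry up, otherwise routing keeps pointing at a temporary queue that no longer exists. This is an illustration, not the ClusterConnectionImpl code.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RemoteBindingRegistry {

    static class RemoteBinding {
        final long remoteQueueId;   // ID of the (temporary) queue on the other node
        RemoteBinding(long remoteQueueId) { this.remoteQueueId = remoteQueueId; }
    }

    private final Map<String, RemoteBinding> bindingsByClusterName = new ConcurrentHashMap<>();

    /** Register a binding created after failover, replacing any stale entry for the same key. */
    void addBinding(String clusterName, RemoteBinding fresh) {
        RemoteBinding stale = bindingsByClusterName.put(clusterName, fresh);
        if (stale != null && stale.remoteQueueId != fresh.remoteQueueId) {
            // The old entry referenced a temp queue that died with the failed node;
            // without this cleanup, messages keep routing to a queue that is gone.
            removeFromRouting(stale);
        }
    }

    private void removeFromRouting(RemoteBinding stale) {
        // placeholder for unbinding the stale entry from message routing
    }
}
{code}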







--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2835) Porting HORNETQ-1575 and HORNETQ-1578

2020-07-01 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2835:
---

 Summary: Porting HORNETQ-1575 and HORNETQ-1578
 Key: ARTEMIS-2835
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2835
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.13.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.14.0


The HORNETQ-1575 and HORNETQ-1578 bug fixes are not available in Artemis. The 
same issues could be hit by users of Artemis too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2176) RA connection properties are not propagated to XARecoveryConfig

2020-03-03 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2176:

Affects Version/s: 2.11.0

> RA connection properties are not propagated to XARecoveryConfig
> ---
>
> Key: ARTEMIS-2176
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2176
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3, 2.11.0
>Reporter: Bartosz Spyrko-Smietanko
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> XARecoveryConfig#createServerLocator uses only 
> TransportConfiguration/DiscoveryGroupConfiguration to create a new instance 
> of ServerLocator. That means that connection properties like connectionTTL or 
> callFailoverTime are ignored.  This can lead to network issues - eg. 
> [https://issues.jboss.org/browse/WFLY-10725] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (ARTEMIS-2176) RA connection properties are not propagated to XARecoveryConfig

2020-03-03 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2176.
---
Resolution: Fixed

> RA connection properties are not propagated to XARecoveryConfig
> ---
>
> Key: ARTEMIS-2176
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2176
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3, 2.11.0
>Reporter: Bartosz Spyrko-Smietanko
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> XARecoveryConfig#createServerLocator uses only 
> TransportConfiguration/DiscoveryGroupConfiguration to create a new instance 
> of ServerLocator. That means that connection properties like connectionTTL or 
> callFailoverTime are ignored.  This can lead to network issues - eg. 
> [https://issues.jboss.org/browse/WFLY-10725] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2176) RA connection properties are not propagated to XARecoveryConfig

2020-03-03 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2176:

Fix Version/s: 2.12.0

> RA connection properties are not propagated to XARecoveryConfig
> ---
>
> Key: ARTEMIS-2176
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2176
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3, 2.11.0
>Reporter: Bartosz Spyrko-Smietanko
>Priority: Major
> Fix For: 2.12.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> XARecoveryConfig#createServerLocator uses only 
> TransportConfiguration/DiscoveryGroupConfiguration to create a new instance 
> of ServerLocator. That means that connection properties like connectionTTL or 
> callFailoverTime are ignored.  This can lead to network issues - eg. 
> [https://issues.jboss.org/browse/WFLY-10725] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2176) RA connection properties are not propagated to XARecoveryConfig

2020-03-03 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17050659#comment-17050659
 ] 

Howard Gao commented on ARTEMIS-2176:
-

The merged commit cb8da541107355f16eb26b21b6563a06876b741f that is part of this 
fix has the wrong JIRA number, i.e. the commit message:
commit cb8da541107355f16eb26b21b6563a06876b741f (spyrkob/ARTEMIS-2176, 
syprkob_ARTEMIS-2176_final)
Author: Howard Gao 
Date:   Tue Mar 3 18:48:18 2020 +0800

ARTEMIS-*2716* Refactoring

refers to the wrong JIRA number *2716* instead of *2176*.


> RA connection properties are not propagated to XARecoveryConfig
> ---
>
> Key: ARTEMIS-2176
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2176
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.6.3
>Reporter: Bartosz Spyrko-Smietanko
>Priority: Major
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> XARecoveryConfig#createServerLocator uses only 
> TransportConfiguration/DiscoveryGroupConfiguration to create a new instance 
> of ServerLocator. That means that connection properties like connectionTTL or 
> callFailoverTime are ignored.  This can lead to network issues - eg. 
> [https://issues.jboss.org/browse/WFLY-10725] 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2620) Avoid Exception on client when large message file corrupted

2020-02-12 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2620:
---

 Summary: Avoid Exception on client when large message file 
corrupted
 Key: ARTEMIS-2620
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2620
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.11.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.12.0


The broker should be more resilient so that it can avoid throwing an exception 
on the client side when it encounters an invalid or empty large message file 
(corrupted for any reason). It would be better to deal with this within the 
broker and not allow the issue to manifest itself on the consuming client.
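
A minimal sketch of the kind of broker-side guard the description asks for: check that the large-message file is present and non-empty before streaming it to a consumer, and handle the corruption locally instead of letting the client see an exception. The class name and handling hooks are illustrative.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LargeMessageFileGuard {

    /** Returns true only if the backing file looks usable for delivery. */
    static boolean isDeliverable(Path largeMessageFile) throws IOException {
        return Files.exists(largeMessageFile) && Files.size(largeMessageFile) > 0;
    }

    static void deliverOrQuarantine(Path largeMessageFile, Runnable deliver, Runnable quarantine)
            throws IOException {
        if (isDeliverable(largeMessageFile)) {
            deliver.run();          // normal path: stream the file to the consumer
        } else {
            quarantine.run();       // broker-side handling; the consuming client never sees an exception
        }
    }
}
{code}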



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2596) (kill -9) AMQ causes tmp web dir space usage to increase

2020-01-14 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2596:
---

 Summary: (kill -9) AMQ causes tmp web dir space usage to increase
 Key: ARTEMIS-2596
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2596
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.10.1
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.12.0


If you kill the server without invoking a normal shutdown, tmp files are not 
cleaned out.  This leaves old webapp folders lingering until a normal shutdown.

{code}
./tmp/jetty-0.0.0.0-8443-redhat-branding.war-_redhat-branding-any-3285121346946007361.dir
./tmp/jetty-0.0.0.0-8443-console.war-_console-any-4977922188149081743.dir/
./tmp/jetty-0.0.0.0-8443-artemis-plugin.war-_artemis-plugin-any-4959919418401315172.dir
{code}

In a failover test environment that repeatedly kills the server, this causes 
disk space usage issues.
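
A minimal sketch of a startup sweep that removes leftovers from a previous kill -9, assuming the Jetty work directories follow the jetty-* naming shown above and live under the instance tmp directory; the class name is illustrative.

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class StaleWebTmpCleanup {

    /** Delete leftover jetty-* work directories from a previous unclean shutdown. */
    static void cleanup(Path tmpDir) throws IOException {
        if (!Files.isDirectory(tmpDir)) {
            return;
        }
        try (Stream<Path> entries = Files.list(tmpDir)) {
            entries.filter(p -> p.getFileName().toString().startsWith("jetty-"))
                   .forEach(StaleWebTmpCleanup::deleteRecursively);
        }
    }

    private static void deleteRecursively(Path root) {
        try (Stream<Path> walk = Files.walk(root)) {
            walk.sorted(Comparator.reverseOrder())   // delete children before parents
                .forEach(p -> {
                    try { Files.delete(p); } catch (IOException ignored) { }
                });
        } catch (IOException ignored) {
        }
    }
}
{code}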



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-12 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2560.
---
Fix Version/s: 2.11.0
   Resolution: Fixed

> Duplicate messages created by cluster lead to OOME crash
> 
>
> Key: ARTEMIS-2560
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.10.1
>Reporter: Mikko Niemi
>Priority: Major
> Fix For: 2.11.0
>
> Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> python-stomp-consumer.py, server0-broker.xml, server1-broker.xml
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Summary: When using two node cluster with very simple configuration (see 
> attached broker.xml files), duplicate messages are generated to queue when 
> Python client is used to consume messages one by one from alternating nodes. 
> Duplicate messages are generated until OutOfMemoryException crashes the 
> broker.
> Detailed description how to produce this problem:
>  # Create two node cluster using attached broker.xml files. Node names are 
> server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside broker.xml file on node server0. See attached Python producer. 
> Produced messages have identical content. Command to produce the messages 
> using the attached Python producer: "python python-qpid-producer.py -u 
> $username -p $password -H server0 -a exampleQueue -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using attached Python consumer. Cluster 
> will balance the messages to server0 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 but this time total amount of messages 
> is increased radically.
>  # If consuming of messages is continued in the manner described above (one 
> message from one node and then one message from another node), more messages 
> continue to appear into the queue until broker runs out of memory and crashes.
> Technical details considering Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to above, following different variations have been tested. 
> Problem still occurs with all these variations:
>  * Consumer protocol was changed to STOMP.
>  * Window-Based Flow Control was turned off on both sides, client and server.
>  ** 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  consumerWindowSize
>  * Implementation was changed to Java using Apache Qpid JMS library (version 
> 0.39.0 for producer, version 0.46.0 for consumer).
> If this is not a bug, I would be very happy for any solution to this 
> problem, whether it is pointing out some mistake in the configuration or in 
> the consumer, explaining that this is a designed feature, or some other 
> explanation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-05 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16988643#comment-16988643
 ] 

Howard Gao commented on ARTEMIS-2560:
-

Thanks Mikko. I see you are using python-qpid-producer.py to send the messages, 
which means it could be the same issue as with AMQP. I'll try to verify it using 
your steps and see if it's fixed once and for all.

Howard

> Duplicate messages created by cluster lead to OOME crash
> 
>
> Key: ARTEMIS-2560
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.10.1
>Reporter: Mikko Niemi
>Priority: Major
> Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> python-stomp-consumer.py, server0-broker.xml, server1-broker.xml
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Summary: When using two node cluster with very simple configuration (see 
> attached broker.xml files), duplicate messages are generated to queue when 
> Python client is used to consume messages one by one from alternating nodes. 
> Duplicate messages are generated until OutOfMemoryException crashes the 
> broker.
> Detailed description how to produce this problem:
>  # Create two node cluster using attached broker.xml files. Node names are 
> server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside broker.xml file on node server0. See attached Python producer. 
> Produced messages have identical content. Command to produce the messages 
> using the attached Python producer: "python python-qpid-producer.py -u 
> $username -p $password -H server0 -a exampleQueue -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using attached Python consumer. Cluster 
> will balance the messages to server0 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 but this time total amount of messages 
> is increased radically.
>  # If consuming of messages is continued in the manner described above (one 
> message from one node and then one message from another node), more messages 
> continue to appear into the queue until broker runs out of memory and crashes.
> Technical details considering Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to above, following different variations have been tested. 
> Problem still occurs with all these variations:
>  * Consumer protocol was changed to STOMP.
>  * Window-Based Flow Control was turned off on both sides, client and server.
>  ** 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  consumerWindowSize
>  * Implementation was changed to Java using Apache Qpid JMS library (version 
> 0.39.0 for producer, version 0.46.0 for consumer).
> If this is not a bug, I would be very happy for any solution to this 
> problem, whether it is pointing out some mistake in the configuration or in 
> the consumer, explaining that this is a designed feature, or some other 
> explanation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-03 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16987065#comment-16987065
 ] 

Howard Gao commented on ARTEMIS-2560:
-

Hi [~mniemi],

I've tested STOMP behavior and I didn't reproduce the issue. Can you please 
share your stomp test if any?

Thanks
Howard


> Duplicate messages created by cluster lead to OOME crash
> 
>
> Key: ARTEMIS-2560
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.10.1
>Reporter: Mikko Niemi
>Priority: Major
> Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> server0-broker.xml, server1-broker.xml
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Summary: When using two node cluster with very simple configuration (see 
> attached broker.xml files), duplicate messages are generated to queue when 
> Python client is used to consume messages one by one from alternating nodes. 
> Duplicate messages are generated until OutOfMemoryException crashes the 
> broker.
> Detailed description how to produce this problem:
>  # Create two node cluster using attached broker.xml files. Node names are 
> server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside broker.xml file on node server0. See attached Python producer. 
> Produced messages have identical content. Command to produce the messages 
> using the attached Python producer: "python python-qpid-producer.py -u 
> $username -p $password -H server0 -a exampleQueue -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using attached Python consumer. Cluster 
> will balance the messages to server0 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 but this time total amount of messages 
> is increased radically.
>  # If consuming of messages is continued in the manner described above (one 
> message from one node and then one message from another node), more messages 
> continue to appear into the queue until broker runs out of memory and crashes.
> Technical details considering Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to above, following different variations have been tested. 
> Problem still occurs with all these variations:
>  * Protocol was changed to STOMP.
>  * Window-Based Flow Control was turned off on both sides, client and server.
>  * 
>  ** 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  consumerWindowSize
>  * Implementation was changed to Java using Apache Qpid JMS library (version 
> 0.39.0 for producer, version 0.46.0 for consumer).
> If this is not a bug, I would be very happy for any solution to this 
> problem, whether it is pointing out some mistake in the configuration or in 
> the consumer, or explaining that this is a designed feature, or some other 
> explanation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (ARTEMIS-2560) Duplicate messages created by cluster lead to OOME crash

2019-12-02 Thread Howard Gao (Jira)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16985951#comment-16985951
 ] 

Howard Gao commented on ARTEMIS-2560:
-

I've investigated this problem; so far I can see that qpid-jms has the issue (along 
with the Python client, I believe). I think the problem is that the AMQPMessage 
internal properties are not properly cleaned up.
I'll check whether this also happens with the STOMP protocol (and others).


> Duplicate messages created by cluster lead to OOME crash
> 
>
> Key: ARTEMIS-2560
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2560
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.10.1
>Reporter: Mikko Niemi
>Priority: Major
> Attachments: python-qpid-consumer.py, python-qpid-producer.py, 
> server0-broker.xml, server1-broker.xml
>
>
> Summary: When using two node cluster with very simple configuration (see 
> attached broker.xml files), duplicate messages are generated to queue when 
> Python client is used to consume messages one by one from alternating nodes. 
> Duplicate messages are generated until OutOfMemoryException crashes the 
> broker.
> Detailed description how to produce this problem:
>  # Create two node cluster using attached broker.xml files. Node names are 
> server0 and server1 for the rest of this description.
>  # Produce 100 messages to a queue defined in the address configuration 
> inside broker.xml file on node server0. See attached Python producer. 
> Produced messages have identical content. Command to produce the messages 
> using the attached Python producer: "python python-qpid-producer.py -u 
> $username -p $password -H server0 -a exampleQueue -m TestMessageFooBar -A 100"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server1 -a exampleQueue"
>  # Consume one message from server0 using attached Python consumer. Cluster 
> will balance the messages to server0 and total amount of messages is 
> decreased by one. After one message is consumed, session and connection are 
> closed. Command to consume a message using attached Python consumer: "python 
> python-qpid-consumer.py -u $username -p $password -H server0 -a exampleQueue"
>  # Consume one message from server1 using attached Python consumer. Cluster 
> will balance the messages to server1 but this time total amount of messages 
> is increased radically.
>  # If consuming of messages is continued in the manner described above (one 
> message from one node and then one message from another node), more messages 
> continue to appear into the queue until broker runs out of memory and crashes.
> Technical details considering Python test described above:
>  * Apache ActiveMQ Artemis 2.10.1 on RHEL 7.7 64bit
>  * OpenJDK 11.0.5
>  * Python 3.4.10
>  * Apache Qpid Proton 0.29.0 installed via PIP
> In addition to above, following different variations have been tested. 
> Problem still occurs with all these variations:
>  * Protocol was changed to STOMP.
>  * Window-Based Flow Control was turned off on both sides, client and server.
>  * 
>  ** 
> [https://activemq.apache.org/components/artemis/documentation/latest/flow-control.html]
>  consumerWindowSize
>  * Implementation was changed to Java using Apache Qpid JMS library (version 
> 0.39.0 for producer, version 0.46.0 for consumer).
> If this is not a bug, I would be very happy for any solution to this 
> problem, whether it is pointing out some mistake in the configuration or in 
> the consumer, or explaining that this is a designed feature, or some other 
> explanation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2513) Large message's copy may be interfered by other threads

2019-10-07 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2513:
---

 Summary: Large message's copy may be interfered by other threads
 Key: ARTEMIS-2513
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2513
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.1
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.11.0


In LargeMessageImpl.copy(long) it needs to open the underlying file in order to 
read and copy bytes into the new copied message. However there is a chance that 
another thread can come in and close the file in the middle, making the copy 
fail with a "channel is null" error.

This happens in cases where a large message is sent to a JMS topic (multicast 
address). While it is being delivered to multiple subscribers, one consumer 
finishes its delivery and closes the underlying file, while another consumer is 
rolling back the message and eventually moves it to the DLQ (which calls the 
copy method above). So there is a chance of hitting this bug.
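As a rough illustration of the race and of one way to guard against it (this is 
a self-contained sketch, not the actual Artemis code; the class and method names 
are placeholders), the copy path and the close path can take the same lock so 
the underlying channel cannot be closed mid-copy:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch only: a holder for the large message's file whose copy and close
// operations are serialized on the same monitor.
class LargeMessageFile {
    private final Path path;
    private FileChannel channel;          // may be closed by a consumer thread

    LargeMessageFile(Path path) { this.path = path; }

    // Consumer-side close; may run concurrently with copyTo(...)
    synchronized void close() throws IOException {
        if (channel != null) { channel.close(); channel = null; }
    }

    // Copy used e.g. when moving the message to a DLQ. Holding the same lock
    // as close() means the channel cannot disappear in the middle of the copy.
    synchronized void copyTo(FileChannel target) throws IOException {
        if (channel == null) {            // reopen if a consumer already closed it
            channel = FileChannel.open(path, StandardOpenOption.READ);
        }
        ByteBuffer buffer = ByteBuffer.allocate(8192);
        channel.position(0);
        while (channel.read(buffer) > 0) {
            buffer.flip();
            target.write(buffer);
            buffer.clear();
        }
    }
}
{code}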




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2506) MQTT doesn't cleanup underlying connection for bad clients

2019-09-25 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2506:
---

 Summary: MQTT doesn't cleanup underlying connection for bad clients
 Key: ARTEMIS-2506
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2506
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: MQTT
Affects Versions: 2.10.1
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.11.0


When a bad MQTT client drops its connection without properly closing it, the 
broker doesn't close the underlying physical connection. 
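As a rough sketch of the kind of cleanup involved (illustrative only, not the 
actual MQTT protocol manager code; the callback is a placeholder), an abrupt 
transport drop can be detected in the Netty pipeline and used to tear down the 
server-side state:

{code:java}
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch only: treat channelInactive without a prior MQTT DISCONNECT as a
// client failure and release the server-side session/connection resources.
class MqttCleanupHandler extends ChannelInboundHandlerAdapter {
    private final Runnable cleanupSessionState;   // hypothetical cleanup callback

    MqttCleanupHandler(Runnable cleanupSessionState) {
        this.cleanupSessionState = cleanupSessionState;
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) throws Exception {
        cleanupSessionState.run();   // close consumers, subscriptions, session, etc.
        ctx.fireChannelInactive();
    }
}
{code}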




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (ARTEMIS-2500) CoreMessage doesn't make a full copy of its props

2019-09-22 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2500:

Summary: CoreMessage doesn't make a full copy of its props  (was: 
CoreMessage doesn't make a ful copy of its props)

> CoreMessage doesn't make a full copy of its props
> -
>
> Key: ARTEMIS-2500
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2500
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.10.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When CoreMessage is doing copyHeadersAndProperties() it doesn't
> make a full copy of its properties (a TypedProperties object).
> This causes problems when multiple threads/parties modify the
> properties of messages copied from the same source message.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2500) CoreMessage doesn't make a ful copy of its props

2019-09-22 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2500:
---

 Summary: CoreMessage doesn't make a ful copy of its props
 Key: ARTEMIS-2500
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2500
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.11.0


When CoreMessage is doing copyHeadersAndProperties() it doesn't
make a full copy of its properties (a TypedProperties object).
This causes problems when multiple threads/parties modify the
properties of messages copied from the same source message.
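As a rough illustration of the difference between sharing and deep-copying the 
properties container (a self-contained sketch; TypedProperties itself is an 
Artemis class, so a plain Map stands in for it here):

{code:java}
import java.util.HashMap;
import java.util.Map;

class MessageSketch {
    private Map<String, Object> properties = new HashMap<>();

    // Broken pattern: both messages end up sharing one mutable map, so a writer
    // on the copy races with readers/writers on the original.
    MessageSketch shallowCopy() {
        MessageSketch copy = new MessageSketch();
        copy.properties = this.properties;
        return copy;
    }

    // Fixed pattern: give the copy its own map so concurrent modification of
    // one message cannot affect the other.
    MessageSketch fullCopy() {
        MessageSketch copy = new MessageSketch();
        copy.properties = new HashMap<>(this.properties);
        return copy;
    }
}
{code}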



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (ARTEMIS-2493) OpenWire session close doesn't cleanup consumer refs

2019-09-16 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2493:
---

 Summary: OpenWire session close doesn't cleanup consumer refs
 Key: ARTEMIS-2493
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2493
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: OpenWire
Affects Versions: 2.10.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.11.0


When an OpenWire client closes a session, the broker doesn't clean up its 
server consumer references even though the core consumers are closed. This 
results in a leak when sessions within a connection are created and closed 
while the connection stays open. 
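As a rough sketch of the bookkeeping involved (illustrative only; the class and 
field names are placeholders, not the actual OpenWire protocol manager code), 
the consumer references tracked per session have to be dropped when the session 
closes, otherwise they accumulate for the lifetime of the connection:

{code:java}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ConnectionStateSketch {
    // consumer references tracked per session id
    private final Map<Long, Set<Long>> consumersBySession = new ConcurrentHashMap<>();

    void addConsumer(long sessionId, long consumerId) {
        consumersBySession
            .computeIfAbsent(sessionId, id -> ConcurrentHashMap.newKeySet())
            .add(consumerId);
    }

    // Without this removal, closing sessions while keeping the connection open
    // leaves the consumer ids behind and the map grows for as long as the
    // connection lives.
    void closeSession(long sessionId) {
        consumersBySession.remove(sessionId);
    }
}
{code}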



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Closed] (ARTEMIS-2485) Deleting SNF Queue should also delete associated remote bindings

2019-09-12 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2485.
---
Resolution: Invalid

Marked as invalid.
The issue may not be what is described here.
Will use the original JIRA for further investigation.

> Deleting SNF Queue should also delete associated remote bindings
> 
>
> Key: ARTEMIS-2485
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2485
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.10.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.11.0
>
>
> In https://issues.apache.org/jira/browse/ARTEMIS-2462 we offered an option 
> that automatically removes snf queues on scale down. However the remote 
> bindings are left out and remain in the broker memory.
> Those are no longer used, and if a different node comes up those remaining 
> bindings will prevent the new bindings from being added.
> For a common example, in a 2-broker cluster where both brokers deploy a 
> jms.queue.DLQ queue, each will have a remote binding for the queue. One of 
> them scales down and removes the sf queue on the other. Then another broker 
> node (with a different node id) comes up and forms a cluster with the 
> existing broker. If the new broker also has a jms.queue.DLQ, it will cause 
> the other broker to create a remote binding. However the other broker 
> already has a remote binding, and for that reason the new remote binding 
> will not be added.
> You will see a warning like this:
> 2019-09-12 01:30:51,427 WARN  [org.apache.activemq.artemis.core.server] 
> AMQ222139: MessageFlowRecordImpl 
> [nodeID=a44b0e0a-d4fc-11e9-9e65-0a580a8201d0, 
> connector=TransportConfiguration(name=artemis, 
> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
>  ?port=61616&host=ex-aao-ss-2, 
> queueName=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0,
>  
> queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0,
>  postOffice=PostOfficeImpl 
> [server=ActiveMQServerImpl::serverUUID=f04e96b2-d4fc-11e9-8b50-0a580a8201d2], 
> temp=false]@522d336c, isClosed=false, reset=true]::Remote queue binding 
> DLQf04e96b2-d4fc-11e9-8b50-0a580a8201d2 has already been bound in the post 
> office. Most likely cause for this is you have a loop in your cluster due to 
> cluster max-hops being too large or you have multiple cluster connections to 
> the same nodes using overlapping addresses



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (ARTEMIS-2485) Deleting SNF Queue should also delete associated remote bindings

2019-09-12 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2485:
---

 Summary: Deleting SNF Queue should also delete associated remote 
bindings
 Key: ARTEMIS-2485
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2485
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.11.0


In https://issues.apache.org/jira/browse/ARTEMIS-2462 we offered an option that 
automatically removes snf queues on scale down. However the remote bindings are 
left out and remain in the broker memory.

Those are no longer used, and if a different node comes up those remaining 
bindings will prevent the new bindings from being added.

For a common example, in a 2-broker cluster where both brokers deploy a 
jms.queue.DLQ queue, each will have a remote binding for the queue. One of them 
scales down and removes the sf queue on the other. Then another broker node 
(with a different node id) comes up and forms a cluster with the existing 
broker. If the new broker also has a jms.queue.DLQ, it will cause the other 
broker to create a remote binding. However the other broker already has a 
remote binding, and for that reason the new remote binding will not be added.
You will see a warning like this:
2019-09-12 01:30:51,427 WARN  [org.apache.activemq.artemis.core.server] 
AMQ222139: MessageFlowRecordImpl [nodeID=a44b0e0a-d4fc-11e9-9e65-0a580a8201d0, 
connector=TransportConfiguration(name=artemis, 
factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
 ?port=61616&host=ex-aao-ss-2, 
queueName=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0,
 
queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.a44b0e0a-d4fc-11e9-9e65-0a580a8201d0,
 postOffice=PostOfficeImpl 
[server=ActiveMQServerImpl::serverUUID=f04e96b2-d4fc-11e9-8b50-0a580a8201d2], 
temp=false]@522d336c, isClosed=false, reset=true]::Remote queue binding 
DLQf04e96b2-d4fc-11e9-8b50-0a580a8201d2 has already been bound in the post 
office. Most likely cause for this is you have a loop in your cluster due to 
cluster max-hops being too large or you have multiple cluster connections to 
the same nodes using overlapping addresses




--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Updated] (ARTEMIS-2462) Allow store and forward queue to be deleted after scaledown

2019-08-27 Thread Howard Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2462:

Summary: Allow store and forward queue to be deleted after scaledown  (was: 
Allow store and forward queue to be deleted afte scaledown)

> Allow store and forward queue to be deleted after scaledown
> ---
>
> Key: ARTEMIS-2462
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2462
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.10.0
>Reporter: Howard Gao
>Priority: Minor
> Fix For: 2.11.0
>
>
> After a node is scaled down to a target node, the sf queue in the target node 
> is not deleted (the sf queue takes the form of 
> $.artemis.internal.sf.<cluster-name>.<node-id>).
> Normally this is fine because it may be reused when the scaled-down node comes 
> back up.
> However, in an OpenShift/Kubernetes environment many drainer pods (scale-down 
> brokers) can be created and then shut down in order to drain the messages to a 
> live node (pod). Each drainer pod will have a different node-id. Over time 
> the sf queues in the target broker node grow and those sf queues are no 
> longer reused.
> Although users can use the management API/console to manually delete them, it 
> would be nice to have an option to automatically delete those sf queue/address 
> resources after scale down.
> One option (my proposal) is to define a system property on the scale-down 
> broker.
> If the property is "true" the broker will send a message to the target broker 
> signalling that the SF queue is no longer needed and should be deleted.
> If the property is not defined (the default) or has any value other than 
> "true", the scale down won't remove the sf queue (current behavior).
> This solution is well suited to an OpenShift/Kubernetes environment because we 
> can easily pass an environment variable into the image to enable/disable this 
> feature.
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (ARTEMIS-2462) Allow store and forward queue to be deleted afte scaledown

2019-08-27 Thread Howard Gao (Jira)
Howard Gao created ARTEMIS-2462:
---

 Summary: Allow store and forward queue to be deleted afte scaledown
 Key: ARTEMIS-2462
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2462
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.10.0
Reporter: Howard Gao
 Fix For: 2.11.0


After a node is scaled down to a target node, the sf queue in the target node 
is not deleted (the sf queue takes the form of 
$.artemis.internal.sf.<cluster-name>.<node-id>).

Normally this is fine because it may be reused when the scaled-down node comes 
back up.
However, in an OpenShift/Kubernetes environment many drainer pods (scale-down 
brokers) can be created and then shut down in order to drain the messages to a 
live node (pod). Each drainer pod will have a different node-id. Over time the 
sf queues in the target broker node grow and those sf queues are no longer 
reused.

Although users can use the management API/console to manually delete them, it 
would be nice to have an option to automatically delete those sf queue/address 
resources after scale down.

One option (my proposal) is to define a system property on the scale-down broker.
If the property is "true" the broker will send a message to the target broker 
signalling that the SF queue is no longer needed and should be deleted.
If the property is not defined (the default) or has any value other than "true", 
the scale down won't remove the sf queue (current behavior).

This solution is well suited to an OpenShift/Kubernetes environment because we 
can easily pass an environment variable into the image to enable/disable this 
feature.
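As a rough sketch of how such an option could be read (illustrative only; the 
JIRA proposes the mechanism but not the property name, so the name below is 
hypothetical):

{code:java}
public class ScaleDownConfigSketch {

    // Hypothetical property name; the actual name is not defined in this JIRA.
    static boolean removeSfQueueAfterScaleDown() {
        return Boolean.parseBoolean(
            System.getProperty("artemis.scaledown.remove-sf-queue", "false"));
    }

    public static void main(String[] args) {
        if (removeSfQueueAfterScaleDown()) {
            System.out.println("scale-down will ask the target broker to delete the SF queue");
        } else {
            System.out.println("scale-down leaves the SF queue in place (current behavior)");
        }
    }
}
{code}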
 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (ARTEMIS-2390) JMSMessageID header can be null when messages are cross-protocol

2019-06-20 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2390:
---

 Summary: JMSMessageID header can be null when messages are 
cross-protocol
 Key: ARTEMIS-2390
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2390
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.9.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.10.0


If a JMS client (be it OpenWire, AMQP, or core JMS) receives a message that came 
from a different protocol, the JMSMessageID may be null when the JMS client 
expects it.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (ARTEMIS-1825) Live-backup topology not correctly displayed on console

2019-06-18 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao reopened ARTEMIS-1825:
-

This issue is not fully fixed. The topology graph on the backup's console is not 
correct.


> Live-backup topology not correctly displayed on console
> ---
>
> Key: ARTEMIS-1825
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1825
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Web Console
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.4, 2.7.0
>
>
> The backup's web console doesn't correctly show the topology diagram of the 
> live-backup pair. It points to itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-2349) CLI FQQN name parsing is incorrect

2019-05-22 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2349.
---
Resolution: Invalid

> CLI FQQN name parsing is incorrect
> --
>
> Key: ARTEMIS-2349
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2349
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 2.8.1
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.9.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The methods extracting address and queue names used in the CLI are not 
> correct. When the queue name should be extracted, the returned value is the 
> address, and when the address should be extracted, the returned value is the 
> queue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2349) CLI FQQN name parsing is incorrect

2019-05-22 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2349:
---

 Summary: CLI FQQN name parsing is incorrect
 Key: ARTEMIS-2349
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2349
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 2.8.1
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.9.0


The methods extracting address and queue names used in the CLI are not correct.

When the queue name should be extracted, the returned value is the address, and 
when the address should be extracted, the returned value is the queue.
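As a rough illustration of what the two extraction methods should return for an 
FQQN of the form address::queue (a self-contained sketch; the helper names are 
placeholders, not the actual CLI methods):

{code:java}
public class FqqnSketch {

    static String extractAddress(String fqqn) {
        int idx = fqqn.indexOf("::");
        return idx < 0 ? fqqn : fqqn.substring(0, idx);
    }

    static String extractQueue(String fqqn) {
        int idx = fqqn.indexOf("::");
        return idx < 0 ? fqqn : fqqn.substring(idx + 2);
    }

    public static void main(String[] args) {
        // the bug described above is these two results being swapped
        System.out.println(extractAddress("myAddress::myQueue"));   // myAddress
        System.out.println(extractQueue("myAddress::myQueue"));     // myQueue
    }
}
{code}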



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2273) Adding Audit Log

2019-03-12 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2273:
---

 Summary: Adding Audit Log
 Key: ARTEMIS-2273
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2273
 Project: ActiveMQ Artemis
  Issue Type: New Feature
  Components: Broker
Affects Versions: 2.6.4
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.7.0


The audit log allows users to log some important actions, such as ones 
performed via management APIs or clients, like queue management, sending 
messages, etc. The log tries to record who (the user, if any) is doing what 
(like deleting a queue), with arguments (if any) and timestamps.

By default the audit log is disabled. It can easily be turned on through 
configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-2230) Exception closing advisory consumers when supportAdvisory=false

2019-01-17 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2230:

Description: 
When the broker's advisory support is disabled (supportAdvisory=false), advisory 
consumers won't get created at the broker and the advisory consumer ID won't be 
stored.

Legacy OpenWire clients can hold a reference to an advisory consumer regardless 
of the broker's settings, so when such a client closes the advisory consumer the 
broker has no reference to it and throws an exception like:

javax.jms.IllegalStateException: Cannot remove a consumer that had not been 
registered

If the broker stores the consumer info (even though it doesn't create the 
consumer) the exception can be avoided.

 

  was:
When broker's advisory is disabled (supportAdvisory=false) any advisory 
consumer won't get created at broker and the advisory consumer ID won't be 
stored.

Legacy openwire clients can have a reference of advisory consumer regardless 
broker's settings and therefore when it closes the advisory consumer the broker 
has no reference to it. Therefore broker throws an exception like:

javax.jms.IllegalStateException: Cannot remove a consumer that had not been 
registered

If the broker stores the consumer info (even it doesn't create it) the 
exception can be avoid.

 


> Exception closing advisory consumers when supportAdvisory=false
> ---
>
> Key: ARTEMIS-2230
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2230
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: OpenWire
>Affects Versions: 2.6.4
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.5
>
>
> When the broker's advisory support is disabled (supportAdvisory=false), 
> advisory consumers won't get created at the broker and the advisory consumer 
> ID won't be stored.
> Legacy OpenWire clients can hold a reference to an advisory consumer 
> regardless of the broker's settings, so when such a client closes the advisory 
> consumer the broker has no reference to it and throws an exception like:
> javax.jms.IllegalStateException: Cannot remove a consumer that had not been 
> registered
> If the broker stores the consumer info (even though it doesn't create the 
> consumer) the exception can be avoided.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2230) Exception closing advisory consumers when supportAdvisory=false

2019-01-17 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2230:
---

 Summary: Exception closing advisory consumers when 
supportAdvisory=false
 Key: ARTEMIS-2230
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2230
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: OpenWire
Affects Versions: 2.6.4
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.5


When the broker's advisory support is disabled (supportAdvisory=false), advisory 
consumers won't get created at the broker and the advisory consumer ID won't be 
stored.

Legacy OpenWire clients can hold a reference to an advisory consumer regardless 
of the broker's settings, so when such a client closes the advisory consumer the 
broker has no reference to it and throws an exception like:

javax.jms.IllegalStateException: Cannot remove a consumer that had not been 
registered

If the broker stores the consumer info (even though it doesn't create the 
consumer) the exception can be avoided.
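As a rough sketch of the proposed approach (illustrative only, not the actual 
OpenWire code; the class and method names are placeholders): even when advisory 
support is disabled and no real consumer is created, remembering the advisory 
consumer id lets the later remove request be handled without an exception:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class AdvisoryConsumerTracking {
    private final Map<Long, Boolean> advisoryConsumers = new ConcurrentHashMap<>();

    void onAddConsumer(long consumerId, boolean isAdvisory, boolean supportAdvisory) {
        if (isAdvisory && !supportAdvisory) {
            // remember the id, but create no consumer on the broker side
            advisoryConsumers.put(consumerId, Boolean.TRUE);
        }
    }

    void onRemoveConsumer(long consumerId) {
        if (advisoryConsumers.remove(consumerId) != null) {
            return;   // it was only a tracked advisory id, nothing else to clean up
        }
        // otherwise look up and close the real consumer ...
    }
}
{code}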

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-2229) Qpid jms consumer cannot receive from multicast queue using FQQN

2019-01-15 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2229.
---
Resolution: Fixed

> Qpid jms consumer cannot receive from multicast queue using FQQN
> 
>
> Key: ARTEMIS-2229
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2229
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.6.3
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.4
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If a client sends a message to a multicast address configured as below:
> (the multicast address configuration markup was not preserved in this 
> plain-text email)
> Using a qpid-jms client to receive the message from one of the queues using a 
> fully qualified queue name will fail with the following error message:
> Address publish.A is not configured for queue support 
> [condition = amqp:illegal-state]
> It should be able to receive the message without any error.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-2228) Check message size sent over management API

2019-01-15 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2228:

Description: 
Currently a user can send a message of arbitrary size via the management API 
(the console, for example), and this may break the buffer size limit and cause 
the broker to throw unexpected exceptions. We need to put a check on the message 
size over the management API and reject the message if it's too big.

A better solution would be to convert it into a large message at the server side.

  was:Currently user can send an arbitrary size of messages via management api 
(console for example) and this may break the buffer size limit and cause the 
broker throw unexpected exceptions. We need to put some check on the message 
size over the management API and reject the message if it's too big.


> Check message size sent over management API
> ---
>
> Key: ARTEMIS-2228
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2228
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.3
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.4
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently a user can send a message of arbitrary size via the management API 
> (the console, for example), and this may break the buffer size limit and cause 
> the broker to throw unexpected exceptions. We need to put a check on the 
> message size over the management API and reject the message if it's too big.
> A better solution would be to convert it into a large message at the server 
> side.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2229) Qpid jms consumer cannot receive from multicast queue using FQQN

2019-01-15 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2229:
---

 Summary: Qpid jms consumer cannot receive from multicast queue 
using FQQN
 Key: ARTEMIS-2229
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2229
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


If a client sends a message to a multicast address configured as below:
(the multicast address configuration markup was not preserved in this 
plain-text email)

Using a qpid-jms client to receive the message from one of the queues using a 
fully qualified queue name will fail with the following error message:

Address publish.A is not configured for queue support 
[condition = amqp:illegal-state]

It should be able to receive the message without any error.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2228) Check message size sent over management API

2019-01-13 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2228:
---

 Summary: Check message size sent over management API
 Key: ARTEMIS-2228
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2228
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


Currently a user can send a message of arbitrary size via the management API 
(the console, for example), and this may break the buffer size limit and cause 
the broker to throw unexpected exceptions. We need to put a check on the message 
size over the management API and reject the message if it's too big.
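As a rough sketch of the kind of guard being proposed (illustrative only; the 
limit and method names are placeholders, not the actual management API code):

{code:java}
class ManagementSendGuard {
    private final int maxMessageBytes;

    ManagementSendGuard(int maxMessageBytes) {
        this.maxMessageBytes = maxMessageBytes;
    }

    // Reject a management send whose body exceeds the configured limit instead
    // of letting it blow past the broker's buffer size.
    void checkBodySize(byte[] body) {
        if (body.length > maxMessageBytes) {
            throw new IllegalArgumentException(
                "message body of " + body.length + " bytes exceeds the management send limit of "
                + maxMessageBytes + " bytes");
        }
    }
}
{code}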



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-2210) Fix PagingStore creation synchronization issue

2018-12-25 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2210:

Summary: Fix PagingStore creation synchronization issue  (was: PagingStore 
creation is not properly synchronized)

> Fix PagingStore creation synchronization issue
> --
>
> Key: ARTEMIS-2210
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2210
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.3
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>
> In PagingManagerImpl#getPageStore() the operations on the map 'stores'
> are not synchronized and it's possible that more than one paging store is
> created for one address.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-2210) PagingStore creation is not properly synchronized

2018-12-23 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-2210:

Description: 
In PagingManagerImpl#getPageStore() the operations on the map 'stores'

are not synchronized and it's possible that more than one paging store is

created for one address.

 

  was:
In PagingManagerImpl#getPageStore() the operations on the map 'stores'

if not synchronzed and it's possible that more than one paging store is

created for one address.

 


> PagingStore creation is not properly synchronized
> -
>
> Key: ARTEMIS-2210
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2210
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.3
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0, 2.6.4
>
>
> In PagingManagerImpl#getPageStore() the operations on the map 'stores'
> are not synchronized and it's possible that more than one paging store is
> created for one address.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2210) PagingStore creation is not properly synchronized

2018-12-23 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2210:
---

 Summary: PagingStore creation is not properly synchronized
 Key: ARTEMIS-2210
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2210
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.7.0, 2.6.4


In PagingManagerImpl#getPageStore() the operations on the map 'stores'

are not synchronized and it's possible that more than one paging store is

created for one address.
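As a rough illustration of the race and of one standard way to avoid it (a 
self-contained sketch; PagingStore here is a placeholder interface, not the real 
Artemis type):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PagingManagerSketch {
    interface PagingStore { }

    private final Map<String, PagingStore> stores = new ConcurrentHashMap<>();

    // Unsafe pattern: an unsynchronized check-then-act lets two threads both
    // observe "missing" and create two stores for the same address.
    // Safe pattern: let the map perform the check and the creation atomically.
    PagingStore getPageStore(String address) {
        return stores.computeIfAbsent(address, this::createStore);
    }

    private PagingStore createStore(String address) {
        return new PagingStore() { };   // stand-in for the real store creation
    }
}
{code}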

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-2197) Page deleted before transaction finishes

2018-12-11 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-2197.
---
Resolution: Fixed

> Page deleted before transaction finishes
> 
>
> Key: ARTEMIS-2197
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2197
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.3
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.4
>
>
> When a receiving transaction is committed in a paging situation,
> if a page happens to be completed it will be deleted in a
> transaction operation (PageCursorTx). The other tx operation,
> RefsOperation, needs to access the page (in the PageCache) to finish
> its job. There is a chance that the PageCursorTx removes the
> page before the RefsOperation runs, which causes the RefsOperation
> to fail to find a message in the page.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2197) Page deleted before transaction finishes

2018-12-09 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2197:
---

 Summary: Page deleted before transaction finishes
 Key: ARTEMIS-2197
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2197
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


When a receiving transaction is committed in a paging situation,
if a page happens to be completed it will be deleted in a
transaction operation (PageCursorTx). The other tx operation,
RefsOperation, needs to access the page (in the PageCache) to finish
its job. There is a chance that the PageCursorTx removes the
page before the RefsOperation runs, which causes the RefsOperation
to fail to find a message in the page.
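As a rough illustration of one generic way to avoid the ordering problem (a 
self-contained sketch, not the actual Artemis fix): defer the page deletion 
until every transaction operation that still needs the page has finished with 
it, instead of deleting it as soon as the page is complete:

{code:java}
class PageUsageSketch {
    private int users;
    private boolean deleteRequested;

    synchronized void acquire() { users++; }          // e.g. RefsOperation starts

    synchronized void release() {                     // e.g. RefsOperation done
        users--;
        maybeDelete();
    }

    synchronized void requestDelete() {               // e.g. PageCursorTx completes the page
        deleteRequested = true;
        maybeDelete();
    }

    private void maybeDelete() {
        if (deleteRequested && users == 0) {
            // safe to remove the page file / page-cache entry here
        }
    }
}
{code}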

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2174) Broker reconnect to another with scale down policy cause OOM

2018-11-13 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2174:
---

 Summary: Broker reconnect to another with scale down policy cause 
OOM
 Key: ARTEMIS-2174
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2174
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


When a node tries to reconnect to another node in a scale-down cluster, the 
reconnect request gets denied by the other node and the node keeps retrying, 
which causes tasks in the ordered executor to accumulate and eventually leads 
to an OOM.

To reproduce:
 # Start a 2-node cluster (node1 and node2) configured in scale-down mode.
 # Stop node2 and restart it.
 # node1 will try to reconnect to node2 repeatedly and never succeed.
 # Inspect the connecting ClientSessionFactory (e.g. by adding a log) and 
observe that its thread pool (closeExecutor, an OrderedExecutor instance) keeps 
adding tasks to its queue.

Over time the queue keeps growing and will eventually exhaust the heap memory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1850) QueueControl.listDeliveringMessages returns empty result

2018-11-06 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1850.
---
   Resolution: Fixed
Fix Version/s: (was: 2.7.0)
   2.6.4

> QueueControl.listDeliveringMessages returns empty result
> 
>
> Key: ARTEMIS-1850
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1850
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: AMQP
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.4
>
>
> With the AMQP protocol, when some messages are received in a transaction, 
> calling JMX QueueControl.listDeliveringMessages() returns an empty list before 
> the transaction is committed.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2133) Artemis tab not showing on IE browser

2018-10-17 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2133:
---

 Summary: Artemis tab not showing on IE browser
 Key: ARTEMIS-2133
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2133
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.6.3
 Environment: Windows IE browser
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


The web console on IE doesn't show the 'Artemis' tab because IE doesn't 
support JavaScript arrow functions (=>).

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-2097) Pause and Block Producers

2018-09-28 Thread Howard Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631835#comment-16631835
 ] 

Howard Gao commented on ARTEMIS-2097:
-

[~clebertsuconic] the credits may work for core clients, but it may not work 
for other protocol clients, which have different kinds of flow control. Is that 
right?

 

> Pause and Block Producers
> -
>
> Key: ARTEMIS-2097
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2097
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
> Environment: AMQ-1.5.5
>Reporter: Tyronne Wickramarathne
>Assignee: Howard Gao
>Priority: Major
> Fix For: unscheduled
>
>
> Could it be possible to block all incoming messages without changing the 
> address-full-policy to 'BLOCK'?
> The address-full-policy can be configured to block incoming messages once the 
> address reaches the configured max-size-bytes limit.
> However, in certain circumstances it is important to make a JMS destination 
> drain without accepting incoming messages while keeping the 
> address-full-policy at 'PAGE'. For instance, if a user needs to bring down 
> the broker for maintenance, it is important to allow the user to drain 
> existing messages in the corresponding destination without accepting any new 
> messages.
>  
> Currently the pause() method on a destination pauses message consumers. In a 
> similar fashion could it be possible to add a new method to block message 
> producers on a given destination irrespective of the address-full-policy 
> being used?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-2097) Pause and Block Producers

2018-09-28 Thread Howard Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631818#comment-16631818
 ] 

Howard Gao commented on ARTEMIS-2097:
-

I'm thinking of adding a method on AddressControl and another on ServerControl.

It should be based on addresses, as messages are sent to addresses.

 

> Pause and Block Producers
> -
>
> Key: ARTEMIS-2097
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2097
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
> Environment: AMQ-1.5.5
>Reporter: Tyronne Wickramarathne
>Assignee: Howard Gao
>Priority: Major
> Fix For: unscheduled
>
>
> Could it be possible to block all incoming messages without changing the 
> address-full-policy to 'BLOCK'?
> The address-full-policy can be configured to block incoming messages once the 
> address reaches the configured max-size-bytes limit.
> However, in certain circumstances it is important to make a JMS destination 
> drain without accepting incoming messages while keeping the 
> address-full-policy at 'PAGE'. For instance, if a user needs to bring down 
> the broker for maintenance, it is important to allow the user to drain 
> existing messages in the corresponding destination without accepting any new 
> messages.
>  
> Currently the pause() method on a destination pauses message consumers. In a 
> similar fashion could it be possible to add a new method to block message 
> producers on a given destination irrespective of the address-full-policy 
> being used?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-2097) Could it be possible to add a feature to block message producers?

2018-09-26 Thread Howard Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16629708#comment-16629708
 ] 

Howard Gao commented on ARTEMIS-2097:
-

[~clebertsuconic] I think this is useful. Wdyt?

> Could it be possible to add a feature to block message producers?
> -
>
> Key: ARTEMIS-2097
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2097
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
> Environment: AMQ-1.5.5
>Reporter: Tyronne Wickramarathne
>Assignee: Howard Gao
>Priority: Major
> Fix For: unscheduled
>
>
> Could it be possible to block all incoming messages without changing the 
> address-full-policy to 'BLOCK'?
> The address-full-policy can be configured to block incoming messages once the 
> address reaches the configured max-size-bytes limit.
> However, in certain circumstances it is important to make a JMS destination 
> drain without accepting incoming messages while keeping the 
> address-full-policy at 'PAGE'. For instance, if a user needs to bring down 
> the broker for maintenance, it is important to allow the user to drain 
> existing messages in the corresponding destination without accepting any new 
> messages.
>  
> Currently the pause() method on a destination pauses message consumers. In a 
> similar fashion could it be possible to add a new method to block message 
> producers on a given destination irrespective of the address-full-policy 
> being used?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ARTEMIS-2097) Could it be possible to add a feature to block message producers?

2018-09-26 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-2097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao reassigned ARTEMIS-2097:
---

Assignee: Howard Gao

> Could it be possible to add a feature to block message producers?
> -
>
> Key: ARTEMIS-2097
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2097
> Project: ActiveMQ Artemis
>  Issue Type: New Feature
>  Components: Broker
>Affects Versions: 1.5.5
> Environment: AMQ-1.5.5
>Reporter: Tyronne Wickramarathne
>Assignee: Howard Gao
>Priority: Major
> Fix For: unscheduled
>
>
> Could it be possible to block all incoming messages without changing the 
> address-full-policy to 'BLOCK'?
> The address-full-policy can be configured to block incoming messages once the 
> address reaches the configured max-size-bytes limit.
> However, in certain circumstances it is important to make a JMS destination 
> drain without accepting incoming messages while keeping the 
> address-full-policy at 'PAGE'. For instance, if a user needs to bring down 
> the broker for maintenance, it is important to allow the user to drain 
> existing messages in the corresponding destination without accepting any new 
> messages.
>  
> Currently the pause() method on a destination pauses message consumers. In a 
> similar fashion could it be possible to add a new method to block message 
> producers on a given destination irrespective of the address-full-policy 
> being used?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2088) Page.write() should throw exception if file is closed

2018-09-13 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2088:
---

 Summary: Page.write() should throw exception if file is closed
 Key: ARTEMIS-2088
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2088
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.3
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.4


In Page.write(final PagedMessage message), if the page file is closed the method 
returns silently. The caller has no way to know whether the message was paged to 
the file or not.
It should throw an exception so that the caller can handle it correctly.
This causes a random failure in the test 
org.hornetq.tests.integration.client.PagingTest#testExpireLargeMessageOnPaging().
The test shows that when the server stops it closes the page file. In the 
meantime a message is expired to the expiry queue, and if the expiry queue is in 
paging mode, the write goes to Page.write() and returns without any error. The 
result is that the message is removed from the original queue and not added to 
the expiry queue.

If we throw an exception here, the expiration fails and the message is not 
removed from the original queue. The next time the broker is started, the 
message will be reloaded and expired again, so no message is lost.
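As a rough sketch of the proposed behaviour (illustrative only, not the actual 
Page implementation): fail fast when the page file has been closed instead of 
returning silently, so the caller knows the message was not persisted:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

class PageSketch {
    private final FileChannel channel;

    PageSketch(FileChannel channel) { this.channel = channel; }

    void write(ByteBuffer encodedMessage) throws IOException {
        if (!channel.isOpen()) {
            // previously this case returned silently and the message was lost
            throw new IllegalStateException("page file is closed, message not paged");
        }
        channel.write(encodedMessage);
    }
}
{code}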



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-2035) org.apache.activemq.artemis.uri.JGroupsSchema can only serialize Artemis JGroups endpoint factories

2018-08-19 Thread Howard Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16585362#comment-16585362
 ] 

Howard Gao commented on ARTEMIS-2035:
-

JGroups configuration/objects are not supposed to be serialized because they 
are local to the network environment. In case of ConnectionFactories using 
JNDI, it should use static connectors for HA.

> org.apache.activemq.artemis.uri.JGroupsSchema can only serialize Artemis 
> JGroups endpoint factories
> ---
>
> Key: ARTEMIS-2035
> URL: https://issues.apache.org/jira/browse/ARTEMIS-2035
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 1.5.6
>Reporter: Emmanuel Hugonnet
>Priority: Major
>
> If an application server provides its own BroadcastEndpointFactory then it 
> will never be converted to a URI even if it could provide the channelName or 
> auth parameter.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-2013) Can't create durable subscriber to a composite topic

2018-08-07 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-2013:
---

 Summary: Can't create durable subscriber to a composite topic
 Key: ARTEMIS-2013
 URL: https://issues.apache.org/jira/browse/ARTEMIS-2013
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: OpenWire
Affects Versions: 2.6.2
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.3


An OpenWire client can use a compound destination name of the form "a,b,c..." 
and consume from, or subscribe to, multiple destinations. Such a compound 
destination only works for topics when the subscriber is non-durable. 
Attempting to create a durable subscription on a compound address gives an 
error message:

 
2018-07-23 14:11:31,166 WARN [org.apache.activemq.artemis.core.server] Errors 
occurred during the buffering operation : java.lang.IllegalStateException: 
Cannot create a subscriber on the durable subscription since it already has 
subscriber(s)
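As a rough reproducer sketch (illustrative only; the broker URL, client id and 
destination names are placeholders): a composite destination works for a 
non-durable consumer, while the durable subscription below triggers the error 
quoted above:

{code:java}
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicSubscriber;

import org.apache.activemq.ActiveMQConnectionFactory;

public class CompositeDurableSubSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        try {
            connection.setClientID("client-1");        // required for durable subscriptions
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic composite = session.createTopic("topic.A,topic.B");   // compound destination
            TopicSubscriber durable = session.createDurableSubscriber(composite, "sub-1");
            System.out.println("created " + durable);
        } finally {
            connection.close();
        }
    }
}
{code}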

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1995) Client fail over fails when live shut down too soon

2018-07-29 Thread Howard Gao (JIRA)


[ 
https://issues.apache.org/jira/browse/ARTEMIS-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16561534#comment-16561534
 ] 

Howard Gao commented on ARTEMIS-1995:
-

It's ported from HORNETQ-1572.

> Client fail over fails when live shut down too soon
> ---
>
> Key: ARTEMIS-1995
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1995
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.2
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.3
>
>
> In a live-backup scenario, if the live is restarted and shut down too soon,
>  the client has a chance to fail on failover because its internal topology
>  is inconsistent with the final status. The client keeps connecting to the live
>  that has already shut down, never trying to connect to the backup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1995) Client fail over fails when live shut down too soon

2018-07-29 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1995:
---

 Summary: Client fail over fails when live shut down too soon
 Key: ARTEMIS-1995
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1995
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.6.2
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.3


In a live-backup scenario, if the live is restarted and shut down too soon,
 the client has a chance to fail on failover because its internal topology
 is inconsistent with the final status. The client keeps connecting to the live
 that has already shut down, never trying to connect to the backup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1916) Remove Jmx ArtemisRMIServerSocketFactory

2018-06-12 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1916.
---
Resolution: Invalid

The Jira is invalid as it's not the cause of the problem

> Remove Jmx ArtemisRMIServerSocketFactory
> 
>
> Key: ARTEMIS-1916
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1916
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Web Console
>Affects Versions: 2.6.1
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.6.2
>
>
> The ArtemisRMIServerSocketFactory doesn't do anything special; instead, the 
> existence of this impl class causes the JMX client to fail to connect (for 
> reasons not known, probably because the functionality was not fully 
> implemented). It turns out to work just fine with the JDK's impl. This class 
> is not necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1916) Remove Jmx ArtemisRMIServerSocketFactory

2018-06-07 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1916:
---

 Summary: Remove Jmx ArtemisRMIServerSocketFactory
 Key: ARTEMIS-1916
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1916
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Web Console
Affects Versions: 2.6.1
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.6.2


The ArtemisRMIServerSocketFactory doesn't do anything special; instead, the 
existence of this impl class causes the JMX client to fail to connect (for 
reasons not known, probably because the functionality was not fully 
implemented). It turns out to work just fine with the JDK's impl. This class is 
not necessary.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1904) Jmx Management Security Tests

2018-06-04 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-1904:

Affects Version/s: 2.6.1

> Jmx Management Security Tests
> -
>
> Key: ARTEMIS-1904
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1904
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.6.1
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0
>
>
> Adding tests for jmx security (authentication/authorization) functionalities.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1904) Jmx Management Security Tests

2018-06-04 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-1904:

Fix Version/s: 2.7.0

> Jmx Management Security Tests
> -
>
> Key: ARTEMIS-1904
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1904
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.6.1
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0
>
>
> Adding tests for jmx security (authentication/authorization) functionalities.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1904) Jmx Management Security Tests

2018-06-04 Thread Howard Gao (JIRA)


 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-1904:

Component/s: Broker

> Jmx Management Security Tests
> -
>
> Key: ARTEMIS-1904
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1904
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
>
> Adding tests for jmx security (authentication/authorization) functionalities.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1904) Jmx Management Security Tests

2018-06-04 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1904:
---

 Summary: Jmx Management Security Tests
 Key: ARTEMIS-1904
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1904
 Project: ActiveMQ Artemis
  Issue Type: Test
Reporter: Howard Gao
Assignee: Howard Gao


Adding tests for jmx security (authentication/authorization) functionalities.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1853) Adding Netty OpenSSL provider test

2018-05-24 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1853.
---
Resolution: Fixed

> Adding Netty OpenSSL provider test
> --
>
> Key: ARTEMIS-1853
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1853
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0
>
>
> Make sure netty's openssl provider works.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (ARTEMIS-1853) Adding Netty OpenSSL provider test

2018-05-24 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on ARTEMIS-1853 started by Howard Gao.
---
> Adding Netty OpenSSL provider test
> --
>
> Key: ARTEMIS-1853
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1853
> Project: ActiveMQ Artemis
>  Issue Type: Test
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.7.0
>
>
> Make sure netty's openssl provider works.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1868) Openwire doesn't add delivery count in client ack mode

2018-05-15 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1868:
---

 Summary: Openwire doesn't add delivery count in client ack mode
 Key: ARTEMIS-1868
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1868
 Project: ActiveMQ Artemis
  Issue Type: Test
  Components: OpenWire
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


If a client-ack mode consumer receives a message and closes without acking it, 
the redelivered message won't have its redelivery flag (JMSRedelivered) set, 
because the broker doesn't increment the delivery count when the message is 
cancelled back to the queue.
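
A hedged reproduction sketch with a plain JMS client over OpenWire (the broker 
URL and queue name are illustrative):

{code:java}
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ClientAckRedeliveryCheck {
   public static void main(String[] args) throws Exception {
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

      // First pass: receive the message in CLIENT_ACKNOWLEDGE mode but never ack it,
      // then drop the connection so the message is cancelled back to the queue.
      Connection first = cf.createConnection();
      first.start();
      Session session = first.createSession(false, Session.CLIENT_ACKNOWLEDGE);
      Queue queue = session.createQueue("redelivery.test");
      session.createProducer(queue).send(session.createTextMessage("hello"));
      session.createConsumer(queue).receive(5000);
      first.close();

      // Second pass: the redelivered message should carry JMSRedelivered=true;
      // the bug is that the delivery count was not incremented, so it stayed false.
      Connection second = cf.createConnection();
      second.start();
      Session session2 = second.createSession(false, Session.CLIENT_ACKNOWLEDGE);
      Message redelivered = session2.createConsumer(queue).receive(5000);
      System.out.println("JMSRedelivered = " + redelivered.getJMSRedelivered());
      second.close();
   }
}
{code}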



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1853) Adding Netty OpenSSL provider test

2018-05-08 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1853:
---

 Summary: Adding Netty OpenSSL provider test
 Key: ARTEMIS-1853
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1853
 Project: ActiveMQ Artemis
  Issue Type: Test
  Components: Broker
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


Make sure Netty's OpenSSL provider works.
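
For context, such a test would presumably drive a client against an SSL 
acceptor whose engine is switched from the JDK implementation to OpenSSL. A 
minimal sketch, assuming the usual Netty transport parameters (the URL, store 
path and password below are placeholders):

{code:java}
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OpenSslSmokeTest {
   public static void main(String[] args) throws Exception {
      // Illustrative URL only: sslProvider=OPENSSL asks Netty for its OpenSSL
      // engine instead of the JDK one; store path/password are placeholders.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
         "tcp://localhost:61617?sslEnabled=true&sslProvider=OPENSSL"
            + "&trustStorePath=client.ts&trustStorePassword=changeit");
      try (Connection connection = cf.createConnection()) {
         connection.start(); // if the TLS handshake succeeds, the provider works
      }
   }
}
{code}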



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1850) QueueControl.listDeliveringMessages returns empty result

2018-05-06 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1850:
---

 Summary: QueueControl.listDeliveringMessages returns empty result
 Key: ARTEMIS-1850
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1850
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


With the AMQP protocol, when messages are received in a transaction, calling 
JMX QueueControl.listDeliveringMessages() returns an empty list before the 
transaction is committed.
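
For reference, a hedged sketch of querying the control over JMX while the AMQP 
transaction is still open; the JMX URL and MBean name are placeholders (a real 
test would build the ObjectName via ObjectNameBuilder):

{code:java}
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.artemis.api.core.management.QueueControl;

public class ListDeliveringCheck {
   public static void main(String[] args) throws Exception {
      // Placeholder JMX URL and MBean name for a queue called exampleQueue.
      JMXServiceURL url = new JMXServiceURL(
         "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
      try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
         MBeanServerConnection mbsc = connector.getMBeanServerConnection();
         ObjectName name = new ObjectName(
            "org.apache.activemq.artemis:broker=\"0.0.0.0\",component=addresses,"
               + "address=\"exampleQueue\",subcomponent=queues,"
               + "routing-type=\"anycast\",queue=\"exampleQueue\"");
         QueueControl control = JMX.newMBeanProxy(mbsc, name, QueueControl.class, false);

         // Invoked while an AMQP consumer holds messages in an open transaction:
         // the reported bug is that this came back empty until the commit.
         Object delivering = control.listDeliveringMessages();
         System.out.println(delivering);
      }
   }
}
{code}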

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1825) Live-backup topology not correctly displayed on console

2018-04-23 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1825:
---

 Summary: Live-backup topology not correctly displayed on console
 Key: ARTEMIS-1825
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1825
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Web Console
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


The backup's web console doesn't correctly show the topology diagram of the 
live-backup pair; it points to itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1809) Add example showing Virtual Topic Mapping

2018-04-16 Thread Howard Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16440302#comment-16440302
 ] 

Howard Gao commented on ARTEMIS-1809:
-

[~pgfox] Can you assign this issue to yourself and update the status?
Thanks


> Add example showing Virtual Topic Mapping 
> --
>
> Key: ARTEMIS-1809
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1809
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: OpenWire
>Affects Versions: 2.5.0
>Reporter: Pat Fox
>Priority: Minor
>
> Add a simple example to demonstrate the use of 
> "virtualTopicConsumerWildcards" to map ActiveMQ 5.x virtual topic consumers 
> to use the Artemis Address model.
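
For illustration only (not the example the issue will add), the 5.x-style 
consumer side looks roughly like this; the acceptor parameter value and queue 
name are assumptions based on the commonly documented pattern:

{code:java}
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class VirtualTopicConsumer {
   public static void main(String[] args) throws Exception {
      // Assumes the OpenWire acceptor in broker.xml carries something like
      //   virtualTopicConsumerWildcards=Consumer.*.%3E%3B2
      // so that queues named Consumer.<id>.VirtualTopic.<topic> are mapped onto
      // the matching Artemis address (parameter value is illustrative).
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      Connection connection = cf.createConnection();
      connection.start();
      Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

      // A classic 5.x virtual-topic consumer: it reads its own queue and receives
      // a copy of every message published to the topic VirtualTopic.Orders.
      MessageConsumer consumer =
         session.createConsumer(session.createQueue("Consumer.A.VirtualTopic.Orders"));
      System.out.println(consumer.receive(5000));
      connection.close();
   }
}
{code}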



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1790) Improve Topology Member Finding

2018-04-11 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1790.
---
Resolution: Fixed

> Improve Topology Member Finding
> ---
>
> Key: ARTEMIS-1790
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1790
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> When determining whether a connector belongs to a target node, the broker 
> compares the whole parameter map, which is not necessary. The check is also 
> best delegated to the corresponding remoting connection, which understands 
> its own connector (e.g. an INVMConnection knows whether a connector belongs 
> to a target node by checking its serverID only, while the Netty ones only 
> need to match host and port, treating localhost and 127.0.0.1 as the same 
> thing).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1797) Auto-create-address flag shouldn't block temp destination creation

2018-04-10 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1797.
---
Resolution: Fixed

> Auto-create-address flag shouldn't block temp destination creation
> --
>
> Key: ARTEMIS-1797
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1797
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> When creating a temp destination with auto-create-address set to false, the 
> broker throws an error and refuses to create it. This doesn't conform to the 
> normal use case (like the AMQP dynamic flag), where the temp destination 
> should be allowed even if auto-create-address is false.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1797) Auto-create-address flag shouldn't block temp destination creation

2018-04-10 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1797:
---

 Summary: Auto-create-address flag shouldn't block temp destination 
creation
 Key: ARTEMIS-1797
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1797
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


When creating a temp destination with auto-create-address set to false, the 
broker throws an error and refuses to create it. This doesn't conform to the 
normal use case (like the AMQP dynamic flag), where the temp destination should 
be allowed even if auto-create-address is false.
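
The client-side expectation is just the standard JMS temporary-destination 
call. A minimal sketch, assuming an address-setting with 
auto-create-addresses=false and an illustrative broker URL:

{code:java}
import javax.jms.Connection;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class TempQueueCheck {
   public static void main(String[] args) throws Exception {
      // Even with auto-create-addresses=false in the matching address-setting,
      // creating a temporary destination should still succeed.
      ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
         TemporaryQueue temp = session.createTemporaryQueue();
         System.out.println("created " + temp.getQueueName());
      }
   }
}
{code}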

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1791) Large message files are not removed after redistribution across a cluster

2018-04-09 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1791.
---
Resolution: Fixed

> Large message files are not removed after redistribution across a cluster
> -
>
> Key: ARTEMIS-1791
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1791
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
>Reporter: Ingo Weiss
>Assignee: Howard Gao
>Priority: Major
>
> Large messages are not removed from the large messages directory after they 
> have been redistributed to other nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ARTEMIS-1791) Large message files are not removed after redistribution across a cluster

2018-04-09 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao reassigned ARTEMIS-1791:
---

Assignee: Howard Gao

> Large message files are not removed after redistribution across a cluster
> -
>
> Key: ARTEMIS-1791
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1791
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.6.0
>Reporter: Ingo Weiss
>Assignee: Howard Gao
>Priority: Major
>
> Large messages are not removed from the large messages directory after they 
> have been redistributed to other nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1790) Improve Topology Member Finding

2018-04-07 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1790:
---

 Summary: Improve Topology Member Finding
 Key: ARTEMIS-1790
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1790
 Project: ActiveMQ Artemis
  Issue Type: Improvement
  Components: Broker
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


When determining whether a connector belongs to a target node, the broker 
compares the whole parameter map, which is not necessary. The check is also 
best delegated to the corresponding remoting connection, which understands its 
own connector (e.g. an INVMConnection knows whether a connector belongs to a 
target node by checking its serverID only, while the Netty ones only need to 
match host and port, treating localhost and 127.0.0.1 as the same thing).
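
Not the actual patch, but a hedged sketch of the host/port comparison described 
for the Netty case (all names here are hypothetical):

{code:java}
import java.util.Arrays;
import java.util.List;

public class ConnectorMatchSketch {
   // Hypothetical helper: names and structure are illustrative, not Artemis code.
   private static final List<String> LOCAL_ALIASES = Arrays.asList("localhost", "127.0.0.1");

   static boolean sameHost(String a, String b) {
      // localhost and 127.0.0.1 should be treated as the same host.
      if (a.equalsIgnoreCase(b)) {
         return true;
      }
      return LOCAL_ALIASES.contains(a.toLowerCase()) && LOCAL_ALIASES.contains(b.toLowerCase());
   }

   static boolean nettyConnectorMatches(String host1, int port1, String host2, int port2) {
      // For Netty connectors only host and port matter, not the whole parameter map.
      return sameHost(host1, host2) && port1 == port2;
   }

   public static void main(String[] args) {
      System.out.println(nettyConnectorMatches("localhost", 61616, "127.0.0.1", 61616)); // true
      System.out.println(nettyConnectorMatches("localhost", 61616, "otherhost", 61616)); // false
   }
}
{code}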



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1779) ClusterConnectionBridge may connect to other nodes than its target

2018-04-03 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1779.
---
Resolution: Fixed

> ClusterConnectionBridge may connect to other nodes than its target
> --
>
> Key: ARTEMIS-1779
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1779
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> The cluster connection bridge has a TopologyListener and connects to a new 
> node each time it receives a nodeUp() event. A check is needed here to make 
> sure that the cluster bridge only connects to its target node and its backups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1779) ClusterConnectionBridge may connect to other nodes than its target

2018-04-02 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1779:
---

 Summary: ClusterConnectionBridge may connect to other nodes than 
its target
 Key: ARTEMIS-1779
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1779
 Project: ActiveMQ Artemis
  Issue Type: Bug
Reporter: Howard Gao
Assignee: Howard Gao


The cluster connection bridge has a TopologyListener and connects to a new node 
each time it receives a nodeUp() event. A check is needed here to make sure 
that the cluster bridge only connects to its target node and its backups.
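
A hedged illustration of the guard being described; the types and method names 
are hypothetical stand-ins rather than the actual Artemis listener API:

{code:java}
public class BridgeNodeUpGuardSketch {
   // Hypothetical sketch: class, field and method names are illustrative only.
   private final String targetNodeID;

   BridgeNodeUpGuardSketch(String targetNodeID) {
      this.targetNodeID = targetNodeID;
   }

   /** Called for every nodeUp()-style topology event the bridge observes. */
   void onNodeUp(String nodeID, boolean isBackupOfTarget) {
      // Only react when the event is about the bridge's own target node
      // (or one of that node's backups); ignore every other cluster member.
      if (!targetNodeID.equals(nodeID) && !isBackupOfTarget) {
         return;
      }
      reconnectToTarget(nodeID);
   }

   private void reconnectToTarget(String nodeID) {
      System.out.println("bridge (re)connecting to " + nodeID);
   }

   public static void main(String[] args) {
      BridgeNodeUpGuardSketch guard = new BridgeNodeUpGuardSketch("node-1");
      guard.onNodeUp("node-2", false); // ignored
      guard.onNodeUp("node-1", false); // triggers reconnect
   }
}
{code}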



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1779) ClusterConnectionBridge may connect to other nodes than its target

2018-04-02 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-1779:

Affects Version/s: 2.5.0

> ClusterConnectionBridge may connect to other nodes than its target
> --
>
> Key: ARTEMIS-1779
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1779
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> The cluster connection bridge has a TopologyListener and connects to a new 
> node each time it receives a nodeUp() event. A check is needed here to make 
> sure that the cluster bridge only connects to its target node and its backups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (ARTEMIS-1779) ClusterConnectionBridge may connect to other nodes than its target

2018-04-02 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao updated ARTEMIS-1779:

Fix Version/s: 2.5.1

> ClusterConnectionBridge may connect to other nodes than its target
> --
>
> Key: ARTEMIS-1779
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1779
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> The cluster connection bridge has a TopologyListener and connects to a new 
> node each time it receives a nodeUp() event. A check is needed here to make 
> sure that the cluster bridge only connects to its target node and its backups.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Closed] (ARTEMIS-1754) LargeServerMessageImpl.toString() may leak files

2018-03-21 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-1754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao closed ARTEMIS-1754.
---
Resolution: Fixed

> LargeServerMessageImpl.toString() may leak files
> 
>
> Key: ARTEMIS-1754
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1754
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 2.5.0
>Reporter: Howard Gao
>Assignee: Howard Gao
>Priority: Major
> Fix For: 2.5.1
>
>
> This overridden method calls getPersistentSize(), which may open the large 
> message file in order to get its size. Calling it on its own causes the file 
> to be opened without being closed.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (ARTEMIS-406) STOMP acknowledgements should support transactions

2018-03-20 Thread Howard Gao (JIRA)

 [ 
https://issues.apache.org/jira/browse/ARTEMIS-406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Howard Gao reassigned ARTEMIS-406:
--

Assignee: Howard Gao

> STOMP acknowledgements should support transactions
> --
>
> Key: ARTEMIS-406
> URL: https://issues.apache.org/jira/browse/ARTEMIS-406
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>  Components: STOMP
>Reporter: Lionel Cons
>Assignee: Howard Gao
>Priority: Major
>
> Artemis currently does not support transactional acknowledgements:
> {quote}
> Message acknowledgements are not transactional. The ACK frame can not be part 
> of a transaction (it will be ignored if its transaction header is set).
> {quote}
> The STOMP 1.2 specification contains:
> {quote}
> Optionally, a transaction header MAY be specified, indicating that the 
> message acknowledgment SHOULD be part of the named transaction.
> {quote}
> And other brokers (such as ActiveMQ 5.x) do support transactional 
> acknowledgements.
> Artemis should support them too.
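
For concreteness, the frame sequence the STOMP 1.2 spec allows looks roughly 
like this. A minimal sketch that writes raw frames over a socket (host, port, 
destination and ids are illustrative, and broker responses are not read):

{code:java}
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class StompTxAckSketch {
   private static void frame(OutputStream out, String body) throws Exception {
      // Every STOMP frame is terminated by a NUL byte.
      out.write(body.getBytes(StandardCharsets.UTF_8));
      out.write(0);
      out.flush();
   }

   public static void main(String[] args) throws Exception {
      try (Socket socket = new Socket("localhost", 61613)) {
         OutputStream out = socket.getOutputStream();
         frame(out, "CONNECT\naccept-version:1.2\nhost:localhost\n\n");
         frame(out, "SUBSCRIBE\nid:sub-0\ndestination:/queue/example\nack:client\n\n");
         // ... read a MESSAGE frame carrying ack:12345 here ...
         frame(out, "BEGIN\ntransaction:tx1\n\n");
         // The transaction header on ACK is what this issue asks the broker to honour.
         frame(out, "ACK\nid:12345\ntransaction:tx1\n\n");
         frame(out, "COMMIT\ntransaction:tx1\n\n");
      }
   }
}
{code}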



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1754) LargeServerMessageImpl.toString() may leak files

2018-03-20 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1754:
---

 Summary: LargeServerMessageImpl.toString() may leak files
 Key: ARTEMIS-1754
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1754
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: Broker
Affects Versions: 2.5.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.1


This overridden method calls getPersistentSize(), which may open the large 
message file in order to get its size. Calling it on its own causes the file to 
be opened without being closed.
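
A simplified sketch of the shape of the problem and one way a toString() can 
avoid it; the class and field names are stand-ins, not the real implementation:

{code:java}
public class LargeMessageToStringSketch {
   private final long messageID;
   private Long cachedSize; // filled in once the size is actually known

   LargeMessageToStringSketch(long messageID) {
      this.messageID = messageID;
   }

   /** Stand-in for getPersistentSize(): opening the backing file just for logging leaks it. */
   long computeSizeByOpeningFile() {
      throw new UnsupportedOperationException("would open the large-message file");
   }

   @Override
   public String toString() {
      // Only report a size that is already known; never open the file from toString().
      return "LargeMessage[id=" + messageID
         + ", size=" + (cachedSize != null ? cachedSize : "unknown") + "]";
   }

   public static void main(String[] args) {
      System.out.println(new LargeMessageToStringSketch(42L));
   }
}
{code}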

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (ARTEMIS-1732) AMQP anonymous producer not blocked on max-disk-usage

2018-03-07 Thread Howard Gao (JIRA)
Howard Gao created ARTEMIS-1732:
---

 Summary: AMQP anonymous producer not blocked on max-disk-usage
 Key: ARTEMIS-1732
 URL: https://issues.apache.org/jira/browse/ARTEMIS-1732
 Project: ActiveMQ Artemis
  Issue Type: Bug
  Components: AMQP
Affects Versions: 2.4.0
Reporter: Howard Gao
Assignee: Howard Gao
 Fix For: 2.5.0


Anonymous senders (those created without a target address) are not blocked when 
max-disk-usage is reached. The cause is that when such a sender is created on 
the broker, the broker doesn't check the disk/memory usage and gives out the 
credit immediately.
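
For context, this is how such a sender typically arises from a JMS client. A 
hedged sketch using the Qpid JMS client (URL and queue name are illustrative); 
a producer created with a null destination is usually mapped onto an anonymous 
AMQP sender:

{code:java}
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.qpid.jms.JmsConnectionFactory;

public class AnonymousAmqpProducer {
   public static void main(String[] args) throws Exception {
      JmsConnectionFactory cf = new JmsConnectionFactory("amqp://localhost:61616");
      try (Connection connection = cf.createConnection()) {
         connection.start();
         Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

         // Passing null here creates an anonymous producer; the destination is
         // supplied per send. This is the kind of sender the issue says was not
         // blocked once max-disk-usage had been exceeded.
         MessageProducer producer = session.createProducer(null);
         Queue queue = session.createQueue("exampleQueue");
         producer.send(queue, session.createTextMessage("hello"));
      }
   }
}
{code}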



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (ARTEMIS-1604) Artemis deadlock using MQTT Protocol

2018-02-26 Thread Howard Gao (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-1604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377923#comment-16377923
 ] 

Howard Gao commented on ARTEMIS-1604:
-

Hi Tiago,

What version are you using? Did you try the latest snapshot?

 

Howard

> Artemis deadlock using MQTT Protocol
> 
>
> Key: ARTEMIS-1604
> URL: https://issues.apache.org/jira/browse/ARTEMIS-1604
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Reporter: Tiago Santos
>Priority: Blocker
> Attachments: artemisDeadlock.log, broker.xml
>
>
> Hello,
> When load testing Artemis as our MQTT broker, we found that the broker 
> deadlocks and crashes (see attached log).
> Our use case is 1000 simultaneous connections (publishers), each sending 1 
> message per second with a 1KB payload.
> After all 1000 clients successfully connect to Artemis and start sending 
> messages, the deadlock comes up after a while.
> The Xms and Xmx of the JVM are both set to 4GB.
> The broker.xml can be found in the attachment.
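
As a rough idea of the reproducer described above, one of the thousand 
publishers might look like this with the Eclipse Paho client; the broker URL, 
topic and client id are illustrative, since the original setup is only 
described in the report:

{code:java}
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class MqttLoadPublisher {
   public static void main(String[] args) throws Exception {
      // One of the ~1000 publishers: connect, then send a 1KB payload once per second.
      MqttClient client = new MqttClient("tcp://localhost:1883", "load-client-0",
         new MemoryPersistence());
      client.connect();

      byte[] payload = new byte[1024]; // ~1KB message body
      while (true) {
         client.publish("load/test", new MqttMessage(payload));
         Thread.sleep(1000);
      }
   }
}
{code}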



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

