[jira] [Closed] (ARTEMIS-3098) OpenWire message conversion fails on JMSXGroupSeq
[ https://issues.apache.org/jira/browse/ARTEMIS-3098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Justin Bertram closed ARTEMIS-3098. --- Resolution: Cannot Reproduce > OpenWire message conversion fails on JMSXGroupSeq > - > > Key: ARTEMIS-3098 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3098 > Project: ActiveMQ Artemis > Issue Type: Bug >Affects Versions: 2.16.0 >Reporter: Justin Bertram >Assignee: Justin Bertram >Priority: Major > > Test environment: Linux Fedora 32, OpenJDK Runtime Environment (build > 1.8.0_275-b01), activemq-cpp-library-3.9.5, apache-artemis-2.16.0 > 1) Start Artemis server with default configuration, then run from > apache-activemq-5.15.11/bin: "activemq producer", "activemq consumer" ---> > all ok, all messages received > 2) Start Artemis server with default configuration; using activemq client > library send to a queue several messages having JMSXGroupID property set. The > broker receives the messages, but the dispatch to the consumer never > completes (Wireshark shows that the messages are not sent from the broker). > The following exceptions appear in the server's log for each message: > {noformat} > 2021-02-04 12:39:43,817 TRACE > [org.apache.activemq.artemis.core.server.impl.QueueImpl] Queue > R.QA62_ORA12.AuditSequence is delivering reference > Reference[884]:RELIABLE:CoreMessage[messageID=884,durable=true,userID=41e334b6-662e-11eb-9a20-0800276be4c9,priority=4, > timestamp=Wed Feb 03 16:44:02 EET 2021,expiration=0, durable=true, > address=R.QA62_ORA12.AuditSequence,size=15115,properties=TypedProperties[AuditAuthorized=Y,_AMQ_GROUP_ID=26D5,_AMQ_GROUP_SEQUENCE=0,JMSXGroupID=26D5,__HDR_BROKER_IN_TIME=1612363442486,AuditId=26D5,SequenceNumber=0,SourceSysID=,_AMQ_ROUTING_TYPE=1,JMSXGroupSeq=1,__HDR_COMMAND_ID=27,Version=1,__HDR_MESSAGE_ID=[ > 004A 6E00 017B 0100 2349 443A 4665 646F 7261 3330 2D33 3439 3639 2D31 ... 
> 0001 0001 0008 > ),AuditState=DONE,__HDR_DROPPABLE=false,__AMQ_CID=ID:Fedora30-34969-1612363441679-1:0,TranslatorType=Distribution,__HDR_ARRIVAL=0,__HDR_MARSHALL_PROP=[ > 000C 000F 4175 6469 7441 7574 686F 7269 7A65 6409 0001 5900 0741 7564 ... > 6E4E 756D 6265 7206 0007 5665 7273 696F 6E05 > 0001),TransactionNumber=0,__HDR_PRODUCER_ID=[ 0037 7B01 0023 4944 3A46 > 6564 6F72 6133 302D 3334 3936 392D 3136 3132 3336 3334 3431 3637 392D 303A > 3000 0100 01)]]@1752182275 > 2021-02-04 12:39:43,817 TRACE > [org.apache.activemq.artemis.core.server.impl.ServerConsumerImpl] > ServerConsumerImpl::ServerConsumerImpl [id=2, filter=null, > binding=LocalQueueBinding [address=R.QA62_ORA12.AuditSequence, > queue=QueueImpl[name=R.QA62_ORA12.AuditSequence, postOffice=PostOfficeImpl > [server=ActiveMQServerImpl::serverUUID=e773f2c8-6629-11eb-a149-0800276be4c9], > temp=false]@45d2ade3, filter=null, name=R.QA62_ORA12.AuditSequence, > clusterName=R.QA62_ORA12.AuditSequencee773f2c8-6629-11eb-a149-0800276be4c9]] > Handling reference > Reference[884]:RELIABLE:CoreMessage[messageID=884,durable=true,userID=41e334b6-662e-11eb-9a20-0800276be4c9,priority=4, > timestamp=Wed Feb 03 16:44:02 EET 2021,expiration=0, durable=true, > address=R.QA62_ORA12.AuditSequence,size=15115,properties=TypedProperties[AuditAuthorized=Y,_AMQ_GROUP_ID=26D5,_AMQ_GROUP_SEQUENCE=0,JMSXGroupID=26D5,__HDR_BROKER_IN_TIME=1612363442486,AuditId=26D5,SequenceNumber=0,SourceSysID=,_AMQ_ROUTING_TYPE=1,JMSXGroupSeq=1,__HDR_COMMAND_ID=27,Version=1,__HDR_MESSAGE_ID=[ > 004A 6E00 017B 0100 2349 443A 4665 646F 7261 3330 2D33 3439 3639 2D31 ... > 0001 0001 0008 > ),AuditState=DONE,__HDR_DROPPABLE=false,__AMQ_CID=ID:Fedora30-34969-1612363441679-1:0,TranslatorType=Distribution,__HDR_ARRIVAL=0,__HDR_MARSHALL_PROP=[ > 000C 000F 4175 6469 7441 7574 686F 7269 7A65 6409 0001 5900 0741 7564 ... 
> 6E4E 756D 6265 7206 0007 5665 7273 696F 6E05 > 0001),TransactionNumber=0,__HDR_PRODUCER_ID=[ 0037 7B01 0023 4944 3A46 > 6564 6F72 6133 302D 3334 3936 392D 3136 3132 3336 3334 3431 3637 392D 303A > 3000 0100 01)]]@1752182275 > 2021-02-04 12:39:43,817 DEBUG > [org.apache.activemq.artemis.core.server.impl.QueueMessageMetrics] > QueuePendingMessageMetrics[queue=R.QA62_ORA12.AuditSequence, name=delivering] > increment messageCount to 8: > Reference[884]:RELIABLE:CoreMessage[messageID=884,durable=true,userID=41e334b6-662e-11eb-9a20-0800276be4c9,priority=4, > timestamp=Wed Feb 03 16:44:02 EET 2021,expiration=0, durable=true, > address=R.QA62_ORA12.AuditSequence,size=15115,properties=TypedProperties[AuditAuthorized=Y,_AMQ
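The log above shows the same message carrying both the OpenWire JMSX names (JMSXGroupID, JMSXGroupSeq) and the Artemis-internal grouping names (_AMQ_GROUP_ID, _AMQ_GROUP_SEQUENCE). As a rough illustration of that property mapping — a sketch over a plain Map rather than the real Artemis/OpenWire converter classes, with the class and helper names invented:

```java
import java.util.HashMap;
import java.util.Map;

public class GroupPropertySketch {
    // Hypothetical helper: copy the JMSX group properties to the internal
    // names Artemis uses for message grouping, as seen side by side in the
    // TypedProperties dump in the log above.
    static Map<String, Object> toInternal(Map<String, Object> msg) {
        Map<String, Object> out = new HashMap<>(msg);
        if (msg.containsKey("JMSXGroupID")) {
            out.put("_AMQ_GROUP_ID", msg.get("JMSXGroupID"));
        }
        if (msg.containsKey("JMSXGroupSeq")) {
            out.put("_AMQ_GROUP_SEQUENCE", msg.get("JMSXGroupSeq"));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> msg = new HashMap<>();
        msg.put("JMSXGroupID", "26D5");
        msg.put("JMSXGroupSeq", 1);
        System.out.println(toInternal(msg));
    }
}
```

Note the dump in the log is interesting precisely because the two name spaces disagree (JMSXGroupSeq=1 but _AMQ_GROUP_SEQUENCE=0), which is what the reporter was chasing before closing as Cannot Reproduce.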
[jira] [Deleted] (AMQNET-729) Bosch electronic locks
[ https://issues.apache.org/jira/browse/AMQNET-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christopher L. Shannon deleted AMQNET-729: -- > Bosch electronic locks > -- > > Key: AMQNET-729 > URL: https://issues.apache.org/jira/browse/AMQNET-729 > Project: ActiveMQ .Net > Issue Type: Bug >Reporter: Khóa điện tử Bosch >Priority: Major > > Bosch electronic locks, a leading global brand. Genuine 12-month warranty, > 1-for-1 exchange within 45 days, fast installation, high security. > Address: 95 Nguyễn Trãi, Thượng Đình, Thanh Xuân, Hà Nội > Contact us: > [Website:|https://khoadientubosch.com/] [https://khoadientubosch.com/] > Facebook: [https://www.facebook.com/khoadientubosch] > Map: [https://g.page/khoadientubosch?share] > Youtube: [https://www.youtube.com/channel/UCJsvFfYd-nwcefB0j9cnt5Q] > Twitter: [https://twitter.com/IenBosch] -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (AMQNET-729) Bosch electronic locks
[ https://issues.apache.org/jira/browse/AMQNET-729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Khóa điện tử Bosch updated AMQNET-729: -- Description: Bosch electronic locks, a leading global brand. Genuine 12-month warranty, 1-for-1 exchange within 45 days, fast installation, high security. Address: 95 Nguyễn Trãi, Thượng Đình, Thanh Xuân, Hà Nội. Contact us: [Website:|https://khoadientubosch.com/] [https://khoadientubosch.com/] Facebook: [https://www.facebook.com/khoadientubosch] Map: [https://g.page/khoadientubosch?share] Youtube: [https://www.youtube.com/channel/UCJsvFfYd-nwcefB0j9cnt5Q] Twitter: [https://twitter.com/IenBosch] was: Bosch electronic locks, a leading global brand. Genuine 12-month warranty, 1-for-1 exchange within 45 days, fast installation, high security. 95 Nguyễn Trãi, Thượng Đình, Thanh Xuân, Hà Nội 935889885 |[https://www.facebook.com/khoadientubosch]| |https://g.page/khoadientubosch?share| |[https://www.youtube.com/channel/UCJsvFfYd-nwcefB0j9cnt5Q]| |[https://twitter.com/IenBosch]| |[https://www.linkedin.com/in/khoadientuboschh/]| |[https://www.pinterest.com/khoacuathongminhbosch/_saved/]| |[https://www.twitch.tv/khoadientuboschh/about]| |[https://khoadientubosch.com/]| > Bosch electronic locks > -- > > Key: AMQNET-729 > URL: https://issues.apache.org/jira/browse/AMQNET-729 > Project: ActiveMQ .Net > Issue Type: Bug >Reporter: Khóa điện tử Bosch >Priority: Major > > Bosch electronic locks, a leading global brand. Genuine 12-month warranty, > 1-for-1 exchange within 45 days, fast installation, high security. > Address: 95 Nguyễn Trãi, Thượng Đình, Thanh Xuân, Hà Nội > Contact us: > [Website:|https://khoadientubosch.com/] [https://khoadientubosch.com/] > Facebook: [https://www.facebook.com/khoadientubosch] > Map: [https://g.page/khoadientubosch?share] > Youtube: [https://www.youtube.com/channel/UCJsvFfYd-nwcefB0j9cnt5Q] > Twitter: [https://twitter.com/IenBosch] -- This message was sent by Atlassian Jira 
(v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3449) Speedup AMQP large message streaming
[ https://issues.apache.org/jira/browse/ARTEMIS-3449?focusedWorklogId=647357&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647357 ] ASF GitHub Bot logged work on ARTEMIS-3449: --- Author: ASF GitHub Bot Created on: 07/Sep/21 13:01 Start Date: 07/Sep/21 13:01 Worklog Time Spent: 10m Work Description: gemmellr edited a comment on pull request #3711: URL: https://github.com/apache/activemq-artemis/pull/3711#issuecomment-914286613 Even if it visibly isn't called, the remaining strands seem to suggest it should be and presumably was before, so I would leave the existing code for now and track down why/when it became unused and determine what the actual next step should be before just removing it, since the getData() usage appears to be pointing to a potentially significant issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647357) Time Spent: 10h 10m (was: 10h) > Speedup AMQP large message streaming > > > Key: ARTEMIS-3449 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3449 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 10h 10m > Remaining Estimate: 0h > > AMQP is using unpooled heap ByteBuffer(s) to stream AMQP large messages: > given that the underlying NIO sequential file can use either FileChannel or > RandomAccessFile (depending on whether the ByteBuffer used is direct/heap based), > both approaches would benefit from using Netty pooled direct buffers and avoid > additional copies (performed by RandomAccessFile), reducing GC too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3449) Speedup AMQP large message streaming
[ https://issues.apache.org/jira/browse/ARTEMIS-3449?focusedWorklogId=647358&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647358 ] ASF GitHub Bot logged work on ARTEMIS-3449: --- Author: ASF GitHub Bot Created on: 07/Sep/21 13:01 Start Date: 07/Sep/21 13:01 Worklog Time Spent: 10m Work Description: gemmellr commented on pull request #3711: URL: https://github.com/apache/activemq-artemis/pull/3711#issuecomment-914286613 Even if it visibly isn't called, the remaining strands seem to suggest it should be and presumably was before, so I would leave the existing code for now and track down why/when it became unused and determine what the actual next step should be before just removing it, since the getData() usage appears to be pointing to a potentially significant issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647358) Time Spent: 10h 20m (was: 10h 10m) > Speedup AMQP large message streaming > > > Key: ARTEMIS-3449 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3449 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 10h 20m > Remaining Estimate: 0h > > AMQP is using unpooled heap ByteBuffer(s) to stream AMQP large messages: > given that the underlying NIO sequential file can use either FileChannel or > RandomAccessFile (depending on whether the ByteBuffer used is direct/heap based), > both approaches would benefit from using Netty pooled direct buffers and avoid > additional copies (performed by RandomAccessFile), reducing GC too. -- This message was sent by Atlassian Jira (v8.3.4#803005)
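As a self-contained sketch of the copy the ARTEMIS-3449 description refers to: when a heap ByteBuffer is handed to NIO file I/O, the JDK copies it through an internal direct buffer first, so reading straight into a direct buffer (in Artemis's case a pooled Netty one; plain allocateDirect here for brevity) skips that hop. Class and method names are invented for illustration:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectReadSketch {
    // Read a whole file through FileChannel into a direct buffer.
    // With a heap buffer the JDK stages the bytes through an internal
    // direct buffer first; a (pooled) direct buffer avoids that extra
    // copy, which is the idea behind the improvement above.
    static byte[] readAll(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocateDirect((int) ch.size());
            while (buf.hasRemaining() && ch.read(buf) >= 0) {
                // keep reading until the buffer is full or EOF
            }
            buf.flip();
            byte[] out = new byte[buf.remaining()];
            buf.get(out);
            return out;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("large-msg", ".bin");
        Files.write(tmp, "hello large message".getBytes());
        System.out.println(new String(readAll(tmp)));
        Files.delete(tmp);
    }
}
```

The pooling half of the proposal (reusing Netty direct buffers instead of allocating per read) is not shown here, since it depends on Netty's ByteBufAllocator rather than the JDK.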
[jira] [Work logged] (ARTEMIS-3462) Improve MBean Guard exception messages
[ https://issues.apache.org/jira/browse/ARTEMIS-3462?focusedWorklogId=647330&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647330 ] ASF GitHub Bot logged work on ARTEMIS-3462: --- Author: ASF GitHub Bot Created on: 07/Sep/21 12:13 Start Date: 07/Sep/21 12:13 Worklog Time Spent: 10m Work Description: andytaylor commented on pull request #3727: URL: https://github.com/apache/activemq-artemis/pull/3727#issuecomment-914252034 lgtm -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647330) Remaining Estimate: 0h Time Spent: 10m > Improve MBean Guard exception messages > -- > > Key: ARTEMIS-3462 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3462 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Domenico Francesco Bruscino >Assignee: Domenico Francesco Bruscino >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Add the attributeName or the operationName to the security exception message. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (ARTEMIS-3462) Improve MBean Guard exception messages
Domenico Francesco Bruscino created ARTEMIS-3462: Summary: Improve MBean Guard exception messages Key: ARTEMIS-3462 URL: https://issues.apache.org/jira/browse/ARTEMIS-3462 Project: ActiveMQ Artemis Issue Type: Improvement Reporter: Domenico Francesco Bruscino Assignee: Domenico Francesco Bruscino Add the attributeName or the operationName to the security exception message. -- This message was sent by Atlassian Jira (v8.3.4#803005)
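A minimal sketch of the improvement ARTEMIS-3462 describes — including the attribute or operation name in the denial message so the caller can tell which MBean access failed. The class, method, and message wording here are invented; the real change lives in Artemis's MBean guard:

```java
public class MBeanGuardMessageSketch {
    // Hypothetical formatter for the security exception message: the
    // attribute or operation name is appended so a denied call such as
    // "call operation destroyQueue" is identifiable from the log alone.
    static String denied(String user, String action, String name) {
        return "User " + user + " does not have permission to " + action + " " + name;
    }

    public static void main(String[] args) {
        System.out.println(denied("guest", "call operation", "destroyQueue"));
    }
}
```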
[jira] [Resolved] (ARTEMIS-3450) StaticPoolTest and DiscoveryPoolTest fail sporadically in CI
[ https://issues.apache.org/jira/browse/ARTEMIS-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robbie Gemmell resolved ARTEMIS-3450. - Fix Version/s: 2.19.0 Resolution: Fixed > StaticPoolTest and DiscoveryPoolTest fail sporadically in CI > > > Key: ARTEMIS-3450 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3450 > Project: ActiveMQ Artemis > Issue Type: Test > Components: Tests >Affects Versions: 2.18.0 >Reporter: Robbie Gemmell >Assignee: Domenico Francesco Bruscino >Priority: Major > Fix For: 2.19.0 > > Time Spent: 10m > Remaining Estimate: 0h > > The StaticPoolTest and DiscoveryPoolTest tests added in ARTEMIS-3365 fail > sporadically in CI > Some examples: > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2325 > https://github.com/apache/activemq-artemis/runs/3433060966?check_suite_focus=true#step:5:2223 > Looking at the tests I do see a few small issues, though they may not explain > the failures: > - The MockTargetProbe contains a HashMap used from multiple threads (test and > pool) in a manner that isn't thread safe. It actually threw > ConcurrentModificationException during at least one CI run (e.g > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2163). > That may or may not be the cause of the test failure seen in the same log > (note it isn't seen in the other test log, though it failed at a different > assertion). It should use ConcurrentHashMap or perhaps alternatively protect > use of the map more generally. > - The PoolTestBase#testPoolQuorumWithMultipleTargets test creates and starts > a pool but doesn't ensure it is stopped on assertion failure. The > DiscoveryPoolTest subclass runs this test with a pool using a scheduled > executor, so it should presumably be cleaned up in the same manner the other > tests all use. 
> - One of the tests asserts there are no 'createdTargets' entries, and then > immediately iterates those [non-existent] entries to assert on the > non-existent values, which seems quite strange. -- This message was sent by Atlassian Jira (v8.3.4#803005)
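The ConcurrentHashMap fix suggested above can be sketched as follows (a hypothetical stand-in for MockTargetProbe, not the real test class). merge() makes each increment atomic, so concurrent writers can neither lose updates nor trip ConcurrentModificationException in a concurrent reader:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class TargetProbeSketch {
    // Execution counts are written by pool threads and read by the test
    // thread, so the map must be thread safe; ConcurrentHashMap also
    // provides atomic merge() for the read-modify-write increment.
    private final Map<String, Integer> executions = new ConcurrentHashMap<>();

    void recordExecution(String target) {
        executions.merge(target, 1, Integer::sum);
    }

    int getExecutions(String target) {
        return executions.getOrDefault(target, 0);
    }

    public static void main(String[] args) throws InterruptedException {
        TargetProbeSketch probe = new TargetProbeSketch();
        CountDownLatch done = new CountDownLatch(2);
        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) probe.recordExecution("target1");
            done.countDown();
        };
        new Thread(worker).start();
        new Thread(worker).start();
        done.await();
        System.out.println(probe.getExecutions("target1")); // 2000: no lost updates
    }
}
```

With a plain HashMap the same two-writer run could lose increments or throw from a concurrent reader, which matches the ConcurrentModificationException seen in the CI log.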
[jira] [Updated] (ARTEMIS-3382) Cannot publish to a deleted Destination: temp-queue:
[ https://issues.apache.org/jira/browse/ARTEMIS-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Tully updated ARTEMIS-3382: Priority: Major (was: Blocker) > Cannot publish to a deleted Destination: temp-queue: > > > Key: ARTEMIS-3382 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3382 > Project: ActiveMQ Artemis > Issue Type: Bug > Components: JMS >Affects Versions: 2.17.0 > Environment: OS:Alibaba Cloud Linux release 3 (Soaring Falcon)(Linux > kernel 5.10 LTS) > JDK version:1.8.0_251 > >Reporter: wang >Priority: Major > Labels: JmsMsgTemplate, SendAndReceive > > JDK version: 1.8.0_251 > spring-jms-5.2.7 > artemis 2.17.0, default configuration > There are two Java programs, A and B, on one machine. > A sends a synchronous message and B replies. > After running for a few days, > B throws an exception on reply. > It returns to normal after program B is restarted. > A: > jmsMsgTemplate.getJmsTemplate().setReceiveTimeout(20*1000); > String message= > jmsMsgTemplate.convertSendAndReceive(queueName,messageStr,String.class); > B: the reply does not time out, but throws this exception: > Cannot publish to a deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is javax.jms.InvalidDestinationException: Cannot publish to a > deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is org.springframework.jms.InvalidDestinationException: Cannot > publish to a deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is javax.jms.InvalidDestinationException: Cannot publish to a > deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588 > > Is this a bug, or does the default configuration not support synchronous > messages? > What should I do? -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (ARTEMIS-3382) Cannot publish to a deleted Destination: temp-queue:
[ https://issues.apache.org/jira/browse/ARTEMIS-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1744#comment-1744 ] Gary Tully commented on ARTEMIS-3382: - It seems you are using the OpenWire client; it listens for advisories by default to be aware of temp destinations, and Artemis does not publish advisories by default. If you use AMQP as the protocol, you won't see this mismatch. If you configure the OpenWire client to use jms.watchTopicAdvisories=false then you won't see the problem; if you configure Artemis to produce advisories, all should be ok too. See: ARTEMIS-3234. I would recommend using the Qpid JMS client in favour of OpenWire. > Cannot publish to a deleted Destination: temp-queue: > > > Key: ARTEMIS-3382 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3382 > Project: ActiveMQ Artemis > Issue Type: Bug > Components: JMS >Affects Versions: 2.17.0 > Environment: OS:Alibaba Cloud Linux release 3 (Soaring Falcon)(Linux > kernel 5.10 LTS) > JDK version:1.8.0_251 > >Reporter: wang >Priority: Blocker > Labels: JmsMsgTemplate, SendAndReceive > > JDK version: 1.8.0_251 > spring-jms-5.2.7 > artemis 2.17.0, default configuration > There are two Java programs, A and B, on one machine. > A sends a synchronous message and B replies. > After running for a few days, > B throws an exception on reply. > It returns to normal after program B is restarted. 
> A: > jmsMsgTemplate.getJmsTemplate().setReceiveTimeout(20*1000); > String message= > jmsMsgTemplate.convertSendAndReceive(queueName,messageStr,String.class); > B: the reply does not time out, but throws this exception: > Cannot publish to a deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is javax.jms.InvalidDestinationException: Cannot publish to a > deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is org.springframework.jms.InvalidDestinationException: Cannot > publish to a deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588; nested > exception is javax.jms.InvalidDestinationException: Cannot publish to a > deleted Destination: > temp-queue://ID:iZwz96vtq89cjmxpw5w9boZ-40113-1624574577491-1:1:588 > > Is this a bug, or does the default configuration not support synchronous > messages? > What should I do? -- This message was sent by Atlassian Jira (v8.3.4#803005)
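The client-side workaround Gary describes is applied through the OpenWire broker URL; jms.watchTopicAdvisories is a documented ActiveMQ client URI option (host and port below are placeholders):

```
tcp://broker-host:61616?jms.watchTopicAdvisories=false
```

Alternatively, per ARTEMIS-3234 the broker's OpenWire acceptor can be configured to produce advisories. A sketch of the broker.xml side, with the parameter name assumed from that issue — verify it against the Artemis documentation for your version:

```
<!-- broker.xml acceptor sketch: advisory support for OpenWire clients
     (supportAdvisory parameter name assumed from ARTEMIS-3234) -->
<acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,OPENWIRE;supportAdvisory=true</acceptor>
```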
[jira] [Updated] (ARTEMIS-3157) uneven number of connections in a cluster
[ https://issues.apache.org/jira/browse/ARTEMIS-3157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erwin Dondorp updated ARTEMIS-3157: --- Description: Using a cluster of 3 master + 3 slave nodes. full interconnect, each with a fresh virtual disk. no other constructions (bridges/federation/brokerconn/etc) and also no clients. On each master node, the 2 static connections to the other master nodes are visible, and each is marked with the dedicated cluster username. so that part seems ok. but without any clients having connected, there are additional connections. the amount is not the same in each master node. Some connections show "127.0.0.1" as address, something that is not in my configuration. none of the extra connections have any sessions. the details of an example: * broker1: 3 connections to own slave; 2 extra to/from broker2; 1 extra to/from backup of broker3 * broker2: 3 connections to own slave; 2 extra to/from broker1; 2 extra to/from broker3; 1 extra to/from slave of broker3 * broker3: 1 connection to own slave; no other extra connections the exact amount of connections varies a little between startups and also seems to depend on the exact startup sequence. my assumption is that these connections should not be present, and that this is not intended, hence this report. my wild guess is that these are remnants from connections that did not succeed due to the other cluster-members not yet being fully available. When the cluster is started one node at a time, the effect seems to exist only on the first node that was started. Note: not related to ARTEMIS-2870 as this report is still valid in 2.18.0-20210322.234647-43. was: Using a cluster of 3 master + 3 slave nodes. full interconnect, each with a fresh virtual disk. no other constructions (bridges/federation/brokerconn/etc) and also no clients. 
On each master nodes, the 2 static connections to the other master nodes are visible, and each is marked with the dedicated cluster username. so that part seems ok. but without any clients having connected, there are additional connections. the amount is not the same in each master node. Some connections show "127.0.0.1" as address, something that is not in my configuration. none of the extra connections have any sessions. Also the connections do not have ant sessions. the details of an example: * broker1: 3 connections to own slave; 2 extra to/from broker2; 1 extra to/from backup of broker3 * broker2: 3 connections to own slave; 2 extra to/from broker1; 2 extra to/from broker3; 1 extra to/from slave of broker3 * broker3: 1 connection to own slave; no other extra connections the exact amount of connections varies a little between startups and also seems to depend on the exact startup sequence. my assumption is that these connections should not be present, and that this is not intended, hence this report. my wild guess is that these are remnants from connections that did not succeed due to the other cluster-members not fully available yet. When the cluster is started one node at a time, the effect seems to exists only on the first node that was started. Note: not related to ARTEMIS-2870 as this report is still valid in 2.18.0-20210322.234647-43. > uneven number of connections in a cluster > - > > Key: ARTEMIS-3157 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3157 > Project: ActiveMQ Artemis > Issue Type: Bug > Components: Broker >Affects Versions: 2.17.0 >Reporter: Erwin Dondorp >Priority: Major > > Using a cluster of 3 master + 3 slave nodes. full interconnect, each with a > fresh virtual disk. no other constructions > (bridges/federation/brokerconn/etc) and also no clients. > On each master node, the 2 static connections to the other master nodes are > visible, and each is marked with the dedicated cluster username. so that part > seems ok. 
> but without any clients having connected, there are additional connections. > the amount is not the same in each master node. Some connections show > "127.0.0.1" as address, something that is not in my configuration. none of > the extra connections have any sessions. > the details of an example: > * broker1: 3 connections to own slave; 2 extra > to/from broker2; 1 extra > to/from backup of broker3 > * broker2: 3 connections to own slave; 2 extra > to/from broker1; 2 extra > to/from broker3; 1 extra > to/from slave of broker3 > * broker3: 1 connection to own slave; no other extra connections > the exact amount of connections varies a little between startups and also > seems to depend on the exact startup sequence. > my assumption is that these connections should not be present, and that this > is not intended, hence this report.
[jira] [Commented] (ARTEMIS-3450) StaticPoolTest and DiscoveryPoolTest fail sporadically in CI
[ https://issues.apache.org/jira/browse/ARTEMIS-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17411084#comment-17411084 ] ASF subversion and git services commented on ARTEMIS-3450: -- Commit 0a88aafd740dd3246b7be1b70a5566090518faac in activemq-artemis's branch refs/heads/main from Domenico Francesco Bruscino [ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=0a88aaf ] ARTEMIS-3450 Fix StaticPoolTest and DiscoveryPoolTest intermittent failures > StaticPoolTest and DiscoveryPoolTest fail sporadically in CI > > > Key: ARTEMIS-3450 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3450 > Project: ActiveMQ Artemis > Issue Type: Test > Components: Tests >Affects Versions: 2.18.0 >Reporter: Robbie Gemmell >Assignee: Domenico Francesco Bruscino >Priority: Major > > The StaticPoolTest and DiscoveryPoolTest tests added in ARTEMIS-3365 fail > sporadically in CI > Some examples: > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2325 > https://github.com/apache/activemq-artemis/runs/3433060966?check_suite_focus=true#step:5:2223 > Looking at the tests I do see a few small issues, though they may not explain > the failures: > - The MockTargetProbe contains a HashMap used from multiple threads (test and > pool) in a manner that isn't thread safe. It actually threw > ConcurrentModificationException during at least one CI run (e.g > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2163). > That may or may not be the cause of the test failure seen in the same log > (note it isn't seen in the other test log, though it failed at a different > assertion). It should use ConcurrentHashMap or perhaps alternatively protect > use of the map more generally. > - The PoolTestBase#testPoolQuorumWithMultipleTargets test creates and starts > a pool but doesn't ensure it is stopped on assertion failure. 
The > DiscoveryPoolTest subclass runs this test with a pool using a scheduled > executor, so it should presumably be cleaned up in the same manner the other > tests all use. > - One of the tests asserts there are no 'createdTargets' entries, and then > immediately iterates those [non-existent] entries to assert on the > non-existent values, which seems quite strange. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3450) StaticPoolTest and DiscoveryPoolTest fail sporadically in CI
[ https://issues.apache.org/jira/browse/ARTEMIS-3450?focusedWorklogId=647268&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647268 ] ASF GitHub Bot logged work on ARTEMIS-3450: --- Author: ASF GitHub Bot Created on: 07/Sep/21 09:34 Start Date: 07/Sep/21 09:34 Worklog Time Spent: 10m Work Description: asfgit closed pull request #3724: URL: https://github.com/apache/activemq-artemis/pull/3724 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647268) Remaining Estimate: 0h Time Spent: 10m > StaticPoolTest and DiscoveryPoolTest fail sporadically in CI > > > Key: ARTEMIS-3450 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3450 > Project: ActiveMQ Artemis > Issue Type: Test > Components: Tests >Affects Versions: 2.18.0 >Reporter: Robbie Gemmell >Assignee: Domenico Francesco Bruscino >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The StaticPoolTest and DiscoveryPoolTest tests added in ARTEMIS-3365 fail > sporadically in CI > Some examples: > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2325 > https://github.com/apache/activemq-artemis/runs/3433060966?check_suite_focus=true#step:5:2223 > Looking at the tests I do see a few small issues, though they may not explain > the failures: > - The MockTargetProbe contains a HashMap used from multiple threads (test and > pool) in a manner that isn't thread safe. It actually threw > ConcurrentModificationException during at least one CI run (e.g > https://github.com/apache/activemq-artemis/runs/3416448949?check_suite_focus=true#step:5:2163). 
> That may or may not be the cause of the test failure seen in the same log > (note it isn't seen in the other test log, though it failed at a different > assertion). It should use ConcurrentHashMap or perhaps alternatively protect > use of the map more generally. > - The PoolTestBase#testPoolQuorumWithMultipleTargets test creates and starts > a pool but doesn't ensure it is stopped on assertion failure. The > DiscoveryPoolTest subclass runs this test with a pool using a scheduled > executor, so it should presumably be cleaned up in the same manner the other > tests all use. > - One of the tests asserts there are no 'createdTargets' entries, and then > immediately iterates those [non-existent] entries to assert on the > non-existent values, which seems quite strange. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-2697) Avoid using raw types for Persister
[ https://issues.apache.org/jira/browse/ARTEMIS-2697?focusedWorklogId=647222&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647222 ] ASF GitHub Bot logged work on ARTEMIS-2697: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:47 Start Date: 07/Sep/21 07:47 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #3053: URL: https://github.com/apache/activemq-artemis/pull/3053 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647222) Time Spent: 0.5h (was: 20m) > Avoid using raw types for Persister > -- > > Key: ARTEMIS-2697 > URL: https://issues.apache.org/jira/browse/ARTEMIS-2697 > Project: ActiveMQ Artemis > Issue Type: Improvement >Affects Versions: 2.11.0 >Reporter: Francesco Nigro >Priority: Minor > Time Spent: 0.5h > Remaining Estimate: 0h > > Persister has been introduced by [ARTEMIS-1009 Pure Message > Encoding|https://issues.apache.org/jira/browse/ARTEMIS-1009], but during the > refactoring some work was left over to properly use Java > Generics, which would help spot wrong assignments at compile time, eg > {code:java} > public interface Journal extends ActiveMQComponent { >// ... 
>void appendAddRecord(long id, > byte recordType, > Persister persister, > Object record, > boolean sync, > IOCompletion completionCallback) throws Exception; > {code} > As a consequence, a caller on {{AbstractJournalStorageManager}}: > {code:java} > if (message.isLargeMessage() && message instanceof LargeServerMessageImpl) { > messageJournal.appendAddRecord(message.getMessageID(), > JournalRecordIds.ADD_LARGE_MESSAGE, LargeMessagePersister.getInstance(), > message, false, getContext(false)); > {code} > Where {{LargeMessagePersister.getInstance()}} is a > {{Persister}} parameterized for large messages and {{message}} is a {{Message}}, so such a > method call should not compile at all (but it does) -- This message was sent by Atlassian Jira (v8.3.4#803005)
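The compile-time check ARTEMIS-2697 asks for can be illustrated with simplified stand-in types (these are not the real Artemis interfaces): once the Persister parameter is generic, passing a plain Message where a persister of large messages is expected no longer compiles:

```java
public class PersisterGenericsSketch {
    // Simplified stand-in for the Artemis Persister; one abstract method
    // so it can be implemented with a lambda below.
    interface Persister<T> {
        int getEncodeSize(T record);
    }

    static class Message { }
    static class LargeMessage extends Message { }

    // With the type parameter in place the compiler rejects
    //   appendAddRecord(largeMessagePersister, new Message());
    // which the raw-typed signature quoted in the issue accepts silently.
    static <T> int appendAddRecord(Persister<T> persister, T record) {
        return persister.getEncodeSize(record);
    }

    public static void main(String[] args) {
        Persister<LargeMessage> p = m -> 42; // hypothetical fixed encode size
        System.out.println(appendAddRecord(p, new LargeMessage()));
    }
}
```

Because generics are invariant, T is pinned to LargeMessage by the persister argument, so a plain Message record fails type inference at compile time, exactly the safety the issue wants.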
[jira] [Work logged] (ARTEMIS-2984) Compressed large messages can leak native resources
[ https://issues.apache.org/jira/browse/ARTEMIS-2984?focusedWorklogId=647221&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647221 ] ASF GitHub Bot logged work on ARTEMIS-2984: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:47 Start Date: 07/Sep/21 07:47 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3334: URL: https://github.com/apache/activemq-artemis/pull/3334#issuecomment-914072488 Closing this because large messages changed a bit recently: this needs to be reworked -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647221) Time Spent: 5h 20m (was: 5h 10m) > Compressed large messages can leak native resources > --- > > Key: ARTEMIS-2984 > URL: https://issues.apache.org/jira/browse/ARTEMIS-2984 > Project: ActiveMQ Artemis > Issue Type: Bug >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 5h 20m > Remaining Estimate: 0h > > Compressed large messages use native resources in the form of Inflater and > Deflater and should release them in a timely manner (instead of relying on > finalization) to prevent OOM (of direct memory, to be precise). > This issue is also a chance to simplify the large message controllers, because much > of the existing controller code (including the compressed one) isn't needed > at runtime but only for testing purposes; a proper fix can move that dead code > there too, avoiding having to maintain the leaky behavior. -- This message was sent by Atlassian Jira (v8.3.4#803005)
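The direction of the fix can be sketched with plain JDK classes. This is not the Artemis controller code, just the general pattern the issue describes: `Inflater` and `Deflater` hold native zlib state that is only freed by `end()`, so releasing them deterministically in `finally` (rather than waiting for finalization) is what avoids the native/direct-memory OOM.

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
   // Compress with an explicitly released Deflater: end() frees the native
   // zlib state immediately instead of waiting for finalization/GC.
   static byte[] compress(byte[] input) {
      Deflater deflater = new Deflater();
      try {
         deflater.setInput(input);
         deflater.finish();
         byte[] buf = new byte[input.length + 64]; // slack for tiny, incompressible inputs
         int n = deflater.deflate(buf);
         byte[] out = new byte[n];
         System.arraycopy(buf, 0, out, 0, n);
         return out;
      } finally {
         deflater.end(); // timely native resource release
      }
   }

   static byte[] decompress(byte[] compressed, int originalLength) throws DataFormatException {
      Inflater inflater = new Inflater();
      try {
         inflater.setInput(compressed);
         byte[] out = new byte[originalLength];
         int n = inflater.inflate(out);
         if (n != originalLength) throw new DataFormatException("short inflate");
         return out;
      } finally {
         inflater.end(); // same pattern on the decompression side
      }
   }

   public static void main(String[] args) throws DataFormatException {
      byte[] msg = "compressed large message payload".getBytes();
      byte[] back = decompress(compress(msg), msg.length);
      System.out.println(new String(back));
   }
}
```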
[jira] [Work logged] (ARTEMIS-2984) Compressed large messages can leak native resources
[ https://issues.apache.org/jira/browse/ARTEMIS-2984?focusedWorklogId=647220&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647220 ] ASF GitHub Bot logged work on ARTEMIS-2984: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:47 Start Date: 07/Sep/21 07:47 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #3334: URL: https://github.com/apache/activemq-artemis/pull/3334 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647220) Time Spent: 5h 10m (was: 5h) > Compressed large messages can leak native resources > --- > > Key: ARTEMIS-2984 > URL: https://issues.apache.org/jira/browse/ARTEMIS-2984 > Project: ActiveMQ Artemis > Issue Type: Bug >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 5h 10m > Remaining Estimate: 0h > > Compressed large messages use native resources in the form of Inflater and > Deflater and should release them in a timely manner (instead of relying on > finalization) to prevent OOM (of direct memory, to be precise). > This issue is also a chance to simplify the large message controllers, because much > of the existing controller code (including the compressed one) isn't needed > at runtime but only for testing purposes; a proper fix can move that dead code > there too, avoiding having to maintain the leaky behavior. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3021) OOM due to wrong CORE clustered message memory estimation
[ https://issues.apache.org/jira/browse/ARTEMIS-3021?focusedWorklogId=647212&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647212 ] ASF GitHub Bot logged work on ARTEMIS-3021: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:28 Start Date: 07/Sep/21 07:28 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3370: URL: https://github.com/apache/activemq-artemis/pull/3370#issuecomment-914061015 This needs some time to be ready, but it's worth fixing IMO, so I'll leave it open as a draft -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647212) Time Spent: 4h 20m (was: 4h 10m) > OOM due to wrong CORE clustered message memory estimation > - > > Key: ARTEMIS-3021 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3021 > Project: ActiveMQ Artemis > Issue Type: Bug >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 4h 20m > Remaining Estimate: 0h > > This is affecting clustered Core messages (persistent or not). 
> The process that causes the wrong estimation is: > # add route information to the message > # get memory estimation for paging (ie address size estimation) without > accounting for the new route information > # get message persist size for durable append on journal/to update queue > statistics, triggering a re-encoding > # re-encoding (can) enlarge the message buffer to the next available > power-of-2 capacity > The 2 fixes are: > * getting a correct memory estimation of the message (including the added > route information) > * avoiding the excessive buffer growth caused by Netty's default > ByteBuf::ensureWritable strategy -- This message was sent by Atlassian Jira (v8.3.4#803005)
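To see why the re-encoding step matters, here is a small sketch with made-up sizes (the numbers are purely illustrative, not measured): Netty's default growth strategy rounds the required capacity up to a power of two, so an estimate taken before the route information was added can undershoot the real footprint by almost 2x.

```java
public class BufferGrowthSketch {
   // Mimics Netty's default ByteBuf growth, which rounds the needed
   // capacity up to the next power of two (above a small threshold).
   static int nextPowerOfTwo(int needed) {
      int v = 1;
      while (v < needed) v <<= 1;
      return v;
   }

   public static void main(String[] args) {
      int encodedSize = 1050;     // hypothetical encoding before routing info
      int routeInfo = 120;        // hypothetical extra bytes added by clustering
      int estimate = encodedSize; // step 2 above: estimate taken too early
      int actual = nextPowerOfTwo(encodedSize + routeInfo); // steps 3-4: re-encode grows buffer
      System.out.println(estimate + " vs " + actual); // 1050 vs 2048
   }
}
```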
[jira] [Work logged] (ARTEMIS-3163) Experimental support for Netty IO_URING incubator
[ https://issues.apache.org/jira/browse/ARTEMIS-3163?focusedWorklogId=647210&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647210 ] ASF GitHub Bot logged work on ARTEMIS-3163: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:26 Start Date: 07/Sep/21 07:26 Worklog Time Spent: 10m Work Description: franz1981 edited a comment on pull request #3479: URL: https://github.com/apache/activemq-artemis/pull/3479#issuecomment-914059144 > whats status on this one? Im keen to merge it, happy to help contribute any last bits like slight code re-org on if statements, and docs that are needed myself, if needed next week. That seems a nice addition: I just would like to perform some better testing to help users know which kernel version (and incubator version) to use. Let it park here for a week and then we can move on and maybe decide with a public vote if people are interested and would like to know what it is/its purpose. We can have a call too with the community to explain it :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647210) Time Spent: 3h 50m (was: 3h 40m) > Experimental support for Netty IO_URING incubator > - > > Key: ARTEMIS-3163 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3163 > Project: ActiveMQ Artemis > Issue Type: New Feature >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Attachments: flamegraphs.zip > > Time Spent: 3h 50m > Remaining Estimate: 0h > > Netty provides incubator support (ie not for production use yet) for IO_URING > (see https://github.com/netty/netty-incubator-transport-io_uring). > It would be nice for Artemis to support it and allow devs/users to start > playing with it. 
> To enable this feature to work, users should manually compile > https://github.com/netty/netty-incubator-transport-io_uring and place it in > the lib folder of an Artemis installation, as expected for an experimental > feature ;) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3163) Experimental support for Netty IO_URING incubator
[ https://issues.apache.org/jira/browse/ARTEMIS-3163?focusedWorklogId=647209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647209 ] ASF GitHub Bot logged work on ARTEMIS-3163: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:26 Start Date: 07/Sep/21 07:26 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3479: URL: https://github.com/apache/activemq-artemis/pull/3479#issuecomment-914059144 > whats status on this one? Im keen to merge it, happy to help contribute any last bits like slight code re-org on if statements, and docs that are needed myself, if needed next week. That seems a nice addition: I just would like to perform some better testing to help users know which kernel version (and incubator version) to use. Let it park here for a week and then we can move on and maybe decide with a public vote if people are interested and would like to know what it is/its purpose. We can have a call too with the community to explain it :) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647209) Time Spent: 3h 40m (was: 3.5h) > Experimental support for Netty IO_URING incubator > - > > Key: ARTEMIS-3163 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3163 > Project: ActiveMQ Artemis > Issue Type: New Feature >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Attachments: flamegraphs.zip > > Time Spent: 3h 40m > Remaining Estimate: 0h > > Netty provides incubator support (ie not for production use yet) for IO_URING > (see https://github.com/netty/netty-incubator-transport-io_uring). > It would be nice for Artemis to support it and allow devs/users to start > playing with it. 
> To enable this feature to work, users should manually compile > https://github.com/netty/netty-incubator-transport-io_uring and place it in > the lib folder of an Artemis installation, as expected for an experimental > feature ;) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-2452) group-name ignored in shared store colocated setup
[ https://issues.apache.org/jira/browse/ARTEMIS-2452?focusedWorklogId=647205&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647205 ] ASF GitHub Bot logged work on ARTEMIS-2452: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:22 Start Date: 07/Sep/21 07:22 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #2793: URL: https://github.com/apache/activemq-artemis/pull/2793 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647205) Time Spent: 4.5h (was: 4h 20m) > group-name ignored in shared store colocated setup > -- > > Key: ARTEMIS-2452 > URL: https://issues.apache.org/jira/browse/ARTEMIS-2452 > Project: ActiveMQ Artemis > Issue Type: Bug > Components: Broker >Affects Versions: 2.9.0 >Reporter: Francesco Nigro >Priority: Major > Time Spent: 4.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-2452) group-name ignored in shared store colocated setup
[ https://issues.apache.org/jira/browse/ARTEMIS-2452?focusedWorklogId=647206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647206 ] ASF GitHub Bot logged work on ARTEMIS-2452: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:22 Start Date: 07/Sep/21 07:22 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #2793: URL: https://github.com/apache/activemq-artemis/pull/2793#issuecomment-914056875 Closing this because it has been inactive for a long time -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647206) Time Spent: 4h 40m (was: 4.5h) > group-name ignored in shared store colocated setup > -- > > Key: ARTEMIS-2452 > URL: https://issues.apache.org/jira/browse/ARTEMIS-2452 > Project: ActiveMQ Artemis > Issue Type: Bug > Components: Broker >Affects Versions: 2.9.0 >Reporter: Francesco Nigro >Priority: Major > Time Spent: 4h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3208) Reduce Garbage on Session batch acknowledge
[ https://issues.apache.org/jira/browse/ARTEMIS-3208?focusedWorklogId=647204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647204 ] ASF GitHub Bot logged work on ARTEMIS-3208: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:20 Start Date: 07/Sep/21 07:20 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #3520: URL: https://github.com/apache/activemq-artemis/pull/3520 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647204) Time Spent: 40m (was: 0.5h) > Reduce Garbage on Session batch acknowledge > --- > > Key: ARTEMIS-3208 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3208 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Minor > Time Spent: 40m > Remaining Estimate: 0h > > Handling Session acknowledge won't need acked ref IDs in many cases (Stomp > and Core), so better expose a specialized version of it that won't need to > create any garbage. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3208) Reduce Garbage on Session batch acknowledge
[ https://issues.apache.org/jira/browse/ARTEMIS-3208?focusedWorklogId=647203&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647203 ] ASF GitHub Bot logged work on ARTEMIS-3208: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:20 Start Date: 07/Sep/21 07:20 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3520: URL: https://github.com/apache/activemq-artemis/pull/3520#issuecomment-914055819 Closing this, to be reopened in the future, if fixed -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647203) Time Spent: 0.5h (was: 20m) > Reduce Garbage on Session batch acknowledge > --- > > Key: ARTEMIS-3208 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3208 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Minor > Time Spent: 0.5h > Remaining Estimate: 0h > > Handling Session acknowledge won't need acked ref IDs in many cases (Stomp > and Core), so better expose a specialized version of it that won't need to > create any garbage. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (ARTEMIS-3219) Improve FQQN message routing
[ https://issues.apache.org/jira/browse/ARTEMIS-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17410965#comment-17410965 ] ASF subversion and git services commented on ARTEMIS-3219: -- Commit f4d7c8ae692850edd58b12a2b36a0c8e13ff3a2b in activemq-artemis's branch refs/heads/main from franz1981 [ https://gitbox.apache.org/repos/asf?p=activemq-artemis.git;h=f4d7c8a ] ARTEMIS-3219 Save allocating map entries during bindings iteration > Improve FQQN message routing > > > Key: ARTEMIS-3219 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3219 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Fix For: 2.18.0 > > Time Spent: 1h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3292) PageSyncTimer can reduce garbage and sync while batching syncs
[ https://issues.apache.org/jira/browse/ARTEMIS-3292?focusedWorklogId=647201&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647201 ] ASF GitHub Bot logged work on ARTEMIS-3292: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:16 Start Date: 07/Sep/21 07:16 Worklog Time Spent: 10m Work Description: franz1981 edited a comment on pull request #3573: URL: https://github.com/apache/activemq-artemis/pull/3573#issuecomment-914053249 TLDR on this change (still need to run CI on this): - lock-free sync requests (no interference with background sync): it affects `PagingStoreImpl::page`, which performs `sync` on ```java if (tx == null && syncNonTransactional && message.isDurable()) { sync(); } ``` - performing background sync no longer allocates `OperationContext[]`: it affects paging with fast disks, ie with a very short page sync timeout No hurry to get it in, but I think it is a nice improvement on paging @clebertsuconic @michaelandrepearce wdyt? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647201) Time Spent: 1.5h (was: 1h 20m) > PageSyncTimer can reduce garbage and sync while batching syncs > -- > > Key: ARTEMIS-3292 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3292 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Minor > Time Spent: 1.5h > Remaining Estimate: 0h > > {{PageSyncTimer}} is currently producing {{OperationContext[]}} on {{tick}} > and is synchronizing access while appending new sync requests. > Both things could be improved. -- This message was sent by Atlassian Jira (v8.3.4#803005)
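The "lock-free sync requests" idea in the comment above can be sketched with a single atomic counter. This is a deliberately simplified, hypothetical model: the real `PageSyncTimer` must also track and complete each pending `OperationContext` after the disk sync, not merely count requests.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class BatchedSyncSketch {
   private final AtomicInteger pending = new AtomicInteger();

   // Producer side (cf. PagingStoreImpl::page in the quoted snippet):
   // registering a sync request takes no lock and allocates nothing.
   void requestSync() {
      pending.incrementAndGet();
   }

   // Timer side: one disk sync covers every request batched since the last
   // tick; getAndSet(0) claims the whole batch without blocking producers.
   int tick() {
      int batch = pending.getAndSet(0);
      if (batch > 0) {
         // file.sync() would happen here in the real broker
      }
      return batch;
   }

   public static void main(String[] args) {
      BatchedSyncSketch timer = new BatchedSyncSketch();
      timer.requestSync();
      timer.requestSync();
      timer.requestSync();
      System.out.println(timer.tick() + " then " + timer.tick()); // 3 then 0
   }
}
```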
[jira] [Work logged] (ARTEMIS-3292) PageSyncTimer can reduce garbage and sync while batching syncs
[ https://issues.apache.org/jira/browse/ARTEMIS-3292?focusedWorklogId=647200&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647200 ] ASF GitHub Bot logged work on ARTEMIS-3292: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:15 Start Date: 07/Sep/21 07:15 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3573: URL: https://github.com/apache/activemq-artemis/pull/3573#issuecomment-914053249 TLDR on this change (still need to run CI on this): - lock-free sync requests (no interference with background sync): it affects `PagingStoreImpl::page`, which performs `sync` on ```java if (tx == null && syncNonTransactional && message.isDurable()) { sync(); } ``` - performing background sync no longer allocates `OperationContext[]`: it affects paging with fast disks, ie with a very short page sync timeout No hurry to get it, but I think it is a nice improvement @clebertsuconic @michaelandrepearce wdyt? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647200) Time Spent: 1h 20m (was: 1h 10m) > PageSyncTimer can reduce garbage and sync while batching syncs > -- > > Key: ARTEMIS-3292 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3292 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Minor > Time Spent: 1h 20m > Remaining Estimate: 0h > > {{PageSyncTimer}} is currently producing {{OperationContext[]}} on {{tick}} > and is synchronizing access while appending new sync requests. > Both things could be improved. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3303) Default thread pool size is too generous
[ https://issues.apache.org/jira/browse/ARTEMIS-3303?focusedWorklogId=647194&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647194 ] ASF GitHub Bot logged work on ARTEMIS-3303: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:04 Start Date: 07/Sep/21 07:04 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3584: URL: https://github.com/apache/activemq-artemis/pull/3584#issuecomment-914046144 I'm keen to re-implement it as suggested by @clebertsuconic , but only after some extensive performance testing to be sure it is the right choice for most users. Closing this, to be reopened in the future -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647194) Time Spent: 4h 20m (was: 4h 10m) > Default thread pool size is too generous > > > Key: ARTEMIS-3303 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3303 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 4h 20m > Remaining Estimate: 0h > > By tweaking the thread pool sizes from their defaults it's possible to easily gain twice > the throughput: both the Netty (acceptor) and the global thread pool default sizing > seem too generous according to the available cores of a machine: > * 3 x cores for the former > * [0, 30] for the latter (!!!) -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Work logged] (ARTEMIS-3303) Default thread pool size is too generous
[ https://issues.apache.org/jira/browse/ARTEMIS-3303?focusedWorklogId=647195&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647195 ] ASF GitHub Bot logged work on ARTEMIS-3303: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:04 Start Date: 07/Sep/21 07:04 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #3584: URL: https://github.com/apache/activemq-artemis/pull/3584 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647195) Time Spent: 4.5h (was: 4h 20m) > Default thread pool size is too generous > > > Key: ARTEMIS-3303 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3303 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Time Spent: 4.5h > Remaining Estimate: 0h > > By tweaking the thread pool sizes from their defaults it's possible to easily gain twice > the throughput: both the Netty (acceptor) and the global thread pool default sizing > seem too generous according to the available cores of a machine: > * 3 x cores for the former > * [0, 30] for the latter (!!!) -- This message was sent by Atlassian Jira (v8.3.4#803005)
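For reference, the defaults criticized in the issue can be written down next to a machine-derived alternative. The `coreDerived` formula below is purely hypothetical, for illustration only; it is not what Artemis adopted.

```java
public class PoolSizingSketch {
   // Current defaults as described in the issue, versus a core-derived size.
   static int nettyAcceptorDefault(int cores) { return 3 * cores; }
   static int globalPoolDefault()             { return 30; }          // the [0, 30] cap
   static int coreDerived(int cores)          { return Math.max(2, cores); } // hypothetical

   public static void main(String[] args) {
      int cores = Runtime.getRuntime().availableProcessors();
      System.out.println(nettyAcceptorDefault(cores) + " / "
            + globalPoolDefault() + " / " + coreDerived(cores));
   }
}
```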
[jira] [Work logged] (ARTEMIS-3289) Reduce journal appender executor Thread wakeup cost
[ https://issues.apache.org/jira/browse/ARTEMIS-3289?focusedWorklogId=647193&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647193 ] ASF GitHub Bot logged work on ARTEMIS-3289: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:02 Start Date: 07/Sep/21 07:02 Worklog Time Spent: 10m Work Description: franz1981 commented on pull request #3572: URL: https://github.com/apache/activemq-artemis/pull/3572#issuecomment-914045280 Closing this to be reopened in the future -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647193) Time Spent: 6h 20m (was: 6h 10m) > Reduce journal appender executor Thread wakeup cost > --- > > Key: ARTEMIS-3289 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3289 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Attachments: 3289_backup.html, image-2021-05-11-09-32-15-538.png, > main_backup.html > > Time Spent: 6h 20m > Remaining Estimate: 0h > > As mentioned in https://issues.apache.org/jira/browse/ARTEMIS-2877 one of the > major factors that contribute to reduce the scalability of a shared-nothing > replication setup is the thread wake-up cost of the {{JournalImpl}}'s > {{appendExecutor}} I/O threads. > See the flamegraph below for a busy replica while appending replicated > journal record: > !image-2021-05-11-09-32-15-538.png|width=966,height=313! 
> The violet bars represent the CPU cycles spent waking up the Journal appender > thread(s): despite the fact that https://issues.apache.org/jira/browse/ARTEMIS-2877 allows > the backup to batch append tasks as much as possible, it seems the signaling cost > is still too high compared with the rest of replica packet processing. > Given that the append executor is an ordered executor built on top of the I/O > thread pool, see {{ActiveMQServerImpl}}: > {code:java} > if (serviceRegistry.getIOExecutorService() != null) { > this.ioExecutorFactory = new > OrderedExecutorFactory(serviceRegistry.getIOExecutorService()); > } else { > ThreadFactory tFactory = AccessController.doPrivileged(new > PrivilegedAction<ThreadFactory>() { > @Override > public ThreadFactory run() { >return new ActiveMQThreadFactory("ActiveMQ-IO-server-" + > this.toString(), false, ClientSessionFactoryImpl.class.getClassLoader()); > } > }); > this.ioExecutorPool = new ThreadPoolExecutor(0, Integer.MAX_VALUE, > 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), tFactory); > this.ioExecutorFactory = new OrderedExecutorFactory(ioExecutorPool); > } > {code} > Since it's using a {{SynchronousQueue}} to submit/take new wakeup tasks, it's > worth investigating whether using a different thread pool, executor or a different > "sleeping" strategy could reduce such cost under heavy load and improve > response time with/without replication. > Most of the problems of the existing implementation seem related to how > ThreadPoolExecutor + SynchronousQueue work in tandem with ArtemisExecutor. 
> This small program prints, on my machine: > {code:java} >public static void main(String[] args) throws InterruptedException { > ThreadPoolExecutor executor = new ThreadPoolExecutor(0, > Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new > ThreadFactory() { > @Override > public Thread newThread(Runnable r) { > Thread t = new Thread(r); > System.err.println("created new thread: " + t); > return t; > } > }); > ExecutorFactory factory = new OrderedExecutorFactory(executor); > ArtemisExecutor artemisExecutor = factory.getExecutor(); > ConcurrentSet<Thread> executingT = new ConcurrentHashSet<>(); > for (int j = 0; j< 100;j++) { > for (int i = 0; i < 10; i++) { > artemisExecutor.execute(() -> { >executingT.add(Thread.currentThread()); >try { > TimeUnit.MILLISECONDS.sleep(10); >} catch (InterruptedException e) { > e.printStackTrace(); >} > }); > } > Thread.sleep(100*10); > } > executor.shutdown(); > executor.awaitTermination(70, TimeUnit.SECONDS); > System.out.println("Executing threads: " + executingT); >} > {code} > {code:bas
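The truncated demo above can be condensed into a self-contained variant that isolates the behavior being discussed: with `corePoolSize = 0` and a `SynchronousQueue`, a task submitted while every worker is still busy always forces a brand-new thread, which is exactly the wakeup/creation cost the issue is measuring. The class name and the 200 ms task duration are illustrative choices, not taken from the Artemis code.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class HandoffDemo {
   // Returns how many threads the pool had to create for `tasks` concurrent tasks.
   static int run(int tasks) throws InterruptedException {
      AtomicInteger created = new AtomicInteger();
      ThreadPoolExecutor pool = new ThreadPoolExecutor(
            0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
            new SynchronousQueue<>(),
            r -> { created.incrementAndGet(); return new Thread(r); });
      for (int i = 0; i < tasks; i++) {
         // Submitted back-to-back while all workers are still sleeping, the
         // SynchronousQueue handoff finds no idle consumer, so the pool
         // spawns a fresh thread for every task.
         pool.execute(() -> {
            try { TimeUnit.MILLISECONDS.sleep(200); } catch (InterruptedException ignored) { }
         });
      }
      pool.shutdown();
      pool.awaitTermination(5, TimeUnit.SECONDS);
      return created.get();
   }

   public static void main(String[] args) throws InterruptedException {
      System.out.println(run(4)); // 4: one fresh thread per concurrent task
   }
}
```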
[jira] [Work logged] (ARTEMIS-3289) Reduce journal appender executor Thread wakeup cost
[ https://issues.apache.org/jira/browse/ARTEMIS-3289?focusedWorklogId=647192&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-647192 ] ASF GitHub Bot logged work on ARTEMIS-3289: --- Author: ASF GitHub Bot Created on: 07/Sep/21 07:02 Start Date: 07/Sep/21 07:02 Worklog Time Spent: 10m Work Description: franz1981 closed pull request #3572: URL: https://github.com/apache/activemq-artemis/pull/3572 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: gitbox-unsubscr...@activemq.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 647192) Time Spent: 6h 10m (was: 6h) > Reduce journal appender executor Thread wakeup cost > --- > > Key: ARTEMIS-3289 > URL: https://issues.apache.org/jira/browse/ARTEMIS-3289 > Project: ActiveMQ Artemis > Issue Type: Improvement >Reporter: Francesco Nigro >Assignee: Francesco Nigro >Priority: Major > Attachments: 3289_backup.html, image-2021-05-11-09-32-15-538.png, > main_backup.html > > Time Spent: 6h 10m > Remaining Estimate: 0h > > As mentioned in https://issues.apache.org/jira/browse/ARTEMIS-2877 one of the > major factors that contribute to reduce the scalability of a shared-nothing > replication setup is the thread wake-up cost of the {{JournalImpl}}'s > {{appendExecutor}} I/O threads. > See the flamegraph below for a busy replica while appending replicated > journal record: > !image-2021-05-11-09-32-15-538.png|width=966,height=313! > The violet bars represent the CPU cycles spent waking up the Journal appender > thread(s): despite the fact that https://issues.apache.org/jira/browse/ARTEMIS-2877 allows > the backup to batch append tasks as much as possible, it seems the signaling cost > is still too high compared with the rest of replica packet processing. 
> Given that the append executor is an ordered executor built on top of the I/O > thread pool, see {{ActiveMQServerImpl}}: > {code:java} > if (serviceRegistry.getIOExecutorService() != null) { > this.ioExecutorFactory = new > OrderedExecutorFactory(serviceRegistry.getIOExecutorService()); > } else { > ThreadFactory tFactory = AccessController.doPrivileged(new > PrivilegedAction<ThreadFactory>() { > @Override > public ThreadFactory run() { >return new ActiveMQThreadFactory("ActiveMQ-IO-server-" + > this.toString(), false, ClientSessionFactoryImpl.class.getClassLoader()); > } > }); > this.ioExecutorPool = new ThreadPoolExecutor(0, Integer.MAX_VALUE, > 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), tFactory); > this.ioExecutorFactory = new OrderedExecutorFactory(ioExecutorPool); > } > {code} > Since it's using a {{SynchronousQueue}} to submit/take new wakeup tasks, it's > worth investigating whether using a different thread pool, executor or a different > "sleeping" strategy could reduce such cost under heavy load and improve > response time with/without replication. > Most of the problems of the existing implementation seem related to how > ThreadPoolExecutor + SynchronousQueue work in tandem with ArtemisExecutor. 
> This small program prints, on my machine: > {code:java} >public static void main(String[] args) throws InterruptedException { > ThreadPoolExecutor executor = new ThreadPoolExecutor(0, > Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>(), new > ThreadFactory() { > @Override > public Thread newThread(Runnable r) { > Thread t = new Thread(r); > System.err.println("created new thread: " + t); > return t; > } > }); > ExecutorFactory factory = new OrderedExecutorFactory(executor); > ArtemisExecutor artemisExecutor = factory.getExecutor(); > ConcurrentSet<Thread> executingT = new ConcurrentHashSet<>(); > for (int j = 0; j< 100;j++) { > for (int i = 0; i < 10; i++) { > artemisExecutor.execute(() -> { >executingT.add(Thread.currentThread()); >try { > TimeUnit.MILLISECONDS.sleep(10); >} catch (InterruptedException e) { > e.printStackTrace(); >} > }); > } > Thread.sleep(100*10); > } > executor.shutdown(); > executor.awaitTermination(70, TimeUnit.SECONDS); > System.out.println("Executing threads: " + executingT); >} > {code} > {code:bash} > created new thread: Thread[Thread-1,5,main] > created new thread: Thr