[jira] [Updated] (DISPATCH-1338) Improvements to edge router documentation

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1338:

Fix Version/s: 1.9.0

> Improvements to edge router documentation
> -
>
> Key: DISPATCH-1338
> URL: https://issues.apache.org/jira/browse/DISPATCH-1338
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ben Hardesty
>Priority: Major
> Fix For: 1.9.0
>
>
> Restructure the doc for edge router.






[jira] [Resolved] (DISPATCH-1338) Improvements to edge router documentation

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-1338.
-
Resolution: Fixed

> Improvements to edge router documentation
> -
>
> Key: DISPATCH-1338
> URL: https://issues.apache.org/jira/browse/DISPATCH-1338
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ben Hardesty
>Priority: Major
> Fix For: 1.9.0
>
>
> Restructure the doc for edge router.






[jira] [Commented] (DISPATCH-1338) Improvements to edge router documentation

2019-06-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859083#comment-16859083
 ] 

ASF subversion and git services commented on DISPATCH-1338:
---

Commit b055c12e19cc75063b5ac5669e1221c1eb62e91d in qpid-dispatch's branch 
refs/heads/master from Ben Hardesty
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=b055c12 ]

DISPATCH-1338 - Improvements to edge router documentation. This closes #512


> Improvements to edge router documentation
> -
>
> Key: DISPATCH-1338
> URL: https://issues.apache.org/jira/browse/DISPATCH-1338
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ben Hardesty
>Priority: Major
>
> Restructure the doc for edge router.






[jira] [Commented] (DISPATCH-1338) Improvements to edge router documentation

2019-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859084#comment-16859084
 ] 

ASF GitHub Bot commented on DISPATCH-1338:
--

asfgit commented on pull request #512: DISPATCH-1338: Restructure Dispatch 
Router book for edge router
URL: https://github.com/apache/qpid-dispatch/pull/512
 
 
   
 



> Improvements to edge router documentation
> -
>
> Key: DISPATCH-1338
> URL: https://issues.apache.org/jira/browse/DISPATCH-1338
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Ben Hardesty
>Priority: Major
>
> Restructure the doc for edge router.






[GitHub] [qpid-dispatch] asfgit closed pull request #512: DISPATCH-1338: Restructure Dispatch Router book for edge router

2019-06-07 Thread GitBox
asfgit closed pull request #512: DISPATCH-1338: Restructure Dispatch Router 
book for edge router
URL: https://github.com/apache/qpid-dispatch/pull/512
 
 
   





[GitHub] [qpid-dispatch] ChugR commented on issue #518: DISPATCH-1354: Annotation processing performance improvements

2019-06-07 Thread GitBox
ChugR commented on issue #518: DISPATCH-1354: Annotation processing performance 
improvements
URL: https://github.com/apache/qpid-dispatch/pull/518#issuecomment-500023799
 
 
   Includes MIN, MAX macros in ctools.h
   
   Closed at commit 95c8f4
   





[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858960#comment-16858960
 ] 

ASF GitHub Bot commented on DISPATCH-1354:
--

ChugR commented on issue #518: DISPATCH-1354: Annotation processing performance 
improvements
URL: https://github.com/apache/qpid-dispatch/pull/518#issuecomment-500023799
 
 
   Includes MIN, MAX macros in ctools.h
   
   Closed at commit 95c8f4
   
 



> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys






[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858961#comment-16858961
 ] 

ASF GitHub Bot commented on DISPATCH-1354:
--

ChugR commented on pull request #518: DISPATCH-1354: Annotation processing 
performance improvements
URL: https://github.com/apache/qpid-dispatch/pull/518
 
 
   
 



> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys






[GitHub] [qpid-dispatch] ChugR closed pull request #518: DISPATCH-1354: Annotation processing performance improvements

2019-06-07 Thread GitBox
ChugR closed pull request #518: DISPATCH-1354: Annotation processing 
performance improvements
URL: https://github.com/apache/qpid-dispatch/pull/518
 
 
   





[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858956#comment-16858956
 ] 

ASF subversion and git services commented on DISPATCH-1354:
---

Commit 91cd6285162c1edd49993741f627d96deb06a545 in qpid-dispatch's branch 
refs/heads/master from Charles E. Rolke
[ https://gitbox.apache.org/repos/asf?p=qpid-dispatch.git;h=91cd628 ]

DISPATCH-1354: Annotation processing performance improvements

Message annotation processing on received messages stages key names
byte by byte into a flat buffer and then uses strcmp to check them.

Easy improvements are:

 * Use name in raw buffer if it does not cross a buffer boundary
 * If name crosses a boundary then use memmoves to get the name in chunks
 * Check the name prefix only once and then check variable parts of name strings
 * Don't create unnecessary qd_iterators and qd_parsed_fields
 * Don't check names whose lengths differ from the given keys
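
For illustration, here is a minimal sketch of the first two bullets above: compare a 
key directly in the receive buffer when it fits, and only stage it into a scratch 
buffer when it spans a buffer boundary. The function and parameter names are 
assumptions for the sketch, not the actual qpid-dispatch code.

{noformat}
/* Hypothetical boundary-aware key matching (illustrative only). */
#include <stdbool.h>
#include <string.h>

/* 'remaining' is the number of contiguous bytes left in the current receive
 * buffer; 'next_buf' holds the continuation when the name crosses a boundary. */
static bool key_matches(const char *key, size_t key_len,
                        const unsigned char *cursor, size_t remaining,
                        const unsigned char *next_buf)
{
    if (remaining >= key_len)                      /* common case: no copy at all */
        return memcmp(cursor, key, key_len) == 0;

    /* Rare case: the name straddles two buffers -- stage it in chunks. */
    unsigned char scratch[64];
    if (key_len > sizeof(scratch))
        return false;
    memmove(scratch, cursor, remaining);
    memmove(scratch + remaining, next_buf, key_len - remaining);
    return memcmp(scratch, key, key_len) == 0;
}
{noformat}

A caller that already knows the encoded name length can also skip any candidate 
whose length differs from the key, which covers the last bullet.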


> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys






[jira] [Commented] (DISPATCH-1354) Interrouter annotation processing uses slow methods

2019-06-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858842#comment-16858842
 ] 

ASF GitHub Bot commented on DISPATCH-1354:
--

kgiusti commented on pull request #518: DISPATCH-1354: Annotation processing 
performance improvements
URL: https://github.com/apache/qpid-dispatch/pull/518#discussion_r291679091
 
 

 ##
 File path: src/parse.c
 ##
 @@ -722,61 +722,123 @@ const char *qd_parse_annotations_v1(
 return parse_error;
 }
 
+// define a shorthand name for the qd message annotation key prefix length
+#define QMPL QD_MA_PREFIX_LEN
+
+#define MIN(a,b) (((a)<(b))?(a):(b))
 
 Review comment:
   ooh!  Please put this (and maybe MAX) into ctools.h - don't know how many 
times I've ended up doing this exact same thing.
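
For context, a typical definition of such helpers as they might appear in a shared 
header like ctools.h (a sketch; the actual committed change may differ):

{noformat}
/* Classic MIN/MAX helper macros.  Fully parenthesized, but -- as with any
 * function-like macro -- the arguments are evaluated more than once, so avoid
 * expressions with side effects, e.g. MIN(i++, j). */
#ifndef MIN
#define MIN(a,b) (((a)<(b))?(a):(b))
#endif

#ifndef MAX
#define MAX(a,b) (((a)>(b))?(a):(b))
#endif
{noformat}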
 



> Interrouter annotation processing uses slow methods
> ---
>
> Key: DISPATCH-1354
> URL: https://issues.apache.org/jira/browse/DISPATCH-1354
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Router Node
>Affects Versions: 1.7.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
>
> Message annotation processing on received messages stages key names byte by 
> byte into a flat buffer and then uses strcmp to check them.
> Easy improvements are:
>  * Use name in raw buffer if it does not cross a buffer boundary
>  * If name crosses a boundary then use memmoves to get the name in chunks
>  * Check the name prefix only once and then check variable parts of name 
> strings
>  * Don't create unnecessary qd_iterators and qd_parsed_fields
>  * Don't check names whose lengths differ from the given keys






[GitHub] [qpid-dispatch] kgiusti commented on a change in pull request #518: DISPATCH-1354: Annotation processing performance improvements

2019-06-07 Thread GitBox
kgiusti commented on a change in pull request #518: DISPATCH-1354: Annotation 
processing performance improvements
URL: https://github.com/apache/qpid-dispatch/pull/518#discussion_r291679091
 
 

 ##
 File path: src/parse.c
 ##
 @@ -722,61 +722,123 @@ const char *qd_parse_annotations_v1(
 return parse_error;
 }
 
+// define a shorthand name for the qd message annotation key prefix length
+#define QMPL QD_MA_PREFIX_LEN
+
+#define MIN(a,b) (((a)<(b))?(a):(b))
 
 Review comment:
   ooh!  Please put this (and maybe MAX) into ctools.h - don't know how many 
times I've ended up doing this exact same thing.





[jira] [Commented] (QPID-8320) [linearstore] Empty journal files orphaned and accumulate when the broker is restarted

2019-06-07 Thread Kim van der Riet (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858805#comment-16858805
 ] 

Kim van der Riet commented on QPID-8320:


commit 5d549d8409f19707f68ca0fbf24ee4e0d3ae8de6 (HEAD -> master, origin/master, 
origin/HEAD)
Author: Kim van der Riet 
Date: Fri Jun 7 12:39:40 2019 -0400

QPID-8320: Fix for empty journal file leak when linearstore recovers

> [linearstore] Empty journal files orphaned and accumulate when the broker is 
> restarted
> --
>
> Key: QPID-8320
> URL: https://issues.apache.org/jira/browse/QPID-8320
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Kim van der Riet
>Assignee: Kim van der Riet
>Priority: Major
>
> If a queue journal is filled until the last file in the journal is full, then 
> the store preemptively adds a new journal file to the queue store. This new 
> file is at first uninitialized, but is initialized when it is first used.
> If, at recovery, such an uninitialized file exists in a journal, then on 
> recovery, this file is ignored, and a new uninitialized file is added. Hence 
> two uninitialized files now exist in the journal. If the broker is repeatedly 
> stopped, then started with a journal in this state, a new uninitialized file 
> is added for each restart.
> In addition, the journal recovery does not dispose of the unused 
> uninitialized files, so they accumulate and continue to exist through 
> multiple restarts.
> +*Reproducer:*+
> Start with a clean store:
> {noformat}
> rm -rf ~/.qpidd
> {noformat}
> Start the broker, then:
> {noformat}
> $ qpid-config add queue --durable test_queue
> $ ls ~/.qpidd/qls/jrnl2/test_queue/
> f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
> $ hexdump -C 
> ~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl 
>   51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
> 0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
> 0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> 0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> *
> 00201000
> {noformat}
>  
>  which is an uninitialized empty journal file. Now add 1024 messages that 
> when encoded consume almost 2048 bytes per message on disk. This should fill 
> the first file exactly, so that the last enqueue record coincides with the 
> physical end of the file at offset {{0x201000}}:
> {noformat}
> $ qpid-send -a test_queue --durable=yes -m 1024 --content-size=1865
> $ ls ~/.qpidd/qls/jrnl2/test_queue/
> e404051f-8af7-422d-a088-7e957c4db3af.jrnl
> f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
> $ hexdump -C  
> ~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl | 
> grep QLS
>   51 4c 53 66 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSfO.s |
> 1000  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 1800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> ...
> 001ff800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 0020  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 00200800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> {noformat}
> Check that the newly added file is empty:
> {noformat}
> hexdump -C  
> ~/.qpidd/qls/jrnl2/test_queue/e404051f-8af7-422d-a088-7e957c4db3af.jrnl
>   51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
> 0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
> 0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> 0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> *
> 00201000
> {noformat}
> It is important to check that the second file is empty other than the file 
> header. If there are any records present, then the file will not be 
> considered empty during recovery, and the conditions for the bug will not be 
> met. Depending on network and threading conditions, the store may add in 
> filler records {{"QLSx"}} at one or two points during the writing of the 
> files, so this may push the final record to be written in the second file. If 
> this happens, try again, or adjust the number of records down slightly until 
> this condition is met.
> Once this condition has been met, stop the broker, then restart it. There 
> will now be two empty files present, the original, plus a new one added at 
> the broker restart.
> Start and stop the broker several times. For each recovery, one new empty 
> file is added to the journal. The old files are still present, but are 
> orphaned, and are never moved, used or put back into the Empty File Pool.




[jira] [Updated] (QPID-8320) [linearstore] Empty journal files orphaned and accumulate when the broker is restarted

2019-06-07 Thread Kim van der Riet (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kim van der Riet updated QPID-8320:
---
Status: Reviewable  (was: In Progress)

> [linearstore] Empty journal files orphaned and accumulate when the broker is 
> restarted
> --
>
> Key: QPID-8320
> URL: https://issues.apache.org/jira/browse/QPID-8320
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Reporter: Kim van der Riet
>Assignee: Kim van der Riet
>Priority: Major
>
> If a queue journal is filled until the last file in the journal is full, then 
> the store preemptively adds a new journal file to the queue store. This new 
> file is at first uninitialized, but is initialized when it is first used.
> If, at recovery, such an uninitialized file exists in a journal, then on 
> recovery, this file is ignored, and a new uninitialized file is added. Hence 
> two uninitialized files now exist in the journal. If the broker is repeatedly 
> stopped, then started with a journal in this state, a new uninitialized file 
> is added for each restart.
> In addition, the journal recovery does not dispose of the unused 
> uninitialized files, so they accumulate and continue to exist through 
> multiple restarts.
> +*Reproducer:*+
> Start with a clean store:
> {noformat}
> rm -rf ~/.qpidd
> {noformat}
> Start the broker, then:
> {noformat}
> $ qpid-config add queue --durable test_queue
> $ ls ~/.qpidd/qls/jrnl2/test_queue/
> f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
> $ hexdump -C 
> ~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl 
>   51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
> 0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
> 0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> 0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> *
> 00201000
> {noformat}
>  
>  which is an uninitialized empty journal file. Now add 1024 messages that 
> when encoded consume almost 2048 bytes per message on disk. This should fill 
> the first file exactly, so that the last enqueue record coincides with the 
> physical end of the file at offset {{0x201000}}:
> {noformat}
> $ qpid-send -a test_queue --durable=yes -m 1024 --content-size=1865
> $ ls ~/.qpidd/qls/jrnl2/test_queue/
> e404051f-8af7-422d-a088-7e957c4db3af.jrnl
> f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
> $ hexdump -C  
> ~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl | 
> grep QLS
>   51 4c 53 66 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSfO.s |
> 1000  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 1800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> ...
> 001ff800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 0020  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> 00200800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
> {noformat}
> Check that the newly added file is empty:
> {noformat}
> hexdump -C  
> ~/.qpidd/qls/jrnl2/test_queue/e404051f-8af7-422d-a088-7e957c4db3af.jrnl
>   51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
> 0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
> 0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> 0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
> *
> 00201000
> {noformat}
> It is important to check that the second file is empty other than the file 
> header. If there are any records present, then the file will not be 
> considered empty during recovery, and the conditions for the bug will not be 
> met. Depending on network and threading conditions, the store may add in 
> filler records {{"QLSx"}} at one or two points during the writing of the 
> files, so this may push the final record to be written in the second file. If 
> this happens, try again, or adjust the number of records down slightly until 
> this condition is met.
> Once this condition has been met, stop the broker, then restart it. There 
> will now be two empty files present, the original, plus a new one added at 
> the broker restart.
> Start and stop the broker several times. For each recovery, one new empty 
> file is added to the journal. The old files are still present, but are 
> orphaned, and are never moved, used or put back into the Empty File Pool.




[VOTE] Release Qpid Dispatch Router 1.8.0 (RC1)

2019-06-07 Thread Ganesh Murthy
Hello All,
Please cast your vote on this thread to release RC1 as the
official Qpid Dispatch Router version 1.8.0.

RC1 of Qpid Dispatch Router version 1.8.0 can be found here:

https://dist.apache.org/repos/dist/dev/qpid/dispatch/1.8.0-rc1/

The following features, improvements, and bug fixes are introduced in 1.8.0:

Features -
   DISPATCH-1337 - Fallback Destination for Unreachable Addresses

Improvements -
DISPATCH-1308 - Console access to the force-close a connection feature
DISPATCH-1320 - Make it easier to use separate logos for upstream
and downstream masthead
DISPATCH-1321 - Set rpath for qpid-proton (and other dependencies)
when they are found in nonstandard location
DISPATCH-1329 - Edge router system test needs skip test convenience switches
DISPATCH-1340 - Show settlement rate and delayed deliveries in client popup
DISPATCH-1341 - Add list of delayed links to console's overview page
DISPATCH-1348 - Avoid qdr_error_t allocation if not necessary
DISPATCH-1356 - Remove the dotted line around routers that
indicates the router is fixed.
DISPATCH-1357 - Change the name of the 'Kill' feature to 'Close'

Bug fixes -
DISPATCH-974 - Getting connections via the router management
protocol causes AMQP framing errors
DISPATCH-1230 - System test failing with OpenSSL >= 1.1 - system_tests_ssl
DISPATCH-1312 - Remove cmake option USE_MEMORY_POOL
DISPATCH-1317 - HTTP system test is failing on python2.6
DISPATCH-1318 - edge_router system test failing
DISPATCH-1322 - Edge router drops disposition when remote receiver closes
DISPATCH-1323 - Deprecate addr and externalAddr attributes of
autoLink entity. Add address and externalAddress instead.
DISPATCH-1324 - [tools] Scraper uses deprecated cgi.escape function
DISPATCH-1325 - Sender connections to edge router that connect
'too soon' never get credit
DISPATCH-1326 - Anonymous messages are released by edge router
even if there is a receiver for the messages
DISPATCH-1330 - Q2 stall due to incorrect msg buffer ref count
decrement on link detach
DISPATCH-1334 - Background map on topology page incorrect height
DISPATCH-1335 - After adding client, topology page shows new icon
in upper-left corner
DISPATCH-1339 - Multiple consoles attached to a router are showing
as separate icons


Thanks.




[jira] [Updated] (DISPATCH-1167) Return a 404 page from an http request for the console if httpRoot is not set

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1167:

Fix Version/s: (was: 1.8.0)

> Return a 404 page from an http request for the console if httpRoot is not set
> -
>
> Key: DISPATCH-1167
> URL: https://issues.apache.org/jira/browse/DISPATCH-1167
> Project: Qpid Dispatch
>  Issue Type: Improvement
>Affects Versions: 1.4.1
>Reporter: Ernest Allen
>Priority: Major
>
> With the change in DISPATCH-1155, a request for the console made to a 
> listener that does not have the proper httpRoot setup will result in a 404 
> status being sent back to the browser. This results in a blank page with the 
> number 404 displayed.
> A static HTML page explaining the reason for the 404 status should be 
> returned.
> This page should:
>  * have the proper branding
>  * explain that the console can't be found
>  * explain what to change in order for the console to work
>  * provide a link to any online documentation about setting up the console 
> listener
>  






[jira] [Updated] (DISPATCH-1215) several memory leaks in edge-router soak test

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1215:

Affects Version/s: 1.6.0

> several memory leaks in edge-router soak test
> -
>
> Key: DISPATCH-1215
> URL: https://issues.apache.org/jira/browse/DISPATCH-1215
> Project: Qpid Dispatch
>  Issue Type: Bug
>Affects Versions: 1.6.0
>Reporter: michael goulish
>Priority: Major
>
> Using recent master code trees (dispatch and proton)...
> The test sets up a simple 3-linear router network, A-B-C, and attaches 100 
> edge routers to A. It then kills one edge router, replaces it, and repeats 
> that kill-and-replace operation 50 times. (At which point I manually killed 
> router A.)
> Router A was running under valgrind, and produced the following output:
>  
> [mick@colossus ~]$ /usr/bin/valgrind --leak-check=full 
> --show-leak-kinds=definite --trace-children=yes 
> --suppressions=/home/mick/latest/qpid-dispatch/tests/valgrind.supp 
> /home/mick/latest/install/dispatch/sbin/qdrouterd  --config 
> /home/mick/mercury/results/test_03/2018_12_06/config/A.conf -I 
> /home/mick/latest/install/dispatch/lib/qpid-dispatch/python
> ==9409== Memcheck, a memory error detector
> ==9409== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==9409== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
> ==9409== Command: /home/mick/latest/install/dispatch/sbin/qdrouterd --config 
> /home/mick/mercury/results/test_03/2018_12_06/config/A.conf -I 
> /home/mick/latest/install/dispatch/lib/qpid-dispatch/python
> ==9409==
> ^C==9409==
> ==9409== Process terminating with default action of signal 2 (SIGINT)
> ==9409==    at 0x61C0A37: kill (in /usr/lib64/libc-2.26.so)
> ==9409==    by 0x401636: main (main.c:367)
> ==9409==
> ==9409== HEAP SUMMARY:
> ==9409== in use at exit: 6,933,690 bytes in 41,903 blocks
> ==9409==   total heap usage: 669,024 allocs, 627,121 frees, 92,449,020 bytes 
> allocated
> ==9409==
> ==9409== *8,640 (480 direct, 8,160 indirect) bytes in 20 blocks are 
> definitely lost in loss record 4,229 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E7D336: qdr_error_from_pn (error.c:37)
> ==9409==    by 0x4E905D7: AMQP_link_detach_handler (router_node.c:822)
> ==9409==    by 0x4E60A6C: close_links (container.c:298)
> ==9409==    by 0x4E6109F: close_handler (container.c:311)
> ==9409==    by 0x4E6109F: qd_container_handle_event (container.c:639)
> ==9409==    by 0x4E93971: handle (server.c:985)
> ==9409==    by 0x4E944C8: thread_run (server.c:1010)
> ==9409==    by 0x4E947CF: qd_server_run (server.c:1284)
> ==9409==    by 0x40186E: main_process (main.c:112)
> ==9409==    by 0x401636: main (main.c:367)
> ==9409==
> ==9409== *14,256 (792 direct, 13,464 indirect) bytes in 33 blocks are 
> definitely lost in loss record 4,261 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E7D336: qdr_error_from_pn (error.c:37)
> ==9409==    by 0x4E905D7: AMQP_link_detach_handler (router_node.c:822)
> ==9409==    by 0x4E60A6C: close_links (container.c:298)
> ==9409==    by 0x4E6109F: close_handler (container.c:311)
> ==9409==    by 0x4E6109F: qd_container_handle_event (container.c:639)
> ==9409==    by 0x4E93971: handle (server.c:985)
> ==9409==    by 0x4E944C8: thread_run (server.c:1010)
> ==9409==    by 0x550150A: start_thread (in /usr/lib64/libpthread-2.26.so)
> ==9409==    by 0x628138E: clone (in /usr/lib64/libc-2.26.so)
> ==9409==
> ==9409== *575,713 (24 direct, 575,689 indirect) bytes in 1 blocks are 
> definitely lost in loss record 4,321 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E83FCA: qdr_add_link_ref (router_core.c:518)
> ==9409==    by 0x4E7A3BF: qdr_link_inbound_first_attach_CT 
> (connections.c:1517)
> ==9409==    by 0x4E8484B: router_core_thread (router_core_thread.c:116)
> ==9409==    by 0x550150A: start_thread (in /usr/lib64/libpthread-2.26.so)
> ==9409==    by 0x628138E: clone (in /usr/lib64/libc-2.26.so)
> ==9409==
> ==9409== LEAK SUMMARY:
> ==9409==    definitely lost: 1,296 bytes in 54 blocks
> ==9409==    indirectly lost: 597,313 bytes in 3,096 blocks
> ==9409==  possibly lost: 1,473,248 bytes in 6,538 blocks
> ==9409==    still reachable: 4,861,833 bytes in 32,215 blocks
> ==9409== suppressed: 0 bytes in 0 blocks
> ==9409== Reachable blocks (those to which a pointer was found) are not shown.
> ==9409== To see them, rerun with: --leak-check=full --show-leak-kinds=all
> ==9409==
> ==9409== For

[jira] [Updated] (DISPATCH-1079) Setting socketAddressFamily/protocolFamily does not take effect

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1079:

Fix Version/s: (was: 1.8.0)

> Setting socketAddressFamily/protocolFamily does not take effect
> ---
>
> Key: DISPATCH-1079
> URL: https://issues.apache.org/jira/browse/DISPATCH-1079
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 1.2.0
>Reporter: Ganesh Murthy
>Priority: Major
>
> socketAddressFamily/protocolFamily can be specified on a listener like this
> {noformat}
> listener {
>     idleTimeoutSeconds: 120
>     saslMechanisms: ANONYMOUS
>     host: ::1
>     role: normal
>     socketAddressFamily: IPv6
>     authenticatePeer: no
>     port: 29190
> }{noformat}
> In the above example, setting socketAddressFamily to IPv6 makes sure that the 
> connection is made on the IPv6 interface.
>  
> The commit 6f56e289bec0db4a1de257883dc456a502c42fe7 removed all of the code 
> relating to the protocol family, so the code does not enforce this anymore.
>  
> Add back code to enforce this restriction.
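
As a rough illustration of what enforcing the setting involves, the configured 
family name can be mapped onto the hints passed to getaddrinfo() when the listener 
binds. This is a hedged sketch with made-up function and variable names, not the 
removed (or restored) dispatch code:

{noformat}
#include <sys/socket.h>   /* AF_INET, AF_INET6, AF_UNSPEC */
#include <netdb.h>        /* struct addrinfo, getaddrinfo */
#include <string.h>

/* Map the listener's socketAddressFamily string to an address family so the
 * bind is restricted to the requested interface type. */
static int address_family_from_config(const char *socket_address_family)
{
    if (!socket_address_family)                      return AF_UNSPEC;
    if (strcmp(socket_address_family, "IPv4") == 0)  return AF_INET;
    if (strcmp(socket_address_family, "IPv6") == 0)  return AF_INET6;
    return AF_UNSPEC;   /* unknown value: let getaddrinfo() decide */
}

/* usage sketch:
 *   struct addrinfo hints = {0};
 *   hints.ai_family = address_family_from_config(configured_family);
 */
{noformat}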






[jira] [Updated] (DISPATCH-1214) Valgrind finds invalid reads and many leaks during the unit tests

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1214:

Fix Version/s: (was: 1.8.0)

> Valgrind finds invalid reads and many leaks during the unit tests
> -
>
> Key: DISPATCH-1214
> URL: https://issues.apache.org/jira/browse/DISPATCH-1214
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Affects Versions: 1.4.1
>Reporter: Ken Giusti
>Assignee: Ken Giusti
>Priority: Major
> Attachments: grinder-12128.txt, grinder-report.txt
>
>
> See the attached grinder report






[jira] [Updated] (DISPATCH-1215) several memory leaks in edge-router soak test

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1215:

Fix Version/s: (was: 1.8.0)

> several memory leaks in edge-router soak test
> -
>
> Key: DISPATCH-1215
> URL: https://issues.apache.org/jira/browse/DISPATCH-1215
> Project: Qpid Dispatch
>  Issue Type: Bug
>Reporter: michael goulish
>Priority: Major
>
> Using recent master code trees (dispatch and proton)...
> The test sets up a simple 3-linear router network, A-B-C, and attaches 100 
> edge routers to A. It then kills one edge router, replaces it, and repeats 
> that kill-and-replace operation 50 times. (At which point I manually killed 
> router A.)
> Router A was running under valgrind, and produced the following output:
>  
> [mick@colossus ~]$ /usr/bin/valgrind --leak-check=full 
> --show-leak-kinds=definite --trace-children=yes 
> --suppressions=/home/mick/latest/qpid-dispatch/tests/valgrind.supp 
> /home/mick/latest/install/dispatch/sbin/qdrouterd  --config 
> /home/mick/mercury/results/test_03/2018_12_06/config/A.conf -I 
> /home/mick/latest/install/dispatch/lib/qpid-dispatch/python
> ==9409== Memcheck, a memory error detector
> ==9409== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
> ==9409== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
> ==9409== Command: /home/mick/latest/install/dispatch/sbin/qdrouterd --config 
> /home/mick/mercury/results/test_03/2018_12_06/config/A.conf -I 
> /home/mick/latest/install/dispatch/lib/qpid-dispatch/python
> ==9409==
> ^C==9409==
> ==9409== Process terminating with default action of signal 2 (SIGINT)
> ==9409==    at 0x61C0A37: kill (in /usr/lib64/libc-2.26.so)
> ==9409==    by 0x401636: main (main.c:367)
> ==9409==
> ==9409== HEAP SUMMARY:
> ==9409== in use at exit: 6,933,690 bytes in 41,903 blocks
> ==9409==   total heap usage: 669,024 allocs, 627,121 frees, 92,449,020 bytes 
> allocated
> ==9409==
> ==9409== *8,640 (480 direct, 8,160 indirect) bytes in 20 blocks are 
> definitely lost in loss record 4,229 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E7D336: qdr_error_from_pn (error.c:37)
> ==9409==    by 0x4E905D7: AMQP_link_detach_handler (router_node.c:822)
> ==9409==    by 0x4E60A6C: close_links (container.c:298)
> ==9409==    by 0x4E6109F: close_handler (container.c:311)
> ==9409==    by 0x4E6109F: qd_container_handle_event (container.c:639)
> ==9409==    by 0x4E93971: handle (server.c:985)
> ==9409==    by 0x4E944C8: thread_run (server.c:1010)
> ==9409==    by 0x4E947CF: qd_server_run (server.c:1284)
> ==9409==    by 0x40186E: main_process (main.c:112)
> ==9409==    by 0x401636: main (main.c:367)
> ==9409==
> ==9409== *14,256 (792 direct, 13,464 indirect) bytes in 33 blocks are 
> definitely lost in loss record 4,261 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E7D336: qdr_error_from_pn (error.c:37)
> ==9409==    by 0x4E905D7: AMQP_link_detach_handler (router_node.c:822)
> ==9409==    by 0x4E60A6C: close_links (container.c:298)
> ==9409==    by 0x4E6109F: close_handler (container.c:311)
> ==9409==    by 0x4E6109F: qd_container_handle_event (container.c:639)
> ==9409==    by 0x4E93971: handle (server.c:985)
> ==9409==    by 0x4E944C8: thread_run (server.c:1010)
> ==9409==    by 0x550150A: start_thread (in /usr/lib64/libpthread-2.26.so)
> ==9409==    by 0x628138E: clone (in /usr/lib64/libc-2.26.so)
> ==9409==
> ==9409== *575,713 (24 direct, 575,689 indirect) bytes in 1 blocks are 
> definitely lost in loss record 4,321 of 4,323*
> ==9409==    at 0x4C2CB6B: malloc (vg_replace_malloc.c:299)
> ==9409==    by 0x4E83FCA: qdr_add_link_ref (router_core.c:518)
> ==9409==    by 0x4E7A3BF: qdr_link_inbound_first_attach_CT 
> (connections.c:1517)
> ==9409==    by 0x4E8484B: router_core_thread (router_core_thread.c:116)
> ==9409==    by 0x550150A: start_thread (in /usr/lib64/libpthread-2.26.so)
> ==9409==    by 0x628138E: clone (in /usr/lib64/libc-2.26.so)
> ==9409==
> ==9409== LEAK SUMMARY:
> ==9409==    definitely lost: 1,296 bytes in 54 blocks
> ==9409==    indirectly lost: 597,313 bytes in 3,096 blocks
> ==9409==  possibly lost: 1,473,248 bytes in 6,538 blocks
> ==9409==    still reachable: 4,861,833 bytes in 32,215 blocks
> ==9409== suppressed: 0 bytes in 0 blocks
> ==9409== Reachable blocks (those to which a pointer was found) are not shown.
> ==9409== To see them, rerun with: --leak-check=full --show-leak-kinds=all
> ==9409==
> ==9409== For counts of detected an

[jira] [Reopened] (DISPATCH-1348) Avoid qdr_error_t allocation if not necessary

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy reopened DISPATCH-1348:
-

> Avoid qdr_error_t allocation if not necessary
> -
>
> Key: DISPATCH-1348
> URL: https://issues.apache.org/jira/browse/DISPATCH-1348
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.8.0
>
>
> qdr_error_from_pn in error.c allocates a qdr_error_t on the hot path (i.e. in 
> AMQP_disposition_handler); saving those allocations would reduce CPU usage 
> (and cache misses) on both core and worker threads, making the router able to 
> scale better while under load.
> Initial tests have shown some improvements under load (i.e. core CPU thread at 
> ~97% with the new version vs. ~99% with master):
> 12 pairs master (no lock-free queues, no qdr_error_t fix): 285 K msg/sec
> 12 pairs master (no lock-free queues, yes qdr_error_t fix): 402 K msg/sec
> 12 pairs lock-free q (no qdr_error_t fix):  311 K msg/sec
> 12 pairs lock-free q (yes qdr_error_t fix):  510 K msg/sec
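
A minimal sketch of the idea, assuming the common case is a disposition whose 
proton condition is not set; the struct and helper names are illustrative, not the 
actual router-core types:

{noformat}
/* Illustrative only -- not the real qdr_error_from_pn(). */
#include <proton/condition.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *name;
    char *description;
} example_error_t;

static example_error_t *example_error_from_pn(pn_condition_t *cond)
{
    if (!cond || !pn_condition_is_set(cond))
        return NULL;                      /* nothing to report: skip the malloc */

    example_error_t *err = calloc(1, sizeof(example_error_t));
    if (!err)
        return NULL;
    const char *name = pn_condition_get_name(cond);
    const char *desc = pn_condition_get_description(cond);
    err->name        = name ? strdup(name) : NULL;
    err->description = desc ? strdup(desc) : NULL;
    return err;
}
{noformat}

Callers then treat a NULL error as "no error", so the allocation only happens when 
there is actually a condition to convey.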






[jira] [Resolved] (DISPATCH-1348) Avoid qdr_error_t allocation if not necessary

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-1348.
-
Resolution: Fixed

> Avoid qdr_error_t allocation if not necessary
> -
>
> Key: DISPATCH-1348
> URL: https://issues.apache.org/jira/browse/DISPATCH-1348
> Project: Qpid Dispatch
>  Issue Type: Improvement
>  Components: Routing Engine
>Affects Versions: 1.7.0
>Reporter: Francesco Nigro
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.8.0
>
>
> qdr_error_from_pn in error.c allocates a qdr_error_t on the hot path (i.e. in 
> AMQP_disposition_handler); saving those allocations would reduce CPU usage 
> (and cache misses) on both core and worker threads, making the router able to 
> scale better while under load.
> Initial tests have shown some improvements under load (i.e. core CPU thread at 
> ~97% with the new version vs. ~99% with master):
> 12 pairs master (no lock-free queues, no qdr_error_t fix): 285 K msg/sec
> 12 pairs master (no lock-free queues, yes qdr_error_t fix): 402 K msg/sec
> 12 pairs lock-free q (no qdr_error_t fix):  311 K msg/sec
> 12 pairs lock-free q (yes qdr_error_t fix):  510 K msg/sec






[jira] [Updated] (DISPATCH-1265) Delivery_abort test causes inter-router session error

2019-06-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy updated DISPATCH-1265:

Fix Version/s: (was: 1.8.0)

> Delivery_abort test causes inter-router session error
> -
>
> Key: DISPATCH-1265
> URL: https://issues.apache.org/jira/browse/DISPATCH-1265
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Router Node
>Affects Versions: 1.5.0
> Environment: Fedora 29, Python 3.
> branch DISPATCH-1264#a5aab9
> ctest -VV -R system_tests_delivery_abort
>Reporter: Chuck Rolke
>Assignee: Ganesh Murthy
>Priority: Critical
> Attachments: DISPATCH-1265_ctest-log.txt, 
> DISPATCH-1265_scraped-logs-timeout.html
>
>
> Inter-router connection closes with:
> error :"amqp:session:invalid-field" "sequencing error, expected delivery-id 
> 24, got 23"






[jira] [Updated] (QPID-8320) [linearstore] Empty journal files orphaned and accumulate when the broker is restarted

2019-06-07 Thread Kim van der Riet (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kim van der Riet updated QPID-8320:
---
Description: 
If a queue journal is filled until the last file in the journal is full, then 
the store preemptively adds a new journal file to the queue store. This new 
file is at first uninitialized, but is initialized when it is first used.

If, at recovery, such an uninitialized file exists in a journal, then on 
recovery, this file is ignored, and a new uninitialized file is added. Hence 
two uninitialized files now exist in the journal. If the broker is repeatedly 
stopped, then started with a journal in this state, a new uninitialized file is 
added for each restart.

In addition, the journal recovery does not dispose of the unused uninitialized 
files, so they accumulate and continue to exist through multiple restarts.

+*Reproducer:*+

Start with a clean store:
{noformat}
rm -rf ~/.qpidd
{noformat}
Start the broker, then:
{noformat}
$ qpid-config add queue --durable test_queue
$ ls ~/.qpidd/qls/jrnl2/test_queue/
f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
$ hexdump -C 
~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl 
  51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
00201000
{noformat}
 
 which is an uninitialized empty journal file. Now add 1024 messages that when 
encoded consume almost 2048 bytes per message on disk. This should fill the 
first file exactly, so that the last enqueue record coincides with the physical 
end of the file at offset {{0x201000}}:
{noformat}
$ qpid-send -a test_queue --durable=yes -m 1024 --content-size=1865
$ ls ~/.qpidd/qls/jrnl2/test_queue/
e404051f-8af7-422d-a088-7e957c4db3af.jrnl
f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
$ hexdump -C  
~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl | grep 
QLS
  51 4c 53 66 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSfO.s |
1000  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
1800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |

...

001ff800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
0020  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
00200800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
{noformat}
Check that the newly added file is empty:
{noformat}
hexdump -C  
~/.qpidd/qls/jrnl2/test_queue/e404051f-8af7-422d-a088-7e957c4db3af.jrnl
  51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
00201000
{noformat}
It is important to check that the second file is empty other than the file 
header. If there are any records present, then the file will not be considered 
empty during recovery, and the conditions for the bug will not be met. 
Depending on network and threading conditions, the store may add in filler 
records {{"QLSx"}} at one or two points during the writing of the files, so 
this may push the final record to be written in the second file. If this 
happens, try again, or adjust the number of records down slightly until this 
condition is met.

Once this condition has been met, stop the broker, then restart it. There will 
now be two empty files present, the original, plus a new one added at the 
broker restart.

Start and stop the broker several times. For each recovery, one new empty file 
is added to the journal. The old files are still present, but are orphaned, and 
are never moved, used or put back into the Empty File Pool.

  was:
If a queue journal is filled until the last file in the journal is full, then 
the store preemptively adds a new journal file to the queue store. This new 
file is at first uninitialized, but is initialized when it is first used.

If, at recovery, such an uninitialized file exists in a journal, then on 
recovery, this file is ignored, and a new uninitialized file is added. Hence 
two uninitialized files now exist in the journal. If the broker is repeatedly 
stopped, then started with a journal in this state, a new uninitialized file is 
added for each restart.

In addition, the journal recovery does not dispose of the unused uninitialized 
files, so they accumulate and continue to exist through multiple restarts.

+*Reproducer:*+

Start with a clean store:
{noformat}
rm -rf ~/.qpidd
{noformat}
Start the broker, then:
{noformat}
$ qpid-config add queue --durable test_queue
$ ls ~/.qp

[jira] [Updated] (QPID-8320) [linearstore] Empty journal files orphaned and accumulate when the broker is restarted

2019-06-07 Thread Kim van der Riet (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kim van der Riet updated QPID-8320:
---
Description: 
If a queue journal is filled until the last file in the journal is full, then 
the store preemptively adds a new journal file to the queue store. This new 
file is at first uninitialized, but is initialized when it is first used.

If, at recovery, such an uninitialized file exists in a journal, then on 
recovery, this file is ignored, and a new uninitialized file is added. Hence 
two uninitialized files now exist in the journal. If the broker is repeatedly 
stopped, then started with a journal in this state, a new uninitialized file is 
added for each restart.

In addition, the journal recovery does not dispose of the unused uninitialized 
files, so they accumulate and continue to exist through multiple restarts.

+*Reproducer:*+

Start with a clean store:
{noformat}
rm -rf ~/.qpidd
{noformat}
Start the broker, then:
{noformat}
$ qpid-config add queue --durable test_queue
$ ls ~/.qpidd/qls/jrnl2/test_queue/
f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
$ hexdump -C 
~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl 
  51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
00201000
{noformat}
 
 which is an uninitialized empty journal file. Now add 1024 messages that when 
encoded consume almost 2048 bytes per message on disk. This should fill the 
first file exactly, so that the last enqueue record coincides with the physical 
end of the file at offset {{0x201000}}:
{noformat}
$ qpid-send -a test-queue --durable=yes -m 1024 --content-size=1865
$ ls ~/.qpidd/qls/jrnl2/test_queue/
e404051f-8af7-422d-a088-7e957c4db3af.jrnl
f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl
$ hexdump -C  
~/.qpidd/qls/jrnl2/test_queue/f965476e-eea0-4c02-be50-cbfbce6da71a.jrnl | grep 
QLS
  51 4c 53 66 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSfO.s |
1000  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
1800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |

...

001ff800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
0020  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
00200800  51 4c 53 65 02 00 00 00  4f c8 73 20 db c2 df 1d  |QLSeO.s |
{noformat}
Check that the newly added file is empty:
{noformat}
hexdump -C  
~/.qpidd/qls/jrnl2/test_queue/e404051f-8af7-422d-a088-7e957c4db3af.jrnl
  51 4c 53 66 02 00 00 00  00 00 00 00 00 00 00 00  |QLSf|
0010  00 00 00 00 00 00 00 00  01 00 01 00 00 00 00 00  ||
0020  00 08 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
0030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  ||
*
00201000
{noformat}
It is important to check that the second file is empty other than the file 
header. If there are any records present, then the file will not be considered 
empty during recovery, and the conditions for the bug will not be met. 
Depending on network and threading conditions, the store may add in filler 
records {{"QLSx"}} at one or two points during the writing of the files, so 
this may push the final record to be written in the second file. If this 
happens, try again, or adjust the number of records down slightly until this 
condition is met.

Once this condition has been met, stop the broker, then restart it. There will 
now be two empty files present, the original, plus a new one added at the 
broker restart.

Start and stop the broker several times. For each recovery, one new empty file 
is added to the journal. The old files are still present, but are orphaned, and 
are never moved, used or put back into the Empty File Pool.

  was:
If a queue journal is filled until the last file in the journal is full, then 
the store preemptively adds a new journal file to the queue store. This new 
file is at first uninitialized, but is initialized when it is first used.

If, at recovery, such an uninitialized file exists in a journal, then on 
recovery, this file is ignored, and a new uninitialized file is added. Hence 
two uninitialized files now exist in the journal. If the broker is repeatedly 
stopped, then started with a journal in this state, a new uninitialized file is 
added for each restart.

In addition, the journal recovery does not dispose of the unused uninitialized 
files, so they accumulate and continue to exist through multiple restarts.

+*Reproducer:*+

Start with a clean store:
{noformat}
rm -rf ~/.qpidd
{noformat}
Start the broker, then:
{noformat}
$ qpid-config add queue --durable test_queue
$ ls ~/.qp

[jira] [Comment Edited] (QPID-7987) [Qpid Broker-J][AMQP 0-10] QpidByteBuffer can leak when local transaction open/idle time exceeds the maximum open/idle timeout causing session close but a new messa

2019-06-07 Thread Alex Rudyy (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858431#comment-16858431
 ] 

Alex Rudyy edited comment on QPID-7987 at 6/7/19 9:29 AM:
--

Hi Utkarsh,
 Thanks for volunteering for the task.

Unfortunately, I did not put much detail about the problem into the JIRA description 
and cannot recollect what exactly was happening. I tried to reproduce the 
problem as suggested in my earlier comment, but that has not replicated the 
issue so far.

The Qpid Broker uses direct memory for keeping message content/headers. The 
direct memory is managed via {{QpidByteBuffer}} objects, which are wrappers 
around Java {{ByteBuffer}}. They have a reference-counting mechanism. Every time 
a buffer is duplicated, sliced, or any other operation is invoked that results in 
the creation of a new {{QpidByteBuffer}} instance from the original one, the 
reference counter is incremented. When a duplicate/slice/view is disposed, the 
reference counter is decremented. When the counter reaches 0, the pooled byte 
buffer is returned to the pool; otherwise, the instance of {{QpidByteBuffer}} is 
eventually garbage collected.

Method {{QpidByteBuffer#dispose()}} has to be invoked every time a 
{{QpidByteBuffer}} instance is no longer needed. If {{dispose}} is not 
invoked, that can result in leakage of {{QpidByteBuffer}}s.
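
The lifecycle described above can be summarized with a small sketch (written in C 
here for brevity; the actual broker class is Java and its internals may differ):

{noformat}
/* Illustrative refcounted buffer view: duplicating/slicing bumps the count on
 * the shared storage, dispose() drops it, and the storage goes back to the
 * pool only when the count reaches zero.  Forgetting dispose() == leak. */
#include <stdlib.h>

typedef struct {
    void *storage;       /* stand-in for pooled direct memory */
    int   refcount;
} shared_buffer_t;

typedef struct {
    shared_buffer_t *shared;
    size_t           offset, length;
} buffer_view_t;

static buffer_view_t view_slice(buffer_view_t *src, size_t offset, size_t length)
{
    src->shared->refcount++;              /* the new view shares the storage */
    buffer_view_t v = { src->shared, src->offset + offset, length };
    return v;
}

static void view_dispose(buffer_view_t *v)
{
    if (v->shared && --v->shared->refcount == 0) {
        free(v->shared->storage);         /* here: return to the pool */
        free(v->shared);
    }
    v->shared = NULL;
}
{noformat}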

This JIRA was raised to track a race condition when a {{transaction}} is idle or 
inactive for some time (the application does not commit/rollback the 
transaction for a while). It seems that when an AMQP 0-10 session is closed by 
the transaction timeout functionality and a new message arrives at the same 
time, there is a possibility of a race where the incoming message 
{{QpidByteBuffer}}s are not disposed. As a result, a {{QpidByteBuffer}} leak 
can occur.

Given that the issue occurred quite rarely and 0-10 is a legacy protocol, it 
was not addressed promptly. Since then, some changes have been made to the 
{{transaction timeout}} functionality, which now closes the impacted connection 
rather than the session (for consistency reasons). It looks like those changes 
might have prevented the issue from occurring in my reproduction attempts.

I will try to reproduce the problem again later. If I do not manage to, I will 
close the JIRA as "Cannot reproduce".

You could potentially try to replicate the issue by writing a test that sends 
messages while the transaction timeout functionality is closing the underlying 
connection. You can follow the existing integration test 
[TransactionTimeoutTest|https://github.com/apache/qpid-broker-j/blob/master/systests/qpid-systests-jms_1.1/src/test/java/org/apache/qpid/systests/jms_1_1/extensions/transactiontimeout/TransactionTimeoutTest.java].
It is going to be tricky to catch the moment...
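
A rough outline of such a test is sketched below. It is only an outline under 
several assumptions: the broker is already configured with a short 
{{virtualhost.storeTransactionOpenTimeoutClose}}, {{createConnectionFactory()}} 
is a hypothetical placeholder for whatever connection setup the system tests 
provide, and the verification of buffer disposal is deliberately left open 
(for example, comparing pooled direct-memory statistics, or applying the 
instrumentation patch attached to this JIRA), since that is exactly the tricky 
part.
{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

/**
 * Outline only: keep a transacted session open past the configured store
 * transaction open timeout while continuing to send, so that a send races
 * with the broker-side close.
 */
public class TransactionTimeoutLeakSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = createConnectionFactory();
        Connection connection = factory.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            Queue queue = session.createQueue("test_queue");
            MessageProducer producer = session.createProducer(queue);

            // Never commit, so the broker-side open-transaction timeout
            // eventually fires and closes the session/connection while we
            // are still sending.
            long deadline = System.currentTimeMillis() + 30_000;
            while (System.currentTimeMillis() < deadline) {
                try {
                    producer.send(session.createTextMessage("payload"));
                } catch (JMSException expectedOnceClosed) {
                    break; // the broker closed the session/connection underneath us
                }
            }
        } finally {
            try {
                connection.close();
            } catch (JMSException ignored) {
                // the connection may already have been closed by the broker
            }
        }
    }

    private static ConnectionFactory createConnectionFactory() {
        // Hypothetical placeholder: in the real system tests this would come
        // from the shared connection-builder infrastructure.
        throw new UnsupportedOperationException("wire in the Qpid JMS client here");
    }
}
{code}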

Please note that the issue only affects the AMQP 0-10 code.


[jira] [Commented] (QPID-7987) [Qpid Broker-J][AMQP 0-10] QpidByteBuffer can leak when local transaction open/idle time exceeds the maximum open/idle timeout causing session close but a new message ar

2019-06-07 Thread Alex Rudyy (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-7987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16858431#comment-16858431
 ] 

Alex Rudyy commented on QPID-7987:
--

Hi Utkarsh,
Thanks for volunteering for the task.

Unfortunately, I did not put many details about the problem into the JIRA 
description and cannot recollect exactly what was happening. I tried to 
reproduce the problem as suggested in my earlier comment, but that has not 
replicated the issue so far.

The Qpid Broker uses direct memory to hold message content and headers. The 
direct memory is managed via {{QpidByteBuffer}} objects, which are wrappers 
around Java {{ByteBuffer}} with a reference-counting mechanism. Every time a 
buffer is duplicated, sliced, or any other operation creates a new 
{{QpidByteBuffer}} instance from the original one, the reference counter is 
incremented. When a duplicate/slice/view is disposed, the reference counter is 
decremented. When the counter reaches 0, a pooled byte buffer is returned to 
the pool; otherwise, the {{QpidByteBuffer}} instance is eventually garbage 
collected.

{{QpidByteBuffer#dispose()}} has to be invoked every time a {{QpidByteBuffer}} 
instance is no longer needed. If {{dispose}} is not invoked, that can result in 
a leak of {{QpidByteBuffer}}s.

This JIRA was raised to track a race condition that occurs when a 
{{transaction}} is idle or inactive for some time (the application does not 
commit/rollback the transaction for a while). It seems that when an AMQP 0-10 
session is closed by the transaction timeout functionality and a new message 
arrives at the same time, there is a possibility of a race in which the 
incoming message's {{QpidByteBuffer}}s are not disposed. As a result, a 
{{QpidByteBuffer}} leak can occur.

Given that the issue occurred quite rarely and 0-10 is a legacy protocol, it 
was not addressed promptly. Since then, some changes have been made to the 
{{transaction timeout}} functionality, which now closes the impacted connection 
rather than the session (for consistency reasons). It looks like those changes 
might have prevented the issue from occurring in my reproduction attempts.

I will try to reproduce the problem again later. If I do not manage to, I will 
close the JIRA as "Cannot reproduce".

You could potentially try to replicate the issue by writing a test that sends 
messages while the transaction timeout functionality is closing the underlying 
connection. You can follow the existing integration test 
[TransactionTimeoutTest|https://github.com/apache/qpid-broker-j/blob/master/systests/qpid-systests-jms_1.1/src/test/java/org/apache/qpid/systests/jms_1_1/extensions/transactiontimeout/TransactionTimeoutTest.java].
It is going to be tricky to catch the moment...



> [Qpid Broker-J][AMQP 0-10] QpidByteBuffer can leak when local transaction 
> open/idle time exceeds the maximum open/idle timeout causing session close 
> but a new message arrives at the same time when session is in process of close
> ---
>
> Key: QPID-7987
> URL: https://issues.apache.org/jira/browse/QPID-7987
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.4, qpid-java-broker-7.0.0, qpid-java-6.1.5
>Reporter: Alex Rudyy
>Priority: Major
> Attachments: 0002-instrumentation.patch
>
>
> {{QpidByteBuffer}} can leak when the broker operates with configured open and 
> idle transaction close timeouts 
> ({{virtualhost.storeTransactionOpenTimeoutClose}} and/or 
> {{virtualhost.storeTransactionIdleTimeoutClose}}) and a transaction timeout 
> occurs, causing the Broker to close the underlying session while, at the same 
> time, a new message arrives. The new message's {{QpidByteBuffer}} might not be 
> disposed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org