[jira] [Commented] (DISPATCH-1097) Fix Coverity issue on master branch
[ https://issues.apache.org/jira/browse/DISPATCH-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16571044#comment-16571044 ]

ASF GitHub Bot commented on DISPATCH-1097:
------------------------------------------

Github user codecov-io commented on the issue:

    https://github.com/apache/qpid-dispatch/pull/354

# [Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=h1) Report
> Merging [#354](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=desc) into [master](https://codecov.io/gh/apache/qpid-dispatch/commit/83f5d524a63dec84f648a8afa126f794cbcafccb?src=pr=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `42.85%`.

[![Impacted file tree graph](https://codecov.io/gh/apache/qpid-dispatch/pull/354/graphs/tree.svg?src=pr=650=rk2Cgd27pP=150)](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=tree)

```diff
@@            Coverage Diff             @@
##           master     #354      +/-   ##
==========================================
- Coverage   84.56%   84.51%    -0.05%
==========================================
  Files          69       69
  Lines       15720    15730       +10
==========================================
+ Hits        13293    13294        +1
- Misses       2427     2436        +9
```

| [Impacted Files](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=tree) | Coverage Δ | |
|---|---|---|
| [src/router\_core/terminus.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL3Rlcm1pbnVzLmM=) | `94.96% <ø> (ø)` | :arrow_up: |
| [src/router\_node.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9ub2RlLmM=) | `93.51% <0%> (-0.75%)` | :arrow_down: |
| [src/server.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3NlcnZlci5j) | `84.37% <100%> (+0.02%)` | :arrow_up: |
| [src/router\_core/route\_control.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL3JvdXRlX2NvbnRyb2wuYw==) | `95.83% <50%> (-0.66%)` | :arrow_down: |
| [src/router\_core/connections.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL2Nvbm5lY3Rpb25zLmM=) | `95.42% <0%> (-0.12%)` | :arrow_down: |

--

[Continue to review full report at Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=footer). Last update [83f5d52...61b2e96](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).

> Fix Coverity issue on master branch
> -----------------------------------
>
>                 Key: DISPATCH-1097
>                 URL: https://issues.apache.org/jira/browse/DISPATCH-1097
>             Project: Qpid Dispatch
>          Issue Type: Bug
>          Components: Container
>    Affects Versions: 1.2.0
>            Reporter: Ganesh Murthy
>            Assignee: Ganesh Murthy
>            Priority: Major
>             Fix For: 1.3.0
>
>
> {noformat}
> 5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity Scan.
> 1 defect(s), reported by Coverity Scan earlier, were marked fixed in the recent build analyzed by Coverity Scan.
>
> New defect(s) Reported-by: Coverity Scan
> Showing 5 of 5 defect(s)
>
> ** CID 308513: Null pointer dereferences (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()
>
> *** CID 308513: Null pointer dereferences (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()
> 239         term = target;
> 240
> 241     key = (const char*) qd_hash_key_by_handle(al->addr->hash_handle);
> 242     if (key || al->external_addr) {
> 243         if (al->external_addr) {
> 244             qdr_terminus_set_address(term, al->external_addr);
> >>> CID 308513: Null pointer dereferences (FORWARD_NULL)
> >>> Dereferencing null pointer "key".
> 245             al->internal_addr = &key[2];
> 246         } else
> 247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
> 248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
> 249
[GitHub] qpid-dispatch issue #354: DISPATCH-1097 - Added code to fix issues reported ...
Github user codecov-io commented on the issue:

    https://github.com/apache/qpid-dispatch/pull/354

# [Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=h1) Report
> Merging [#354](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=desc) into [master](https://codecov.io/gh/apache/qpid-dispatch/commit/83f5d524a63dec84f648a8afa126f794cbcafccb?src=pr=desc) will **decrease** coverage by `0.04%`.
> The diff coverage is `42.85%`.

[![Impacted file tree graph](https://codecov.io/gh/apache/qpid-dispatch/pull/354/graphs/tree.svg?src=pr=650=rk2Cgd27pP=150)](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=tree)

```diff
@@            Coverage Diff             @@
##           master     #354      +/-   ##
==========================================
- Coverage   84.56%   84.51%    -0.05%
==========================================
  Files          69       69
  Lines       15720    15730       +10
==========================================
+ Hits        13293    13294        +1
- Misses       2427     2436        +9
```

| [Impacted Files](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=tree) | Coverage Δ | |
|---|---|---|
| [src/router\_core/terminus.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL3Rlcm1pbnVzLmM=) | `94.96% <ø> (ø)` | :arrow_up: |
| [src/router\_node.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9ub2RlLmM=) | `93.51% <0%> (-0.75%)` | :arrow_down: |
| [src/server.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3NlcnZlci5j) | `84.37% <100%> (+0.02%)` | :arrow_up: |
| [src/router\_core/route\_control.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL3JvdXRlX2NvbnRyb2wuYw==) | `95.83% <50%> (-0.66%)` | :arrow_down: |
| [src/router\_core/connections.c](https://codecov.io/gh/apache/qpid-dispatch/pull/354/diff?src=pr=tree#diff-c3JjL3JvdXRlcl9jb3JlL2Nvbm5lY3Rpb25zLmM=) | `95.42% <0%> (-0.12%)` | :arrow_down: |

--

[Continue to review full report at Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=footer). Last update [83f5d52...61b2e96](https://codecov.io/gh/apache/qpid-dispatch/pull/354?src=pr=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).

---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org
[jira] [Commented] (DISPATCH-1097) Fix Coverity issue on master branch
[ https://issues.apache.org/jira/browse/DISPATCH-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16571036#comment-16571036 ]

ASF GitHub Bot commented on DISPATCH-1097:
------------------------------------------

GitHub user ganeshmurthy opened a pull request:

    https://github.com/apache/qpid-dispatch/pull/354

    DISPATCH-1097 - Added code to fix issues reported by Coverity

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-1097

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/qpid-dispatch/pull/354.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #354

commit 61b2e96bc62d1d750ba15580a5b69dc1c36c67d8
Author: Ganesh Murthy
Date:   2018-08-07T01:36:01Z

    DISPATCH-1097 - Added code to fix issues reported by Coverity

> Fix Coverity issue on master branch
> -----------------------------------
>
>                 Key: DISPATCH-1097
>                 URL: https://issues.apache.org/jira/browse/DISPATCH-1097
>             Project: Qpid Dispatch
>          Issue Type: Bug
>          Components: Container
>    Affects Versions: 1.2.0
>            Reporter: Ganesh Murthy
>            Assignee: Ganesh Murthy
>            Priority: Major
>             Fix For: 1.3.0
>
>
> {noformat}
> 5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity Scan.
> 1 defect(s), reported by Coverity Scan earlier, were marked fixed in the recent build analyzed by Coverity Scan.
>
> New defect(s) Reported-by: Coverity Scan
> Showing 5 of 5 defect(s)
>
> ** CID 308513: Null pointer dereferences (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()
>
> *** CID 308513: Null pointer dereferences (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()
> 239         term = target;
> 240
> 241     key = (const char*) qd_hash_key_by_handle(al->addr->hash_handle);
> 242     if (key || al->external_addr) {
> 243         if (al->external_addr) {
> 244             qdr_terminus_set_address(term, al->external_addr);
> >>> CID 308513: Null pointer dereferences (FORWARD_NULL)
> >>> Dereferencing null pointer "key".
> 245             al->internal_addr = &key[2];
> 246         } else
> 247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
> 248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
> 249         al->link->auto_link = al;
> 250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;
>
> ** CID 308512: (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
>
> *** CID 308512: (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
> 246         } else
> 247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
> 248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
> 249         al->link->auto_link = al;
> 250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251     }
> >>> CID 308512: (RESOURCE_LEAK)
> >>> Variable "term" going out of scope leaks the storage it points to.
> 252     }
> 253 }
> 254
> 255
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
> 246         } else
> 247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
> 248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
> 249         al->link->auto_link = al;
> 250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251     }
> >>> CID 308512: (RESOURCE_LEAK)
> >>> Variable "source" going out of scope leaks the storage it points to.
> 252     }
> 253 }
> 254
> 255
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, qdr_auto_link_t *al,
[GitHub] qpid-dispatch pull request #354: DISPATCH-1097 - Added code to fix issues re...
GitHub user ganeshmurthy opened a pull request:

    https://github.com/apache/qpid-dispatch/pull/354

    DISPATCH-1097 - Added code to fix issues reported by Coverity

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-1097

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/qpid-dispatch/pull/354.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #354

commit 61b2e96bc62d1d750ba15580a5b69dc1c36c67d8
Author: Ganesh Murthy
Date:   2018-08-07T01:36:01Z

    DISPATCH-1097 - Added code to fix issues reported by Coverity
[jira] [Commented] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570940#comment-16570940 ]

dan clark commented on QPID-8134:
---------------------------------

I hope to refactor the code to provide a more plausible example that closely mimics the 24x7 application. Note that going through the normal exit path by using connection.close() is not a valid test, so the 'options.vexit' path must be used for analysis.

Let me try to provide a clearer example: given an application that echoes back responses to a message based on a topic, run the application 24x7 (therefore never hitting the close path). Have other applications, some of which are long running and some of which start up and shut down, all of which send a request to the first application listening on a topic and then get a reply. The application that sends the replies continues to lose memory, all of it associated with the 'send' path.

Spout and drain were used as very simple, already-written sample applications that produce exactly the same valgrind output pointing to the location of the leak. Note that the leak can be worked around by having the send path LINK attribute set to use a different 'reliability' attribute. So, using the following link attribute set on both sender/receiver leaks memory on the send link:

    {name:send-link, reliability: at-least-once, timeout:1}

Changing the link attribute on both sender/receiver no longer leaks memory but changes the reliability:

    link: {name:send-link, reliability: at-most-once, timeout:1}  // or reliability: unreliable -- documented as equivalent

I apologize for the over-simplicity of the example setup; it was chosen to provide the most accessible application for analysis. However, it did recreate the point. Note that, due to the very high transaction load and the near-real-time nature of the application, there are times when some elasticity is required of the queues. This means that the default queue limit is quite high to provide some buffering.

Is it possible to turn OFF the policy which attempts to outwit the underlying malloc implementation by caching unused memory in the QPID C++ library? This policy tends to reserve a very large cache of essentially unused memory for every application running, which is not a good policy for every daemon accessing the system. For example, given the following qpidd.conf file, one would expect that each application using such a policy might be somewhat unbounded in growth despite only needing high message elasticity during peak demand:

cat /etc/qpid/qpidd.conf
auth=yes
# no tracing, too verbose
trace=no
# by default qpid sets worker threads to # processors
# worker-threads=n
log-enable=warning+
log-enable=info+:Broker
log-enable=info+:Queue
# set logging to a file (see logrotate)
log-to-stderr=no
log-to-stdout=no
log-to-file=/var/log/qpidd.log
log-to-syslog=no
# drop default purge interval, default is 10m (600s)
queue-purge-interval=300
# bump default queue limit (bytes), default is 104857600
default-queue-limit=1048576000
# increase responsiveness
tcp-nodelay=yes
# bump the flow stop threshold, default 80%
default-flow-stop-threshold=90
# maintain flow resume threshold, default 70%
default-flow-resume-threshold=85
# start getting events when we cross 50% level (default 80%)
default-event-threshold-ratio=75
# send errant message to standard topic but allow tuning (default qpid.no-group)
default-message-group=qpid.no-group
# allow adjustments to set receive timestamp (default no)
enable-timestamp=no

--
Dan Clark   503-915-3646

> qpid::client::Message::send multiple memory leaks
> -------------------------------------------------
>
>                 Key: QPID-8134
>                 URL: https://issues.apache.org/jira/browse/QPID-8134
>             Project: Qpid
>          Issue Type: Bug
>          Components: C++ Client
>    Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0
>         Environment: *CentOS* Linux release 7.4.1708 (Core)
> Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64
x86_64 x86_64 GNU/Linux > *qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64 > python-*qpid*-1.37.0-1.el7.noarch > *qpid*-proton-c-0.18.1-1.el7.x86_64 > python-*qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64 > *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64 > *qpid*-cpp-server-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-1.37.0-1.el7.x86_64 > >Reporter: dan clark >Assignee: Alan Conway >Priority: Blocker > Labels: leak, maven > Fix For: qpid-cpp-1.39.0 > > Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, > qpid-stat.out, spout.cpp, spout.log > > Original Estimate: 40h > Remaining Estimate: 40h > > There may be multiple leaks of the outgoing message structure and
[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron
[ https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570875#comment-16570875 ]

Aaron Smith commented on PROTON-1910:
-------------------------------------

Let me run a new profile and see... Thanks for the quick reply.

> Profiling indicates that cgo becomes a bottleneck during scale testing of electron
> ----------------------------------------------------------------------------------
>
>                 Key: PROTON-1910
>                 URL: https://issues.apache.org/jira/browse/PROTON-1910
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: go-binding
>    Affects Versions: proton-c-0.24.0
>            Reporter: Aaron Smith
>            Assignee: Alan Conway
>            Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed that >95% of the execution time can be devoted to the cgo call. The issue seems to be related, on sends, to the NewMessage() call. For receives, the bottleneck is both NewMessage() and the call to actually receive the message.
>
> This behavior is not unexpected, as cgo is a well-known bottleneck. Would it be possible to have a NewMessage() call that returns multiple messages, and a recv call that takes an "at most" argument, i.e. recv(10) would receive 10 or fewer messages that might be waiting in the queue? Also, it would be nice to be able to trade latency for throughput so that the callback isn't triggered until N messages were received (with timeout).

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron
[ https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570852#comment-16570852 ]

Alan Conway commented on PROTON-1910:
-------------------------------------

Thanks for this observation. Are you seeing the overhead in NewMessage() itself (which only makes a single C call), or could it be under the Marshal() call in NewMessageWith()? Marshalling is quite chatty with C; we might be able to bundle up some of the C calls to reduce the overhead.

> Profiling indicates that cgo becomes a bottleneck during scale testing of electron
> ----------------------------------------------------------------------------------
>
>                 Key: PROTON-1910
>                 URL: https://issues.apache.org/jira/browse/PROTON-1910
>             Project: Qpid Proton
>          Issue Type: Bug
>          Components: go-binding
>    Affects Versions: proton-c-0.24.0
>            Reporter: Aaron Smith
>            Assignee: Alan Conway
>            Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed that >95% of the execution time can be devoted to the cgo call. The issue seems to be related, on sends, to the NewMessage() call. For receives, the bottleneck is both NewMessage() and the call to actually receive the message.
>
> This behavior is not unexpected, as cgo is a well-known bottleneck. Would it be possible to have a NewMessage() call that returns multiple messages, and a recv call that takes an "at most" argument, i.e. recv(10) would receive 10 or fewer messages that might be waiting in the queue? Also, it would be nice to be able to trade latency for throughput so that the callback isn't triggered until N messages were received (with timeout).
[GitHub] qpid-dispatch pull request #326: Logging
Github user alanconway closed the pull request at:

    https://github.com/apache/qpid-dispatch/pull/326
[jira] [Created] (DISPATCH-1097) Fix Coverity issue on master branch
Ganesh Murthy created DISPATCH-1097:
---------------------------------------

             Summary: Fix Coverity issue on master branch
                 Key: DISPATCH-1097
                 URL: https://issues.apache.org/jira/browse/DISPATCH-1097
             Project: Qpid Dispatch
          Issue Type: Bug
          Components: Container
    Affects Versions: 1.2.0
            Reporter: Ganesh Murthy
            Assignee: Ganesh Murthy
             Fix For: 1.3.0


{noformat}
5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity Scan.
1 defect(s), reported by Coverity Scan earlier, were marked fixed in the recent build analyzed by Coverity Scan.

New defect(s) Reported-by: Coverity Scan
Showing 5 of 5 defect(s)

** CID 308513: Null pointer dereferences (FORWARD_NULL)
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()

*** CID 308513: Null pointer dereferences (FORWARD_NULL)
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in qdr_auto_link_activate_CT()
239         term = target;
240
241     key = (const char*) qd_hash_key_by_handle(al->addr->hash_handle);
242     if (key || al->external_addr) {
243         if (al->external_addr) {
244             qdr_terminus_set_address(term, al->external_addr);
>>> CID 308513: Null pointer dereferences (FORWARD_NULL)
>>> Dereferencing null pointer "key".
245             al->internal_addr = &key[2];
246         } else
247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
249         al->link->auto_link = al;
250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;

** CID 308512: (RESOURCE_LEAK)
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()

*** CID 308512: (RESOURCE_LEAK)
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
246         } else
247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
249         al->link->auto_link = al;
250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;
251     }
>>> CID 308512: (RESOURCE_LEAK)
>>> Variable "term" going out of scope leaks the storage it points to.
252     }
253 }
254
255
256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, qdr_auto_link_t *al, qdr_connection_t *conn)
257 {
/home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in qdr_auto_link_activate_CT()
246         } else
247             qdr_terminus_set_address(term, &key[2]);  // truncate the "Mp" annotation (where p = phase)
248         al->link = qdr_create_link_CT(core, conn, QD_LINK_ENDPOINT, al->dir, source, target);
249         al->link->auto_link = al;
250         al->state = QDR_AUTO_LINK_STATE_ATTACHING;
251     }
>>> CID 308512: (RESOURCE_LEAK)
>>> Variable "source" going out of scope leaks the storage it points to.
252     }
253 }
254
255
256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, qdr_auto_link_t *al, qdr_connection_t *conn)
257 {

** CID 308511: (USE_AFTER_FREE)
/home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
/home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()

*** CID 308511: (USE_AFTER_FREE)
/home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
972         pn_conn = conn;
973         assert(pn_conn == conn);
974
975         if (!qd_conn)
976             qd_conn = !!pn_conn ? (qd_connection_t*) pn_connection_get_context(pn_conn) : 0;
977
>>> CID 308511: (USE_AFTER_FREE)
>>> Calling "handle" frees pointer "qd_conn" which has already been freed.
978         running = handle(qd_server, e, conn, qd_conn);
979     }
980
981     //
982     // Notify the container that the batch is complete so it can do after-batch
983     // processing.
/home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
972         pn_conn = conn;
973         assert(pn_conn ==
[jira] [Resolved] (DISPATCH-1008) Router should preserve original connection information when attempting to make failover connections
[ https://issues.apache.org/jira/browse/DISPATCH-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh Murthy resolved DISPATCH-1008. - Resolution: Fixed Fix Version/s: 1.3.0 > Router should preserve original connection information when attempting to > make failover connections > --- > > Key: DISPATCH-1008 > URL: https://issues.apache.org/jira/browse/DISPATCH-1008 > Project: Qpid Dispatch > Issue Type: Bug >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.3.0, 1.2.0 > > Attachments: broker-slave.xml, broker.xml, qdrouterd-failover.conf > > > # Start artemis master and slave brokers and the router with the attached > config files. > # Notice that the router receives an open frame from the master broker with > the following failover information > # > {noformat} > 2018-05-22 22:11:11.830106 -0230 SERVER (trace) [1]:0 <- @open(16) > [container-id="localhost", max-frame-size=4294967295, channel-max=65535, > idle-time-out=3, > offered-capabilities=@PN_SYMBOL[:"sole-connection-for-container", > :"DELAYED_DELIVERY", :"SHARED-SUBS", :"ANONYMOUS-RELAY"], > properties={:product="apache-activemq-artemis", > :"failover-server-list"=[{:hostname="0.0.0.8", :scheme="amqp", :port=61617, > :"network-host"="0.0.0.0"}]"}]{noformat} > > # Now, kill the master broker and notice that the router correctly fails > over to the slave broker. But the slave broker does not provide any failover > information in its open frame and hence the router erases its original master > broker connection information > # When the master broker is now restarted and the slave broker is killed, > the router attempts to repeatedly connect only to the slave broker but never > attempts a connection to the master broker. > # If the router did not erase its failover list but preserved the original > master connection information, it would have connected the master broker. 
[jira] [Commented] (DISPATCH-1008) Router should preserve original connection information when attempting to make failover connections
[ https://issues.apache.org/jira/browse/DISPATCH-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570722#comment-16570722 ] ASF GitHub Bot commented on DISPATCH-1008: -- Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/348 > Router should preserve original connection information when attempting to > make failover connections > --- > > Key: DISPATCH-1008 > URL: https://issues.apache.org/jira/browse/DISPATCH-1008 > Project: Qpid Dispatch > Issue Type: Bug >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.2.0 > > Attachments: broker-slave.xml, broker.xml, qdrouterd-failover.conf > > > # Start artemis master and slave brokers and the router with the attached > config files. > # Notice that the router receives an open frame from the master broker with > the following failover information > # > {noformat} > 2018-05-22 22:11:11.830106 -0230 SERVER (trace) [1]:0 <- @open(16) > [container-id="localhost", max-frame-size=4294967295, channel-max=65535, > idle-time-out=3, > offered-capabilities=@PN_SYMBOL[:"sole-connection-for-container", > :"DELAYED_DELIVERY", :"SHARED-SUBS", :"ANONYMOUS-RELAY"], > properties={:product="apache-activemq-artemis", > :"failover-server-list"=[{:hostname="0.0.0.8", :scheme="amqp", :port=61617, > :"network-host"="0.0.0.0"}]"}]{noformat} > > # Now, kill the master broker and notice that the router correctly fails > over to the slave broker. But the slave broker does not provide any failover > information in its open frame and hence the router erases its original master > broker connection information > # When the master broker is now restarted and the slave broker is killed, > the router attempts to repeatedly connect only to the slave broker but never > attempts a connection to the master broker. > # If the router did not erase its failover list but preserved the original > master connection information, it would have connected the master broker. 
[jira] [Commented] (DISPATCH-1008) Router should preserve original connection information when attempting to make failover connections
[ https://issues.apache.org/jira/browse/DISPATCH-1008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570720#comment-16570720 ] ASF subversion and git services commented on DISPATCH-1008: --- Commit 83f5d524a63dec84f648a8afa126f794cbcafccb in qpid-dispatch's branch refs/heads/master from [~ganeshmurthy] [ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=83f5d52 ] DISPATCH-1008 - Back out previous change that was storing every failover url obtained from every connection. Modified the code to wipe out the failover urls obtained from the previous connection if the current connection returned an empty list for failover urls. This closes #348. > Router should preserve original connection information when attempting to > make failover connections > --- > > Key: DISPATCH-1008 > URL: https://issues.apache.org/jira/browse/DISPATCH-1008 > Project: Qpid Dispatch > Issue Type: Bug >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.2.0 > > Attachments: broker-slave.xml, broker.xml, qdrouterd-failover.conf > > > # Start artemis master and slave brokers and the router with the attached > config files. > # Notice that the router receives an open frame from the master broker with > the following failover information > # > {noformat} > 2018-05-22 22:11:11.830106 -0230 SERVER (trace) [1]:0 <- @open(16) > [container-id="localhost", max-frame-size=4294967295, channel-max=65535, > idle-time-out=3, > offered-capabilities=@PN_SYMBOL[:"sole-connection-for-container", > :"DELAYED_DELIVERY", :"SHARED-SUBS", :"ANONYMOUS-RELAY"], > properties={:product="apache-activemq-artemis", > :"failover-server-list"=[{:hostname="0.0.0.8", :scheme="amqp", :port=61617, > :"network-host"="0.0.0.0"}]"}]{noformat} > > # Now, kill the master broker and notice that the router correctly fails > over to the slave broker. 
> But the slave broker does not provide any failover information in its open frame, and hence the router erases its original master broker connection information.
> # When the master broker is now restarted and the slave broker is killed, the router attempts to repeatedly connect only to the slave broker but never attempts a connection to the master broker.
> # If the router did not erase its failover list but preserved the original master connection information, it would have connected to the master broker.
[GitHub] qpid-dispatch pull request #348: DISPATCH-1008 - Back out previous change th...
Github user asfgit closed the pull request at:

    https://github.com/apache/qpid-dispatch/pull/348
[jira] [Assigned] (DISPATCH-1094) Log file messages out of order according to time stamps
[ https://issues.apache.org/jira/browse/DISPATCH-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ganesh Murthy reassigned DISPATCH-1094:
---------------------------------------

    Assignee: Ganesh Murthy

> Log file messages out of order according to time stamps
> -------------------------------------------------------
>
>                 Key: DISPATCH-1094
>                 URL: https://issues.apache.org/jira/browse/DISPATCH-1094
>             Project: Qpid Dispatch
>          Issue Type: Bug
>    Affects Versions: 1.2.0
>         Environment: Fedora 27
>            Reporter: Chuck Rolke
>            Assignee: Ganesh Murthy
>            Priority: Major
>             Fix For: 1.3.0
>
> In a recent run with trace logging turned on, the trace file had 5,335 lines with several hundred instances of non-increasing timestamps.
> {{2018-08-01 10:45:25.198173}}
> {{2018-08-01 10:45:25.198187}}
> {{2018-08-01 10:45:25.197941}}
> {{2018-08-01 10:45:25.197727}}
> {{2018-08-01 10:45:25.198238}}
> Log file readers need to know to SORT the log file before scrutinizing it too carefully.
[jira] [Resolved] (DISPATCH-1094) Log file messages out of order according to time stamps
[ https://issues.apache.org/jira/browse/DISPATCH-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh Murthy resolved DISPATCH-1094. - Resolution: Fixed Fix Version/s: 1.3.0
[jira] [Commented] (DISPATCH-1094) Log file messages out of order according to time stamps
[ https://issues.apache.org/jira/browse/DISPATCH-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570705#comment-16570705 ] ASF GitHub Bot commented on DISPATCH-1094: -- Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/353
[jira] [Commented] (DISPATCH-1094) Log file messages out of order according to time stamps
[ https://issues.apache.org/jira/browse/DISPATCH-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570704#comment-16570704 ] ASF subversion and git services commented on DISPATCH-1094: --- Commit d28002a492abf73f9a12ad13c2a7c43007ca52c7 in qpid-dispatch's branch refs/heads/master from [~ganeshmurthy] [ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=d28002a ] DISPATCH-1094 - Moved the creation and writing of log file inside a lock so that the order is preserved. This closes #353
[GitHub] qpid-dispatch pull request #353: DISPATCH-1094 - Moved the creation and writ...
Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/353
[jira] [Commented] (DISPATCH-1094) Log file messages out of order according to time stamps
[ https://issues.apache.org/jira/browse/DISPATCH-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570584#comment-16570584 ] ASF GitHub Bot commented on DISPATCH-1094: -- GitHub user ganeshmurthy opened a pull request: https://github.com/apache/qpid-dispatch/pull/353 DISPATCH-1094 - Moved the creation and writing of log file inside a l… …ock so that the order is preserved You can merge this pull request into a Git repository by running: $ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-1094 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/qpid-dispatch/pull/353.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #353 commit 9fc97ae7d6ecedbd7880a7e2c1bfa8c934abe350 Author: Ganesh Murthy Date: 2018-08-06T18:16:28Z DISPATCH-1094 - Moved the creation and writing of log file inside a lock so that the order is preserved
[GitHub] qpid-dispatch pull request #353: DISPATCH-1094 - Moved the creation and writ...
GitHub user ganeshmurthy opened a pull request: https://github.com/apache/qpid-dispatch/pull/353 DISPATCH-1094 - Moved the creation and writing of log file inside a l… …ock so that the order is preserved You can merge this pull request into a Git repository by running: $ git pull https://github.com/ganeshmurthy/qpid-dispatch DISPATCH-1094 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/qpid-dispatch/pull/353.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #353 commit 9fc97ae7d6ecedbd7880a7e2c1bfa8c934abe350 Author: Ganesh Murthy Date: 2018-08-06T18:16:28Z DISPATCH-1094 - Moved the creation and writing of log file inside a lock so that the order is preserved
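The fix described in the pull request above — taking the timestamp and writing the record under the same lock — can be reduced to a small sketch. This is an illustrative C++ reduction, not the actual Dispatch router code (which is C); `log_lock`, `log_records`, and `log_line` are hypothetical names.

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical reduction of the DISPATCH-1094 fix: the timestamp is taken
// and the record is appended while the same lock is held, so records can
// never appear in the log out of timestamp order.
std::mutex log_lock;
std::vector<long long> log_records;  // stands in for lines in the log file

void log_line() {
    std::lock_guard<std::mutex> guard(log_lock);  // lock BEFORE timestamping
    auto now = std::chrono::steady_clock::now().time_since_epoch();
    log_records.push_back(
        std::chrono::duration_cast<std::chrono::microseconds>(now).count());
    // the record is "written" before the lock is released
}
```

With the pre-fix ordering (timestamp captured outside the lock), two threads can stamp in one order and acquire the write lock in the other, producing exactly the non-increasing timestamps quoted in the DISPATCH-1094 report.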
[jira] [Updated] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer
[ https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Godfrey updated QPID-8225: -- Attachment: qpid-broker-plugins-amqp-0-10-protocol-7.0.6.jar > [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB > data transfer > --- > > Key: QPID-8225 > URL: https://issues.apache.org/jira/browse/QPID-8225 > Project: Qpid > Issue Type: Bug > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.6 >Reporter: Michael Dyslin >Priority: Critical > Attachments: protocol.log, > qpid-broker-plugins-amqp-0-10-protocol-7.0.6.jar > > > Created this Jira so I could send a logfile attachment to be analyzed. > Attachments are removed from the user discussion email lists. > > Log file is for the java broker with a NameAndLevel filter of > org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level. > Activity was: > # Start the java broker > # Start the consumer > # Start the producer > # 2 messages sent every 10 seconds > # Stopped producer after 10 messages sent > # stopped the consumer much later > This log does not contain the problem where message flow stopped, probably > due to CREDIT flow mode exceeding 4 GB credit. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
[jira] [Commented] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer
[ https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570492#comment-16570492 ] ASF subversion and git services commented on QPID-8225: --- Commit cf40fdea39d9633702ee286d94e950a19ec7be74 in qpid-broker-j's branch refs/heads/master from Robert Godfrey [ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=cf40fde ] QPID-8225 : Fix incorrect implementation of infinite credit
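The 4 GB symptom is consistent with byte credit being tracked as a finite 32-bit budget: once roughly 2^32 bytes have been transferred the counter is exhausted and delivery stops, even when the client intended unlimited credit. The sketch below is an illustrative C++ reduction under that assumption — Broker-J itself is Java, and `can_deliver` is a hypothetical name, not broker API.

```cpp
#include <cassert>
#include <cstdint>

// Illustration only: a 32-bit byte-credit counter treated as a finite
// budget. 0xFFFFFFFF (the "infinite" credit value in AMQP 0-10 CREDIT
// mode) is just under 4 GiB, so delivery stalls after about 4 GiB of
// payload unless the broker special-cases the infinite value.
bool can_deliver(uint32_t& credit, uint32_t msg_bytes) {
    if (credit < msg_bytes)
        return false;   // delivery to the consumer stops here
    credit -= msg_bytes;
    return true;
}
```

The commit referenced above ("Fix incorrect implementation of infinite credit") addresses this class of problem by treating infinite credit as a sentinel rather than a decrementable counter.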
[jira] [Updated] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway updated QPID-8134: -- Attachment: qpid-8134.tgz > qpid::client::Message::send multiple memory leaks > - > > Key: QPID-8134 > URL: https://issues.apache.org/jira/browse/QPID-8134 > Project: Qpid > Issue Type: Bug > Components: C++ Client >Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0 > Environment: *CentOS* Linux release 7.4.1708 (Core) > Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 > UTC 2015 x86_64 x86_64 x86_64 GNU/Linux > *qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64 > python-*qpid*-1.37.0-1.el7.noarch > *qpid*-proton-c-0.18.1-1.el7.x86_64 > python-*qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64 > *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64 > *qpid*-cpp-server-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-1.37.0-1.el7.x86_64 > >Reporter: dan clark >Assignee: Alan Conway >Priority: Blocker > Labels: leak, maven > Fix For: qpid-cpp-1.39.0 > > Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, > qpid-stat.out, spout.cpp, spout.log > > Original Estimate: 40h > Remaining Estimate: 40h > > There may be multiple leaks of the outgoing message structure and associated > fields when using the qpid::client::amqp0_10::SenderImpl::send function to > publish messages under certain setups. I will concede that there may be > options beyond my ken to ameliorate the leak of message structures, > especially since there is an indication that under prolonged runs (a > daemonized version of an application like spout) the statistics for qpidd > indicate increased acquires with zero releases. > The basic notion is illustrated with the test application spout (and drain). 
> Consider a long-running daemon reducing the overhead of open/send/close by > keeping the message connection open for long periods of time. Then the logic > would be: start application/open connection. In a loop send data (and never > reach a close). Thus the spout application illustrates the behavior and > demonstrates the leak using valgrind by sending the data followed by an > exit(0). > Note also the lack of 'releases' associated with the 'acquires' in the stats > output. > Capturing the leaks using the test applications spout/drain required adding > an 'exit()' prior to the close, as during normal operation of a daemon the > connection remains open for a sustained period of time. Thus the leaked > structures within the C++ client library are found as structures still > tracked by the library and cleaned up on 'connection.close()', but they > should be cleaned up as a result of the completion of the send/receive ack or > the termination of the life of the message based on the TTL of the message, > whichever comes first. I have witnessed growth of the leaked structures > into the millions of messages over more than 24 hours, with a short (300 sec) > TTL on the messages, based on the scenarios attached using spout/drain as the test > vehicle. > The attached spout.log uses a short 10-message test and contains > 5 sets of different leaked structures (found with the 'bytes in 10 blocks are > still reachable' lines) that are in line with much more sustained leaks when > running the application for multiple days with millions of messages. > The leaks seem to be associated with structures allocating std::strings to > save the "subject" and the "payload" for string-based messages using send for > amq.topic output. 
> Suggested workarounds are welcome, based on application-level changes to > spout/drain (if they are missing key components) or changes to the > address/setup of the queues for amq.topic messages (see the 'gospout.sh' and > 'godrain.sh' test drivers providing the specific address structures being used). > For example, the following is one of the 5 different categories of leaked > data from 'spout.log' on a valgrind analysis of the output after the send and > session.sync but prior to connection.close(): > > ==3388== 3,680 bytes in 10 blocks are still reachable in loss record 233 of > 234 > ==3388== at 0x4C2A203: operator new(unsigned long) > (vg_replace_malloc.c:334) > ==3388== by 0x4EB046C: qpid::client::Message::Message(std::string const&, > std::string const&) (Message.cpp:31) > ==3388== by 0x51742C1: > qpid::client::amqp0_10::OutgoingMessage::OutgoingMessage() > (OutgoingMessage.cpp:167) > ==3388== by 0x5186200: > qpid::client::amqp0_10::SenderImpl::sendImpl(qpid::messaging::Message const&) > (SenderImpl.cpp:140) > ==3388== by 0x5186485: operator() (SenderImpl.h:114)
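The "still reachable" wording in the valgrind records above is significant: the library still holds pointers to these allocations (they are released by connection.close()), so exiting before close() makes them show up as reachable rather than lost. A minimal C++ illustration of that pattern, with hypothetical names that do not correspond to the real qpid::client internals:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical reduction: a client library keeps each outgoing message in
// an internal registry until close(). If the process exits (or valgrind
// takes its snapshot) before close() runs, these allocations are reported
// as "still reachable" — pointers to them exist — rather than "lost".
struct Connection {
    std::vector<std::string*> outgoing;  // registry of in-flight messages

    void send(const std::string& subject, const std::string& payload) {
        outgoing.push_back(new std::string(subject + ":" + payload));
    }
    void close() {                       // only here are messages released
        for (auto* m : outgoing) delete m;
        outgoing.clear();
    }
};
```

This matches the report's observation that the structures are "cleaned up on 'connection.close()'": the dispute is not whether the memory is referenced, but whether it should be released earlier, at send/ack completion or message TTL expiry.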
[jira] [Comment Edited] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570478#comment-16570478 ] Alan Conway edited comment on QPID-8134 at 8/6/18 4:52 PM: --- I don't doubt your report, but I'm not able to reproduce it; there's something different about what I'm doing. [aconway@grommit bz (master *%)]$ rpm -qa '*qpid*' qpid-cpp-server-1.38.0-1.fc28.x86_64 qpid-cpp-client-1.38.0-1.fc28.x86_64 qpid-proton-c-0.21.0-2.fc28.x86_64 My script looks like this: for i in 1000 1 10; do echo "== testing: $i"; sh -x gospout.sh -a doSubject -c $i; done > test.out 2>&1 grep '==.*\(lost\|reachable\|testing\):' test.out I get output like this: + echo '== testing: 1000' == testing: 1000 ==9445== definitely lost: 0 bytes in 0 blocks ==9445== indirectly lost: 0 bytes in 0 blocks ==9445== possibly lost: 0 bytes in 0 blocks ==9445== still reachable: 50,908 bytes in 319 blocks + echo '== testing: 1' == testing: 1 ==9450== definitely lost: 0 bytes in 0 blocks ==9450== indirectly lost: 0 bytes in 0 blocks ==9450== possibly lost: 0 bytes in 0 blocks ==9450== still reachable: 41,230 bytes in 265 blocks + echo '== testing: 10' == testing: 10 ==9457== definitely lost: 0 bytes in 0 blocks ==9457== indirectly lost: 0 bytes in 0 blocks ==9457== possibly lost: 0 bytes in 0 blocks ==9457== still reachable: 31,596 bytes in 214 blocks It's a bit surprising that the final memory is lower on longer runs, but valgrind --tool=massif shows that the memory holds steady during the run and then fluctuates a bit during shutdown, so there may be some random effects there based on clean-up ordering etc. I did have to fix one minor bug in gospout.sh - getopts automatically shifts the arguments, so shifting again in the script means only the first option gets read. I'll attach a tar file with the full output, scripts & source I'm using: [^qpid-8134.tgz] 
[jira] [Updated] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway updated QPID-8134: -- Attachment: (was: massif.out.spout_10)
[jira] [Updated] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer
[ https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Godfrey updated QPID-8225: -- Issue Type: Bug (was: Task)
[jira] [Updated] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer
[ https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Godfrey updated QPID-8225: -- Priority: Critical (was: Minor)
[jira] [Updated] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway updated QPID-8134: -- Attachment: massif.out.spout_10
[jira] [Updated] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer
[ https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rob Godfrey updated QPID-8225: -- Summary: [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer (was: Java Broker (7.0.6) stops delivering queue/consumer messages after 4 GB data transfer) > [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB > data transfer > --- > > Key: QPID-8225 > URL: https://issues.apache.org/jira/browse/QPID-8225 > Project: Qpid > Issue Type: Task > Components: Broker-J >Affects Versions: qpid-java-broker-7.0.6 >Reporter: Michael Dyslin >Priority: Minor > Attachments: protocol.log > > > Created this Jira so I could send a logfile attachment to be analyzed. > Attachments are removed from the user discussion email lists. > > Log file is for the java broker with a NameAndLevel filter of > org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level. > Activity was: > # Start the java broker > # Start the consumer > # Start the producer > # 2 messages sent every 10 seconds > # Stopped producer after 10 messages sent > # stopped the consumer much later > This log does not contain the problem where message flow stopped, probably > due to CREDIT flow mode exceeding 4 GB credit. > > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
[jira] [Commented] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570478#comment-16570478 ] Alan Conway commented on QPID-8134: --- I don't doubt your report, but I'm not able to reproduce it; there's something different about what I'm doing.

[aconway@grommit bz (master *%)]$ rpm -qa '*qpid*'
qpid-cpp-server-1.38.0-1.fc28.x86_64
qpid-cpp-client-1.38.0-1.fc28.x86_64
qpid-proton-c-0.21.0-2.fc28.x86_64

My script looks like this:

for i in 1000 1 10; do
    echo "== testing: $i"
    sh -x gospout.sh -a doSubject -c $i
done > test.out 2>&1
grep '==.*\(lost\|reachable\|testing\):' test.out

I get output like this:

+ echo '== testing: 1000'
== testing: 1000
==9445== definitely lost: 0 bytes in 0 blocks
==9445== indirectly lost: 0 bytes in 0 blocks
==9445== possibly lost: 0 bytes in 0 blocks
==9445== still reachable: 50,908 bytes in 319 blocks
+ echo '== testing: 1'
== testing: 1
==9450== definitely lost: 0 bytes in 0 blocks
==9450== indirectly lost: 0 bytes in 0 blocks
==9450== possibly lost: 0 bytes in 0 blocks
==9450== still reachable: 41,230 bytes in 265 blocks
+ echo '== testing: 10'
== testing: 10
==9457== definitely lost: 0 bytes in 0 blocks
==9457== indirectly lost: 0 bytes in 0 blocks
==9457== possibly lost: 0 bytes in 0 blocks
==9457== still reachable: 31,596 bytes in 214 blocks

It's a bit surprising that the final memory is lower on longer runs, but valgrind --tool=massif shows that the memory holds steady during the run and then flutters a bit during shutdown, so there may be some random-ish effects there based on clean-up ordering etc. I did have to fix one minor bug in gospout.sh - getopts automatically shifts the arguments, so shifting again in the script means only the first option gets read. I'll attach a tar file with the full output, scripts & source I'm using. 
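The gospout.sh fix mentioned above can be sketched as follows. This is a minimal sketch, not the actual script: the -a/-c flag names come from the invocation above, while the function and variable names are hypothetical. The key point is that getopts only advances OPTIND; it does not consume "$@" itself, so the script should do one shift after the loop rather than shifting again per option:

```shell
# Hypothetical sketch of gospout.sh option parsing; only the -a/-c flags
# are taken from the invocation above, everything else is assumed.
parse_opts() {
  addr="" count=1 rest=""
  OPTIND=1                    # reset so the function can be called repeatedly
  while getopts "a:c:" opt; do
    case "$opt" in
      a) addr="$OPTARG" ;;
      c) count="$OPTARG" ;;
    esac
  done
  # getopts only advances OPTIND; shift the parsed options away exactly
  # once, here, instead of shifting again inside the loop.
  shift $((OPTIND - 1))
  rest="$*"                   # whatever positional arguments remain
}
```

Calling `parse_opts -a doSubject -c 1000` then leaves `$addr` and `$count` set, with `$rest` holding any trailing arguments.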
> qpid::client::Message::send multiple memory leaks > - > > Key: QPID-8134 > URL: https://issues.apache.org/jira/browse/QPID-8134 > Project: Qpid > Issue Type: Bug > Components: C++ Client >Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0 > Environment: *CentOS* Linux release 7.4.1708 (Core) > Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 > UTC 2015 x86_64 x86_64 x86_64 GNU/Linux > *qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64 > python-*qpid*-1.37.0-1.el7.noarch > *qpid*-proton-c-0.18.1-1.el7.x86_64 > python-*qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64 > *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64 > *qpid*-cpp-server-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-1.37.0-1.el7.x86_64 > >Reporter: dan clark >Assignee: Alan Conway >Priority: Blocker > Labels: leak, maven > Fix For: qpid-cpp-1.39.0 > > Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-stat.out, > spout.cpp, spout.log > > Original Estimate: 40h > Remaining Estimate: 40h > > There may be multiple leaks of the outgoing message structure and associated > fields when using the qpid::client::amqp0_10::SenderImpl::send function to > publish messages under certain setups. I will concede that there may be > options that are beyond my ken to ameliorate the leak of message structures, > especially since there is an indication that under prolonged runs (a > daemonized version of an application like spout) the statistics for qpidd > indicate increased acquires with zero releases. > The basic notion is illustrated with the test application spout (and drain). > Consider a long-running daemon reducing the overhead of open/send/close by > keeping the message connection open for long periods of time. 
Then the logic > would be: start application/open connection. In a loop, send data (and never > reach a close). Thus the drain application illustrates the behavior and > demonstrates the leak using valgrind by sending the data followed by an > exit(0). > Note also the lack of 'releases' associated with the 'acquires' in the stats > output. > Capturing the leaks using the test applications spout/drain required adding > an 'exit()' prior to the close, as during normal operations of a daemon, the > connection remains open for a sustained period of time; thus the leaked > structures within the C++ client library are found as structures still > tracked by the library and cleaned up on 'connection.close()', but they > should be cleaned up as a result of the completion of the send/receive ack or > the termination of the life of the message based on the TTL of the message, > whichever comes first. I have witnessed growth of the leaked structures > into the millions of
[jira] [Created] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron
Aaron Smith created PROTON-1910: --- Summary: Profiling indicates that cgo becomes a bottleneck during scale testing of electron Key: PROTON-1910 URL: https://issues.apache.org/jira/browse/PROTON-1910 Project: Qpid Proton Issue Type: Bug Components: go-binding Affects Versions: proton-c-0.24.0 Reporter: Aaron Smith Assignee: Alan Conway While performing scale testing, detailed profiling of Go test clients showed that >95% of the execution time can be devoted to the cgo call. On sends, the issue seems to be related to the NewMessage() call. For receives, the bottleneck is both NewMessage() and the call to actually receive the message. This behavior is not unexpected, as cgo is a well-known bottleneck. Would it be possible to have a NewMessage() call that returns multiple messages, and a recv call that takes an "at most" argument? I.e. recv(10) would receive 10 or fewer messages that might be waiting in the queue. Also, it would be nice to be able to trade latency for throughput so that the callback isn't triggered until N messages are received (with a timeout)
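The batched-receive API proposed above can be sketched independently of proton. This is a minimal sketch under stated assumptions: a plain queue.Queue stands in for electron's receiver, and recv_batch is a hypothetical name, not an existing electron call. The idea is to pay the (expensive) blocking call once for the first message, then drain whatever is already waiting, up to the "at most" bound:

```python
import queue

def recv_batch(q, max_n, timeout=1.0):
    """Receive up to max_n messages: block (up to `timeout`) for the first
    message, then drain whatever else is already queued without blocking."""
    batch = []
    try:
        batch.append(q.get(timeout=timeout))   # one blocking wait per batch
        while len(batch) < max_n:
            batch.append(q.get_nowait())       # drain without further blocking
    except queue.Empty:
        pass                                   # timeout or queue drained
    return batch
```

A recv shaped like this amortizes one crossing of the expensive boundary over up to max_n messages, which is the latency-for-throughput trade the report asks for; a "callback after N messages or timeout" could be layered on top of the same primitive.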
[jira] [Created] (DISPATCH-1096) support AMQP prioritized messages
michael goulish created DISPATCH-1096: - Summary: support AMQP prioritized messages Key: DISPATCH-1096 URL: https://issues.apache.org/jira/browse/DISPATCH-1096 Project: Qpid Dispatch Issue Type: New Feature Reporter: michael goulish Assignee: michael goulish Detect priority info from the message header in the router code. Create separate inter-router links for the various priorities. Per connection (i.e. not globally across the router), service high-priority inter-router links before low-priority links.
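The per-connection servicing order described above can be illustrated with a small sketch. The class name and the choice of 10 priority levels are assumptions for illustration, not the router's actual implementation: keep one queue per priority level and always drain the highest non-empty one first.

```python
from collections import deque

NUM_PRIORITIES = 10  # assumed number of levels (0 = lowest, 9 = highest)

class PriorityLinks:
    """Per-connection strict-priority servicing: always drain the
    highest-priority non-empty link before any lower one."""

    def __init__(self):
        self.links = [deque() for _ in range(NUM_PRIORITIES)]

    def enqueue(self, priority, message):
        # Clamp out-of-range header values rather than rejecting the message.
        p = max(0, min(NUM_PRIORITIES - 1, priority))
        self.links[p].append(message)

    def next_message(self):
        for link in reversed(self.links):   # scan highest priority first
            if link:
                return link.popleft()
        return None                         # all links empty
```

Because the scan is per instance, one such scheduler per connection gives exactly the "per connection, not globally" behavior described.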
[jira] [Created] (QPID-8225) Java Broker (7.0.6) stops delivering queue/consumer messages after 4 GB data transfer
Michael Dyslin created QPID-8225: Summary: Java Broker (7.0.6) stops delivering queue/consumer messages after 4 GB data transfer Key: QPID-8225 URL: https://issues.apache.org/jira/browse/QPID-8225 Project: Qpid Issue Type: Task Components: Broker-J Affects Versions: qpid-java-broker-7.0.6 Reporter: Michael Dyslin Attachments: protocol.log Created this Jira so I could send a logfile attachment to be analyzed. Attachments are removed from the user discussion email lists. Log file is for the java broker with a NameAndLevel filter of org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level. Activity was: # Start the java broker # Start the consumer # Start the producer # 2 messages sent every 10 seconds # Stopped producer after 10 messages sent # stopped the consumer much later This log does not contain the problem where message flow stopped, probably due to CREDIT flow mode exceeding 4 GB credit.
[jira] [Resolved] (DISPATCH-1085) When sender closes connection after sending a large streaming message, receiver gets aborted message
[ https://issues.apache.org/jira/browse/DISPATCH-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh Murthy resolved DISPATCH-1085. - Resolution: Fixed > When sender closes connection after sending a large streaming message, > receiver gets aborted message > > > Key: DISPATCH-1085 > URL: https://issues.apache.org/jira/browse/DISPATCH-1085 > Project: Qpid Dispatch > Issue Type: Bug > Components: Container >Affects Versions: 1.2.0 >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.3.0 > > Attachments: amqp_consumer.py, amqp_producer.py > > > Steps to reproduce > (Sender and receiver programs are attached) > Start a router > Start a receiver - ./amqp_consumer.py "0.0.0.0:5672" MYTEST 1 > Start a sender that sends a large message - ./amqp_producer.py "0.0.0.0:5672" > MYTEST 100 > > You will see that the receiver receives an aborted transfer > {noformat} > [0x558d24f46210]:0 <- @transfer(20) [handle=0, delivery-id=0, > delivery-tag=b"\x00\x00\x00\x00\x00\x00\x00\x00", message-format=0, > more=true] (130560) > "@\xc9\xf3\xe4k\x8e\xf5\x88:Bl\x97\xc5\x11\x17\xff\x1b\x0f\x13\xc85\x87\x9f7\x05\x9a\x1dI\xd76\xe2\xe7\x84\x92\xdf\xa3&\x07\xc0\x1eF\xb5\x96\xef\xb6\xbd\xe5\xe2\xb1\xe7\xb9\x7f\xe0\x0d\xe8]\xe0\x85\xecrE\xc4\x0e'\xbd\x8d\xa8\xe1x@,%\xa0\x90\xa4+.\xcf\x93T'}\x1f\xf3\xcc0d\x016\xe9\xa0\x7fT}\xf5n\xeb\xfc\xa9\x8d > > 
\xdc\xc3e?\x03\x04\xc3\xa0|\x85\x80\x08\x0a\x98\xbe8-\x00?\xa5`\xb1\xe7h0\x12\x90%\xa7\xc2S\xbe\x83\x05\x12\xe0\x98B\xe3Qp\xc9F\x90\x8f\xe8\xf9\x90\xba'\xb0\xdb$\x14<\xd9\xd1\x7fm"\x84\xda\x96X\xa1\xe3\xee\xb9\xf7\xdaToFb\xaa7\x13\x12\xfd\xdb\x9e\x80\x99{\x13q~Av8\xd0\x87\x9e\xf7\xfd\x9a\x1b\xb8|tNtw\xcf\xc0U\x9e\xf5F\xc3\xd2\xb0%\xff\xb1\xfd\xb8\\xa9\xfb\xb1G\xb2/\xc7\xe8\x91\x1e\xfd\xa2\x1c\xc5\x9f\xe6M?\xe3\xacOa\x0c\xa7/\x82\xfb$km\x1a\xfb`\xb0+zrO\x8a\x06}\xd5\x0a\x17m\xad\x91L\x89\xb2}\x92\x03\xe3\xd3F\xd8:Z\x80r\xa0b\xc7C\xd9\xdc\xe7\x08\xbf2s\xbf\xf8\x98\xf5OZ\xb7\x8a:\xeb\xe7\xfc,\xdd\xbb\xa6s+]\xa2On\x14@\xc8lGa\xf58\xd7N\x94J\xe0+\x92\xbd\xb4\xce\xb0\xacL\xf7\x00\x14\xb3J\xdd@I\x8f\xb4\x9d"... > (truncated) > [0x558d24f46210]:0 <- @transfer(20) [handle=0, delivery-id=0, > delivery-tag=b"\x00\x00\x00\x00\x00\x00\x00\x00", message-format=0, > more=true] (256512) > ".\xf7\x1b\xb0\x15\x17\x8b\xb6\xfe/\x00\xde\x95\xa2\xcc\xd5K\xbd\x92\x00\xc8\xd8e\xfc\xa5\xd8o2\x80?j\xea\xb7\xe3\xa1\x96x\x0bF\x02\xfa<~\x1c\xdd\xc4\xbeb8\x13\xdf\xa4\x15\x8c\x01\x08\xa0\xf9\x8d\xc4\xfe\x89\xfc\x8ek'\xc2\x0d\xec\x11'\x1c\xc6(j\x8a\xddir\x02\x09\xbc\xcb\xadS\x8c\x17D\x92\xbf\xe1\x9bg\xb5\xb2.\xe6`2D\x9eC@\xa6\xe5\xce\xa6\x16\xc2\x13\xa9\x0a\xbe\x0c\xa7\x88\xd6\xc0\xb6\xacSz\x11\xde\xf4\x8f4\xfb\x1c~D\xbd.\x8bk\\x9a\xc9\xbfm\x9d\xcbc\xc1\xf7\x89?\x91\xd04+\xfa\xd2i\x14\xdeg+5\x1bT\x97s8\xbb{\xecS:\x97Uy\x0bt\x93o\x8a\x91\xa2\x93\xa9\xd0M\x0bol\x89\xa1\xb4\x182\x84\xb4A^#\xa5\x7f\xde\xbfo\xfc\x96\xbc)\xa2^\xa9\xd97-\xac48`BbT9Gn\xd8\xe8\xed\xc9(\x8b\xb6\xf1\xb4\x1c\x82c\xfd\xbbO\xe9c\xb61\xd7\xcd\xe2\xd2\x86W\xe8:\x04\x8c\xd4\x1fU\xfa\xc3\xc9\xc4G\xbe\x07SFx\x8e\xd6-\xa0\xf9\xe3\x92d\x0d}vtkP\x149b\xeeU > > \xbd\x96\x93\x8f@\x1f\x9b\x81V\xfd\xe5\xb6\x159*\xc1xWA\xd3\xe7o\xaa\xf4a\x7fh\x834,\x0a\xa4\xbb\x15$\xd30%\xe9LGW'\x81\x06OE\x07\xd8\xd0o\x14\xba\x88M\x0dXWm\xf2q\xc3\xc6\xef\x98\x1f\x00\x1aQ"... 
> (truncated) > [0x558d24f46210]:0 <- @transfer(20) [handle=0, delivery-id=0, > delivery-tag=b"\x00\x00\x00\x00\x00\x00\x00\x00", message-format=0, > settled=true, aborted=true] > {noformat}
[jira] [Commented] (DISPATCH-1085) When sender closes connection after sending a large streaming message, receiver gets aborted message
[ https://issues.apache.org/jira/browse/DISPATCH-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570379#comment-16570379 ] ASF GitHub Bot commented on DISPATCH-1085: -- Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/345 > When sender closes connection after sending a large streaming message, > receiver gets aborted message > > > Key: DISPATCH-1085 > URL: https://issues.apache.org/jira/browse/DISPATCH-1085 > Project: Qpid Dispatch > Issue Type: Bug > Components: Container >Affects Versions: 1.2.0 >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.3.0 > > Attachments: amqp_consumer.py, amqp_producer.py > > > Steps to reproduce > (Sender and receiver programs are attached) > Start a router > Start a receiver - ./amqp_consumer.py "0.0.0.0:5672" MYTEST 1 > Start a sender that sends a large message - ./amqp_producer.py "0.0.0.0:5672" > MYTEST 100 > > You will see that the receiver receives an aborted transfer > {noformat} > [0x558d24f46210]:0 <- @transfer(20) [handle=0, delivery-id=0, > delivery-tag=b"\x00\x00\x00\x00\x00\x00\x00\x00", message-format=0, > settled=true, aborted=true] > {noformat}
[GitHub] qpid-dispatch pull request #345: DISPATCH-1085 - Modified AMQP_link_detach_h...
Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/345 --- - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
[jira] [Commented] (DISPATCH-1085) When sender closes connection after sending a large streaming message, receiver gets aborted message
[ https://issues.apache.org/jira/browse/DISPATCH-1085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570378#comment-16570378 ] ASF subversion and git services commented on DISPATCH-1085: --- Commit 22ef3d167b47fe243344908ef48fe44910e476ee in qpid-dispatch's branch refs/heads/master from [~ganeshmurthy] [ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=22ef3d1 ] DISPATCH-1085 - Modified AMQP_link_detach_handler to flush out the remaining bytes in the message buffers before responding to detaches. This closes #345 > When sender closes connection after sending a large streaming message, > receiver gets aborted message > > > Key: DISPATCH-1085 > URL: https://issues.apache.org/jira/browse/DISPATCH-1085 > Project: Qpid Dispatch > Issue Type: Bug > Components: Container >Affects Versions: 1.2.0 >Reporter: Ganesh Murthy >Assignee: Ganesh Murthy >Priority: Major > Fix For: 1.3.0 > > Attachments: amqp_consumer.py, amqp_producer.py > > > Steps to reproduce > (Sender and receiver programs are attached) > Start a router > Start a receiver - ./amqp_consumer.py "0.0.0.0:5672" MYTEST 1 > Start a sender that sends a large message - ./amqp_producer.py "0.0.0.0:5672" > MYTEST 100 > > You will see that the receiver receives an aborted transfer > {noformat} > [0x558d24f46210]:0 <- @transfer(20) [handle=0, delivery-id=0, > delivery-tag=b"\x00\x00\x00\x00\x00\x00\x00\x00", message-format=0, > settled=true, aborted=true] > {noformat}
[jira] [Resolved] (DISPATCH-1067) Doc improvements for router policies
[ https://issues.apache.org/jira/browse/DISPATCH-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh Murthy resolved DISPATCH-1067. - Resolution: Fixed Fix Version/s: 1.3.0 > Doc improvements for router policies > > > Key: DISPATCH-1067 > URL: https://issues.apache.org/jira/browse/DISPATCH-1067 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > Fix For: 1.3.0 > > > The router policy doc needs to be updated to cover the following enhancements: > * Patterns for policy hostnames (DISPATCH-990) > * New policy config attributes (DISPATCH-976) > * Policy username substitution improvements (DISPATCH-1011) > * Allow vhost policies to be configured in the router configuration file > (DISPATCH-1013)
[jira] [Commented] (DISPATCH-1067) Doc improvements for router policies
[ https://issues.apache.org/jira/browse/DISPATCH-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570358#comment-16570358 ] ASF subversion and git services commented on DISPATCH-1067: --- Commit a531795723fcf70feb63e8137cec0c1f5c6cf8c2 in qpid-dispatch's branch refs/heads/master from [~bhardest] [ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=a531795 ] DISPATCH-1067 - Doc improvements for router policies. This closes #343. > Doc improvements for router policies > > > Key: DISPATCH-1067 > URL: https://issues.apache.org/jira/browse/DISPATCH-1067 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > > The router policy doc needs to be updated to cover the following enhancements: > * Patterns for policy hostnames (DISPATCH-990) > * New policy config attributes (DISPATCH-976) > * Policy username substitution improvements (DISPATCH-1011) > * Allow vhost policies to be configured in the router configuration file > (DISPATCH-1013)
[GitHub] qpid-dispatch pull request #343: DISPATCH-1067 - updates to policy doc (upda...
Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/343
[jira] [Commented] (DISPATCH-1067) Doc improvements for router policies
[ https://issues.apache.org/jira/browse/DISPATCH-1067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570359#comment-16570359 ] ASF GitHub Bot commented on DISPATCH-1067: -- Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/343 > Doc improvements for router policies > > > Key: DISPATCH-1067 > URL: https://issues.apache.org/jira/browse/DISPATCH-1067 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > > The router policy doc needs to be updated to cover the following enhancements: > * Patterns for policy hostnames (DISPATCH-990) > * New policy config attributes (DISPATCH-976) > * Policy username substitution improvements (DISPATCH-1011) > * Allow vhost policies to be configured in the router configuration file > (DISPATCH-1013)
[jira] [Resolved] (DISPATCH-1064) Doc link route reconnect behavior
[ https://issues.apache.org/jira/browse/DISPATCH-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ganesh Murthy resolved DISPATCH-1064. - Resolution: Fixed Fix Version/s: 1.3.0 > Doc link route reconnect behavior > - > > Key: DISPATCH-1064 > URL: https://issues.apache.org/jira/browse/DISPATCH-1064 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > Fix For: 1.3.0 > > > When the router is configured with a linkRoute and client connects using > failover, the link will not be reestablished should the router's connection > to the broker fail. If auto-reconnect is required, the router should be > configured with autoLink.
[jira] [Commented] (DISPATCH-1064) Doc link route reconnect behavior
[ https://issues.apache.org/jira/browse/DISPATCH-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570340#comment-16570340 ] ASF GitHub Bot commented on DISPATCH-1064: -- Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/339 > Doc link route reconnect behavior > - > > Key: DISPATCH-1064 > URL: https://issues.apache.org/jira/browse/DISPATCH-1064 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > > When the router is configured with a linkRoute and client connects using > failover, the link will not be reestablished should the router's connection > to the broker fail. If auto-reconnect is required, the router should be > configured with autoLink.
[jira] [Commented] (DISPATCH-1064) Doc link route reconnect behavior
[ https://issues.apache.org/jira/browse/DISPATCH-1064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570338#comment-16570338 ] ASF subversion and git services commented on DISPATCH-1064: --- Commit 6cc87192a98722e6380aa4708f36692e791ec27d in qpid-dispatch's branch refs/heads/master from [~bhardest] [ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=6cc8719 ] DISPATCH-1064 - Doc link route and autolink connection failure behavior. This closes #339 > Doc link route reconnect behavior > - > > Key: DISPATCH-1064 > URL: https://issues.apache.org/jira/browse/DISPATCH-1064 > Project: Qpid Dispatch > Issue Type: Improvement > Components: Documentation >Affects Versions: 1.2.0 >Reporter: Ben Hardesty >Assignee: Ben Hardesty >Priority: Major > > When the router is configured with a linkRoute and client connects using > failover, the link will not be reestablished should the router's connection > to the broker fail. If auto-reconnect is required, the router should be > configured with autoLink.
[GitHub] qpid-dispatch pull request #339: DISPATCH-1064 - Doc link route reconnect be...
Github user asfgit closed the pull request at: https://github.com/apache/qpid-dispatch/pull/339
[jira] [Assigned] (QPID-8134) qpid::client::Message::send multiple memory leaks
[ https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway reassigned QPID-8134: - Assignee: Alan Conway > qpid::client::Message::send multiple memory leaks > - > > Key: QPID-8134 > URL: https://issues.apache.org/jira/browse/QPID-8134 > Project: Qpid > Issue Type: Bug > Components: C++ Client >Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0 > Environment: *CentOS* Linux release 7.4.1708 (Core) > Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 > UTC 2015 x86_64 x86_64 x86_64 GNU/Linux > *qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64 > python-*qpid*-1.37.0-1.el7.noarch > *qpid*-proton-c-0.18.1-1.el7.x86_64 > python-*qpid*-qmf-1.37.0-1.el7.x86_64 > *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64 > *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64 > *qpid*-cpp-server-1.37.0-1.el7.x86_64 > *qpid*-cpp-client-1.37.0-1.el7.x86_64 > >Reporter: dan clark >Assignee: Alan Conway >Priority: Blocker > Labels: leak, maven > Fix For: qpid-cpp-1.39.0 > > Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-stat.out, > spout.cpp, spout.log > > Original Estimate: 40h > Remaining Estimate: 40h > > There may be multiple leaks of the outgoing message structure and associated > fields when using the qpid::client::amqp0_10::SenderImpl::send function to > publish messages under certain setups. I will concede that there may be > options that are beyond my ken to ameliorate the leak of message structures, > especially since there is an indication that under prolonged runs (a > daemonized version of an application like spout) the statistics for qpidd > indicate increased acquires with zero releases. > The basic notion is illustrated with the test application spout (and drain). > Consider a long-running daemon reducing the overhead of open/send/close by > keeping the message connection open for long periods of time. 
Then the logic > would be: start application/open connection. In a loop, send data (and never > reach a close). Thus the drain application illustrates the behavior and > demonstrates the leak using valgrind by sending the data followed by an > exit(0). > Note also the lack of 'releases' associated with the 'acquires' in the stats > output. > Capturing the leaks using the test applications spout/drain required adding > an 'exit()' prior to the close, as during normal operations of a daemon, the > connection remains open for a sustained period of time; thus the leaked > structures within the C++ client library are found as structures still > tracked by the library and cleaned up on 'connection.close()', but they > should be cleaned up as a result of the completion of the send/receive ack or > the termination of the life of the message based on the TTL of the message, > whichever comes first. I have witnessed growth of the leaked structures > into the millions of messages lasting more than 24 hours with a short (300 sec) > TTL of the messages, based on the scenarios attached using spout/drain as the test > vehicle. > The attached spout.log uses a short 10-message test and contains > 5 sets of different structures leaked (found via the 'bytes in 10 blocks are > still reachable' lines) that are in line with much more sustained leaks when > running the application for multiple days with millions of messages. > The leaks seem to be associated with structures allocating 'std::string's to > save the "subject" and the "payload" for string-based messages using send for > amq.topic output. > Suggested workarounds are welcome, based on application-level changes to > spout/drain (if they are missing key components) or changes to the > address/setup of the queues for amq.topic messages (see the gospout.sh and > godrain.sh test drivers providing the specific address structures being used). 
> For example, the following is one of the 5 different categories of leaked data from 'spout.log', from a valgrind analysis taken after the send and session.sync() but prior to connection.close():
>
> ==3388== 3,680 bytes in 10 blocks are still reachable in loss record 233 of 234
> ==3388==    at 0x4C2A203: operator new(unsigned long) (vg_replace_malloc.c:334)
> ==3388==    by 0x4EB046C: qpid::client::Message::Message(std::string const&, std::string const&) (Message.cpp:31)
> ==3388==    by 0x51742C1: qpid::client::amqp0_10::OutgoingMessage::OutgoingMessage() (OutgoingMessage.cpp:167)
> ==3388==    by 0x5186200: qpid::client::amqp0_10::SenderImpl::sendImpl(qpid::messaging::Message const&) (SenderImpl.cpp:140)
> ==3388==    by 0x5186485: operator() (SenderImpl.h:114)
> ==3388==
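The report counts leak categories by scanning spout.log for the valgrind "still reachable" loss-record lines. A minimal sketch of that tally, assuming the record format shown in the excerpt above (the helper name is an illustration, not part of the attached tooling):

```python
import re

def count_reachable_records(log_text):
    """Find valgrind 'still reachable' loss records in a log and return
    a list of (bytes, blocks) pairs, one per record."""
    pattern = re.compile(
        r"==\d+==\s+([\d,]+) bytes in ([\d,]+) blocks are still reachable")
    records = []
    for match in pattern.finditer(log_text):
        # valgrind prints thousands separators, e.g. "3,680 bytes"
        bytes_leaked = int(match.group(1).replace(",", ""))
        blocks = int(match.group(2).replace(",", ""))
        records.append((bytes_leaked, blocks))
    return records

sample = ("==3388== 3,680 bytes in 10 blocks are still reachable "
          "in loss record 233 of 234")
print(count_reachable_records(sample))  # [(3680, 10)]
```

Run against the full spout.log, each distinct record corresponds to one of the 5 leak categories the reporter describes.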
[jira] [Commented] (DISPATCH-845) Allow connecting containers to declare their availability for link routes
[ https://issues.apache.org/jira/browse/DISPATCH-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570304#comment-16570304 ] ASF GitHub Bot commented on DISPATCH-845: - Github user ted-ross closed the pull request at: https://github.com/apache/qpid-dispatch/pull/252 > Allow connecting containers to declare their availability for link routes > - > > Key: DISPATCH-845 > URL: https://issues.apache.org/jira/browse/DISPATCH-845 > Project: Qpid Dispatch > Issue Type: New Feature > Components: Router Node >Reporter: Ted Ross >Assignee: Ted Ross >Priority: Major > Fix For: Backlog > > > In the case where a container wishes to connect to a router network and > accept incoming routed link attaches (i.e. become a destination for link > routes), it is now quite complicated to do so. First, the connected router > must be configured with a listener in the route-container role. Second, > there must be linkRoute objects configured for each prefix or pattern for the > connected container. > A more efficient mechanism for dynamic/ephemeral link routes can be supported > as follows: > * A container opening a connection to the router may provide a connection > property that contains a list of prefixes and/or patterns for link routes. > * During the lifecycle of that connection, the router maintains active > link-route addresses targeting that container. > This feature allows for lightweight establishment of link-route destinations > without the need for connection roles and configured link-routes with > independently managed lifecycles (active, inactive, etc.). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org For additional commands, e-mail: dev-h...@qpid.apache.org
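The mechanism proposed above, a container advertising link-route addresses via a connection property at open time, can be sketched as follows. The property keys and helper below are hypothetical illustrations of the idea, not names from any published Dispatch spec:

```python
def link_route_properties(prefixes, patterns=()):
    """Assemble a connection-properties map that a connecting container
    might send at open() to declare its availability as a link-route
    destination. Both property keys are hypothetical."""
    props = {}
    if prefixes:
        props["link-route-prefixes"] = list(prefixes)  # hypothetical key
    if patterns:
        props["link-route-patterns"] = list(patterns)  # hypothetical key
    return props

# A container declaring two prefixes and one wildcard pattern:
props = link_route_properties(["queue.", "exchange/"], patterns=["a.*.b.#"])
print(props)
```

The router would then keep the corresponding link-route addresses active only for the lifecycle of that connection, which is what makes the mechanism lightweight compared with configured linkRoute objects.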
[jira] [Resolved] (PROTON-1816) [c] deprecate old netaddr function names
[ https://issues.apache.org/jira/browse/PROTON-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Conway resolved PROTON-1816. - Resolution: Fixed > [c] deprecate old netaddr function names > > > Key: PROTON-1816 > URL: https://issues.apache.org/jira/browse/PROTON-1816 > Project: Qpid Proton > Issue Type: Improvement > Components: proton-c >Affects Versions: proton-j-0.22.0 >Reporter: Alan Conway >Assignee: Alan Conway >Priority: Minor > Fix For: proton-c-0.25.0 > > > See PROTON-1781 - the functions were re-named but the deprecation macros were > commented out to give people a release cycle to adjust to the new names.
[jira] [Commented] (PROTON-1816) [c] deprecate old netaddr function names
[ https://issues.apache.org/jira/browse/PROTON-1816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16570302#comment-16570302 ] ASF subversion and git services commented on PROTON-1816: - Commit 38a8ab79f9da96ff0fdd8ce12a3c7b6d827b8430 in qpid-proton's branch refs/heads/master from [~aconway] [ https://git-wip-us.apache.org/repos/asf?p=qpid-proton.git;h=38a8ab7 ] PROTON-1816: [c] deprecate old netaddr function names