[jira] [Resolved] (QPID-8226) build up of 'unacknowledged' deliveries for browsing client

2018-08-07 Thread Gordon Sim (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Sim resolved QPID-8226.
--
Resolution: Fixed

> build up of 'unacknowledged' deliveries for browsing client
> ---
>
> Key: QPID-8226
> URL: https://issues.apache.org/jira/browse/QPID-8226
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-1.38.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
> Fix For: qpid-cpp-1.39.0
>
>
> A receiver with accept-mode=1 and acquire-mode=1 will cause the broker to 
> keep accumulating delivery records.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8226) build up of 'unacknowledged' deliveries for browsing client

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572359#comment-16572359
 ] 

ASF subversion and git services commented on QPID-8226:
---

Commit 5d0565ab605d2508d72fe46371a9c2401d514d22 in qpid-cpp's branch 
refs/heads/master from Gordon Sim
[ https://git-wip-us.apache.org/repos/asf?p=qpid-cpp.git;h=5d0565a ]

QPID-8226: mark delivery record as 'ended' if acquire mode is not-acquired and 
accept mode is none

In this case the message cannot be acquired. As soon as the delivery is 
completed at the session,
the record can be considered redundant and removed.
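For context, the case covered here is a browsing subscription, which in the qpid::messaging C++ API is requested with the address option {mode: browse} (over AMQP 0-10 this gives the not-acquired / no-accept combination described in this ticket). A minimal sketch of such a long-lived browser, assuming a broker on localhost:5672 and a queue named my-queue (both illustrative, not from this ticket):

{code:cpp}
// Hypothetical browsing client of the kind this fix targets: it never
// acquires or accepts, so before this change the broker accumulated a
// delivery record per fetched message for the life of the session.
#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Receiver.h>
#include <qpid/messaging/Message.h>
#include <qpid/messaging/Duration.h>
#include <iostream>

int main() {
    using namespace qpid::messaging;
    Connection connection("localhost:5672");   // assumed broker address
    connection.open();
    Session session = connection.createSession();
    // "mode: browse" => messages are delivered but never acquired/accepted.
    Receiver browser = session.createReceiver("my-queue; {mode: browse}");
    Message message;
    while (browser.fetch(message, Duration::SECOND * 5)) {
        std::cout << message.getContent() << std::endl;
        // no session.acknowledge(): browsed deliveries need no accept
    }
    connection.close();
    return 0;
}
{code}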


> build up of 'unacknowledged' deliveries for browsing client
> ---
>
> Key: QPID-8226
> URL: https://issues.apache.org/jira/browse/QPID-8226
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Broker
>Affects Versions: qpid-cpp-1.38.0
>Reporter: Gordon Sim
>Assignee: Gordon Sim
>Priority: Major
> Fix For: qpid-cpp-1.39.0
>
>
> A receiver with accept-mode=1 and acquire-mode=1 will cause the broker to 
> keep accumulating delivery records.






[jira] [Created] (QPID-8226) build up of 'unacknowledged' deliveries for browsing client

2018-08-07 Thread Gordon Sim (JIRA)
Gordon Sim created QPID-8226:


 Summary: build up of 'unacknowledged' deliveries for browsing 
client
 Key: QPID-8226
 URL: https://issues.apache.org/jira/browse/QPID-8226
 Project: Qpid
  Issue Type: Bug
  Components: C++ Broker
Affects Versions: qpid-cpp-1.38.0
Reporter: Gordon Sim
Assignee: Gordon Sim
 Fix For: qpid-cpp-1.39.0


A receiver with accept-mode=1 and acquire-mode=1 will cause the broker to keep 
accumulating delivery records.







[jira] [Resolved] (DISPATCH-993) Test system_tests_topology_disposition fails intermittently

2018-08-07 Thread Chuck Rolke (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chuck Rolke resolved DISPATCH-993.
--
   Resolution: Fixed
Fix Version/s: 1.3.0

> Test system_tests_topology_disposition fails intermittently
> ---
>
> Key: DISPATCH-993
> URL: https://issues.apache.org/jira/browse/DISPATCH-993
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 1.1.0
>Reporter: Chuck Rolke
>Assignee: Chuck Rolke
>Priority: Major
> Fix For: 1.3.0
>
>
> Test: DeleteSpuriousConnector
> This test might run 20 times OK but it fails (for me!) eventually on today's 
> master branch.
> For some reason instead of the messages being _received_ they are being 
> _released_. This messes up the on_message n_received count so that it never 
> reaches the magic 13 and therefore it never sends out the kill connector 
> command.
> Here is a debug trace. Note that after 'receiving' 30 messages the n_received 
> count only gets to 12.
> {quote}39: 
> ==
> 39: FAIL: test_01_delete_spurious_connector 
> (system_tests_topology_disposition.TopologyDispositionTests)
> 39: --
> 39: Traceback (most recent call last):
> 39:   File 
> "/home/chug/git/qpid-dispatch/tests/system_tests_topology_disposition.py", 
> line 379, in test_01_delete_spurious_connector
> 39: self.assertEqual ( None, test.error )
> 39: AssertionError: None != 'No confirmed kill on connector.'
> 39: 
> 39: --
> 39: Ran 2 tests in 10.250s
> 39: 
> 39: FAILED (failures=1, skipped=1)
> 39: 1525876585.928072 on_link_opened
> 39: 1525876585.928256 on_link_opened
> 39: 1525876585.928463 on_link_opened
> 39: 1525876585.928535 on_link_opened
> 39: 1525876587.925026 timeout sender
> 39: 1525876587.925467 sent: 1
> 39: 1525876587.925707 sent: 2
> 39: 1525876587.925920 sent: 3
> 39: 1525876587.930813 on_released 1
> 39: 1525876587.932044 on_released 2
> 39: 1525876587.932671 on_released 3
> 39: 1525876588.424884 timeout sender
> 39: 1525876588.425253 sent: 4
> 39: 1525876588.425495 sent: 5
> 39: 1525876588.425698 sent: 6
> 39: 1525876588.432105 on_released 4
> 39: 1525876588.433035 on_released 5
> 39: 1525876588.433282 on_released 6
> 39: 1525876588.925559 timeout sender
> 39: 1525876588.925939 sent: 7
> 39: 1525876588.926234 sent: 8
> 39: 1525876588.926466 sent: 9
> 39: 1525876588.931285 on_released 7
> 39: 1525876588.931881 on_released 8
> 39: 1525876588.932027 on_released 9
> 39: 1525876589.426222 timeout sender
> 39: 1525876589.426562 sent: 10
> 39: 1525876589.426783 sent: 11
> 39: 1525876589.426976 sent: 12
> 39: 1525876589.432599 on_released 10
> 39: 1525876589.433407 on_released 11
> 39: 1525876589.433601 on_released 12
> 39: 1525876589.927745 timeout sender
> 39: 1525876589.927996 sent: 13
> 39: 1525876589.928149 sent: 14
> 39: 1525876589.928321 sent: 15
> 39: 1525876589.932239 on_released 13
> 39: 1525876589.932775 on_released 14
> 39: 1525876589.932907 on_released 15
> 39: 1525876590.427907 timeout sender
> 39: 1525876590.428106 sent: 16
> 39: 1525876590.428294 sent: 17
> 39: 1525876590.428404 sent: 18
> 39: 1525876590.431869 on_released 16
> 39: 1525876590.432348 on_released 17
> 39: 1525876590.432463 on_released 18
> 39: 1525876590.928454 timeout sender
> 39: 1525876590.928758 sent: 19
> 39: 1525876590.928955 sent: 20
> 39: 1525876590.929130 sent: 21
> 39: 1525876590.935306 received message 18
> 39: 1525876590.935326 n_received == 1
> 39: 1525876590.936363 received message 19
> 39: 1525876590.936381 n_received == 2
> 39: 1525876590.936989 received message 20
> 39: 1525876590.937010 n_received == 3
> 39: 1525876590.938963 on_accepted 1
> 39: 1525876590.940796 on_accepted 2
> 39: 1525876590.941263 on_accepted 3
> 39: 1525876591.429399 timeout sender
> 39: 1525876591.429752 sent: 22
> 39: 1525876591.429980 sent: 23
> 39: 1525876591.430198 sent: 24
> 39: 1525876591.438830 received message 21
> 39: 1525876591.438858 n_received == 4
> 39: 1525876591.439422 received message 22
> 39: 1525876591.439432 n_received == 5
> 39: 1525876591.440099 received message 23
> 39: 1525876591.440111 n_received == 6
> 39: 1525876591.442100 on_accepted 4
> 39: 1525876591.442532 on_accepted 5
> 39: 1525876591.442632 on_accepted 6
> 39: 1525876591.930557 timeout sender
> 39: 1525876591.930793 sent: 25
> 39: 1525876591.930934 sent: 26
> 39: 1525876591.931059 sent: 27
> 39: 1525876591.936629 received message 24
> 39: 1525876591.936649 n_received == 7
> 39: 1525876591.937648 received message 25
> 39: 1525876591.937665 n_received == 8
> 39: 1525876591.938337 received message 26
> 39: 1525876591.938355 n_recei

[jira] [Commented] (QPID-8134) qpid::client::Message::send multiple memory leaks

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572187#comment-16572187
 ] 

Alan Conway commented on QPID-8134:
---

> will try the use of massif with the visualization tool, although under 
> valgrind the performance degrades too significantly to test at much load so I 
> fear that massif may contribute yet another performance drop...

Actually massif degrades performance less than memcheck (the default --tool for 
valgrind). It just maintains records of alloc/dealloc locations; it doesn't do 
all the byte-colouring work needed to catch use of uninitialized memory, double 
frees, etc.

> Sender has the problem in general, requires a large elasticity as the sender 
> must tolerate spikes of behavior that the receiver then catches up to.

"unreliable" means that there is no acknowledgement, the sender simply forgets 
each message as it is sent, there is no cache. That is a good option if you 
don't need the Qpid client to do failover and re-send for you.

"at-least-once" means the sender remembers each message it sends until it 
receives an acknowledgement, to allow re-sending in the event of fail-over. I 
suspect that's where memory is building up.
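For reference, with the qpid::messaging C++ API the link reliability is chosen in the address string. A minimal sketch contrasting the two modes (broker address, node name and payload are illustrative assumptions, not taken from this report):

{code:cpp}
// Sketch only: selecting link reliability per sender.
// "unreliable"    - fire-and-forget, nothing retained for re-send.
// "at-least-once" - messages are retained until acknowledged (the default),
//                   which is where the memory described above can build up.
#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Message.h>

int main() {
    using namespace qpid::messaging;
    Connection connection("localhost:5672");   // assumed broker address
    connection.open();
    Session session = connection.createSession();

    Sender fireAndForget =
        session.createSender("amq.topic/example; {link: {reliability: unreliable}}");
    Sender reliable =
        session.createSender("amq.topic/example; {link: {reliability: at-least-once}}");

    fireAndForget.send(Message("ping"));        // forgotten as soon as it is sent
    reliable.send(Message("ping"), true);       // sync=true waits for acceptance

    connection.close();
    return 0;
}
{code}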

Your sender capacity is very high. I suggest reducing sender capacity to deal 
with normal conditions, and having your application wait to send messages when 
the sender is at capacity. In other words, keep message data under your own 
control until you have a reasonable expectation that Qpid will be able to send 
it. That will improve performance and also reduce uncertainty during failures: 
if you push a million unacked messages into Qpid, you'll have a huge and 
probably unhelpful re-send if the client re-connects after a failure, and 
they'll be lost entirely if the client crashes. If you keep them under 
application control until it's reasonable to send, you have less message loss 
and more recovery options in a failure.
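As a rough sketch of that back-pressure idea (the capacity value, queue name and polling interval here are illustrative assumptions, not taken from the reporter's setup):

{code:cpp}
// Keep sender capacity modest and make the application wait before handing
// more messages to the client once every credit is occupied by unsettled sends.
#include <qpid/messaging/Connection.h>
#include <qpid/messaging/Session.h>
#include <qpid/messaging/Sender.h>
#include <qpid/messaging/Message.h>
#include <chrono>
#include <thread>

void sendWithBackPressure(qpid::messaging::Sender& sender,
                          const qpid::messaging::Message& message) {
    // Block the producing thread while the sender is full of unacknowledged messages.
    while (sender.getUnsettled() >= sender.getCapacity()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
    sender.send(message);
}

int main() {
    using namespace qpid::messaging;
    Connection connection("localhost:5672");
    connection.open();
    Session session = connection.createSession();
    Sender sender = session.createSender("my-queue");
    sender.setCapacity(100);                    // sized for normal load, not spikes
    for (int i = 0; i < 1000; ++i) {
        sendWithBackPressure(sender, Message("payload"));
    }
    session.sync();                             // wait for outstanding acknowledgements
    connection.close();
    return 0;
}
{code}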

 

> qpid::client::Message::send multiple memory leaks
> -
>
> Key: QPID-8134
> URL: https://issues.apache.org/jira/browse/QPID-8134
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Client
>Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0
> Environment: *CentOS* Linux release 7.4.1708 (Core)
> Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> *qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64
> python-*qpid*-1.37.0-1.el7.noarch
> *qpid*-proton-c-0.18.1-1.el7.x86_64
> python-*qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64
> *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64
> *qpid*-cpp-server-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-1.37.0-1.el7.x86_64
>  
>Reporter: dan clark
>Assignee: Alan Conway
>Priority: Blocker
>  Labels: leak, maven
> Fix For: qpid-cpp-1.39.0
>
> Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, 
> qpid-stat.out, spout.cpp, spout.log
>
>   Original Estimate: 40h
>  Remaining Estimate: 40h
>
> There may be multiple leaks of the outgoing message structure and associated 
> fields when using the qpid::client::amqp0_10::SenderImpl::send function to 
> publish messages under certain setups. I will concede that there may be 
> options that are beyond my ken to ameliorate the leak of message structures, 
> especially since there is an indication that under prolonged runs (a 
> daemonized version of an application like spout) the statistics for qpidd 
> indicate increased acquires with zero releases.
> The basic notion is illustrated with the test application spout (and drain).  
> Consider a long running daemon reducing the overhead of open/send/close by 
> keeping the message connection open for long periods of time.  Then the logic 
> would be: start application/open connection.  In a loop send data (and never 
> reach a close).  Thus the drain application illustrates the behavior and 
> demonstrates the leak using valgrind by sending the data followed by an 
> exit(0).  
> Note also the lack of 'releases' associated with the 'acquires' in the stats 
> output.
> Capturing the leaks using the test applications spout/drain required adding 
> an 'exit()' prior to the close, as during normal operations of a daemon, the 
> connection remains open for a sustained period of time; thus the leaked 
> structures within the C++ client library are found as structures still 
> tracked by the library and cleaned up on 'connection.close()', but they 
> should be cleaned up as a result of the completion of the send/receive ack or 
> the termination of the life of the message based on the TTL of the message, 
> whichever comes first.  I h

[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572136#comment-16572136
 ] 

Alan Conway commented on PROTON-1910:
-

New thought: From a bit of reading it seems the main issue with CGO is that C 
calls can't be made with Go's tiny stacks, so they have to be executed on a 
native thread, which messes up Go's normal scheduling. The electron/proton 
libraries are already designed to serialize C calls to a dedicated 
goroutine-per-connection, since the C library isn't thread safe.

It's interesting that the pn_message and marshal/unmarshal-related calls you've 
identified are the only ones that *don't* happen in that dedicated goroutine - 
each message is an independent data object. So perhaps if we make all the 
Message methods operate on native Go data, and defer the required C calls to 
the proton goroutine, that might help Go schedule them efficiently.

Don't know when I'll get time to try it but it seems the most promising 
approach.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Comment Edited] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572116#comment-16572116
 ] 

Alan Conway edited comment on PROTON-1910 at 8/7/18 6:30 PM:
-

Bah - good point. Clearly we need to fix this issue in the implementation, and 
make the API efficient under normal use.


was (Author: aconway):
Bah - good point. Clearly we need to fix this issue in the implementation, to 
the API to make it efficient under normal use.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572116#comment-16572116
 ] 

Alan Conway commented on PROTON-1910:
-

Bah - good point. Clearly we need to fix this issue in the implementation, to 
the API to make it efficient under normal use.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (QPID-8134) qpid::client::Message::send multiple memory leaks

2018-08-07 Thread dan clark (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16572050#comment-16572050
 ] 

dan clark commented on QPID-8134:
-

Hi Alan!
Thanks for your engagement with this issue, I appreciate it.  I will do
some additional work on using the smaller test programs to see if I can get
a better use case stand alone.  I will try the use of massif with the
visualization tool, although under valgrind the performance degrades too
significantly to test at much load so I fear that massif may contribute yet
another performance drop...




Agreed, closing up the connection generally does the correct thing and
frees the memory cached for the sender.

Thanks.

Generally the application grabs a message and immediately acknowledges it;
unfortunately there are bursts of activity and as a result the queues can
get fairly large for brief periods of time.  Thus to have message guarantees
it was necessary to have a fairly large capacity. I'll try tuning down.  Is
it possible we could consider decoupling the caching policy from the
message count?  The problem is several-fold:
a) it might be nice to have no caching - preferred in my case
b) the message sizes might vary by a large amount (< 100 bytes to > 10k
bytes), so using count would create a very unusually sized cache
c) there may be a large number of daemons providing different services, so
having a large memory cache mostly unused except during peak time is not a
judicious use of memory
d) malloc tends to do a much better job of managing memory than most
internal library methods, thus avoiding memory fragmentation and other
issues.

Agreed, although the capacity is quite large.

Yes, default use of the 0-10 protocol.  I did not see a simple path to move
over to the 1.0 protocol and maintain the application library mechanism for
creating and configuring the topic queues and send parameters.

sender set capacity 100
receiver set capacity 1000

Sender has the problem in general, requires a large elasticity as the
sender must tolerate spikes of behavior that the receiver then catches up
to.

Generally the application is receiving and immediately acknowledging,
but during activity spikes delays may be a few seconds, thus the build-up of
the queue.


May not be able to provide this as there is too much unrelated activity.




Likewise, I am clearly hoping to tune DOWN the sending capacity because it
is directly tied to a cache I would prefer to disable.

Is there a potential of this cache also being used when:

link: {name: send-link, durable: false, reliability: unreliable, timeout: 1}

as well as

link: {name: send-link, durable: false, reliability: at-least-once, timeout: 1}





-- 
Dan Clark   503-915-3646


> qpid::client::Message::send multiple memory leaks
> -
>
> Key: QPID-8134
> URL: https://issues.apache.org/jira/browse/QPID-8134
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Client
>Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0
> Environment: *CentOS* Linux release 7.4.1708 (Core)
> Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> *qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64
> python-*qpid*-1.37.0-1.el7.noarch
> *qpid*-proton-c-0.18.1-1.el7.x86_64
> python-*qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64
> *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64
> *qpid*-cpp-server-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-1.37.0-1.el7.x86_64
>  
>Reporter: dan clark
>Assignee: Alan Conway
>Priority: Blocker
>  Labels: leak, maven
> Fix For: qpid-cpp-1.39.0
>
> Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, 
> qpid-stat.out, spout.cpp, spout.log
>
>   Original Estimate: 40h
>  Remaining Estimate: 40h
>
> There may be multiple leaks of the outgoing message structure and associated 
> fields when using the qpid::client::amqp0_10::SenderImpl::send function to 
> publish messages under certain setups. I will concede that there may be 
> options that are beyond my ken to ameliorate the leak of message structures, 
> especially since there is an indication that under prolonged runs (a 
> daemonized version of an application like spout) the statistics for qpidd 
> indicate increased acquires with zero releases.
> The basic notion is illustrated with the test application spout (and drain).  
> Consider a long running daemon reducing the overhead of open/send/close by 
> keeping the message connection open for long periods of time.  Then the logic 
> would be: start application/open connection.  In a loop send data (and never 
> reach a close).  Thus the drain application illustrates the behavior and 

[jira] [Commented] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Robbie Gemmell (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571887#comment-16571887
 ] 

Robbie Gemmell commented on PROTON-1911:


Independent of any improvements that can be made there, I'd first note that if 
you used a Data section (which you should probably be doing if setting the 
content type, since it "SHOULD NOT be set" with other sections), or sent a 
Binary, the encoder wouldn't have to waste time UTF-8 encoding a String at all. 
The mention of proton-j 0.22.0 in addition to 0.28.0 suggests you are using an 
older version; there are a lot of perf improvements in newer proton-j you 
should look into if you are still on 0.22.0. Knowing that you are using 
vertx-proton, I'd note there are perf improvements in newer versions of that 
also, though some won't arrive until 3.6.0 goes out.

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png, strings_encode_after.json, strings_encode_before.json
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Aaron Smith (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571870#comment-16571870
 ] 

Aaron Smith commented on PROTON-1910:
-

;)  I am using SendAsync() which I don't think allows message reuse.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Jens Reimann (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571866#comment-16571866
 ] 

Jens Reimann commented on PROTON-1911:
--

Added two attachments with performance data from {{performance-jmh}} of before 
and after.

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png, strings_encode_after.json, strings_encode_before.json
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[jira] [Updated] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Jens Reimann (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Reimann updated PROTON-1911:
-
Attachment: strings_encode_before.json
strings_encode_after.json

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png, strings_encode_after.json, strings_encode_before.json
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571860#comment-16571860
 ] 

Alan Conway commented on PROTON-1910:
-

>  I looked into re-using a message with electron.  I had gotten some 
> suggestions from the "web" ;).  but I wasn't able to figure it out.

You can for Sender but not for Receiver (sigh). I guess I need to look into 
automatic caching/re-use after all.
{code:java}
m := NewMessage()

m.SetBlahBlah(blahblah);
m.Marshal("one")
s.SendSync(m)

m.Clear()
m.Marshal("two") 
s.SendSync(m)


{code}

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Aaron Smith (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571846#comment-16571846
 ] 

Aaron Smith commented on PROTON-1910:
-

Hi,

 I looked into re-using a message with electron.  I had gotten some suggestions 
from the "web" ;).  but I wasn't able to figure it out.

I would be glad to help.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571843#comment-16571843
 ] 

ASF GitHub Bot commented on PROTON-1911:


GitHub user ctron opened a pull request:

https://github.com/apache/qpid-proton-j/pull/14

PROTON-1911: Improve performance of EncoderImpl#writeRaw(String)

This change uses a small thread-local buffer to gather data locally
before sending it in bigger chunks to the buffer. This keeps the memory
consumption low and doesn't put more stress on the garbage collector,
but improves the performance of the method by a factor of 4.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ctron/qpid-proton-j feture/fix_string_perf_1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-proton-j/pull/14.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14


commit bdb202b4acc447ed5385d9712e6ba949cc84e34c
Author: Jens Reimann 
Date:   2018-08-07T15:29:02Z

PROTON-1911: Improve performance of EncoderImpl#writeRaw(String)

This change uses a small thread-local buffer to gather data locally
before sending it in bigger chunks to the buffer. This keeps the memory
consumption low and doesn't put more stress on the garbage collector,
but improves the performance of the method by a factor of 4.
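The underlying pattern is generic: encode into a small scratch buffer and flush it to the destination in chunks, rather than issuing one put() per output byte. A rough sketch of that idea (written in C++ purely for illustration; this is not the proton-j code, the Output interface and chunk size are invented, and the UTF-8 handling is simplified to single-byte input characters):

{code:cpp}
// Illustration of chunked flushing: bytes are staged in a small local buffer
// and handed to the destination in batches instead of one call per byte.
#include <cstdint>
#include <cstddef>
#include <string>

struct Output {                                  // stand-in for the destination buffer
    virtual void put(const uint8_t* data, size_t len) = 0;
    virtual ~Output() = default;
};

void writeRawChunked(Output& out, const std::string& s) {
    uint8_t chunk[512];                          // small scratch buffer
    size_t used = 0;
    for (unsigned char c : s) {
        if (used + 2 > sizeof(chunk)) {          // flush before the chunk can overflow
            out.put(chunk, used);
            used = 0;
        }
        if (c < 0x80) {
            chunk[used++] = c;                   // 1-byte UTF-8 sequence
        } else {
            chunk[used++] = 0xC0 | (c >> 6);     // 2-byte sequence for U+0080..U+00FF
            chunk[used++] = 0x80 | (c & 0x3F);
        }
    }
    if (used > 0) {
        out.put(chunk, used);                    // flush the final partial chunk
    }
}
{code}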




> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[GitHub] qpid-proton-j pull request #14: PROTON-1911: Improve performance of EncoderI...

2018-08-07 Thread ctron
GitHub user ctron opened a pull request:

https://github.com/apache/qpid-proton-j/pull/14

PROTON-1911: Improve performance of EncoderImpl#writeRaw(String)

This change uses a small thread-local buffer to gather data locally
before sending it in bigger chunks to the buffer. This keeps the memory
consumption low and doesn't put more stress on the garbage collector,
but improves the performance of the method by a factor of 4.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ctron/qpid-proton-j feture/fix_string_perf_1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-proton-j/pull/14.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14


commit bdb202b4acc447ed5385d9712e6ba949cc84e34c
Author: Jens Reimann 
Date:   2018-08-07T15:29:02Z

PROTON-1911: Improve performance of EncoderImpl#writeRaw(String)

This change uses a small thread-local buffer to gather data locally
before sending it in bigger chunks to the buffer. This keeps the memory
consumption low and doesn't put more stress on the garbage collector,
but improves the performance of the method by a factor of 4.




---




[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571841#comment-16571841
 ] 

Alan Conway commented on PROTON-1910:
-

First tip - creating/freeing messages is relatively expensive in C, if you can 
create a single Message and re-use it that will help quite a bit. That's an 
issue that needs fixing in the C library. The Go library could perhaps have a 
caching scheme to keep some pn_message objects around but I'd rather avoid that.

The Go library could be made more efficient by caching message data in a Go 
data structure and making a single C call to read/write all the data in the 
pn_message (which could be created on demand). That would collapse the 
pn_message, put_binary, and encode into a single C call.

Sadly I wasn't aware of the CGO overheads when I designed all this, so it'll 
take some work to improve the situation. Currently the Go proton package is a 
direct function-by-function wrapper around the C library and electron uses 
that. It might be time to throw away the proton package and rework electron to 
use CGO more efficiently.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  On sends, the 
> issue seems to be related to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument?  I.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback wasn't triggered 
> until N messages were received (with timeout).






[jira] [Commented] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Jens Reimann (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571836#comment-16571836
 ] 

Jens Reimann commented on PROTON-1911:
--

Reproducer:
{code:java}
package foo.bar;

import org.apache.qpid.proton.amqp.messaging.AmqpValue;
import org.apache.qpid.proton.message.Message;

public class Application {

    private static final String DATA =
            "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
            + "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
            + "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
            + "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789"
            + "0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789";

    public static void main(final String[] args) throws Exception {

    final Message m = Message.Factory.create();

    m.setSubject("FOO");
    m.setAddress("send-to");
    m.setContentType("foo/bar");
    m.setBody(new AmqpValue(DATA));

    final byte[] buffer = new byte[64 * 1024];

    for (int i = 0; i < 1024; i++) {
    m.encode(buffer, 0, buffer.length);
    }
    }
}

{code}

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[jira] [Updated] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Jens Reimann (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Reimann updated PROTON-1911:
-
Attachment: qpid_encode_4.png
qpid_encode_3.png
qpid_encode_2.png
qpid_encode_1.png

> Performance issue in EncoderImpl#writeRaw(String)
> -
>
> Key: PROTON-1911
> URL: https://issues.apache.org/jira/browse/PROTON-1911
> Project: Qpid Proton
>  Issue Type: Improvement
>  Components: proton-j
>Affects Versions: proton-j-0.22.0, proton-j-0.28.0
>Reporter: Jens Reimann
>Priority: Major
>  Labels: pull-request-available
> Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
> qpid_encode_4.png
>
>
> While digging into performance issues in the Eclipse Hono project I noticed a 
> high consumption of CPU time when encoding AMQP messages using proton-j.
> I made a small reproducer and threw the same profiler at it; here are the 
> results:
> As you can see in the attached screenshots (the first is the initial run with 
> the current code), most of the time is consumed in 
> EncoderImpl#writeRaw(String). This is due to the fact that it calls "put" for 
> every byte it wants to encode.
> The following screenshots are from a patched version which uses a small 
> thread-local buffer to locally encode the raw data and then flush it to the 
> buffer in bigger chunks.
> Screenshots 3 and 4 show the improved performance, and also show that the 
> memory consumption stays low.
>  






[jira] [Created] (PROTON-1911) Performance issue in EncoderImpl#writeRaw(String)

2018-08-07 Thread Jens Reimann (JIRA)
Jens Reimann created PROTON-1911:


 Summary: Performance issue in EncoderImpl#writeRaw(String)
 Key: PROTON-1911
 URL: https://issues.apache.org/jira/browse/PROTON-1911
 Project: Qpid Proton
  Issue Type: Improvement
  Components: proton-j
Affects Versions: proton-j-0.28.0, proton-j-0.22.0
Reporter: Jens Reimann
 Attachments: qpid_encode_1.png, qpid_encode_2.png, qpid_encode_3.png, 
qpid_encode_4.png

While digging into performance issues in the Eclipse Hono project I noticed a 
high consumption of CPU time when encoding AMQP messages using proton-j.

I made a small reproducer and threw the same profiler at it; here are the 
results:

As you can see in the attached screenshots (the first is the initial run with 
the current code), most of the time is consumed in EncoderImpl#writeRaw(String). 
This is due to the fact that it calls "put" for every byte it wants to encode.

The following screenshots are from a patched version which uses a small 
thread-local buffer to locally encode the raw data and then flush it to the 
buffer in bigger chunks.

Screenshots 3 and 4 show the improved performance, and also show that the memory 
consumption stays low.

 






[jira] [Resolved] (DISPATCH-1097) Fix Coverity issue on master branch

2018-08-07 Thread Ganesh Murthy (JIRA)


 [ 
https://issues.apache.org/jira/browse/DISPATCH-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ganesh Murthy resolved DISPATCH-1097.
-
Resolution: Fixed

> Fix Coverity issue on master branch
> ---
>
> Key: DISPATCH-1097
> URL: https://issues.apache.org/jira/browse/DISPATCH-1097
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 1.2.0
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.3.0
>
>
> {noformat}
> 5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity 
> Scan.
> 1 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
> recent build analyzed by Coverity Scan.
> New defect(s) Reported-by: Coverity Scan
> Showing 5 of 5 defect(s)
> ** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 239 term = target;
> 240     
> 241 key = (const char*) 
> qd_hash_key_by_handle(al->addr->hash_handle);
> 242 if (key || al->external_addr) {
> 243 if (al->external_addr) {
> 244 qdr_terminus_set_address(term, al->external_addr);
> >>> CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> >>> Dereferencing null pointer "key".
> 245 al->internal_addr = &key[2];
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> ** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "term" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "source" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> ** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 
> *** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 972 pn_conn = conn;
> 973 assert(pn_conn == conn);
> 974     
> 975 if (!qd_conn)
> 976 qd_conn = !!pn_conn ? (qd_connection_t*) 
> pn_connection_get_context(pn_conn) : 0;
> 977     
> >>> CID 308511:    (USE_AFTER_FREE)
> >>> Calling "handle" frees pointer "qd_conn" which has already

[jira] [Commented] (DISPATCH-1097) Fix Coverity issue on master branch

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571828#comment-16571828
 ] 

ASF subversion and git services commented on DISPATCH-1097:
---

Commit e545dd9cc3c3bc967e04752b6c28414987a86e63 in qpid-dispatch's branch 
refs/heads/master from [~ganeshmurthy]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-dispatch.git;h=e545dd9 ]

DISPATCH-1097 - Added code to fix issues reported by Coverity. This closes #354.


> Fix Coverity issue on master branch
> ---
>
> Key: DISPATCH-1097
> URL: https://issues.apache.org/jira/browse/DISPATCH-1097
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 1.2.0
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.3.0
>
>
> {noformat}
> 5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity 
> Scan.
> 1 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
> recent build analyzed by Coverity Scan.
> New defect(s) Reported-by: Coverity Scan
> Showing 5 of 5 defect(s)
> ** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 239 term = target;
> 240     
> 241 key = (const char*) 
> qd_hash_key_by_handle(al->addr->hash_handle);
> 242 if (key || al->external_addr) {
> 243 if (al->external_addr) {
> 244 qdr_terminus_set_address(term, al->external_addr);
> >>> CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> >>> Dereferencing null pointer "key".
> 245 al->internal_addr = &key[2];
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> ** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "term" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "source" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> ** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 
> *** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 972 

[jira] [Commented] (DISPATCH-1097) Fix Coverity issue on master branch

2018-08-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/DISPATCH-1097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571829#comment-16571829
 ] 

ASF GitHub Bot commented on DISPATCH-1097:
--

Github user asfgit closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/354


> Fix Coverity issue on master branch
> ---
>
> Key: DISPATCH-1097
> URL: https://issues.apache.org/jira/browse/DISPATCH-1097
> Project: Qpid Dispatch
>  Issue Type: Bug
>  Components: Container
>Affects Versions: 1.2.0
>Reporter: Ganesh Murthy
>Assignee: Ganesh Murthy
>Priority: Major
> Fix For: 1.3.0
>
>
> {noformat}
> 5 new defect(s) introduced to Apache Qpid dispatch-router found with Coverity 
> Scan.
> 1 defect(s), reported by Coverity Scan earlier, were marked fixed in the 
> recent build analyzed by Coverity Scan.
> New defect(s) Reported-by: Coverity Scan
> Showing 5 of 5 defect(s)
> ** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 245 in 
> qdr_auto_link_activate_CT()
> 239 term = target;
> 240     
> 241 key = (const char*) 
> qd_hash_key_by_handle(al->addr->hash_handle);
> 242 if (key || al->external_addr) {
> 243 if (al->external_addr) {
> 244 qdr_terminus_set_address(term, al->external_addr);
> >>> CID 308513:  Null pointer dereferences  (FORWARD_NULL)
> >>> Dereferencing null pointer "key".
> 245 al->internal_addr = &key[2];
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> ** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 
> *** CID 308512:    (RESOURCE_LEAK)
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "term" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> /home/kgiusti/work/qpid-dispatch/src/router_core/route_control.c: 252 in 
> qdr_auto_link_activate_CT()
> 246 } else
> 247 qdr_terminus_set_address(term, &key[2]); // truncate 
> the "Mp" annotation (where p = phase)
> 248 al->link = qdr_create_link_CT(core, conn, 
> QD_LINK_ENDPOINT, al->dir, source, target);
> 249 al->link->auto_link = al;
> 250 al->state = QDR_AUTO_LINK_STATE_ATTACHING;
> 251 }
> >>> CID 308512:    (RESOURCE_LEAK)
> >>> Variable "source" going out of scope leaks the storage it points to.
> 252 }
> 253 }
> 254     
> 255     
> 256 static void qdr_auto_link_deactivate_CT(qdr_core_t *core, 
> qdr_auto_link_t *al, qdr_connection_t *conn)
> 257 {
> ** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 
> *** CID 308511:    (USE_AFTER_FREE)
> /home/kgiusti/work/qpid-dispatch/src/server.c: 978 in thread_run()
> 972 pn_conn = conn;
> 973 assert(pn_conn == conn);
> 974     
> 975 if (!qd_conn)
> 976 qd_conn = !!pn_conn ? (qd_connection_t*) 
> pn_connection_get_conte

[GitHub] qpid-dispatch pull request #354: DISPATCH-1097 - Added code to fix issues re...

2018-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/qpid-dispatch/pull/354


---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1764) Slow performance seen when running Go client

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571807#comment-16571807
 ] 

Alan Conway commented on PROTON-1764:
-

On reading around, CGO is well known to have very high per-call overhead - the 
Go binding was not written with that in mind; it is a very thin wrapper around 
the C library. There are two short-term paths to improvement:
 * re-write minor C functionality in Go where possible.
 * re-write critical Go code in C, esp. code that calls C in a loop.

Both of these may be challenging.

Longer term I would like to rewrite the entire codec in Go; using the C codec 
is not really providing any benefits, as pn_data_t is pretty much as hard to 
use as parsing AMQP bytes directly.

There may be low-hanging fruit in Message, see PROTON-1910
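As a generic illustration of the per-call cost discussed above (this is not 
the proton Go binding API, just a minimal cgo sketch with hypothetical C 
helpers), crossing the Go/C boundary once per item versus once per batch 
looks like this:

{noformat}
// Illustrative only; the C helpers are made up for the example.
package main

/*
static int add_one(int x) { return x + 1; }
static void add_one_batch(int *xs, int n) {
    for (int i = 0; i < n; i++) xs[i] += 1;
}
*/
import "C"

import "fmt"

func main() {
    const n = 100000
    xs := make([]C.int, n)

    // One cgo crossing per element: pays the per-call overhead n times.
    for i := range xs {
        xs[i] = C.add_one(xs[i])
    }

    // One cgo crossing for the whole slice: same work, one overhead.
    C.add_one_batch(&xs[0], C.int(len(xs)))

    fmt.Println(xs[0]) // 2
}
{noformat}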

 

> Slow performance seen when running Go client
> 
>
> Key: PROTON-1764
> URL: https://issues.apache.org/jira/browse/PROTON-1764
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.18.1
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>  Labels: perf
>
> Slower than expected message delivery rate seen while running a simple 
> benchmarking test.  Setup:
>   Client(Go) Sender -> QPID Router -> Client Receiver(Go)
> Profiling reveals that a large percentage of time is spent in the wrapper 
> call from Go to C.
> Not sure if the call to C from Go is the issue.
>  
> Here are the pointers for the sender/receiver (both use the qpid go-binding).
> Sender (with the '-limit' option it sends AMQP messages for 10 seconds, for 
> benchmarking):
>  
> [https://github.com/redhat-nfvpe/service-assurance-poc/tree/master/tools/sa-bench]
> Receiver (based on electron sample):
>  [http://kagaribi.s1061123.net/receive.go]
> Here are the results:
> {noformat}
> [root@nfvha-comp-01 sa-bench]# ./sa-bench -mode limit 
> amqp://127.0.0.1:5672/collectd/telemetry
> sending AMQP in 10 seconds...Done!
> Total: 171908 sent (duration:10.000103521s, mesg/sec: 17190.62204096157)
> [root@nfvha-comp-01 electron_sample]# ./receive -prefetch 12 
> amqp://127.0.0.1:5672/collectd/telemetry
> Listening on 1 connections
> ^C2018/02/15 01:44:51 Total: 171908 received.
> 2018/02/15 01:44:51 captured interrupt, stopping profiler and exiting...
> {noformat}
> Both program can collect profile data using '-pprofile' option as following:
> {noformat}
> [root@nfvha-comp-01 sa-bench]# ./sa-bench -mode limit -pprofile profile.out 
> amqp://127.0.0.1:5672/collectd/telemetry
> sending AMQP in 10 seconds...Done!
> Total: 189305 sent (duration:10.000111611s, mesg/sec: 18930.2887171546)
> [root@nfvha-comp-01 sa-bench]# go tool pprof sa-bench profile.out 
> File: sa-bench
> Build ID: 7ffec7b98a532892d7b9932b70b7451866cd4e5e
> Type: cpu
> Time: Feb 15, 2018 at 1:49am (EST)
> Duration: 10.11s, Total samples = 15.75s (155.79%)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top5
> Showing nodes accounting for 9990ms, 63.43% of 15750ms total
> Dropped 200 nodes (cum <= 78.75ms)
> Showing top 5 nodes out of 144
>   flat  flat%   sum%        cum   cum%
> 6750ms 42.86% 42.86% 7080ms 44.95%  runtime.cgocall 
> /usr/local/go/src/runtime/cgocall.go
> 1570ms  9.97% 52.83% 1590ms 10.10%  syscall.Syscall 
> /usr/local/go/src/syscall/asm_linux_amd64.s
>  800ms  5.08% 57.90%  800ms  5.08%  runtime.futex 
> /usr/local/go/src/runtime/sys_linux_amd64.s
>  440ms  2.79% 60.70%  770ms  4.89%  runtime.runqgrab 
> /usr/local/go/src/runtime/proc.go
>  430ms  2.73% 63.43% 1070ms  6.79%  runtime.selectgo 
> /usr/local/go/src/runtime/select.go
> [root@nfvha-comp-01 electron_sample]# ./receive -prefetch 12 -pprofile 
> profile.out amqp://127.0.0.1:5672/collectd/telemetry
> Listening on 1 connections
> ^C2018/02/15 01:49:25 Total: 181422 received.
> 2018/02/15 01:49:25 captured interrupt, stopping profiler and exiting...
> [root@nfvha-comp-01 electron_sample]# go tool pprof receive profile.out 
> File: receive
> Build ID: 66addd89d429ca678cbd6e336872bc604406f83e
> Type: cpu
> Time: Feb 15, 2018 at 1:49am (EST)
> Duration: 14.78s, Total samples = 16.60s (112.31%)
> Entering interactive mode (type "help" for commands, "o" for options)
> (pprof) top 5
> Showing nodes accounting for 10620ms, 63.98% of 16600ms total
> Dropped 160 nodes (cum <= 83ms)
> Showing top 5 nodes out of 124
>   flat  flat%   sum%        cum   cum%
> 5730ms 34.52% 34.52% 5960ms 35.90%  runtime.cgocall 
> /usr/local/go/src/runtime/cgocall.go
> 2190ms 13.19% 47.71% 2220ms 13.37%  syscall.Syscall 
> /usr/local/go/src/syscall/asm_linux_amd64.s
> 1070ms  6.45% 54.16% 1070ms  6.45%  runtime.epollwait 
> /usr/local/go/src/runtime/sys_linux_amd64.s
> 

[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Aaron Smith (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571806#comment-16571806
 ] 

Aaron Smith commented on PROTON-1910:
-

For the send side, the top three are _Cfunc_pn_message, 
_Cfunc_pn_message_encode, _Cfunc_pn_message_free.

This is when the test client is sending 10k messages per second with a buffer size 
of 50 on the receive side.

----------------------------------------------------------+-------------
      flat  flat%   sum%        cum   cum%   calls calls% + context
----------------------------------------------------------+-------------
                                             0.58s 27.36% |   qpid.apache.org/amqp._Cfunc_pn_message
                                             0.58s 27.36% |   qpid.apache.org/amqp._Cfunc_pn_message_encode
                                             0.33s 15.57% |   qpid.apache.org/amqp._Cfunc_pn_message_free
                                             0.11s  5.19% |   qpid.apache.org/proton.makeEvent
                                             0.07s  3.30% |   qpid.apache.org/electron.(*sender).SendAsyncTimeout.func1
                                             0.04s  1.89% |   qpid.apache.org/amqp._Cfunc_pn_data_put_binary
                                             0.03s  1.42% |   qpid.apache.org/proton.makeEvent.func4
                                             0.02s  0.94% |   qpid.apache.org/proton._Cfunc_pn_collector_peek
                                             0.01s  0.47% |   qpid.apache.org/amqp.clearMarshal
     1.95s 35.39% 35.39%      2.12s 38.48%                | runtime.cgocall
                                             0.14s  6.60% |   runtime.exitsyscall
                                             0.03s  1.42% |   runtime.entersyscall
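
For reference, a profile like the one above can be captured with the standard 
runtime/pprof package; presumably the clients' '-pprofile' option wraps 
something similar (a sketch, not the actual client code):

{noformat}
package main

import (
    "log"
    "os"
    "runtime/pprof"
)

func main() {
    f, err := os.Create("profile.out")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    // Sample the CPU for the duration of the run; inspect afterwards with
    // `go tool pprof <binary> profile.out`.
    if err := pprof.StartCPUProfile(f); err != nil {
        log.Fatal(err)
    }
    defer pprof.StopCPUProfile()

    // ... send or receive messages here ...
}
{noformat}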

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  The issue 
> seems to be related, on sends, to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument, i.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback isn't triggered 
> until N messages have been received (within a timeout).
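
A sketch of the "at most N, with a timeout" semantics suggested above, 
expressed over a plain Go channel; no such call exists in the electron 
binding today, so the function below is purely hypothetical:

{noformat}
package main

import (
    "fmt"
    "time"
)

// receiveUpTo returns at most max items, or whatever arrived before the timeout.
func receiveUpTo(in <-chan string, max int, timeout time.Duration) []string {
    batch := make([]string, 0, max)
    deadline := time.After(timeout)
    for len(batch) < max {
        select {
        case m, ok := <-in:
            if !ok {
                return batch // channel closed
            }
            batch = append(batch, m)
        case <-deadline:
            return batch // timed out: return what we have
        }
    }
    return batch
}

func main() {
    in := make(chan string, 16)
    for i := 0; i < 5; i++ {
        in <- fmt.Sprintf("msg-%d", i)
    }
    fmt.Println(receiveUpTo(in, 10, 50*time.Millisecond)) // [msg-0 ... msg-4]
}
{noformat}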



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (PROTON-1910) Profiling indicates that cgo becomes a bottleneck during scale testing of electron

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/PROTON-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571778#comment-16571778
 ] 

Alan Conway commented on PROTON-1910:
-

Quick replies, slow action. I really want to give this some attention but it's 
at the back of my priority queue right now :( 

Anything you can figure out is much appreciated.

> Profiling indicates that cgo becomes a bottleneck during scale testing of 
> electron
> --
>
> Key: PROTON-1910
> URL: https://issues.apache.org/jira/browse/PROTON-1910
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: go-binding
>Affects Versions: proton-c-0.24.0
>Reporter: Aaron Smith
>Assignee: Alan Conway
>Priority: Major
>
> While performing scale testing, detailed profiling of Go test clients showed 
> that >95% of the execution time can be devoted to the cgo call.  The issue 
> seems to be related, on sends, to the NewMessage() call.  For receives, the 
> bottleneck is both NewMessage() and the call to actually receive the message. 
>  
>  
> This behavior is not unexpected as CGO is a well-known bottleneck.  Would it 
> be possible to have a NewMessage() call that returns multiple messages and a 
> recv call that takes an "at most" argument, i.e. recv(10) would receive 10 or 
> fewer messages that might be waiting in the queue.  Also, it would be nice to 
> be able to trade latency for throughput so that the callback isn't triggered 
> until N messages have been received (within a timeout).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPIDJMS-408) update to proton-j 0.28.1

2018-08-07 Thread Robbie Gemmell (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell resolved QPIDJMS-408.

Resolution: Fixed

> update to proton-j 0.28.1
> -
>
> Key: QPIDJMS-408
> URL: https://issues.apache.org/jira/browse/QPIDJMS-408
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 0.36.0
>
>
> Update to proton-j 0.28.1, get the latest fixes such as PROTON-1901



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPIDJMS-408) update to proton-j 0.28.1

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPIDJMS-408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571753#comment-16571753
 ] 

ASF subversion and git services commented on QPIDJMS-408:
-

Commit e2e0cee97a0a14a6d65bc0b1d2d7fa312b55273c in qpid-jms's branch 
refs/heads/master from [~gemmellr]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-jms.git;h=e2e0cee ]

QPIDJMS-408: update to proton-j-0.28.1


> update to proton-j 0.28.1
> -
>
> Key: QPIDJMS-408
> URL: https://issues.apache.org/jira/browse/QPIDJMS-408
> Project: Qpid JMS
>  Issue Type: Task
>  Components: qpid-jms-client
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: 0.36.0
>
>
> Update to proton-j 0.28.1, get the latest fixes such as PROTON-1901



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Assigned] (QPIDJMS-407) Reconnect not working reliable for connections with more than 1 producer JMS session

2018-08-07 Thread Timothy Bish (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPIDJMS-407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish reassigned QPIDJMS-407:


Assignee: Timothy Bish

> Reconnect not working reliable for connections with more than 1 producer JMS 
> session
> 
>
> Key: QPIDJMS-407
> URL: https://issues.apache.org/jira/browse/QPIDJMS-407
> Project: Qpid JMS
>  Issue Type: Bug
>  Components: qpid-jms-client
>Affects Versions: 0.35.0
>Reporter: Johan Stenberg
>Assignee: Timothy Bish
>Priority: Critical
> Attachments: QPIDJMS-407.zip
>
>
> When a JMS connection with more than one producer session loses the 
> underlying TCP connection to the broker, auto reconnect (failover) does not 
> work properly. After the reconnect attempt no new messages will be sent.
> When only one producer session is used, reconnect apparently works as 
> expected.
> I attached a maven project with a test case where the TCP connection is 
> dropped by the broker to provoke the reconnect attempt. In most cases when I 
> run the test class, *testAutoReconnectWith2ProducerSessions()* stops 
> sending messages after the first reconnect attempt. Perhaps some qpid-internal 
> race condition occurs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Created] (QPIDJMS-408) update to proton-j 0.28.1

2018-08-07 Thread Robbie Gemmell (JIRA)
Robbie Gemmell created QPIDJMS-408:
--

 Summary: update to proton-j 0.28.1
 Key: QPIDJMS-408
 URL: https://issues.apache.org/jira/browse/QPIDJMS-408
 Project: Qpid JMS
  Issue Type: Task
  Components: qpid-jms-client
Reporter: Robbie Gemmell
Assignee: Robbie Gemmell
 Fix For: 0.36.0


Update to proton-j 0.28.1, get the latest fixes such as PROTON-1901



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8134) qpid::client::Message::send multiple memory leaks

2018-08-07 Thread Alan Conway (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571743#comment-16571743
 ] 

Alan Conway commented on QPID-8134:
---

Hi Dan,

I'll keep trying to reproduce based on all the info you've given me; if you 
come up with anything to make it easier, that will be much appreciated.

> the 'options.vexit' path must be used for analysis.

Yes, gospout.sh sets that option and I do see "still reachable" blocks like you 
do, but they're not growing when I send more messages. If I run without --vexit 
no memory is leaked. I'm convinced I'm missing something from your scenario, 
just not sure what yet.

I recommend 'valgrind --tool=massif' as well. It tracks memory use while the 
process is running, not just leaks on exit. You can get reports dumped on exit 
or periodically from a long-running process. Install the separate package 
"massif-visualizer" to view the results, the built-in text reports are hard to 
read.

> Is it possible to turn OFF the policy which attempts to out-wit the underling 
> malloc implementation by caching unused memory in the QPID C++ library?

Sender/Receiver data structures can grow up to the message-count limit 
established by setCapacity(), so you can reduce memory use by lowering the 
capacity. Theoretically the max useful buffer size is based on the 
latency-bandwidth product for the total send-ack round-trip - i.e. the number 
of messages you can send before the first one gets an ack. In practice it's 
usually best to experiment and measure the latency/throughput/memory trade-offs.
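
As a back-of-envelope illustration of that latency-bandwidth product (the rate 
and round-trip time below are made-up example numbers, not measurements from 
this report):

{noformat}
package main

import "fmt"

func main() {
    const msgsPerSec = 17000.0 // assumed sustained send rate
    const rttSec = 0.003       // assumed send-to-ack round trip

    // Messages in flight before the first ack returns; a capacity much larger
    // than this mostly buys memory use rather than throughput.
    fmt.Printf("useful capacity ~ %.0f messages\n", msgsPerSec*rttSec)
}
{noformat}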

However, you are seeing unbounded growth over time which is definitely a bug, 
not the expected memory caching. That's what I need to track down. Some 
questions:

I believe you still use the 0-10 protocol - please confirm. This will be a 
different code path under the 1.0 protocol.

What value do you use for setCapacity() on your Senders and Receivers, or do 
you use the default capacity (50)?

Do you know if the memory leaks you are seeing are related to Sender only, 
Receiver only, both or unsure?

How often do you acknowledge() received messages - approximately what is the 
delay between receiving and acknowledging a message? Do you use sync=true on 
acknowledge?

If you can, run your application for a short period - 100 messages or so - with 
env QPID_TRACE=1 and send me the trace; it may give me some idea of what's 
happening differently in your system.

I'll keep digging on my end.

 

> qpid::client::Message::send multiple memory leaks
> -
>
> Key: QPID-8134
> URL: https://issues.apache.org/jira/browse/QPID-8134
> Project: Qpid
>  Issue Type: Bug
>  Components: C++ Client
>Affects Versions: qpid-cpp-1.37.0, qpid-cpp-1.38.0
> Environment: *CentOS* Linux release 7.4.1708 (Core)
> Linux localhost.novalocal 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
> *qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-dispatch-debuginfo-1.0.0-1.el7.x86_64
> python-*qpid*-1.37.0-1.el7.noarch
> *qpid*-proton-c-0.18.1-1.el7.x86_64
> python-*qpid*-qmf-1.37.0-1.el7.x86_64
> *qpid*-proton-debuginfo-0.18.1-1.el7.x86_64
> *qpid*-cpp-debuginfo-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-devel-1.37.0-1.el7.x86_64
> *qpid*-cpp-server-1.37.0-1.el7.x86_64
> *qpid*-cpp-client-1.37.0-1.el7.x86_64
>  
>Reporter: dan clark
>Assignee: Alan Conway
>Priority: Blocker
>  Labels: leak, maven
> Fix For: qpid-cpp-1.39.0
>
> Attachments: drain.cpp, godrain.sh, gospout.sh, qpid-8134.tgz, 
> qpid-stat.out, spout.cpp, spout.log
>
>   Original Estimate: 40h
>  Remaining Estimate: 40h
>
> There may be multiple leaks of the outgoing message structure and associated 
> fields when using the qpid::client::amqp0_10::SenderImpl::send function to 
> publish messages under certain setups. I will concede that there may be 
> options that are beyond my ken to ameliorate the leak of message structures, 
> especially since there is an indication that under prolonged runs (a 
> daemonized version of an application like spout) the statistics for qpidd 
> indicate increased acquires with zero releases.
> The basic notion is illustrated with the test application spout (and drain).  
> Consider a long running daemon reducing the overhead of open/send/close by 
> keeping the message connection open for long periods of time.  Then the logic 
> would be: start application/open connection.  In a loop send data (and never 
> reach a close).  Thus the drain application illustrates the behavior and 
> demonstrates the leak using valgrind by sending the data followed by an 
> exit(0).  
> Note also the lack of 'releases' associated with the 'acquires' in the stats 
> output.
> Capturing the leaks using the test applications spout/d

[jira] [Commented] (QPID-7153) Allow expired messages to be sent to DLQ

2018-08-07 Thread Alex Rudyy (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571694#comment-16571694
 ] 

Alex Rudyy commented on QPID-7153:
--

Rob,
I created a [pull request 
12|https://github.com/apache/qpid-broker-j/pull/12/commits/06a6463ffb3bef730d10c16e03a8175f0e58cc78]
 based on your proposed changes and Keith's review comments.  Are you happy to 
commit the changes?

> Allow expired messages to be sent to DLQ
> 
>
> Key: QPID-7153
> URL: https://issues.apache.org/jira/browse/QPID-7153
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.1.0
>
> Attachments: 0002-QPID-7153-Adress-review-comments-v2.patch, 
> 0002-QPID-7153-Adress-review-comments.patch, QPID-7153v2.diff
>
>
> Currently the Java Broker simply discards messages that expire (TTL). The 
> behaviour should be configurable and allow for expired messages to be 
> directed to the alternate exchange to allow for dead-lettering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8214) Reduce the table names size in the JDBC configuration store to fit Oracle's 30 characters limitation

2018-08-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571693#comment-16571693
 ] 

ASF GitHub Bot commented on QPID-8214:
--

Github user overmeulen commented on the issue:

https://github.com/apache/qpid-broker-j/pull/10
  
Why wasn't this PR merged into master?


> Reduce the table names size in the JDBC configuration store to fit Oracle's 
> 30 characters limitation
> 
>
> Key: QPID-8214
> URL: https://issues.apache.org/jira/browse/QPID-8214
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.4
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Major
> Attachments: 
> 0001-QPID-8214-Broker-J-JDBC-Reduce-the-sizes-of-table-na.patch
>
>
> 'QPID_CONFIGURED_OBJECT_HIERARCHY' is already 32 characters long.
> And we should also take into account that users can specify a table name 
> prefix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] qpid-broker-j issue #10: QPID-8214: Reduce the sizes of table names in the J...

2018-08-07 Thread overmeulen
Github user overmeulen commented on the issue:

https://github.com/apache/qpid-broker-j/pull/10
  
Why wasn't this PR merged into master?


---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-7153) Allow expired messages to be sent to DLQ

2018-08-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-7153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571686#comment-16571686
 ] 

ASF GitHub Bot commented on QPID-7153:
--

GitHub user alex-rufous opened a pull request:

https://github.com/apache/qpid-broker-j/pull/12

QPID-7153: [Broker-J] Allow expired messages to be sent to DLQ



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-rufous/qpid-broker-j QPID-7153

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-broker-j/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 06a6463ffb3bef730d10c16e03a8175f0e58cc78
Author: Alex Rudyy 
Date:   2018-08-07T13:39:30Z

QPID-7153: [Broker-J] Allow expired messages to be sent to DLQ




> Allow expired messages to be sent to DLQ
> 
>
> Key: QPID-7153
> URL: https://issues.apache.org/jira/browse/QPID-7153
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Reporter: Keith Wall
>Priority: Major
> Fix For: qpid-java-broker-7.1.0
>
> Attachments: 0002-QPID-7153-Adress-review-comments-v2.patch, 
> 0002-QPID-7153-Adress-review-comments.patch, QPID-7153v2.diff
>
>
> Currently the Java Broker simply discards messages that expire (TTL). The 
> behaviour should be configurable and allow for expired messages to be 
> directed to the alternate exchange to allow for dead-lettering.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] qpid-broker-j pull request #12: QPID-7153: [Broker-J] Allow expired messages...

2018-08-07 Thread alex-rufous
GitHub user alex-rufous opened a pull request:

https://github.com/apache/qpid-broker-j/pull/12

QPID-7153: [Broker-J] Allow expired messages to be sent to DLQ



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alex-rufous/qpid-broker-j QPID-7153

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/qpid-broker-j/pull/12.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #12


commit 06a6463ffb3bef730d10c16e03a8175f0e58cc78
Author: Alex Rudyy 
Date:   2018-08-07T13:39:30Z

QPID-7153: [Broker-J] Allow expired messages to be sent to DLQ




---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8214) Reduce the table names size in the JDBC configuration store to fit Oracle's 30 characters limitation

2018-08-07 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571685#comment-16571685
 ] 

ASF GitHub Bot commented on QPID-8214:
--

Github user alex-rufous closed the pull request at:

https://github.com/apache/qpid-broker-j/pull/10


> Reduce the table names size in the JDBC configuration store to fit Oracle's 
> 30 characters limitation
> 
>
> Key: QPID-8214
> URL: https://issues.apache.org/jira/browse/QPID-8214
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.4
>Reporter: Olivier VERMEULEN
>Assignee: Alex Rudyy
>Priority: Major
> Attachments: 
> 0001-QPID-8214-Broker-J-JDBC-Reduce-the-sizes-of-table-na.patch
>
>
> 'QPID_CONFIGURED_OBJECT_HIERARCHY' is already 32 characters long.
> And we should also take into account that users can specify a table name 
> prefix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[GitHub] qpid-broker-j pull request #10: QPID-8214: Reduce the sizes of table names i...

2018-08-07 Thread alex-rufous
Github user alex-rufous closed the pull request at:

https://github.com/apache/qpid-broker-j/pull/10


---

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8224) [Broker-J][WMC] Add UI to configure exchange unroutable message behaviour for AMQP 1.0

2018-08-07 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8224.
--
Resolution: Fixed

The required changes have been implemented on master and merged into the 
7.0.x branch

> [Broker-J][WMC] Add UI to configure exchange unroutable message behaviour for 
> AMQP 1.0
> --
>
> Key: QPID-8224
> URL: https://issues.apache.org/jira/browse/QPID-8224
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3, qpid-java-broker-7.0.2, 
> qpid-java-broker-7.0.0, qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, 
> qpid-java-broker-7.0.5, qpid-java-broker-7.0.6
>Reporter: Alex Rudyy
>Assignee: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
>
> Improve exchange UI to allow configuring exchange unroutable message behaviour



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8224) [Broker-J][WMC] Add UI to configure exchange unroutable message behaviour for AMQP 1.0

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571536#comment-16571536
 ] 

ASF subversion and git services commented on QPID-8224:
---

Commit 78748379fdc5f2d50003455cea2922e8f2a0aab8 in qpid-broker-j's branch 
refs/heads/7.0.x from [~alex.rufous]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=7874837 ]

QPID-8224: [Broker-J][WMC] Add UI to configure exchange unroutable message 
behaviour for AMQP 1.0

(cherry picked from commit c034522893685c8ee773c8600b6159b3f634138c)


> [Broker-J][WMC] Add UI to configure exchange unroutable message behaviour for 
> AMQP 1.0
> --
>
> Key: QPID-8224
> URL: https://issues.apache.org/jira/browse/QPID-8224
> Project: Qpid
>  Issue Type: Improvement
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3, qpid-java-broker-7.0.2, 
> qpid-java-broker-7.0.0, qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, 
> qpid-java-broker-7.0.5, qpid-java-broker-7.0.6
>Reporter: Alex Rudyy
>Assignee: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
>
> Improve exchange UI to allow configuring exchange unroutable message behaviour



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Resolved] (QPID-8223) [Broker-J] [AMQP 1.0] Broker can stop delivering messages when sending link delivery-count value exceeds Integer.MAX_VALUE, wraps around and turns negative

2018-08-07 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy resolved QPID-8223.
--
Resolution: Fixed

Closing the JIRA as the changes have been merged into the 7.0.x branch

> [Broker-J] [AMQP 1.0] Broker can stop delivering messages when sending link 
> delivery-count value exceeds Integer.MAX_VALUE, wraps around and turns 
> negative
> ---
>
> Key: QPID-8223
> URL: https://issues.apache.org/jira/browse/QPID-8223
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3, qpid-java-broker-7.0.2, 
> qpid-java-broker-7.0.0, qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, 
> qpid-java-broker-7.0.5, qpid-java-broker-7.0.6
>Reporter: Alex Rudyy
>Assignee: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
>
> The value of {{delivery-count}}  is conceptually unbounded but it is encoded 
> as a 32-bit
> integer that wraps around and compares according to RFC-1982 serial number 
> arithmetic. Broker-J does not handle the case of "negative credit" on a link 
> correctly. For example, if the delivery count is {{-1}} and credit is {{1}}, 
> then the limit would be calculated as {{0}} and the broker would stop sending 
> messages.
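
The wrap-around can be shown with 32-bit unsigned arithmetic (an illustration 
only, not the Broker-J code):

{noformat}
package main

import "fmt"

func main() {
    deliveryCount := uint32(0xFFFFFFFF) // the "negative" (-1) delivery count
    credit := uint32(1)

    // Naive limit: wraps to 0, so a plain "limit > deliveryCount" check
    // stalls the link even though one credit is available.
    limit := deliveryCount + credit
    fmt.Println(limit) // 0

    // RFC 1982 style comparison: interpret the difference as signed, so
    // limit (0) still compares as being ahead of deliveryCount (0xFFFFFFFF).
    fmt.Println(int32(limit-deliveryCount) > 0) // true
}
{noformat}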



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8219) [Broker-J] Authentication results are cached in SimpleLdap and OAUTH2 authentication providers per connection basis

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571467#comment-16571467
 ] 

ASF subversion and git services commented on QPID-8219:
---

Commit 603be3e7c523d1e796bad0281ca503ddafd21a93 in qpid-broker-j's branch 
refs/heads/7.0.x from [~alex.rufous]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=603be3e ]

QPID-8219: [Broker-J] Cache authentication results for the same remote hosts 
and credentials

(cherry picked from commit 4e240ba1a9bcdea65002c37101fd1889e16c6955)

# Conflicts:
#   
broker-core/src/test/java/org/apache/qpid/server/security/auth/manager/AuthenticationResultCacherTest.java


> [Broker-J] Authentication results are cached in SimpleLdap and OAUTH2 
> authentication providers per connection basis
> ---
>
> Key: QPID-8219
> URL: https://issues.apache.org/jira/browse/QPID-8219
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.6, qpid-java-broker-7.0.3, 
> qpid-java-broker-7.0.2, qpid-java-6.1, qpid-java-6.1.1, qpid-java-6.1.2, 
> qpid-java-6.1.3, qpid-java-6.1.4, qpid-java-broker-7.0.0, qpid-java-6.1.5, 
> qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, qpid-java-broker-7.0.5, 
> qpid-java-broker-7.0.6
>Reporter: Alex Rudyy
>Assignee: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-6.1.7, qpid-java-broker-7.1.0, 
> qpid-java-broker-7.0.7
>
>
> SimpleLdap and OAUTH2 authentication providers were supposed to cache 
> authentication results on a per-remote-host basis. Thus, when connections are 
> made from the same host using the same credentials, the cached authentication 
> result should be reused. The current caching approach takes into 
> consideration the ephemeral port of the connection. As a result, a new 
> connection from the same host with the same credentials cannot reuse the 
> previous authentication result due to a different ephemeral port.
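
In other words, the difference is whether the ephemeral port takes part in the 
cache key (hypothetical types below, not the Broker-J implementation):

{noformat}
package main

import "fmt"

// authKey deliberately contains only the fields that repeat across
// connections; adding the ephemeral port here is what defeats the cache,
// since every new connection carries a different port.
type authKey struct {
    remoteHost string
    credDigest string
}

func main() {
    cache := map[authKey]bool{}

    first := authKey{"10.0.0.5", "sha256:abcd"}  // connection 1 (port 49152)
    second := authKey{"10.0.0.5", "sha256:abcd"} // connection 2 (port 49153)

    cache[first] = true        // authenticate once, cache the result
    fmt.Println(cache[second]) // true: same host and credentials, cache hit
}
{noformat}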



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8223) [Broker-J] [AMQP 1.0] Broker can stop delivering messages when sending link delivery-count value exceeds Integer.MAX_VALUE, wraps around and turns negative

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571466#comment-16571466
 ] 

ASF subversion and git services commented on QPID-8223:
---

Commit de4c356749aa87c5792bd966786eae0d8af90f39 in qpid-broker-j's branch 
refs/heads/7.0.x from [~alex.rufous]
[ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=de4c356 ]

QPID-8223:[Broker-J][AMQP 1.0] Fix evaluation of sending link credit

(cherry picked from commit 42bbdb6598f402fb7f2cb7ab194db27de9653e2c)


> [Broker-J] [AMQP 1.0] Broker can stop delivering messages when sending link 
> delivery-count value exceeds Integer.MAX_VALUE, wraps around and turns 
> negative
> ---
>
> Key: QPID-8223
> URL: https://issues.apache.org/jira/browse/QPID-8223
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-broker-7.0.3, qpid-java-broker-7.0.2, 
> qpid-java-broker-7.0.0, qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, 
> qpid-java-broker-7.0.5, qpid-java-broker-7.0.6
>Reporter: Alex Rudyy
>Assignee: Alex Rudyy
>Priority: Major
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
>
> The value of {{delivery-count}}  is conceptually unbounded but it is encoded 
> as a 32-bit
> integer that wraps around and compares according to RFC-1982 serial number 
> arithmetic. Broker-J does not handle the case of "negative credit" on a link 
> correctly. For example, if the delivery count is {{-1}} and credit is {{1}}, 
> then the limit would be calculated as {{0}} and the broker would stop sending 
> messages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer

2018-08-07 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy closed QPID-8225.

Resolution: Fixed

The changes implemented to fix the defect look good to me. I merged them into 
the 7.0.x branch. 

> [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB 
> data transfer
> ---
>
> Key: QPID-8225
> URL: https://issues.apache.org/jira/browse/QPID-8225
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.6, qpid-java-broker-7.0.3, 
> qpid-java-broker-7.0.2, 0.18, 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32, 
> qpid-java-6.0, qpid-java-6.0.1, qpid-java-6.0.2, qpid-java-6.0.3, 
> qpid-java-6.0.4, qpid-java-6.0.5, qpid-java-6.1, qpid-java-6.0.6, 
> qpid-java-6.1.1, qpid-java-6.1.2, qpid-java-6.0.7, qpid-java-6.1.3, 
> qpid-java-6.0.8, qpid-java-6.1.4, qpid-java-broker-7.0.0, qpid-java-6.1.5, 
> qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, qpid-java-broker-7.0.5, 
> qpid-java-broker-7.0.6
>Reporter: Michael Dyslin
>Priority: Critical
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
> Attachments: protocol.log, 
> qpid-broker-plugins-amqp-0-10-protocol-7.0.6.jar
>
>
> Created this Jira so I could send a logfile attachment to be analyzed.  
> Attachments are removed from the user discussion email lists.
>  
> Log file is for the java broker with a NameAndLevel filter of 
> org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level.
> Activity was:
>  # Start the java broker
>  # Start the consumer
>  # Start the producer
>  # 2 messages sent every 10 seconds
>  # Stopped producer after 10 messages sent
>  # stopped the consumer much later
> This log does not contain the problem where message flow stopped, probably 
> due to CREDIT flow mode exceeding 4 GB credit.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Commented] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer

2018-08-07 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16571364#comment-16571364
 ] 

ASF subversion and git services commented on QPID-8225:
---

Commit 11a0845fe098ebf1039e99a054424a522fa2598a in qpid-broker-j's branch 
refs/heads/7.0.x from Robert Godfrey
[ https://git-wip-us.apache.org/repos/asf?p=qpid-broker-j.git;h=11a0845 ]

QPID-8225 : Fix incorrect implementation of infinite credit

(cherry picked from commit cf40fdea39d9633702ee286d94e950a19ec7be74)
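
A rough sketch of the failure mode (assuming, as the issue title and the fix 
description suggest, that "infinite" credit was being tracked as a finite 
32-bit counter; this is an illustration only, not the broker code):

{noformat}
package main

import "fmt"

const infiniteCredit = 0xFFFFFFFF // sentinel meaning "do not limit"

func main() {
    // Buggy accounting: keep decrementing the counter as data flows, so it is
    // exhausted after roughly 4 GB and delivery stops.
    credit := uint32(infiniteCredit)
    const chunk = 1 << 30 // pretend 1 GiB delivered per iteration
    delivered := 0
    for credit >= chunk {
        credit -= chunk
        delivered++
    }
    // After 3 GiB only ~1 GiB of credit remains; one more GiB exhausts it,
    // matching delivery stopping at ~4 GB.
    fmt.Printf("after %d GiB the counter is down to %d\n", delivered, credit)

    // Intended behaviour: treat the sentinel as unlimited and never count it down.
    hasCredit := func(c uint32) bool { return c == infiniteCredit || c > 0 }
    fmt.Println(hasCredit(infiniteCredit)) // always true
}
{noformat}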


> [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB 
> data transfer
> ---
>
> Key: QPID-8225
> URL: https://issues.apache.org/jira/browse/QPID-8225
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.6, qpid-java-broker-7.0.3, 
> qpid-java-broker-7.0.2, 0.18, 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32, 
> qpid-java-6.0, qpid-java-6.0.1, qpid-java-6.0.2, qpid-java-6.0.3, 
> qpid-java-6.0.4, qpid-java-6.0.5, qpid-java-6.1, qpid-java-6.0.6, 
> qpid-java-6.1.1, qpid-java-6.1.2, qpid-java-6.0.7, qpid-java-6.1.3, 
> qpid-java-6.0.8, qpid-java-6.1.4, qpid-java-broker-7.0.0, qpid-java-6.1.5, 
> qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, qpid-java-broker-7.0.5, 
> qpid-java-broker-7.0.6
>Reporter: Michael Dyslin
>Priority: Critical
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
> Attachments: protocol.log, 
> qpid-broker-plugins-amqp-0-10-protocol-7.0.6.jar
>
>
> Created this Jira so I could send a logfile attachment to be analyzed.  
> Attachments are removed from the user discussion email lists.
>  
> Log file is for the java broker with a NameAndLevel filter of 
> org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level.
> Activity was:
>  # Start the java broker
>  # Start the consumer
>  # Start the producer
>  # 2 messages sent every 10 seconds
>  # Stopped producer after 10 messages sent
>  # stopped the consumer much later
> This log does not contain the problem where message flow stopped, probably 
> due to CREDIT flow mode exceeding 4 GB credit.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (PROTON-1901) arriving transfers for multiplexed multi-frame deliveries on a session are mishandled

2018-08-07 Thread Robbie Gemmell (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell closed PROTON-1901.
--

> arriving transfers for multiplexed multi-frame deliveries on a session are 
> mishandled
> -
>
> Key: PROTON-1901
> URL: https://issues.apache.org/jira/browse/PROTON-1901
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.27.2, proton-j-0.28.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Critical
> Fix For: proton-j-0.27.3, proton-j-0.28.1
>
>
> If a session has a delivery arriving split into multiple transfer frames, and 
> those frames are multiplexed with transfer[s] for other deliveries on the 
> session (i.e. there are multiple links on the session), then the transfer 
> frames are mishandled.
> The handling in the transport session is essentially expecting a single 
> outstanding delivery at a time, whereas the spec allows (and proton-j will in 
> certain situations send) multiplexed deliveries for different links on a 
> session. The mishandling can result in a transfer being incorrectly treated 
> as the start of another delivery (potentially leaving earlier bits/ones 
> incomplete) and/or merged with the wrong preceding transfer content.
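
The per-link bookkeeping the spec requires can be sketched roughly as follows 
(illustrative Go, not the proton-j code): interleaved transfers accumulate per 
link handle instead of into a single "current" delivery:

{noformat}
package main

import "fmt"

type transfer struct {
    handle  uint32 // link the frame belongs to
    payload string
    more    bool // true when further frames for this delivery will follow
}

func main() {
    // Frames for two deliveries on different links, interleaved on one session.
    frames := []transfer{
        {handle: 0, payload: "A1", more: true},
        {handle: 1, payload: "B1", more: true},
        {handle: 0, payload: "A2", more: false},
        {handle: 1, payload: "B2", more: false},
    }

    inProgress := map[uint32]string{} // accumulate per link handle
    for _, f := range frames {
        inProgress[f.handle] += f.payload
        if !f.more {
            fmt.Printf("handle %d delivery complete: %s\n", f.handle, inProgress[f.handle])
            delete(inProgress, f.handle)
        }
    }
}
{noformat}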



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (PROTON-1902) arriving aborted transfers are mishandled and can't be observed

2018-08-07 Thread Robbie Gemmell (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell closed PROTON-1902.
--

> arriving aborted transfers are mishandled and can't be observed
> ---
>
> Key: PROTON-1902
> URL: https://issues.apache.org/jira/browse/PROTON-1902
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.27.2, proton-j-0.28.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: proton-j-0.27.3, proton-j-0.28.1
>
>
> When an arriving delivery is aborted by its final transfer frame, the 
> transport session mishandles it. The 'aborted' flag is not considered 
> properly and fails to override the flags for 'more' and 'settled' (aborted 
> deliveries are implicitly settled), the transfer frame's payload, if any, is 
> not discarded as required, and the delivery count and credit aren't updated, 
> so any future flow frames sent will fail to account for it properly. No 
> attempt is made to track that the abort happened, so there is also no way for 
> the application code to detect that the delivery was aborted, and it will 
> remain 'partial' indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (PROTON-1906) duplicating an empty composite buffer can lead to NPE

2018-08-07 Thread Robbie Gemmell (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell closed PROTON-1906.
--

> duplicating an empty composite buffer can lead to NPE
> -
>
> Key: PROTON-1906
> URL: https://issues.apache.org/jira/browse/PROTON-1906
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.27.0, proton-j-0.27.1, proton-j-0.27.2, 
> proton-j-0.28.0
>Reporter: Robbie Gemmell
>Assignee: Robbie Gemmell
>Priority: Major
> Fix For: proton-j-0.27.3, proton-j-0.28.1
>
>
> Duplicating an empty composite buffer can lead to NPE (but may not, depending 
> on how it came to exist and be in that state).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Updated] (QPID-8225) [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB data transfer

2018-08-07 Thread Alex Rudyy (JIRA)


 [ 
https://issues.apache.org/jira/browse/QPID-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Rudyy updated QPID-8225:
-
Affects Version/s: qpid-java-6.1.6
   qpid-java-broker-7.0.3
   qpid-java-broker-7.0.2
   0.18
   0.20
   0.22
   0.24
   0.26
   0.28
   0.30
   0.32
   qpid-java-6.0
   qpid-java-6.0.1
   qpid-java-6.0.2
   qpid-java-6.0.3
   qpid-java-6.0.4
   qpid-java-6.0.5
   qpid-java-6.1
   qpid-java-6.0.6
   qpid-java-6.1.1
   qpid-java-6.1.2
   qpid-java-6.0.7
   qpid-java-6.1.3
   qpid-java-6.0.8
   qpid-java-6.1.4
   qpid-java-broker-7.0.0
   qpid-java-6.1.5
   qpid-java-broker-7.0.1
   qpid-java-broker-7.0.4
   qpid-java-broker-7.0.5
Fix Version/s: qpid-java-broker-7.0.7
   qpid-java-broker-7.1.0

> [Broker-J][AMQP 0-10] stops delivering queue/consumer messages after 4 GB 
> data transfer
> ---
>
> Key: QPID-8225
> URL: https://issues.apache.org/jira/browse/QPID-8225
> Project: Qpid
>  Issue Type: Bug
>  Components: Broker-J
>Affects Versions: qpid-java-6.1.6, qpid-java-broker-7.0.3, 
> qpid-java-broker-7.0.2, 0.18, 0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32, 
> qpid-java-6.0, qpid-java-6.0.1, qpid-java-6.0.2, qpid-java-6.0.3, 
> qpid-java-6.0.4, qpid-java-6.0.5, qpid-java-6.1, qpid-java-6.0.6, 
> qpid-java-6.1.1, qpid-java-6.1.2, qpid-java-6.0.7, qpid-java-6.1.3, 
> qpid-java-6.0.8, qpid-java-6.1.4, qpid-java-broker-7.0.0, qpid-java-6.1.5, 
> qpid-java-broker-7.0.1, qpid-java-broker-7.0.4, qpid-java-broker-7.0.5, 
> qpid-java-broker-7.0.6
>Reporter: Michael Dyslin
>Priority: Critical
> Fix For: qpid-java-broker-7.1.0, qpid-java-broker-7.0.7
>
> Attachments: protocol.log, 
> qpid-broker-plugins-amqp-0-10-protocol-7.0.6.jar
>
>
> Created this Jira so I could send a logfile attachment to be analyzed.  
> Attachments are removed from the user discussion email lists.
>  
> Log file is for the java broker with a NameAndLevel filter of 
> org.apache.qpid.server.protocol.v0_10.ServerConnection at the DEBUG level.
> Activity was:
>  # Start the java broker
>  # Start the consumer
>  # Start the producer
>  # 2 messages sent every 10 seconds
>  # Stopped producer after 10 messages sent
>  # stopped the consumer much later
> This log does not contain the problem where message flow stopped, probably 
> due to CREDIT flow mode exceeding 4 GB credit.
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@qpid.apache.org
For additional commands, e-mail: dev-h...@qpid.apache.org



[jira] [Closed] (PROTON-1905) CompositeReadableBuffer.duplicate seen to throw a NPE during frame logging

2018-08-07 Thread Robbie Gemmell (JIRA)


 [ 
https://issues.apache.org/jira/browse/PROTON-1905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robbie Gemmell closed PROTON-1905.
--
Resolution: Duplicate

I forgot this was here and created another Jira for the fix, PROTON-1906, so 
closing this out.

> CompositeReadableBuffer.duplicate seen to throw a NPE during frame logging
> --
>
> Key: PROTON-1905
> URL: https://issues.apache.org/jira/browse/PROTON-1905
> Project: Qpid Proton
>  Issue Type: Bug
>  Components: proton-j
>Affects Versions: proton-j-0.27.0, proton-j-0.28.0
>Reporter: Keith Wall
>Priority: Major
>
> Whilst verifying the 0.27.3 RC1 candidate, the 
> {{proton_tests.engine.CreditTest.testDrainOrder}} Jython test caused an NPE 
> within Proton-J's application code. I realised later my shell had the 
> {{PN_TRACE_FRM=true}} environment variable set. Unsetting the variable allows 
> the test to pass.  The regression was introduced at 0.27.0.
> {noformat}
> proton_tests.engine.CreditTest.testDrainOrder 
> ...[1346009488:0] -> Open{ containerId='', 
> hostname='null', maxFrameSize=16384, channelMax=65535, idleTimeOut=null, 
> outgoingLocales=null, incomingLocales=null, offeredCapabilities=null, 
> desiredCapabilities=null, properties=null}
> [1346009488:0] -> Begin{remoteChannel=null, nextOutgoingId=1, 
> incomingWindow=2147483647, outgoingWindow=2147483647, handleMax=65535, 
> offeredCapabilities=null, desiredCapabilities=null, properties=null}
> [1346009488:0] <- Open{ containerId='', hostname='null', maxFrameSize=16384, 
> channelMax=65535, idleTimeOut=null, outgoingLocales=null, 
> incomingLocales=null, offeredCapabilities=null, desiredCapabilities=null, 
> properties=null}
> [1346009488:0] <- Begin{remoteChannel=0, nextOutgoingId=1, 
> incomingWindow=2147483647, outgoingWindow=2147483647, handleMax=65535, 
> offeredCapabilities=null, desiredCapabilities=null, properties=null}
> [1346009488:0] -> Attach{name='test-link', handle=0, role=SENDER, 
> sndSettleMode=MIXED, rcvSettleMode=FIRST, source=Source{address='null', 
> durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, 
> dynamicNodeProperties=null, distributionMode=null, filter=null, 
> defaultOutcome=null, outcomes=null, capabilities=null}, 
> target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END, 
> timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null}, 
> unsettled=null, incompleteUnsettled=false, initialDeliveryCount=0, 
> maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null, 
> properties=null}
> [1346009488:0] <- Attach{name='test-link', handle=0, role=RECEIVER, 
> sndSettleMode=MIXED, rcvSettleMode=FIRST, source=Source{address='null', 
> durable=NONE, expiryPolicy=SESSION_END, timeout=0, dynamic=false, 
> dynamicNodeProperties=null, distributionMode=null, filter=null, 
> defaultOutcome=null, outcomes=null, capabilities=null}, 
> target=Target{address='null', durable=NONE, expiryPolicy=SESSION_END, 
> timeout=0, dynamic=false, dynamicNodeProperties=null, capabilities=null}, 
> unsettled=null, incompleteUnsettled=false, initialDeliveryCount=null, 
> maxMessageSize=null, offeredCapabilities=null, desiredCapabilities=null, 
> properties=null}
> [1346009488:0] <- Flow{nextIncomingId=1, incomingWindow=2147483647, 
> nextOutgoingId=1, outgoingWindow=2147483647, handle=0, deliveryCount=0, 
> linkCredit=10, available=null, drain=false, echo=false, properties=null}
> [1346009488:0] -> Transfer{handle=0, deliveryId=0, deliveryTag=tagA, 
> messageFormat=0, settled=null, more=true, rcvSettleMode=null, state=null, 
> resume=false, aborted=false, batchable=false} (1) "A"
>  fail
> Error during test:  Traceback (most recent call last):
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/python/proton-test",
>  line 362, in run
>   phase()
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/python/proton_tests/engine.py",
>  line 1556, in testDrainOrder
>   self.pump()
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/python/proton_tests/engine.py",
>  line 112, in pump
>   pump(t1, t2, buffer_size)
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/python/proton_tests/common.py",
>  line 113, in pump
>   while (pump_uni(transport1, transport2, buffer_size) or
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/python/proton_tests/common.py",
>  line 86, in pump_uni
>   p = src.pending()
> File 
> "/Users/keith/releases/0273/apache-qpid-proton-j-0.27.3-src/tests/java/shim/binding/proton/__init__.py",
>  line 2764, in pending
>   p = pn_transport_pending(self._impl)
> File 
> "/Users/keith/releases/0