[ https://issues.apache.org/jira/browse/DISPATCH-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16078264#comment-16078264 ]
ASF GitHub Bot commented on DISPATCH-767:
-----------------------------------------
Github user alanconway commented on a diff in the pull request:
https://github.com/apache/qpid-dispatch/pull/172#discussion_r126164203
--- Diff: src/message.c ---
@@ -1026,16 +1150,24 @@ qd_message_t *qd_message_receive(pn_delivery_t *delivery)
         // will only happen if the size of the message content is an exact multiple
         // of the buffer size.
         //
-
-        if (qd_buffer_size(buf) == 0) {
+        if (buf && qd_buffer_size(buf) == 0) {
+            sys_mutex_lock(msg->content->lock);
             DEQ_REMOVE_TAIL(msg->content->buffers);
+            sys_mutex_unlock(msg->content->lock);
             qd_buffer_free(buf);
         }
+        //
+        // We have received the entire message since rc == PN_EOS, set the receive_complete flag to false
--- End diff --
Comment typo: it says "false" but should say "true".
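
For reference, the tail of that hunk would read as follows once the wording fix is applied. Every identifier is taken from the diff above; the final assignment is an assumption about the line the excerpt truncates, not a quote of the actual source.

        if (buf && qd_buffer_size(buf) == 0) {
            sys_mutex_lock(msg->content->lock);
            DEQ_REMOVE_TAIL(msg->content->buffers);
            sys_mutex_unlock(msg->content->lock);
            qd_buffer_free(buf);
        }

        //
        // We have received the entire message since rc == PN_EOS, set the
        // receive_complete flag to true
        //
        msg->content->receive_complete = true;  /* assumed field name; the diff is cut off before this line */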
> Message Cut-Through/Streaming for efficient handling of large messages
> ----------------------------------------------------------------------
>
> Key: DISPATCH-767
> URL: https://issues.apache.org/jira/browse/DISPATCH-767
> Project: Qpid Dispatch
> Issue Type: Improvement
> Components: Router Node
> Reporter: Ted Ross
> Assignee: Ganesh Murthy
> Fix For: 1.0.0
>
>
> When large, multi-frame messages are sent through the router, there is no
> need to wait for the entire message to arrive before starting to send it
> onward.
> This feature causes the router to route the first frame and allow subsequent
> frames in a delivery to be streamed out in pipeline fashion. Ideally, the
> memory usage in the router should only involve pending frames. This would
> allow the router to handle an arbitrary number of concurrent, arbitrarily
> large messages.
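
As an illustration of the cut-through idea only (this is not the router's code), the small C program below relays a stream one fixed-size chunk at a time: each chunk is written onward as soon as it is read, so the memory in use stays bounded by a single chunk no matter how large the overall message is. The 4 KiB chunk size and the use of stdin/stdout are choices made for the example.

#include <stdio.h>
#include <stdlib.h>

#define CHUNK_SIZE 4096

int main(void)
{
    char   chunk[CHUNK_SIZE];
    size_t n;

    /* Forward each chunk as soon as it arrives instead of buffering the
       whole stream: memory in use never exceeds one chunk, regardless of
       how large the overall "message" is. */
    while ((n = fread(chunk, 1, sizeof chunk, stdin)) > 0) {
        if (fwrite(chunk, 1, n, stdout) != n) {
            perror("fwrite");
            return EXIT_FAILURE;
        }
    }
    if (ferror(stdin)) {
        perror("fread");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}

Running it as, e.g., ./relay < large-input > copy keeps peak memory flat; a store-and-forward design would hold the entire message before sending anything onward, which is the same contrast the router faces with large multi-frame deliveries.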