[
https://issues.apache.org/jira/browse/DISPATCH-767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100156#comment-16100156
]
ASF GitHub Bot commented on DISPATCH-767:
-----------------------------------------
Github user ganeshmurthy commented on a diff in the pull request:
https://github.com/apache/qpid-dispatch/pull/177#discussion_r129332913
--- Diff: src/buffer.c ---
@@ -83,7 +85,26 @@ size_t qd_buffer_size(qd_buffer_t *buf)
void qd_buffer_insert(qd_buffer_t *buf, size_t len)
{
buf->size += len;
- assert(buf->size <= buffer_size);
+ assert(buf->size <= BUFFER_SIZE);
+}
+
+void qd_buffer_add_fanout(qd_buffer_t *buf)
+{
+ buf->fanout++;
--- End diff --
Currently, the only function calling qd_buffer_add_fanout() is
qdr_forward_new_delivery_CT (in forwarder.c), which is only called from the
core thread, so there should be no contention. But I am happy to change the
fanout counter to an atomic so that any caller outside the core thread will
not be affected.
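The atomic variant discussed above could look like the following sketch. This uses C11 `<stdatomic.h>` directly; the actual codebase would more likely use its own `sys_atomic` wrappers, and the `qd_buffer_t` shown here is a hypothetical reduction to just the fields relevant to this discussion.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical, reduced form of qd_buffer_t: only the fields relevant
   to the fanout discussion are shown. */
typedef struct {
    size_t      size;
    atomic_uint fanout;   /* C11 atomic instead of a plain counter */
} qd_buffer_t;

/* Atomic version of qd_buffer_add_fanout(): safe to call from any
   thread, not just the core thread. Relaxed ordering suffices for a
   pure reference count increment. */
void qd_buffer_add_fanout(qd_buffer_t *buf)
{
    atomic_fetch_add_explicit(&buf->fanout, 1, memory_order_relaxed);
}

/* Read the current fanout value. */
unsigned qd_buffer_fanout(qd_buffer_t *buf)
{
    return atomic_load_explicit(&buf->fanout, memory_order_relaxed);
}
```

With this change, a non-core thread incrementing the fanout concurrently with the core thread cannot lose an update, which is the concern raised in the review comment.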
> Message Cut-Through/Streaming for efficient handling of large messages
> ----------------------------------------------------------------------
>
> Key: DISPATCH-767
> URL: https://issues.apache.org/jira/browse/DISPATCH-767
> Project: Qpid Dispatch
> Issue Type: Improvement
> Components: Router Node
> Reporter: Ted Ross
> Assignee: Ganesh Murthy
> Fix For: 1.0.0
>
>
> When large, multi-frame messages are sent through the router, there is no
> need to wait for the entire message to arrive before starting to send it
> onward.
> This feature causes the router to route the first frame and allow subsequent
> frames in a delivery to be streamed out in pipeline fashion. Ideally, the
> memory usage in the router should only involve pending frames. This would
> allow the router to handle arbitrary numbers of concurrent arbitrarily large
> messages.
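The cut-through behavior described above can be sketched as a small bookkeeping model: frames are forwarded as they arrive rather than after the whole message is buffered, so memory is bounded by in-flight frames, not by message size. All names here (`stream_state_t`, `frame_arrived`, `frame_sent`) are hypothetical and only illustrate the pipelining idea, not the router's actual data structures.

```c
/* Hypothetical model of cut-through streaming: track frames that have
   arrived but not yet been sent onward. The high-water mark shows that
   memory use depends on pending frames, not total message length. */
typedef struct {
    int pending;      /* frames received but not yet streamed onward  */
    int max_pending;  /* high-water mark of simultaneously held frames */
    int forwarded;    /* frames already sent to the next hop           */
} stream_state_t;

/* A frame of a multi-frame delivery arrives on the inbound link. */
void frame_arrived(stream_state_t *s)
{
    s->pending++;
    if (s->pending > s->max_pending)
        s->max_pending = s->pending;
}

/* The outbound link has credit; stream one pending frame onward and
   release its buffer. */
void frame_sent(stream_state_t *s)
{
    if (s->pending > 0) {
        s->pending--;
        s->forwarded++;
    }
}
```

If the outbound link keeps pace with the inbound one, the high-water mark stays small no matter how many frames the delivery contains, which is the property the issue asks for.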
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]