[ https://issues.apache.org/jira/browse/GIRAPH-328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13459091#comment-13459091 ]

Avery Ching commented on GIRAPH-328:
------------------------------------

This is great stuff, Eli.  I'm really happy that you undertook this. =)

I also like renaming getPartitionId to getTaskId; it avoids confusion with 
Giraph partitions.

NettyWorkerClient.java:179 - Can we separate this into two lines?  I'm also 
not sure it is correct, given that USE_WORKERINFO_ADDRESS is a final int.

Last, but most important, it would be great to preserve the notion of 
partitions in the message cache and the request, i.e.

private Map<WorkerInfo, Map<Integer, Map<I, Collection<M>>>> messageCache;

It is a bit more complex, but on the request-processing side it saves us 
from having to look up the partitions when adding the messages to them.  In 
my work on GIRAPH-329, I find that this improves performance quite a bit; we 
are limited by how quickly we can add messages to that map on the receiver.
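
For concreteness, a minimal sketch of that cache shape and its insert path 
(purely illustrative: WorkerInfo is left as a type parameter so the sketch 
compiles standalone, and addMessage/removeWorkerMessages are hypothetical 
names, not Giraph's actual API):

import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

/**
 * Hypothetical sketch of a sender-side cache keyed by destination worker,
 * then partition id, then vertex id (I) mapping to its messages (M).
 */
public class PartitionedMessageCache<WorkerInfo, I, M> {
  /** worker -> partition id -> vertex id -> messages */
  private final Map<WorkerInfo, Map<Integer, Map<I, Collection<M>>>> messageCache =
      new HashMap<WorkerInfo, Map<Integer, Map<I, Collection<M>>>>();

  /** Cache one message, keeping its partition grouping for the receiver. */
  public void addMessage(WorkerInfo worker, int partitionId, I vertexId, M message) {
    Map<Integer, Map<I, Collection<M>>> partitions = messageCache.get(worker);
    if (partitions == null) {
      partitions = new HashMap<Integer, Map<I, Collection<M>>>();
      messageCache.put(worker, partitions);
    }
    Map<I, Collection<M>> vertices = partitions.get(partitionId);
    if (vertices == null) {
      vertices = new HashMap<I, Collection<M>>();
      partitions.put(partitionId, vertices);
    }
    Collection<M> messages = vertices.get(vertexId);
    if (messages == null) {
      messages = new ArrayList<M>();
      vertices.put(vertexId, messages);
    }
    messages.add(message);
  }

  /**
   * On flush, the request built for a worker already carries per-partition
   * maps, so the receiver can drop each inner map into the matching
   * partition's store without a per-vertex partition lookup.
   */
  public Map<Integer, Map<I, Collection<M>>> removeWorkerMessages(WorkerInfo worker) {
    return messageCache.remove(worker);
  }
}

The payoff is removeWorkerMessages: the partition grouping survives the wire, 
so the receiver never has to resolve a partition per destination vertex.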

Otherwise, this looks good.
                
> Outgoing messages from current superstep should be grouped at the sender by 
> owning worker, not by partition
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: GIRAPH-328
>                 URL: https://issues.apache.org/jira/browse/GIRAPH-328
>             Project: Giraph
>          Issue Type: Improvement
>          Components: bsp, graph
>    Affects Versions: 0.2.0
>            Reporter: Eli Reisman
>            Assignee: Eli Reisman
>            Priority: Minor
>             Fix For: 0.2.0
>
>         Attachments: GIRAPH-328-1.patch, GIRAPH-328-2.patch, 
> GIRAPH-328-3.patch
>
>
> Currently, outgoing messages created by the Vertex#compute() cycle on each 
> worker are stored and grouped by the partitionId on the destination worker to 
> which they belong. This results in messages being duplicated on the wire once 
> per partition on a given receiving worker that hosts destination vertices for 
> those messages.
> By grouping the outgoing, current-superstep messages by destination worker 
> instead, we can split them into partitions at insertion into a MessageStore 
> on the destination worker (see the sketch after this description). What we 
> trade in some compute time inserting on the receiver side, we gain in 
> fine-grained control over the real number of messages each worker caches 
> outbound for any given worker before flushing, and over how those flushed 
> messages are aggregated for delivery. Potentially, it allows for a great 
> reduction in duplicate messages sent in situations like 
> Vertex#sendMessageToAllEdges() -- see GIRAPH-322, GIRAPH-314. You get the 
> idea.
> This might be a poor idea, and it can certainly use some additional 
> refinement, but it passes mvn verify and may even run ;) It interoperates 
> with the disk spill code, but not as well as it could. Consider this a 
> request for comment on the idea (and the approach) rather than a finished 
> product.
> Comments/ideas/help welcome! Thanks
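
To illustrate the receiver-side split the description proposes, a rough, 
self-contained sketch follows; MessageStore, PartitionLookup, and the method 
names here are made up for illustration and are not the actual Giraph 
interfaces:

import java.util.Collection;
import java.util.Map;

/** Stand-in for a per-partition message store (hypothetical interface). */
interface MessageStore<I, M> {
  void addVertexMessages(int partitionId, I vertexId, Collection<M> messages);
}

/** Stand-in for the worker's existing vertex -> partition resolution. */
interface PartitionLookup<I> {
  int getPartitionId(I vertexId);
}

/**
 * Sketch of the receiver under the proposed grouping: a request carries
 * messages grouped only by destination worker, so the receiver splits them
 * into partitions as it inserts them into the store.
 */
public class ReceiverSplitSketch<I, M> {
  private final MessageStore<I, M> store;
  private final PartitionLookup<I> lookup;

  public ReceiverSplitSketch(MessageStore<I, M> store, PartitionLookup<I> lookup) {
    this.store = store;
    this.lookup = lookup;
  }

  /** One partition lookup per destination vertex: the compute-time trade
      the description mentions. */
  public void receive(Map<I, Collection<M>> workerBatch) {
    for (Map.Entry<I, Collection<M>> entry : workerBatch.entrySet()) {
      int partitionId = lookup.getPartitionId(entry.getKey());
      store.addVertexMessages(partitionId, entry.getKey(), entry.getValue());
    }
  }
}

The per-vertex getPartitionId call in receive() is exactly the lookup that 
the nested worker -> partition -> vertex cache suggested in the comment above 
would avoid.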

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
