[
https://issues.apache.org/jira/browse/TINKERPOP-1519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16053760#comment-16053760
]
ASF GitHub Bot commented on TINKERPOP-1519:
-------------------------------------------
GitHub user sheldonkhall opened a pull request:
https://github.com/apache/tinkerpop/pull/632
TINKERPOP-1519: tinker graph computer does not handle multiple scopes
https://issues.apache.org/jira/browse/TINKERPOP-1519
This change modifies the TinkerGraphComputer so that each message sent in
a vertex program remembers its scope. Previously, when the receiveMessages
method on TinkerMessenger was called, it looped through ALL message scopes
and then ALL messages, which is incorrect. Now the method loops over each
scope, and then over each message within that scope.
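The loop change is easiest to see in isolation. Below is a minimal,
self-contained sketch using plain Java collections, not TinkerPop's actual
TinkerMessenger internals (the Scope enum and method names here are
hypothetical), contrasting the old cross-product delivery with the
per-scope delivery described above:

```java
import java.util.ArrayList;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class ScopedMessengerSketch {

    // Hypothetical stand-in for MessageScope; not TinkerPop's actual type.
    enum Scope { IN, OUT }

    // Messages are stored per scope instead of in one flat collection.
    private final Map<Scope, List<Long>> inbox = new EnumMap<>(Scope.class);

    void sendMessage(Scope scope, long message) {
        inbox.computeIfAbsent(scope, s -> new ArrayList<>()).add(message);
    }

    // Old (buggy) behaviour: iterate ALL scopes and, for each, deliver ALL
    // messages, so every message is duplicated across every scope.
    List<Long> receiveMessagesBuggy() {
        List<Long> all = new ArrayList<>();
        inbox.values().forEach(all::addAll);
        List<Long> received = new ArrayList<>();
        for (Scope scope : inbox.keySet()) {
            received.addAll(all);
        }
        return received;
    }

    // Fixed behaviour: iterate each scope and deliver only that scope's
    // own messages, so each message is delivered exactly once.
    List<Long> receiveMessagesFixed() {
        List<Long> received = new ArrayList<>();
        for (Map.Entry<Scope, List<Long>> entry : inbox.entrySet()) {
            received.addAll(entry.getValue());
        }
        return received;
    }

    public static void main(String[] args) {
        ScopedMessengerSketch messenger = new ScopedMessengerSketch();
        messenger.sendMessage(Scope.IN, 2L);
        messenger.sendMessage(Scope.OUT, 1L);
        // Buggy delivery: 2 scopes x 2 messages = 4 deliveries.
        System.out.println(messenger.receiveMessagesBuggy().size()); // 4
        // Fixed delivery: one delivery per message.
        System.out.println(messenger.receiveMessagesFixed().size()); // 2
    }
}
```

With one message on each of two scopes, the buggy loop delivers four
messages (every message on every scope) while the fixed loop delivers two,
which is exactly the {a=3, b=0, c=3} vs. {a=2, b=0, c=1} discrepancy
reported in the ticket.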
I have added the regression test suggested in the JIRA ticket and run
{{mvn clean install}} locally to confirm that everything passes.
I had a quick look at the latest master, and this bugfix may need to be
merged there too.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sheldonkhall/tinkerpop tp31
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/tinkerpop/pull/632.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #632
----
commit 28c514da9065683ed90ea6aabc66ffdcbab99c11
Author: Sheldon <[email protected]>
Date: 2017-06-15T17:00:32Z
bugfix
commit 99679a037db3e86112471a40cd454114231852b6
Author: Sheldon <[email protected]>
Date: 2017-06-19T09:35:16Z
add the regression test
----
> TinkerGraphComputer doesn't handle multiple MessageScopes in single iteration
> -----------------------------------------------------------------------------
>
> Key: TINKERPOP-1519
> URL: https://issues.apache.org/jira/browse/TINKERPOP-1519
> Project: TinkerPop
> Issue Type: Bug
> Components: tinkergraph
> Affects Versions: 3.1.1-incubating
> Environment: Mac OSX
> Reporter: Felix Chapman
> Priority: Minor
>
> When executing a VertexProgram that sends messages on multiple MessageScopes
> in a single iteration, the messages behave as if they were sent on all
> scopes within that iteration.
> e.g. if you send message {{A}} on {{out}} edges, and message {{B}} on {{in}}
> edges, then {{A}} and {{B}} will instead be sent over both {{in}} and {{out}}
> edges.
> The problem can be resolved by using only a single MessageScope per
> iteration, but this involves increasing the number of iterations.
> An example of this behaviour is below:
> {code:java}
> public class TinkerTest {
>     public static void main(String[] args) throws ExecutionException, InterruptedException {
>         TinkerGraph graph = TinkerGraph.open();
>         Vertex a = graph.addVertex("a");
>         Vertex b = graph.addVertex("b");
>         Vertex c = graph.addVertex("c");
>         a.addEdge("edge", b);
>         b.addEdge("edge", c);
>         // Simple graph:
>         // a -> b -> c
>         // Execute a traversal program that sends an incoming message of "2"
>         // and an outgoing message of "1" from "b",
>         // then each vertex sums any received messages
>         ComputerResult result = graph.compute().program(new MyVertexProgram()).submit().get();
>         // We expect the results to be {a=2, b=0, c=1}. Instead it is {a=3, b=0, c=3}
>         System.out.println(result.graph().traversal().V().group().by(Element::label).by("count").next());
>     }
> }
>
> class MyVertexProgram implements VertexProgram<Long> {
>     private final MessageScope.Local<Long> countMessageScopeIn = MessageScope.Local.of(__::inE);
>     private final MessageScope.Local<Long> countMessageScopeOut = MessageScope.Local.of(__::outE);
>     private static final String MEMORY_KEY = "count";
>     private static final Set<String> COMPUTE_KEYS = Collections.singleton(MEMORY_KEY);
>
>     @Override
>     public void setup(final Memory memory) {}
>
>     @Override
>     public GraphComputer.Persist getPreferredPersist() {
>         return GraphComputer.Persist.VERTEX_PROPERTIES;
>     }
>
>     @Override
>     public Set<String> getElementComputeKeys() {
>         return COMPUTE_KEYS;
>     }
>
>     @Override
>     public Set<MessageScope> getMessageScopes(final Memory memory) {
>         return Sets.newHashSet(countMessageScopeIn, countMessageScopeOut);
>     }
>
>     @Override
>     public void execute(Vertex vertex, Messenger<Long> messenger, Memory memory) {
>         switch (memory.getIteration()) {
>             case 0:
>                 if (vertex.label().equals("b")) {
>                     messenger.sendMessage(this.countMessageScopeIn, 2L);
>                     messenger.sendMessage(this.countMessageScopeOut, 1L);
>                 }
>                 break;
>             case 1:
>                 long edgeCount = IteratorUtils.reduce(messenger.receiveMessages(), 0L, (a, b) -> a + b);
>                 vertex.property(MEMORY_KEY, edgeCount);
>                 break;
>         }
>     }
>
>     @Override
>     public boolean terminate(final Memory memory) {
>         return memory.getIteration() == 1;
>     }
>
>     @Override
>     public GraphComputer.ResultGraph getPreferredResultGraph() {
>         return GraphComputer.ResultGraph.NEW;
>     }
>
>     @Override
>     public MyVertexProgram clone() {
>         try {
>             return (MyVertexProgram) super.clone();
>         } catch (final CloneNotSupportedException e) {
>             throw new RuntimeException(e);
>         }
>     }
> }
> {code}
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)