[
https://issues.apache.org/jira/browse/TINKERPOP3-991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15025345#comment-15025345
]
Ashish Nagavaram edited comment on TINKERPOP3-991 at 11/24/15 8:44 PM:
-----------------------------------------------------------------------
Summary from mail thread with [~okram]:
Hi Ashish,
OLTP is serial and OLAP is parallel. What would be smart is if we had an
interface like "ThreadSafeStep" which meant that a particular step was thread
safe. Then for a chain of "thread safe" steps, we could thread that section.
For things like ReducingBarrierSteps, etc. (not thread safe), it would go back
to single threaded. Finally, we could then have a
ParallelTraversalOptimizationStrategy that would introspect and set up the
appropriate threading infrastructure at compile time. Perhaps you could create a
ticket detailing your use case and requirements and we can work towards getting
that into 3.2.0.
https://issues.apache.org/jira/browse/TINKERPOP3
Thanks,
Marko.
http://markorodriguez.com
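A minimal, self-contained sketch of the kind of marker interface and compile-time grouping Marko describes above. The ThreadSafeStep and ParallelTraversalOptimizationStrategy names come from the thread; the simplified Step interface and the grouping logic are hypothetical stand-ins rather than the actual TinkerPop 3 step/strategy API.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, simplified stand-in for a traversal step (not the real TinkerPop Step API).
interface Step {
    String name();
}

// Marker interface from the thread: a step that may safely be executed by multiple threads.
interface ThreadSafeStep extends Step {
}

// Hypothetical compile-time strategy: walk the step list and group maximal runs of
// thread-safe steps so only those sections get a threaded execution pipeline.
final class ParallelTraversalOptimizationStrategy {

    // Each sublist is either a run of ThreadSafeStep (candidate for threading)
    // or a run of ordinary steps that must stay single-threaded.
    static List<List<Step>> groupByThreadSafety(final List<Step> steps) {
        final List<List<Step>> sections = new ArrayList<>();
        List<Step> current = new ArrayList<>();
        Boolean currentSafe = null;
        for (final Step step : steps) {
            final boolean safe = step instanceof ThreadSafeStep;
            if (currentSafe == null || safe == currentSafe) {
                current.add(step);
            } else {
                sections.add(current);
                current = new ArrayList<>();
                current.add(step);
            }
            currentSafe = safe;
        }
        if (!current.isEmpty())
            sections.add(current);
        return sections;
    }
}
{code}

A ReducingBarrierStep-like step would simply not implement the marker interface, forcing the section that contains it back to single-threaded execution.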
On Nov 23, 2015, at 4:41 PM, Ashish Nagavaram <[email protected]>
wrote:
> Hi,
> The default traversal code traverses the graph sequentially after
> constructing the traversal plan. Is there a parallel version available? I
> understand that this will have an impact on the throughput of the machine, but
> for latency-critical applications trading throughput for latency might be
> acceptable. We could perhaps initialize the traversal with a ParallelTraversal
> (similar to the default traversal API).
> Currently for my use case, I create a future task for each fork and then run
> an intersection across the results, which might not always work.
> [Proposal] Capability to traverse OLTP in parallel
> --------------------------------------------------
>
> Key: TINKERPOP3-991
> URL: https://issues.apache.org/jira/browse/TINKERPOP3-991
> Project: TinkerPop 3
> Issue Type: New Feature
> Components: process
> Reporter: Ashish Nagavaram
> Priority: Minor
>
> The current traverser is serial in nature, while some of the steps could be
> parallelized for latency-critical cases.
> E.g.:
> Consider a movies graph where every actor has an edge to each movie he has
> acted in.
> Query:
> movies by X and Y
> The current way to do this would be to get the movies by X and, for each movie,
> check whether it has an edge to Y, which is slow. I bypassed this by getting the
> movies by X and the movies by Y in parallel and then doing an intersection. But
> I would have to do this manually for every new pattern.
> Please close this ticket if you think this proposal does not make sense.
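As a concrete illustration of the workaround described above (not part of TinkerPop itself), here is a sketch that runs the two subtraversals on separate threads and intersects the results. The GraphTraversalSource usage follows the TinkerPop 3 Java API, but the "name" property, the "actedIn" edge label, and whether a given graph provider tolerates concurrent reads from one traversal source are all assumptions.

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CompletableFuture;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public final class ParallelIntersection {

    // Fetch the movies of two actors concurrently and intersect the results.
    // The "name" property and "actedIn" edge label are assumed schema details.
    static Set<Vertex> moviesByBoth(final GraphTraversalSource g,
                                    final String actorX,
                                    final String actorY) throws Exception {
        final CompletableFuture<Set<Vertex>> byX = CompletableFuture.supplyAsync(
                () -> g.V().has("name", actorX).out("actedIn").toSet());
        final CompletableFuture<Set<Vertex>> byY = CompletableFuture.supplyAsync(
                () -> g.V().has("name", actorY).out("actedIn").toSet());

        // Intersect once both forks complete.
        final Set<Vertex> result = new HashSet<>(byX.get());
        result.retainAll(byY.get());
        return result;
    }
}
{code}

This mirrors the future-per-fork approach from the mail thread, but it has to be hand-written for every new query pattern, which is what the proposed strategy would automate.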