Hi

I am working on two different use cases where the basic problem is the same but 
the scale is very different.

In case 1 we have two entity types with many-to-many relations between them. We 
want to identify all connected subgraphs in the full graph and then prune each 
subgraph to find the best relation. There are close to 1 billion edges and a 
few hundred million entities.

In case 2 there are more entity types, all of which can have many-to-many 
relations, and the scale is much larger: close to 50 billion entities and many 
more edges. Again, we want to find the connected subgraphs and then prune them 
to find the best edges.
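To make the two-phase task concrete, here is a toy sketch in pure Python: find the connected components (the "subgraphs"), then keep the highest-weighted edge in each. This is only an illustration of the basic problem; at the scales above you would need a distributed equivalent, e.g. GraphFrames' connectedComponents() followed by a per-component aggregation on the edge DataFrame. The edge weights and the "best = max weight" rule are assumptions for the example.

```python
def find(parent, x):
    # Path-compressing find for union-find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def best_edges_per_component(edges):
    """edges: list of (u, v, weight) tuples.
    Returns {component_root: best_edge} where "best" means max weight."""
    parent = {}
    for u, v, _ in edges:
        parent.setdefault(u, u)
        parent.setdefault(v, v)
    # Union the endpoints of every edge to build components.
    for u, v, _ in edges:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:
            parent[ru] = rv
    # Prune: keep only the highest-weighted edge per component.
    best = {}
    for u, v, w in edges:
        root = find(parent, u)
        if root not in best or w > best[root][2]:
            best[root] = (u, v, w)
    return best

edges = [("a", "b", 1.0), ("b", "c", 3.0), ("x", "y", 2.0)]
print(best_edges_per_component(edges))
# Two components: {a, b, c} keeps ("b", "c", 3.0); {x, y} keeps ("x", "y", 2.0)
```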

Is GraphFrames a good choice for this use case, or should we use Spark just for 
processing alongside a separate graph database like Neo4j?

Thanks for any help!

Ankur

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
