yncxcw wrote
> hi,
> 
> It depends heavily on the algorithms you are going to apply to your data
> sets. Graph applications are usually memory hungry and can easily cause
> long GC pauses or even OOM.
> 
> Suggestions:
>   1. Persist the most heavily reused RDDs as StorageLevel.MEMORY_ONLY
>      and leave the rest at MEMORY_AND_DISK.
>   2. Slightly decrease the parallelism for each executor.
> 
> 
> Wei Chen


Thanks for the response. I have an implementation of k-core decomposition
running using the Pregel framework.

I will try constructing the graph with StorageLevel.MEMORY_AND_DISK and
will post the outcome here.
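
Roughly what I plan to try, sketched below (not my exact code; the path,
the partition count, and the degrees line are placeholders):

    // Sketch only; assumes spark-shell, where `sc` (the SparkContext) exists.
    import org.apache.spark.graphx.GraphLoader
    import org.apache.spark.storage.StorageLevel

    // Build the graph with both edge and vertex partitions allowed to
    // spill to disk instead of keeping everything on the heap.
    val graph = GraphLoader.edgeListFile(
      sc,
      "hdfs:///path/to/edge_list.txt",   // placeholder path
      numEdgePartitions = 400,           // placeholder; tune for the cluster
      edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
      vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)

    // Wei's first suggestion would then apply to derived data that the
    // Pregel loop reuses on every iteration, e.g. (purely illustrative):
    val degrees = graph.degrees.persist(StorageLevel.MEMORY_ONLY)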

The GC overhead error is happening even before the algorithm starts its
Pregel iterations; it is failing in the GraphLoader.edgeListFile stage.

Aritra


