Hi,

I am not a Spark expert, but I have found that passing a small number of
partitions can help. Try the option "--numEPart=$partitions" with
partitions=3 (the number of workers), or at most 3*40=120 (the total number
of worker cores).
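
For example, reusing the master URL and output path from your command below
(a sketch only; partitions=3 is just the number-of-workers starting point,
and I have not benchmarked it myself):

    MASTER=spark://172.17.27.12:7077 bin/run-example graphx.SynthBenchmark \
      -app=pagerank -niters=100 -nverts=4847571 --numEPart=3 \
      >> Output/soc-liveJounral.txt

You could compare the wall-clock time against a larger value such as
--numEPart=120 to see which suits your cluster.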

Thanks,
-Khaled

On Thu, Jul 9, 2015 at 11:37 AM, AshutoshRaghuvanshi <
ashutosh.raghuvans...@gmail.com> wrote:

> I am running a Spark cluster over SSH in standalone mode.
>
> I have run the PageRank LiveJournal example:
>
> MASTER=spark://172.17.27.12:7077 bin/run-example graphx.SynthBenchmark
> -app=pagerank -niters=100 -nverts=4847571 >> Output/soc-liveJounral.txt
>
> It has been running for more than 2 hours. I guess this is not normal; what
> am I doing wrong?
>
> System details:
> 4 nodes (1 master + 3 workers)
> 40 cores each, 64G memory, of which I have given spark.executor.memory 50G
>
> One more thing: I notice that one of the servers is used more than the
> others.
>
> Please help ASAP.
>
> Thank you
> <http://apache-spark-user-list.1001560.n3.nabble.com/file/n23747/13.png>

