[SPARK][GRAPHX] 'Executor Deserialize Time' is too big

2015-07-16 Thread Hlib Mykhailenko
big (like ~10200ms when all others took only ~200ms). Does anybody have any idea what it can be? -- Regards, Hlib Mykhailenko, PhD student at INRIA Sophia-Antipolis Méditerranée, 2004 Route des Lucioles BP93, 06902 SOPHIA ANTIPOLIS cedex

[GRAPHX] could not process graph with 230M edges

2015-03-13 Thread Hlib Mykhailenko
java.lang.OutOfMemoryError: Java heap space errors and of course I did not get a result. Do I have a problem in the code, or in the cluster configuration? It works fine for relatively small graphs, but for this graph it never worked. (And I do not think that 230M edges is too big a dataset.) Thank you
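For a graph of this size, heap-space failures are often addressed by raising executor memory, increasing edge partitioning, and allowing spill to disk. A minimal sketch, assuming Spark 1.x and a hypothetical edge-list path `/data/edges.txt` (the memory values and partition count are illustrative assumptions, not tuned recommendations):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx.GraphLoader
import org.apache.spark.storage.StorageLevel

// Hypothetical sizing: raise per-executor heap before loading the graph.
val conf = new SparkConf()
  .setAppName("LargeGraph")
  .set("spark.executor.memory", "8g")   // assumption: workers have enough RAM for this

val sc = new SparkContext(conf)

// Spread the ~230M edges over many partitions and let both the edge and
// vertex RDDs spill to disk instead of failing with an OutOfMemoryError.
val graph = GraphLoader.edgeListFile(
  sc,
  "/data/edges.txt",                    // hypothetical path
  numEdgePartitions = 256,              // illustrative; tune to cluster size
  edgeStorageLevel = StorageLevel.MEMORY_AND_DISK,
  vertexStorageLevel = StorageLevel.MEMORY_AND_DISK)
```

This is a sketch requiring a running Spark cluster, not a definitive fix; the right values depend on the number of workers and their physical memory.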

force spark to use all available memory on each node

2014-11-26 Thread Hlib Mykhailenko
Hello, Spark has a 'spark.executor.memory' property which defines the amount of memory to be used on each computational node, and by default it is equal to 512 MB. Is there a way to tell Spark to use 'all available memory minus 1 GB'? Thank you in advance.
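As far as I know, Spark has no built-in "all available memory minus 1 GB" mode; the executor heap must be set explicitly. A hedged sketch of computing such a value yourself, where `nodeMemoryMb` is an assumption about the worker machines (not something Spark discovers for you):

```scala
import org.apache.spark.SparkConf

// Assumption: every worker node has the same amount of physical memory.
val nodeMemoryMb = 8192   // hypothetical: 8 GB per worker

// Ask for "everything minus 1 GB" by computing the value explicitly.
val conf = new SparkConf()
  .setAppName("SizedApp")
  .set("spark.executor.memory", s"${nodeMemoryMb - 1024}m")   // "7168m" for an 8 GB node
```

In a standalone cluster the worker-side cap (`SPARK_WORKER_MEMORY` in `spark-env.sh`) must also be at least this large, or the executor request will not be satisfied.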

how to force graphx to execute a transformation

2014-11-26 Thread Hlib Mykhailenko
any action?
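GraphX transformations, like all Spark transformations, are lazy: they only build an execution plan, and nothing runs until an action is called. A minimal sketch of forcing execution, assuming `graph` is an existing `Graph[VD, ED]` in a running Spark application:

```scala
// Builds a plan only; no computation happens here.
val mapped = graph.mapVertices((id, attr) => attr)

// Cache the result, then call actions on the underlying RDDs to force
// the transformation to actually execute.
mapped.cache()
mapped.vertices.count()   // materializes the vertex RDD
mapped.edges.count()      // materializes the edge RDD
```

`count()` is a cheap, commonly used action for this purpose; without the `cache()`, each later action would recompute the transformation from scratch.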

Re: HDFS read text file

2014-11-17 Thread Hlib Mykhailenko
Hello Naveen, I think you should first override the "toString" method of your sample.spark.test.Student class.
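The suggestion above can be sketched as follows. The field names are assumptions, since the original sample.spark.test.Student class is not shown:

```scala
// Hypothetical Student class; the fields are assumptions for illustration.
class Student(val name: String, val age: Int) {
  // Without this override, printing an element of an RDD[Student] shows
  // something like "Student@1b6d3586" (the default Object.toString).
  override def toString: String = s"Student(name=$name, age=$age)"
}

val s = new Student("Alice", 21)
println(s)   // prints "Student(name=Alice, age=21)"
```

With the override in place, `rdd.collect().foreach(println)` prints readable records instead of object hash codes.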

How to measure communication between nodes in Spark Standalone Cluster?

2014-11-17 Thread Hlib Mykhailenko
of vertices were transferred among nodes? Thanks!
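One common proxy for inter-node traffic is Spark's shuffle metrics, which can be collected with a `SparkListener`. A hedged sketch, assuming a Spark 1.x application where `sc` is an existing `SparkContext` (in 1.x, `shuffleReadMetrics` is an `Option`; newer versions expose it directly):

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Accumulate bytes read remotely during shuffles -- an approximation of
// data moved between nodes, not an exact count of transferred vertices.
var remoteBytes = 0L

sc.addSparkListener(new SparkListener {
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {
    val metrics = taskEnd.taskMetrics
    if (metrics != null) {
      metrics.shuffleReadMetrics.foreach { sr =>
        remoteBytes += sr.remoteBytesRead
      }
    }
  }
})

// ... run the GraphX job, then inspect `remoteBytes`
```

The same numbers are visible per-stage in the Spark web UI ("Shuffle Read" / "Shuffle Write" columns), which may be simpler than instrumenting the job.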