Thanks, Kelvin :)
The error seems to have disappeared after I decreased both
spark.storage.memoryFraction and spark.shuffle.memoryFraction to 0.2,
and increased the driver memory a little.
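For the record, the settings now look roughly like this on the submit command (the driver-memory value below is only an example; I simply made it larger than what I had before):

spark-submit \
  --conf spark.storage.memoryFraction=0.2 \
  --conf spark.shuffle.memoryFraction=0.2 \
  --driver-memory 8g \
  <the rest of the submit command>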
Best,
Yifan LI
On 10 Feb 2015, at 18:58, Kelvin Chu 2dot7kel...@gmail.com wrote:
Since the stacktrace shows Kryo is being used, maybe you could also try
increasing spark.kryoserializer.buffer.max.mb. Hope this helps.
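For example, on the submit command (256 here is only a guess; it should be sized above your largest serialized object):

  --conf spark.kryoserializer.buffer.max.mb=256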
Kelvin
On Tue, Feb 10, 2015 at 1:26 AM, Akhil Das ak...@sigmoidanalytics.com
wrote:
You could try increasing the driver memory. Also, can you be more specific
about the data volume?
Thanks
Best Regards
On Mon, Feb 9, 2015 at 3:30 PM, Yifan LI iamyifa...@gmail.com wrote:
Hi,
I just found the following errors during a GraphX computation. Does anyone
have ideas on this? Thanks so much!
Yes, I have read it, and am trying to find some way to do that… Thanks :)
Best,
Yifan LI
On 10 Feb 2015, at 12:06, Akhil Das ak...@sigmoidanalytics.com wrote:
Did you have a chance to look at this doc?
http://spark.apache.org/docs/1.2.0/tuning.html
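The serialization section is probably the most relevant one for your case. A minimal sketch of turning on Kryo and registering the classes you actually store in the graph (the HashMap type below is only a guess at your vertex property):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // register the concrete classes held in vertex/edge properties;
  // the map type here is a guess at the per-vertex hash map
  .registerKryoClasses(Array(classOf[scala.collection.mutable.HashMap[Long, Double]]))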
Thanks
Best Regards
On Tue, Feb 10, 2015 at 4:13 PM, Yifan LI iamyifa...@gmail.com wrote:
Hi Akhil,
Excuse me, I am trying a random-walk algorithm over a not-that-large graph
(~1GB raw dataset, including ~5 million vertices and ~60 million edges) on a
cluster of 20 machines.
And the property of each vertex in the graph is a hash map, whose size will
increase dramatically.
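To make it concrete, one propagation step looks roughly like this (a heavily simplified sketch with toy data, not my actual code; I assume here that the edge attributes already hold normalised transition probabilities):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.graphx._

val sc = new SparkContext(new SparkConf().setAppName("walk-sketch"))

// toy edges; the real graph has ~5 million vertices and ~60 million edges
val edges = sc.parallelize(Seq(Edge(1L, 2L, 0.5), Edge(1L, 3L, 0.5), Edge(2L, 3L, 1.0)))

// vertex property: a map of walk-source -> probability mass, seeded with the vertex itself
val g: Graph[Map[VertexId, Double], Double] =
  Graph.fromEdges(edges, 0).mapVertices((id, _) => Map(id -> 1.0))

// merge two mass maps by summing the mass per source vertex
def mergeMaps(a: Map[VertexId, Double], b: Map[VertexId, Double]): Map[VertexId, Double] =
  (a.keySet ++ b.keySet).iterator.map(k => k -> (a.getOrElse(k, 0.0) + b.getOrElse(k, 0.0))).toMap

// one step: each vertex pushes its scaled map along its out-edges;
// these per-vertex maps are exactly what grows dramatically over iterations
val msgs = g.aggregateMessages[Map[VertexId, Double]](
  ctx => ctx.sendToDst(ctx.srcAttr.mapValues(_ * ctx.attr).map(identity)),
  mergeMaps)
val next = g.outerJoinVertices(msgs)((_, old, opt) => opt.getOrElse(old))

next.vertices.collect().foreach(println)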
Hi,
I just found the following errors during a GraphX computation. Does anyone
have ideas on this? Thanks so much!
(I think the memory is sufficient: spark.executor.memory = 30GB)
15/02/09 00:37:12 ERROR Executor: Exception in task 162.0 in stage 719.0 (TID 7653)
java.lang.OutOfMemoryError: Java heap space