Hi,

I am running my GraphX application on Spark, but it failed because one
executor node (on which the available HDFS space is small) hit a "no space
left on device" error.

I can understand why it happened: my vertex(-attribute) RDD keeps growing
during the computation, so at some point the space requested on that node
probably exceeded what was available there.

But is there any way to avoid this kind of error? I am sure that the overall
disk space across all nodes is enough for my application.

Thanks in advance!



Best,
Yifan LI




