Hello,

I don't get much from the logs, but the error seems related to a memory
issue in Spark. From your earlier emails I gather that you are using a
3-node cluster. Do all 3 nodes run a NodeManager and a DataNode?
If so, it is better to use fewer executors and give each one more memory,
as below. While loading data, it is recommended to use one executor per
NodeManager.

spark-shell --master yarn-client --driver-memory 10G --executor-cores 4
--num-executors 3  --executor-memory 25G
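
Note that each executor (plus its memory overhead) must fit within YARN's
container limits, or YARN will refuse to launch it. As a sketch, the
relevant yarn-site.xml properties look like this (the values here are
illustrative only; adjust them to your nodes' actual RAM):

```xml
<!-- yarn-site.xml: illustrative values, tune to your hardware -->
<property>
  <!-- total memory YARN may allocate on each node -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>28672</value>
</property>
<property>
  <!-- max size of a single container; must be at least
       executor-memory plus the executor memory overhead -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>28672</value>
</property>
```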

Also, if any of these configurations gives an error, please provide the
executor log.

Thank you,
Ravindra.
