What was the error when you tried to run the MapReduce import job while the
Thrift server was running?
Is this the only config that changed? What was the config before?
Also, please share the Spark Thrift Server job config: number of executors,
cores, memory, etc.
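For reference, these are the settings I mean; you can read them off the
Spark UI's Environment tab or from your spark-defaults.conf / launch command
(standard Spark-on-YARN property names, assuming that's how they were set):

    spark.executor.instances   (number of executors)
    spark.executor.cores       (cores per executor)
    spark.executor.memory      (memory per executor)
    spark.driver.memory        (driver memory)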

My guess is that your MapReduce job is unable to get sufficient resources:
its containers couldn't be launched, so the job fails to start. This could be
because of a lack of available cores or a lack of available RAM.
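A quick way to confirm this (assuming a standard YARN setup) is to check the
ResourceManager web UI for available memory/vcores, or list applications that
are stuck waiting for resources:

    yarn application -list -appStates ACCEPTED

An application sitting in ACCEPTED state usually means YARN has accepted it
but has no free container to start its ApplicationMaster.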

You have 9 worker nodes with 12 GB RAM each and 6 cores (max 4 cores allowed
per container), and you have to keep some room for the operating system and
other daemons.
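Roughly (and assuming yarn.nodemanager.resource.memory-mb /
yarn.nodemanager.resource.cpu-vcores are set to leave that room), the cluster
capacity works out to something like:

    9 nodes x 6 cores  = 54 vcores in total
    9 nodes x 12 GB    = 108 GB RAM in total
    minus ~1-2 GB and ~1 core per node for the OS and daemons
    => roughly 90-99 GB and ~45 vcores actually usable by YARN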

If the Thrift server is set up with 11 executors of 3 cores each, that is 33
cores for the executors plus 1 for the driver, so 34 cores are tied up by the
Spark job and only the rest is left for any other jobs.

The Spark driver and executor memory is ~9 GB. If the driver and each of the
11 executors really get ~9 GB, that is ~99 GB for executors plus ~9 GB for
the driver, which is essentially the whole 108 GB in the cluster before OS
overhead; with 9 worker nodes of 12 GB RAM each I'm not sure how much of that
you can actually allocate, and there would be little or nothing left for the
MapReduce containers.
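For concreteness, this is the kind of Thrift server launch I'm picturing (a
sketch only; the numbers are just the ones from your description, so
substitute whatever you actually use):

    ./sbin/start-thriftserver.sh \
      --master yarn \
      --num-executors 11 \
      --executor-cores 3 \
      --executor-memory 9g \
      --driver-memory 9g

If that matches your setup, lowering --num-executors and/or --executor-memory
should free up enough vcores and RAM for the MapReduce import job's
containers to launch.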

Thanks,
Vijay


