When I increase my input data size, the executors fail and are lost.
See the log below:

14/03/11 20:44:18 INFO AppClient$ClientActor: Executor updated:
app-20140311204343-0008/8 is now FAILED (Command exited with code 134)
14/03/11 20:44:18 INFO SparkDeploySchedulerBackend: Executor
app-20140311204343-0008/8 removed: Command exited with code 134
14/03/11 20:44:18 INFO SparkDeploySchedulerBackend: Executor 8
disconnected, so removing it
14/03/11 20:44:18 ERROR TaskSchedulerImpl: Lost executor 8 on
Salve10.Hadoop: Unknown executor exit code (134) (died from signal 6?)
14/03/11 20:44:18 INFO TaskSetManager: Re-queueing tasks for 8 from TaskSet
1.0
14/03/11 20:44:18 WARN TaskSetManager: Lost TID 8 (task 1.0:22)
14/03/11 20:44:18 WARN TaskSetManager: Lost TID 49 (task 1.0:49)
14/03/11 20:44:18 WARN TaskSetManager: Lost TID 31 (task 1.0:40)
14/03/11 20:44:18 WARN TaskSetManager: Lost TID 1 (task 1.0:10)
14/03/11 20:44:18 WARN TaskSetManager: Lost TID 15 (task 1.0:48)
14/03/11 20:44:18 INFO AppClient$ClientActor: Executor added:
app-20140311204343-0008/11 on worker-20140311172513-Salve10.Hadoop-56435
(Salve10.Hadoop:56435) with 8 cores
14/03/11 20:44:18 INFO SparkDeploySchedulerBackend: Granted executor ID
app-20140311204343-0008/11 on hostPort Salve10.Hadoop:56435 with 8 cores,
4.0 GB RAM
14/03/11 20:44:18 INFO AppClient$ClientActor: Executor updated:
app-20140311204343-0008/11 is now RUNNING

It seems that too many tasks are running in one executor, so the total
memory used by those tasks exceeds "spark.executor.memory" (exit code 134
is 128 + 6, i.e. the process aborted, presumably on an out-of-memory
condition).
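
If it helps, here is a minimal sketch of the standalone-mode settings I
understand to be involved; the master URL, app name, and values are
placeholders, not my real configuration:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: in standalone mode each core runs one task at a time, so the
// number of concurrent tasks in an executor equals the cores it is given.
// All values below are placeholder assumptions.
val conf = new SparkConf()
  .setMaster("spark://master:7077")    // hypothetical master URL
  .setAppName("MyApp")
  .set("spark.executor.memory", "4g")  // heap shared by all tasks in the executor
  .set("spark.cores.max", "16")        // cap on total cores the app uses cluster-wide
val sc = new SparkContext(conf)

My understanding is that SPARK_WORKER_CORES in conf/spark-env.sh also
limits how many cores (and hence concurrent tasks) an executor gets on
each worker, but I am not sure that is the intended knob here.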

So how do I set the number of tasks that run concurrently in an executor?
Thank you.
