Hi,

  We are running Spark 1.3 on CDH 5.4.1 on top of YARN. We want to know
how to control the task timeout when a node fails, so that the tasks that
were running on it are restarted on another node. At present the job
waits approximately 10 minutes before restarting the tasks that were
running on the failed node.
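
Our guess, which we have not confirmed, is that the ~10 minutes matches
YARN's NodeManager liveness expiry
(yarn.nm.liveness-monitor.expiry-interval-ms, default 600000 ms). If that
is the mechanism, lowering it in yarn-site.xml might look like the
following; the 120000 value is just for illustration:

  <!-- mark a NodeManager as lost after 2 minutes without a heartbeat
       instead of the default 10 minutes -->
  <property>
    <name>yarn.nm.liveness-monitor.expiry-interval-ms</name>
    <value>120000</value>
  </property>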

http://spark.apache.org/docs/latest/configuration.html lists many
timeout configs; we just don't know which one to override.
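
For example, would lowering spark.network.timeout (the umbrella network
timeout added in 1.3, default 120 seconds) be the right knob? A sketch of
what we would try; the value, class name, and jar are placeholders:

  # lower the default network timeout from 120s to 60s (seconds in 1.3)
  spark-submit \
    --master yarn-cluster \
    --conf spark.network.timeout=60 \
    --class com.example.MyApp myapp.jar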

Any help here?



