Could you upload the Spark assembly jar to HDFS and then set spark.yarn.jar to 
the path where you uploaded it? That can help minimize start-up time, since the 
assembly no longer has to be uploaded on every submission. How long does it 
take if you start just a spark-shell?
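
For reference, a minimal sketch of the suggested steps. The HDFS destination
path and the assembly jar filename below are assumptions — substitute the
assembly jar that ships with your own Spark build:

```shell
# Upload the Spark assembly jar to HDFS once (destination path and jar
# name are assumptions; use the assembly from your Spark distribution).
hdfs dfs -mkdir -p /user/spark/share/lib
hdfs dfs -put "$SPARK_HOME"/lib/spark-assembly-*.jar /user/spark/share/lib/

# Point spark.yarn.jar at the uploaded copy so YARN containers fetch it
# from HDFS instead of re-uploading it from the client on every submit.
spark-shell --master yarn-client \
    --conf spark.yarn.jar=hdfs:///user/spark/share/lib/spark-assembly-1.3.1-hadoop2.6.0.jar
```

The same setting can also go in conf/spark-defaults.conf (as
`spark.yarn.jar hdfs:///user/spark/share/lib/...`) so every submission
picks it up without the --conf flag.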




On 5/11/15, 11:15 AM, "stanley" <wangshua...@yahoo.com> wrote:

>I am running Spark jobs on YARN cluster. It took ~30 seconds to create a
>spark context, while it takes only 1-2 seconds running Spark in local mode.
>The master is set as yarn-client, and both the machine that submits the
>Spark job and the YARN cluster are in the same domain. 
>
>Originally I suspected that the following config might play a role,
>especially since spark.scheduler.maxRegisteredResourcesWaitingTime was set to
>30 seconds. However, lowering it to 1 second makes no difference.
>Modifications to the other two parameters also have no effect.
>
>spark.scheduler.maxRegisteredResourcesWaitingTime=1000 (lowered from 30000
>default)
>spark.yarn.applicationMaster.waitTries=1 (lowered from 10 default)
>spark.yarn.scheduler.heartbeat.interval-ms=1000 (lowered from 5000
>default)
>
>What could be the possible reason?
>
>
>
>--
>View this message in context: 
>http://apache-spark-user-list.1001560.n3.nabble.com/It-takes-too-long-30-seconds-to-create-Spark-Context-with-SPARK-YARN-tp22847.html
>Sent from the Apache Spark User List mailing list archive at Nabble.com.
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>For additional commands, e-mail: user-h...@spark.apache.org
>
