We are encountering a problem with a Spark 1.6 job (on YARN) that never ends
when several jobs are launched simultaneously.
We found that when launching the job in yarn-client mode we do not have this
problem, unlike in yarn-cluster mode.
This could be a lead for finding the cause.

We changed the code to add a sparkContext.stop().
Indeed, the SparkContext was created (val sparkContext = createSparkContext)
but never stopped. This fix reduced the number of jobs that remain blocked,
but we still have some blocked jobs.
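
For reference, here is roughly what our fix looks like (a minimal sketch;
our real createSparkContext builds its SparkConf elsewhere, and the job
body is elided):

import org.apache.spark.{SparkConf, SparkContext}

// sketch only: our real helper configures the SparkConf elsewhere
def createSparkContext: SparkContext =
  new SparkContext(new SparkConf().setAppName("my-job"))

val sparkContext = createSparkContext
try {
  // ... job logic ...
} finally {
  // stop the context even when the job fails, so the YARN
  // ApplicationMaster can unregister and the application can finish
  sparkContext.stop()
}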

By analyzing the logs, we found these lines repeating without stopping:

17/09/29 11:04:37 DEBUG SparkEventPublisher: Enqueue SparkListenerExecutorMetricsUpdate(1,WrappedArray())
17/09/29 11:04:41 DEBUG ApplicationMaster: Sending progress
17/09/29 11:04:41 DEBUG ApplicationMaster: Number of pending allocations is 0. Sleeping for 5000.

Does anyone have any idea about this issue?
Thank you in advance.



