Re: yarn-cluster spark-submit process not dying

2015-05-28 Thread Corey Nolet
Thanks, Sandy -- I was digging through the code in deploy.yarn.Client and
literally found that property right before I saw your reply. I'm on 1.2.x
right now, which doesn't have the property. I guess I need to update sooner
rather than later.

On Thu, May 28, 2015 at 3:56 PM, Sandy Ryza  wrote:

> Hi Corey,
>
> As of this PR https://github.com/apache/spark/pull/5297/files, this can
> be controlled with spark.yarn.submit.waitAppCompletion.
>
> -Sandy
>
> On Thu, May 28, 2015 at 11:48 AM, Corey Nolet  wrote:
>
>> I am submitting jobs to my YARN cluster via yarn-cluster mode, and I'm
>> noticing that the JVM that fires up to allocate the resources, etc. is not
>> going away after the application master and executors have been allocated.
>> Instead, it just sits there printing one-second status updates to the
>> console. If I kill it, my job still runs (as expected).
>>
>> Is there an intended way to stop this from happening and just have the
>> local JVM die when it's done allocating the resources and deploying the
>> application master?
>>
>
>
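For reference, the property Sandy mentions can be passed directly on the command line at submit time (it is available in Spark 1.4+ per the linked PR). A minimal sketch of a fire-and-forget submission; the application class and JAR path below are placeholders:

```shell
# Submit in yarn-cluster mode and have the local spark-submit JVM exit
# once YARN accepts the application, instead of polling status every second.
spark-submit \
  --master yarn-cluster \
  --conf spark.yarn.submit.waitAppCompletion=false \
  --class com.example.MyApp \
  my-app.jar
```

With the property left at its default (true), spark-submit keeps running and reports application status until the job finishes, which is the behavior described in the original question.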

