Re: When worker is killed driver continues to run causing issues in supervise mode

2016-07-13 Thread Noorul Islam Kamal Malmiyoda
Adding dev list

When worker is killed driver continues to run causing issues in supervise mode

2016-07-13 Thread Noorul Islam K M

Spark version: 1.6.1
Cluster Manager: Standalone

I am experimenting with cluster-mode deployment along with the
supervise flag for high availability of streaming applications.
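
A job as simple as the following sketch is enough to see this; the
class name and the socket source host/port are placeholders:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object ReproApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("supervise-repro")
        // One StreamingContext per driver; a relaunched driver
        // creates a second, duplicate context.
        val ssc = new StreamingContext(conf, Seconds(10))
        ssc.socketTextStream("localhost", 9999).count().print()
        ssc.start()
        ssc.awaitTermination()
      }
    }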

Steps to reproduce:

1. Submit the streaming job (sketch above) in cluster mode with the
   supervise flag (spark-submit command after this list).
2. Say the driver is scheduled on worker1. The app starts
   successfully.
3. Kill the worker1 Java process (kill commands after this list).
   This does not kill the driver process, so the application
   (context) is still alive.
4. Because of the supervise flag, the driver gets rescheduled on a
   new worker, worker2, and a new context is created, making it a
   duplicate.
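
For concreteness, steps 1 and 3 look roughly like this; the master
URL, jar path, and class name are placeholders:

    # Step 1: submit in cluster mode with supervision
    spark-submit \
      --master spark://master-host:7077 \
      --deploy-mode cluster \
      --supervise \
      --class ReproApp \
      /path/to/repro.jar

    # Step 3: on worker1, kill only the Worker JVM. jps usually lists
    # the standalone worker as "Worker" and the cluster-mode driver as
    # "DriverWrapper"; killing the Worker does not take down the
    # DriverWrapper.
    jps
    kill -9 <Worker-pid>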

This looks like a bug to me: the original driver keeps running while
supervise launches a second copy, so two contexts for the same
application are alive at once.

Regards,
Noorul
