Spark version: 1.6.1
Cluster Manager: Standalone

I am experimenting with cluster mode deployment together with the
--supervise flag for high availability of streaming applications.

1. Submit a streaming job in cluster mode with the --supervise flag
   (a minimal sketch of such a driver follows this list).
2. Say the driver is scheduled on worker1. The application starts
   successfully.
3. Kill the worker1 Java process. This does not kill the driver
   process, so the application (and its StreamingContext) is still
   alive.
4. Because of the supervise flag, the driver is relaunched on a new
   worker, worker2, where a new context is created, making it a
   duplicate.
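
For context, here is a minimal sketch of the kind of streaming driver
involved; the class name, socket source, and batch interval are
placeholders, not my actual job. It is submitted with spark-submit
using --deploy-mode cluster and --supervise against the standalone
master. Because every (re)launch of the driver simply runs main()
again, a relaunched driver builds a brand-new StreamingContext even
though the original driver JVM (and its context) is still running:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingApp {
      def main(args: Array[String]): Unit = {
        // Each driver launch runs main() from scratch, so a relaunched
        // driver creates a second StreamingContext if the first driver
        // JVM was never actually killed.
        val conf = new SparkConf().setAppName("StreamingApp")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Placeholder source and output, just to keep a job running
        val lines = ssc.socketTextStream("localhost", 9999)
        lines.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }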

This looks like a bug to me.

Regards,
Noorul
