Have you used awaitTermination() on your ssc? --> Yes, I have used that.
Also try setting the deployment mode to yarn-client. --> Is this not
supported in yarn-cluster mode? I am trying to find the root cause in
yarn-cluster mode.
Have you tested graceful shutdown on yarn-cluster mode?
Rakesh
Have you used awaitTermination() on your ssc ?
If not, add this and see if it changes the behavior.
I am guessing this issue may be related to yarn deployment mode.
Also try setting the deployment mode to yarn-client.
Thanks
Deepak
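For reference, the awaitTermination() pattern being suggested looks like this — a minimal sketch, assuming Spark 1.5-era APIs; the app name and batch interval are illustrative:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("graceful-shutdown-test") // illustrative name
val ssc = new StreamingContext(conf, Seconds(10))

// ... define the DStream transformations here ...

ssc.start()
// Block the driver's main thread until the context is stopped
// (e.g. by the shutdown hook on SIGTERM). Without this, main()
// can return and the application may be torn down prematurely.
ssc.awaitTermination()
```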
On Fri, May 13, 2016 at 10:17 AM, Rakesh H (Marketing Platform-BLR) wrote:
Ping!!
Has anybody tested graceful shutdown of a Spark Streaming job in yarn-cluster
mode? It looks like a defect to me.
On Thu, May 12, 2016 at 12:53 PM Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
We are on spark 1.5.1
Above change was to add a shutdown hook.
I am not adding a shutdown hook in code, so the built-in shutdown hook is
being called.
The driver signals that it is going to do a graceful shutdown, but the
executor sees that the driver is dead and shuts down abruptly.
Could this issue be related to
This is happening because spark context shuts down without shutting down
the ssc first.
This was the behavior till Spark 1.4 and was addressed in later releases.
https://github.com/apache/spark/pull/6307
Which version of spark are you on?
Thanks
Deepak
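For versions before that fix, one workaround along the lines discussed here is to register a hook manually so the ssc is stopped gracefully before the SparkContext goes away. A rough sketch, assuming `ssc` is the application's StreamingContext (this is not the patch's actual code):

```scala
// Manual workaround sketch: stop the streaming context gracefully
// before the JVM's shutdown sequence kills the SparkContext.
sys.addShutdownHook {
  // stopSparkContext = true, stopGracefully = true: finish the
  // in-flight and queued batches before releasing the SparkContext.
  ssc.stop(stopSparkContext = true, stopGracefully = true)
}
```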
On Thu, May 12, 2016 at 12:14 PM, Rakesh H wrote:
Yes, it seems to be the case.
In this case the executors should have continued logging values till 300, but
they are shut down as soon as I do "yarn kill .."
On Thu, May 12, 2016 at 12:11 PM Deepak Sharma wrote:
So in your case, the driver is shutting down gracefully, but the
executors are not.
Is this the problem?
Thanks
Deepak
On Thu, May 12, 2016 at 11:49 AM, Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
Yes, it is set to true.
Log of driver:
16/05/12 10:18:29 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
16/05/12 10:18:29 INFO streaming.StreamingContext: Invoking
stop(stopGracefully=true) from shutdown hook
16/05/12 10:18:29 INFO scheduler.JobGenerator: Stopping JobGenerator
Hi Rakesh
Did you try setting *spark.streaming.stopGracefullyOnShutdown* to true for
your Spark configuration instance?
If not, try this and let us know if it helps.
Thanks
Deepak
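The same property can also be passed at submit time instead of being set on the SparkConf. A hedged example (the jar name is a placeholder):

```shell
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.streaming.stopGracefullyOnShutdown=true \
  my-streaming-app.jar
```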
On Thu, May 12, 2016 at 11:42 AM, Rakesh H (Marketing Platform-BLR) <
rakes...@flipkart.com> wrote:
From logs, it seems that Spark Streaming does handle *kill -SIGINT* with
graceful shutdown.
Please could you confirm?
Thanks!
On 30 July 2015 at 08:19, anshu shukla anshushuk...@gmail.com wrote:
Note that this is true only from Spark 1.4 where the shutdown hooks were
added.
On Mon, Aug 10, 2015 at 12:12 PM, Michal Čizmazia mici...@gmail.com wrote:
Yes, I was doing the same. If you mean that this is the correct way to do it,
then I will verify it once more in my case.
On Thu, Jul 30, 2015 at 1:02 PM, Tathagata Das t...@databricks.com wrote:
How is sleep not working? Are you doing
streamingContext.start()
Thread.sleep(xxx)
streamingContext.stop()
On Wed, Jul 29, 2015 at 6:55 PM, anshu shukla anshushuk...@gmail.com
wrote:
If we want to stop the application after a fixed time period, how will it
work? (How do we give the duration in the logic? In my case sleep(t.s.) is not
working.) So I used to kill the CoarseGrained job on each slave with a script.
Please suggest something.
On Thu, Jul 30, 2015 at 5:14 AM, Tathagata Das wrote:
StreamingContext.stop(stopGracefully = true) stops the streaming context
gracefully.
Then you can safely terminate the Spark cluster. They are two different
steps and need to be done separately, ensuring that the driver process has
been completely terminated before the Spark cluster is terminated.
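Put together, the fixed-duration variant asked about earlier in the thread might look like this — a sketch, with the 300-second timeout purely illustrative:

```scala
ssc.start()
// Wait up to 300 seconds; awaitTerminationOrTimeout returns true if
// the context was stopped in the meantime, false if the timeout expired.
val stopped = ssc.awaitTerminationOrTimeout(300 * 1000L)
if (!stopped) {
  // Timeout hit: drain pending batches, then stop the SparkContext.
  ssc.stop(stopSparkContext = true, stopGracefully = true)
}
```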