Yes, I was doing the same. If you mean that this is the correct way to do it,
then I will verify it once more in my case.
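
For reference, a minimal sketch of the fixed-duration pattern being discussed,
using awaitTerminationOrTimeout instead of a bare Thread.sleep. The app name,
batch interval, and timeout are assumptions for illustration:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FixedDurationApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("fixed-duration-streaming")
    val ssc = new StreamingContext(conf, Seconds(1)) // batch interval (assumed)

    // ... define input DStreams and output operations here ...

    ssc.start()
    // Block for at most 10 minutes; unlike Thread.sleep, this returns
    // early if the context stops on its own.
    ssc.awaitTerminationOrTimeout(10 * 60 * 1000L)
    // Finish processing all received data before shutting everything down.
    ssc.stop(stopSparkContext = true, stopGracefully = true)
  }
}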

On Thu, Jul 30, 2015 at 1:02 PM, Tathagata Das <t...@databricks.com> wrote:

> How is sleep not working? Are you doing
>
> streamingContext.start()
> Thread.sleep(xxx)
> streamingContext.stop()
>
> On Wed, Jul 29, 2015 at 6:55 PM, anshu shukla <anshushuk...@gmail.com>
> wrote:
>
>> If we want to stop the application after a fixed time period, how will that
>> work? (How do we specify the duration in the logic? In my case, sleep(t.s.)
>> is not working.) So far I have been killing the CoarseGrained executor
>> process on each slave with a script. Please suggest something.
>>
>> On Thu, Jul 30, 2015 at 5:14 AM, Tathagata Das <t...@databricks.com>
>> wrote:
>>
>>> StreamingContext.stop(stopGracefully = true) stops the streaming context
>>> gracefully.
>>> Then you can safely terminate the Spark cluster. They are two different
>>> steps and need to be done separately, ensuring that the driver process has
>>> been completely terminated before the Spark cluster itself is terminated.
>>>
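
An editorial sketch of the two-step shutdown described above: stop the
streaming computation gracefully first, and only then release the cluster
resources. The variable name ssc is an assumption:

// Step 1: drain the streaming computation; blocks until all data
// received so far has been processed.
ssc.stop(stopSparkContext = false, stopGracefully = true)

// Step 2: with the streaming context stopped, tear down the
// underlying SparkContext (and with it the application's executors).
ssc.sparkContext.stop()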
>>> On Wed, Jul 29, 2015 at 6:43 AM, Michal Čizmazia <mici...@gmail.com>
>>> wrote:
>>>
>>>> How can one initiate a graceful shutdown from outside the Spark Streaming
>>>> driver process, both for the local and cluster modes of Spark Standalone
>>>> as well as for EMR?
>>>>
>>>> Does sbin/stop-all.sh stop the context gracefully? How is it done? Is
>>>> there a signal sent to the driver process?
>>>>
>>>> For EMR, is there a way to terminate an EMR cluster with a graceful Spark
>>>> Streaming shutdown?
>>>>
>>>> Thanks!
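
An editorial aside, not an answer from this thread: one commonly cited way to
trigger a graceful stop from outside the driver is to install a JVM shutdown
hook in the driver and then send the driver process a plain SIGTERM (e.g. via
kill). A minimal sketch, with ssc assumed to be the running StreamingContext:

// Registered in the driver before ssc.start(). A SIGTERM delivered to
// the driver JVM runs shutdown hooks, giving the streaming context a
// chance to finish processing already-received batches.
sys.addShutdownHook {
  ssc.stop(stopSparkContext = true, stopGracefully = true)
}

Recent Spark versions also provide the spark.streaming.stopGracefullyOnShutdown
configuration (false by default), which installs an equivalent hook
automatically.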
>>>>
>>>>
>>>>
>>>
>>
>>
>> --
>> Thanks & Regards,
>> Anshu Shukla
>>
>
>


-- 
Thanks & Regards,
Anshu Shukla
