I’m deploying the job from the master node of the cluster itself using 
bin/flink run -c <class_name> <jar> <config_file>. 

The cluster consists of 4 workers and a master node. 

Dominik
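
For reference, the shutdown-hook behaviour discussed below can be sketched as a minimal, self-contained example (the class name is hypothetical, not from the actual application):

```java
// Minimal sketch of JVM shutdown-hook registration (class name is
// hypothetical). A hook registered this way runs only when the JVM
// process itself terminates; "bin/flink cancel" stops the Flink job
// but, on a standalone cluster, leaves the TaskManager JVMs running,
// so such a hook never fires there.
public class ShutdownHookDemo {

    // Builds the hook thread; kept separate so it can be inspected.
    static Thread buildHook() {
        return new Thread(() -> System.out.println("shutdown hook executed"));
    }

    public static void main(String[] args) {
        Thread hook = buildHook();
        Runtime.getRuntime().addShutdownHook(hook);
        System.out.println("hook registered; it fires when this JVM exits");
        // When main returns, the JVM terminates and the hook runs.
    }
}
```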

> On 8 Mar 2017, at 15:16, Ufuk Celebi <u...@apache.org> wrote:
> 
> How are you deploying your job?
> 
> Shutdown hooks are executed when the JVM terminates whereas the cancel
> command only cancels the Flink job and the JVM process potentially
> keeps running. For example, running a standalone cluster would keep
> the JVMs running.
> 
> On Wed, Mar 8, 2017 at 9:36 AM, Timo Walther <twal...@apache.org> wrote:
>> Hi Dominik,
>> 
>> did you take a look into the logs? Maybe the exception is not shown in the
>> CLI but in the logs.
>> 
>> Timo
>> 
>> On 07/03/17 at 23:58, Dominik Safaric wrote:
>> 
>>> Hi all,
>>> 
>>> I would appreciate any help or advice regarding default Java runtime
>>> shutdown hooks and cancelling Flink jobs.
>>> 
>>> Namely, as part of my Flink application I am using a Kafka interceptor
>>> class that defines a shutdown hook thread. When I stop the Flink streaming
>>> job on my local machine the shutdown hook gets executed; however, I do not
>>> see the same behaviour when cancelling the job using bin/flink
>>> cancel <job_id>.
>>> 
>>> Considering there are no exceptions thrown from the shutdown thread, what
>>> could the root cause of this be?
>>> 
>>> Thanks,
>>> Dominik
>> 
>> 
>> 
