Hi!

We haven't experienced this problem ourselves (and other users haven't
reported it either).

Could you please share the operator logs from the run where the deletion
took long?
Long job cancellation times can delay the deletion, since the operator
shuts the job down before removing its finalizer.
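
For example, assuming a default Helm install of the operator (adjust the
namespace and deployment name to your setup), something like

  kubectl logs -n flink deploy/flink-kubernetes-operator

captured around the time you issue the delete should show what the
reconciler is doing.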

Cheers,
Gyula

On Wed, Aug 3, 2022 at 11:07 AM Sigalit Eliazov <e.siga...@gmail.com> wrote:

> Hello,
> We upgraded to version 1.1.0, and I am afraid the problem exists in that
> version as well.
> I would appreciate any additional ideas or guidelines on how to do the
> cleanup correctly.
>
> thanks
> Sigalit
>
>
> On Tue, Aug 2, 2022 at 3:39 PM Sigalit Eliazov <e.siga...@gmail.com>
> wrote:
>
>> Will do, thanks!
>>
>> On Tue, Aug 2, 2022 at 3:39 PM Gyula Fóra <gyula.f...@gmail.com> wrote:
>>
>>> Before trying to solve already-fixed problems, please upgrade to
>>> 1.1.0 :)
>>>
>>>
>>>
>>> On Tue, Aug 2, 2022 at 2:33 PM Sigalit Eliazov <e.siga...@gmail.com>
>>> wrote:
>>>
>>>> We are working with 1.0.0.
>>>>
>>>> On Tue, Aug 2, 2022 at 3:24 PM Gyula Fóra <gyula.f...@gmail.com> wrote:
>>>>
>>>>> Are you running the latest 1.1.0 version of the operator?
>>>>>
>>>>> Gyula
>>>>>
>>>>> On Tue, Aug 2, 2022 at 2:18 PM Sigalit Eliazov <e.siga...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> We are deploying a few Flink clusters via the Flink operator in our
>>>>>> CI.
>>>>>>
>>>>>> In each run we first do a clean-up, where one of the first steps is to
>>>>>> run 'kubectl delete flinkdeployments --all -n <name-space>';
>>>>>> after that we also delete the Flink operator pod and our whole
>>>>>> namespace.
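>>>>>>
>>>>>> Concretely, the cleanup sequence looks roughly like this (the names
>>>>>> here are placeholders for our actual ones):
>>>>>>
>>>>>> kubectl delete flinkdeployments --all -n <name-space>
>>>>>> kubectl delete pod <flink-operator-pod> -n <operator-namespace>
>>>>>> kubectl delete namespace <name-space>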
>>>>>>
>>>>>> Lately we have been facing issues where the deletion of the CRs takes
>>>>>> a long time, and sometimes it just gets stuck and we have to manually
>>>>>> modify the finalizers so the resources get deleted.
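>>>>>>
>>>>>> The manual workaround we resort to is along these lines (it clears the
>>>>>> finalizer so Kubernetes can finish the delete, though it skips the
>>>>>> operator's own cleanup):
>>>>>>
>>>>>> kubectl patch flinkdeployment <deployment-name> -n <name-space> \
>>>>>>   --type=merge -p '{"metadata":{"finalizers":null}}'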
>>>>>>
>>>>>> Has anyone faced this issue?
>>>>>> Any suggestions on how to overcome it?
>>>>>>
>>>>>> Thanks
>>>>>> Sigalit
>>>>>>
>>>>>
