Hi Dan,

First, I want to get more information about your job submission so that we
can make the question clearer.

Are you using TableEnvironment to execute multiple "INSERT INTO"
statements and finding that each one is executed in a separate Flink
cluster? That is really strange, and I would like to know how you are
deploying your Flink cluster on Kubernetes: via standalone mode [1] or the
native integration [2]. If it is the former, I am afraid you need
`kubectl` to start/stop your Flink application manually. If it is the
latter, I think the Flink cluster will be destroyed automatically when the
Flink job fails. Also, all the SQL jobs will be executed in a shared Flink
application.
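
By the way, if you want all of your "INSERT INTO" statements to be
submitted together as a single Flink job rather than one job per
statement, you could batch them with a StatementSet. Below is a minimal
sketch, assuming Flink 1.11+; the class name, table names, and the
datagen/blackhole tables are placeholders standing in for your real
sources and sinks, not taken from your job:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.StatementSet;
    import org.apache.flink.table.api.TableEnvironment;

    public class MultiInsertJob {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // Placeholder source and sinks so the sketch is self-contained;
            // replace with your real connector DDL.
            tEnv.executeSql(
                    "CREATE TABLE source_table (id BIGINT, ts TIMESTAMP(3)) "
                            + "WITH ('connector' = 'datagen')");
            tEnv.executeSql(
                    "CREATE TABLE sink_a (id BIGINT, ts TIMESTAMP(3)) "
                            + "WITH ('connector' = 'blackhole')");
            tEnv.executeSql(
                    "CREATE TABLE sink_b (id BIGINT, ts TIMESTAMP(3)) "
                            + "WITH ('connector' = 'blackhole')");

            // Add all INSERT INTO statements to one StatementSet so they are
            // optimized together and submitted as a single job.
            StatementSet stmtSet = tEnv.createStatementSet();
            stmtSet.addInsertSql("INSERT INTO sink_a SELECT * FROM source_table");
            stmtSet.addInsertSql("INSERT INTO sink_b SELECT * FROM source_table");

            // Executes the whole set as one Flink job.
            stmtSet.execute();
        }
    }

With this, a retry of your Kubernetes job would resubmit one Flink job
instead of several.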

[1].
https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html
[2].
https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html


Best,
Yang

Dan Hill <quietgol...@gmail.com> wrote on Mon, Sep 21, 2020 at 8:15 AM:

> I've read the following upgrade application page
> <https://ci.apache.org/projects/flink/flink-docs-stable/ops/upgrading.html>.
> This seems to focus on doing this in a wrapper layer (e.g. Kubernetes).
> Just checking to see whether this is the common practice or whether people
> do this from their client jars.
>
>
>
> On Sun, Sep 20, 2020 at 5:13 PM Dan Hill <quietgol...@gmail.com> wrote:
>
>> I'm prototyping with Flink SQL.  I'm iterating on a client job with
>> multiple INSERT INTOs.  Whenever I have an error, my Kubernetes job
>> retries.  This creates multiple stream jobs with the same names.
>>
>> Is it up to clients to delete the existing jobs?  I see Flink CLI
>> functions for this.  Do most people usually do this from inside their
>> client jar or their wrapper code (e.g. Kubernetes job)?
>>
>> - Dan
>>
>
