Hi Dan,

If you are using a K8s job to deploy the "INSERT INTO" SQL jobs into the
existing Flink cluster, then you have to manage the lifecycle of these jobs
yourself. You could use the Flink command line or the REST API to check the
job status first.
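
For example, a minimal sketch of such a check against the JobManager's REST
endpoint (assuming it is exposed at http://jobmanager:8081; adjust the host
and port to your setup) could look like this:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class JobStatusCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // /jobs/overview lists every job with its id, name and state
        // (e.g. RUNNING, FAILED, CANCELED).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://jobmanager:8081/jobs/overview"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Inspect the returned JSON here to decide whether a new submission
        // is needed or an old instance should be cancelled first.
        System.out.println(response.body());
    }
}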

Best,
Yang

Dan Hill <quietgol...@gmail.com> wrote on Wed, Sep 23, 2020 at 8:07 AM:

> Hi Yang!
>
> The multiple "INSERT INTO" jobs all go to the same Flink cluster.  I'm
> using this Helm chart
> <https://github.com/riskfocus/helm-charts-public/tree/master/flink> (which
> looks like the standalone option).  I deploy the job using a simple k8s
> Job.  Sounds like I should handle this myself.
>
> Thanks!
> - Dan
>
>
>
> On Tue, Sep 22, 2020 at 5:37 AM Yang Wang <danrtsey...@gmail.com> wrote:
>
>> Hi Dan,
>>
>> First, I want to get more information about your submission so that we
>> can clarify the question.
>>
>> Are you using TableEnvironment to execute multiple "INSERT INTO"
>> statements and finding that each one is executed in a separate Flink
>> cluster? That would be really strange, and I want to know how you are
>> deploying your Flink cluster on Kubernetes: via the standalone mode[1] or
>> the native integration[2]. If it is the former, I am afraid you need
>> `kubectl` to start/stop your Flink application manually. If it is the
>> latter, I think the Flink cluster will be destroyed automatically when
>> the Flink job fails. Also, all the SQL jobs will be executed in a shared
>> Flink application.
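>>
>> As a rough sketch on the TableEnvironment side (assuming Flink 1.11+ and
>> that "source_table", "sink_a" and "sink_b" are placeholders for your own
>> tables), a StatementSet groups several INSERT INTO statements so they are
>> submitted together as one job instead of one job per statement:
>>
>> import org.apache.flink.table.api.EnvironmentSettings;
>> import org.apache.flink.table.api.StatementSet;
>> import org.apache.flink.table.api.TableEnvironment;
>>
>> public class MultiInsertJob {
>>     public static void main(String[] args) {
>>         TableEnvironment tEnv = TableEnvironment.create(
>>                 EnvironmentSettings.newInstance().inStreamingMode().build());
>>
>>         // Placeholder tables so the sketch is self-contained; replace
>>         // with your real source and sink DDL.
>>         tEnv.executeSql(
>>             "CREATE TABLE source_table (id BIGINT) WITH ('connector' = 'datagen')");
>>         tEnv.executeSql(
>>             "CREATE TABLE sink_a (id BIGINT) WITH ('connector' = 'blackhole')");
>>         tEnv.executeSql(
>>             "CREATE TABLE sink_b (id BIGINT) WITH ('connector' = 'blackhole')");
>>
>>         // Both INSERT INTO statements go into a single Flink job.
>>         StatementSet statements = tEnv.createStatementSet();
>>         statements.addInsertSql("INSERT INTO sink_a SELECT * FROM source_table");
>>         statements.addInsertSql("INSERT INTO sink_b SELECT * FROM source_table");
>>         statements.execute();
>>     }
>> }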
>>
>> [1].
>> https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/kubernetes.html
>> [2].
>> https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html
>>
>>
>> Best,
>> Yang
>>
>> Dan Hill <quietgol...@gmail.com> wrote on Mon, Sep 21, 2020 at 8:15 AM:
>>
>>> I've read the upgrading applications page
>>> <https://ci.apache.org/projects/flink/flink-docs-stable/ops/upgrading.html>.
>>> It seems to focus on doing this in a wrapper layer (e.g. Kubernetes).
>>> Just checking whether this is the common practice or whether people do
>>> this from their client jars.
>>>
>>>
>>>
>>> On Sun, Sep 20, 2020 at 5:13 PM Dan Hill <quietgol...@gmail.com> wrote:
>>>
>>>> I'm prototyping with Flink SQL.  I'm iterating on a client job with
>>>> multiple INSERT INTOs.  Whenever I have an error, my Kubernetes job
>>>> retries.  This creates multiple streaming jobs with the same names.
>>>>
>>>> Is it up to clients to delete the existing jobs?  I see Flink CLI
>>>> functions for this.  Do most people do this from inside their client
>>>> jar or from their wrapper code (e.g. the Kubernetes job)?
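>>>>
>>>> The kind of cleanup I have in mind, sketched against the REST API
>>>> rather than the CLI (assuming the JobManager is reachable at
>>>> http://jobmanager:8081 and the stale job's id is already known, e.g.
>>>> from /jobs/overview), would be roughly:
>>>>
>>>> import java.net.URI;
>>>> import java.net.http.HttpClient;
>>>> import java.net.http.HttpRequest;
>>>> import java.net.http.HttpResponse;
>>>>
>>>> public class CancelExistingJob {
>>>>     public static void main(String[] args) throws Exception {
>>>>         String jobId = args[0];  // id of the stale job to cancel
>>>>         HttpClient client = HttpClient.newHttpClient();
>>>>         // PATCH /jobs/:jobid asks the JobManager to cancel the job.
>>>>         HttpRequest cancel = HttpRequest.newBuilder()
>>>>                 .uri(URI.create("http://jobmanager:8081/jobs/" + jobId + "?mode=cancel"))
>>>>                 .method("PATCH", HttpRequest.BodyPublishers.noBody())
>>>>                 .build();
>>>>         HttpResponse<String> response =
>>>>                 client.send(cancel, HttpResponse.BodyHandlers.ofString());
>>>>         System.out.println("Cancel request returned HTTP " + response.statusCode());
>>>>     }
>>>> }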
>>>>
>>>> - Dan
>>>>
>>>
