I'm prototyping with Flink SQL.  I'm iterating on a client job with
multiple INSERT INTO statements.  Whenever the job fails, my Kubernetes
job retries it, which ends up submitting multiple streaming jobs with
the same names.

Is it up to clients to cancel the existing jobs first?  I see Flink CLI
functions for this.  Do most people do this from inside their client jar
or from their wrapper code (e.g. the Kubernetes job)?
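For concreteness, here's roughly what I'm picturing inside the client
jar: hit the JobManager REST API, cancel any still-running job with the
same name, then resubmit.  The service address and job name below are
made up, and the regex scan of the JSON is just a stand-in for proper
parsing:

    // Sketch: cancel any still-running job with the same name before
    // resubmitting.  Uses the standard Flink REST endpoints
    // (GET /jobs/overview, PATCH /jobs/<jid>?mode=cancel); the host and
    // job name are placeholders.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class CancelStaleJobs {
        public static void main(String[] args) throws Exception {
            String rest = "http://flink-jobmanager:8081"; // hypothetical in-cluster service
            String jobName = "my-insert-job";             // hypothetical job name

            HttpClient http = HttpClient.newHttpClient();
            String overview = http.send(
                HttpRequest.newBuilder(URI.create(rest + "/jobs/overview")).GET().build(),
                HttpResponse.BodyHandlers.ofString()).body();

            // Crude scan of the overview JSON for RUNNING jobs with a
            // matching name; a real wrapper would use a JSON library instead.
            Pattern p = Pattern.compile("\"jid\":\"([0-9a-f]+)\",\"name\":\""
                + Pattern.quote(jobName) + "\"[^{}]*\"state\":\"RUNNING\"");
            Matcher m = p.matcher(overview);
            while (m.find()) {
                String jid = m.group(1);
                // PATCH /jobs/<jid>?mode=cancel cancels a running job.
                http.send(HttpRequest.newBuilder(
                        URI.create(rest + "/jobs/" + jid + "?mode=cancel"))
                        .method("PATCH", HttpRequest.BodyPublishers.noBody()).build(),
                    HttpResponse.BodyHandlers.ofString());
                System.out.println("Cancelled stale job " + jid);
            }
        }
    }

The wrapper-script alternative would be the same two steps via the CLI,
i.e. "flink list -r" and then "flink cancel <jobId>" before launching
the client.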

- Dan
