Yes, the wrong button was pushed when replying last time. -.-
Looking into the code once again [1], you're right. It looks like for
"last-state", no job is cancelled; the cluster deployment is just
deleted. I was assuming that the artifacts mentioned in the
documentation about the JobResultStore resource le
Hi Matthias,
I think you didn't include the mailing list in your response.
According to my experiments, using last-state means the operator simply
deletes the Flink pods, and I believe that doesn't count as Cancelled, so
the artifacts for blobs and submitted job graphs are not cleaned up. I
imagi
I see, thanks for the details.
I do mean replacing the job without stopping it terminally. Specifically, I
mean updating the container image with one that contains an updated job
jar. Naturally, the new version must not break state compatibility, but as
long as that is fulfilled, the job should be
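To make the scenario concrete, a minimal sketch of such an upgrade, assuming the standard FlinkDeployment CRD fields (names and image tags are illustrative only):

```yaml
# Hypothetical FlinkDeployment excerpt: bumping spec.image to a tag that
# contains the updated job jar rolls the job without a terminal stop
# when upgradeMode is last-state (the job resumes from HA metadata).
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-job                          # illustrative name
spec:
  image: my-registry/my-job:2.0.0       # updated image with the new job jar
  job:
    jarURI: local:///opt/flink/usrlib/job.jar
    upgradeMode: last-state             # resume from the latest checkpoint
    state: running
```

As discussed above, this only works if the new jar keeps state compatibility with the checkpoint being restored.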
Hi Matthias,
Then the explanation is likely that the job has not reached a terminal
state. I was testing updates *without* savepoints (but with HA), so I guess
that never triggers automatic cleanup.
Since, in my case, the job will theoretically never reach a terminal state
with this configuration
One concrete question: under the HA folder I also see these sample entries:
- job_name/blob/job_uuid/blob_...
- job_name/submittedJobGraphX
- job_name/submittedJobGraphY
Is it safe to clean these up when the job is in a healthy state?
Regards,
Alexis.
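As an illustration of the age-based heuristic being asked about, a small sketch for a filesystem-backed HA directory (helper name and layout are hypothetical, and this deliberately only *lists* candidates; whether deleting them is safe is exactly the open question in this thread):

```python
# Hypothetical helper: report entries under a Flink HA directory that
# have not been modified for a given number of days (e.g. old blob dirs
# or submittedJobGraph files). It does not delete anything.
import time
from pathlib import Path


def stale_ha_entries(ha_dir, max_age_days, now=None):
    """Return file paths under ha_dir whose mtime is older than max_age_days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds per day
    stale = []
    for path in Path(ha_dir).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            stale.append(path)
    return sorted(stale)
```

For object stores rather than a mounted filesystem, the same idea would have to be expressed against the store's own listing API or a lifecycle policy.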
On Mon, 5 Dec 2022 at 20:09, A
Hi Gyula,
that certainly helps, but to set up automatic cleanup (in my case, of
Azure Blob Storage), the ideal option would be a simple policy that
deletes blobs that haven't been updated in some time. However, that would
assume that anything that's actually relevant for the latest st
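The kind of policy described here might look roughly like the following Azure Storage lifecycle-management rule. This is only a sketch: the rule name, prefix, and age threshold are made up, and, as noted above, it presupposes that stale HA blobs are actually safe to delete, which is the unresolved question.

```json
{
  "rules": [
    {
      "name": "expire-stale-ha-blobs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "mycontainer/ha/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
```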
Hi!
There are some files that are not cleaned up over time in the HA dir that
need to be cleaned up by the user:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/concepts/overview/#jobresultstore-resource-leak
Hope this helps
Gyula
On Mon, 5 Dec 2022 at 11:56, Alexis
Hello,
I see the number of entries in the directory configured for HA increases
over time, particularly in the context of job upgrades in a Kubernetes
environment managed by the operator. Would it be safe to assume that any
files that haven't been updated in a while can be deleted? Assuming the
ch