Hi Gyula,
that certainly helps, but to set up automatic cleanup (in my case, of Azure
Blob Storage), the ideal option would be a simple policy that deletes blobs
that haven't been updated in some time, but that would assume that anything
that's actually relevant for the latest st
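For reference, the kind of rule described above can be expressed as an Azure Storage lifecycle management policy. This is only a sketch of the policy shape; the rule name, the `flink-ha/` prefix, and the 30-day window are placeholders, not values from this thread, and the concern raised above still applies: such a rule would also delete old-but-still-referenced blobs.

```json
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-stale-ha-blobs",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "flink-ha/" ]
        },
        "actions": {
          "baseBlob": {
            "delete": { "daysAfterModificationGreaterThan": 30 }
          }
        }
      }
    }
  ]
}
```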
Hi!
There are some files in the HA dir that are not cleaned up automatically
over time and need to be removed by the user:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/docs/concepts/overview/#jobresultstore-resource-leak
Hope this helps
Gyula
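For context, the JobResultStore that the linked page refers to is controlled by configuration options along these lines; this is a sketch, and the storage path below is a placeholder, not a value from this thread:

```yaml
# flink-conf.yaml (sketch; the path is a placeholder)
job-result-store.storage-path: abfss://flink@<account>.dfs.core.windows.net/ha/job-result-store
# When false, committed (clean) job result files are kept and must be
# removed by the user; see the linked docs for the operator's behaviour.
job-result-store.delete-on-commit: true
```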
On Mon, 5 Dec 2022 at 11:56, Alexis
Hello,
I see the number of entries in the directory configured for HA increases
over time, particularly in the context of job upgrades in a Kubernetes
environment managed by the operator. Would it be safe to assume that any
files that haven't been updated in a while can be deleted? Assuming the
ch
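For readers following along, the directory in question is the one configured for Kubernetes HA, roughly as below; this is a sketch with placeholder paths and names, and the exact key spelling (`high-availability` vs. `high-availability.type`) varies across Flink versions:

```yaml
# flink-conf.yaml (sketch; paths and names are placeholders)
high-availability: kubernetes
high-availability.storageDir: abfss://flink@<account>.dfs.core.windows.net/ha
kubernetes.cluster-id: my-flink-cluster
```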
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 3.0.0. This release includes a new Amazon DynamoDB
connector.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Hello,
I have a question about a very particular scenario with this configuration:
- Flink HA enabled (Kubernetes).
- ExternalizedCheckpointCleanup set to RETAIN_ON_CANCELLATION.
- Savepoint restore mode left as default NO_CLAIM.
During an upgrade, a stop-job-with-savepoint is triggered, and then t
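The scenario described above corresponds roughly to the following settings; this is a sketch, and key names may differ slightly between Flink versions:

```yaml
# flink-conf.yaml (sketch)
# Keep externalized checkpoints when the job is cancelled:
execution.checkpointing.externalized-checkpoint-retention: RETAIN_ON_CANCELLATION
# NO_CLAIM is the default restore mode; Flink copies what it needs and
# does not take ownership of the restored snapshot's files.
execution.savepoint-restore-mode: NO_CLAIM
```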