I may not have understood what you mean by the naming scheme. I think the
limitation "pods in a StatefulSet are always terminated in the reverse
order of their creation" comes from Kubernetes and has nothing to do with
the naming scheme.
Best,
Xintong
On Mon, Jun 3, 2024 at 1:13 PM Alexis
Hi Xintong,
After experimenting a bit, I came to roughly the same conclusion: cleanup
is the part that is more or less incompatible with Kubernetes managing the
pods. It might then be better to simply allow a more stable pod naming
scheme that doesn't depend on the attempt number and thus produces more
Amarjeet created FLINK-35507:
Summary: Support For Individual Job Level Resource Allocation in
Session Cluster in k8s
Key: FLINK-35507
URL: https://issues.apache.org/jira/browse/FLINK-35507
Project:
elon_X created FLINK-35506:
--
Summary: disable kafka auto-commit and rely on flink’s
checkpointing if both are enabled
Key: FLINK-35506
URL: https://issues.apache.org/jira/browse/FLINK-35506
Project: Flink
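For context, the interaction described in the ticket summary involves two
separate settings; a minimal sketch of the properties in question (the
option names are assumed from the standard Kafka consumer configuration and
Flink's KafkaSource options, not quoted from the ticket itself):

```properties
# Kafka consumer client setting: when true, the client commits offsets
# on its own timer, independently of Flink's checkpoints.
enable.auto.commit=false

# Flink KafkaSource option (defaults to true): commit the source's
# current offsets back to Kafka only when a checkpoint completes.
commit.offsets.on.checkpoint=true
```

With both auto-commit and checkpoint-based committing enabled, the two
mechanisms can write conflicting offsets, which is presumably why the
ticket proposes disabling auto-commit in that case.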
As far as I know, after the Flink-CDC table structure is changed, a
broadcast will be sent, causing other operators to suspend data
synchronization tasks and wait for the structure change to succeed before
continuing. This will cause data synchronization to terminate during the
structure change
Weijie Guo created FLINK-35505:
--
Summary: RegionFailoverITCase.testMultiRegionFailover has never
ever restored state
Key: FLINK-35505
URL: https://issues.apache.org/jira/browse/FLINK-35505
Project:
I think the reason we didn't choose StatefulSet when introducing the Native
K8s Deployment is that, IIRC, we want Flink's ResourceManager to have full
control of the individual pod lifecycles.
E.g.,
- Pods in a StatefulSet are always terminated in the reverse order of their
creation. This
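For reference, Kubernetes exposes this ordering behavior through the
StatefulSet's `spec.podManagementPolicy` field; a minimal illustrative
manifest (the names, labels, and image are hypothetical, not Flink's actual
generated manifests):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: flink-taskmanager
spec:
  serviceName: flink-taskmanager
  replicas: 3
  # OrderedReady (the default) scales down one pod at a time in reverse
  # ordinal order; Parallel relaxes this so pods can be launched and
  # terminated without ordering guarantees.
  podManagementPolicy: Parallel
  selector:
    matchLabels:
      app: flink-taskmanager
  template:
    metadata:
      labels:
        app: flink-taskmanager
    spec:
      containers:
        - name: taskmanager
          image: flink:1.19
          args: ["taskmanager"]
```

Note that even with `Parallel`, a StatefulSet controller still decides
which ordinals to remove on scale-down, so it does not give Flink's
ResourceManager full control over which individual pod is terminated.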
Thanks for the proposal. As I understand it, the idea is to make the status
of a Flink deployment more accessible to standard k8s tooling, which would
be a nice improvement and further strengthen the k8s native experience!
Regarding the FLIP document's overall structure: Before diving into the
Hi devs,
Some time ago I asked about the way Task Manager pods are handled by the
native Kubernetes driver [1]. I have now looked a bit through the source
code and I think it could be possible to deploy TMs with a StatefulSet,
which could allow tracking OOM kills as I mentioned in my original
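On the OOM-kill point: Kubernetes records the reason a container was last
terminated in the pod's status, so one way to observe OOM kills is to query
that field (the pod name below is illustrative; `OOMKilled` is the reason
Kubernetes reports when the kernel OOM killer terminates a container):

```shell
# Print the last termination reason of each container in the pod;
# an OOM-killed container reports "OOMKilled".
kubectl get pod flink-taskmanager-0 \
  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
```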