Hi Team!
This is probably something for after the release, but I created a simple
prototype for the scaling subresource based on the TaskManager replica count.
You can take a look here:
https://github.com/apache/flink-kubernetes-operator/pull/227
After some consideration I decided against using parallelism
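For readers unfamiliar with the mechanism: the scale subresource is enabled in a CRD by mapping JSONPaths onto the replica fields. A minimal sketch of what a replica-count-based definition could look like (the field paths here are illustrative assumptions, not necessarily what the PR uses):

```yaml
# Illustrative sketch only -- see the PR above for the real definition.
subresources:
  scale:
    # Where "kubectl scale --replicas=N" writes the desired value:
    specReplicasPath: .spec.taskManager.replicas
    # Where the controller reports the observed value:
    statusReplicasPath: .status.taskManager.replicas
    # Optional: lets autoscalers find the matching pods.
    labelSelectorPath: .status.labelSelector
```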
Hi Jay!
I will take a closer look at this and see if we can use the parallelism
in the scale subresource.
If you could experiment with this and see if it works with the current CRD,
that would be helpful. Not sure if we need to change the status or
anything, as parallelism is only part of the spec
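If parallelism were exposed through the scale subresource instead, the mapping might look something like the sketch below (field paths are assumptions for illustration). As far as I know, the Kubernetes scale subresource requires both a specReplicasPath and a statusReplicasPath, which is why a matching status field may indeed be needed if parallelism only exists in the spec:

```yaml
# Hypothetical sketch -- not the actual flink-kubernetes-operator CRD.
subresources:
  scale:
    # Desired parallelism, written by "kubectl scale":
    specReplicasPath: .spec.job.parallelism
    # statusReplicasPath is required, so the status would likely
    # need a parallelism field added to mirror the spec:
    statusReplicasPath: .status.jobStatus.parallelism
```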
Hi Team,
Yes, we can change the parallelism of the Flink job. Going through the roadmap,
I understand that standalone mode has been made a second priority, for
good reasons. If possible, can I be of any help to accelerate this? We
have a tight release schedule, so would want to c
Currently, the flink-kubernetes-operator uses the Flink native K8s
integration[1], which means the Flink ResourceManager dynamically allocates
TaskManagers on demand, so users do not need to specify the TaskManager
replica count.
Just like Gyula said, one possible solution to make "kubectl scale"
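Since slot count, not pod count, ultimately determines how much work a job can run, the relationship between TaskManager replicas and job parallelism can be sketched as below. These helper functions are hypothetical (not part of the operator) and assume Flink's usual rule that available slots = replicas × taskmanager.numberOfTaskSlots:

```python
def parallelism_for_replicas(replicas: int, slots_per_taskmanager: int) -> int:
    """Total task slots made available by `replicas` TaskManagers."""
    if replicas < 0 or slots_per_taskmanager <= 0:
        raise ValueError("replicas must be >= 0 and slots must be > 0")
    return replicas * slots_per_taskmanager

def replicas_for_parallelism(parallelism: int, slots_per_taskmanager: int) -> int:
    """Minimum TaskManager count needed to host `parallelism` subtasks
    (ceiling division, since a partially used TaskManager still runs)."""
    return -(-parallelism // slots_per_taskmanager)
```

Either direction could back a scale subresource: scale on replicas and derive parallelism, or scale on parallelism and derive the replica count.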
Hi Jay!
Interesting question/proposal to add the scale-subresource.
I am not an expert in this area, but we will look into this a little,
give you some feedback, and see if we can incorporate something into the
upcoming release if it makes sense.
At a high level there is not a single replicas value
Hi Team,
I have been experimenting with the Flink Kubernetes operator. One of the
biggest gaps for us is that it does not currently support the scale
subresource, which is needed for reactive scaling. Without that, it becomes
commercially very difficult for products like ours, which have very varied loads for every