Hi Arvid,

  Thanks for responding. I did check the configuration tab of the job
manager, and the setting cluster.evenly-spread-out-slots: true is
there. However, I'm still observing unevenness in the distribution of
source tasks. Perhaps this additional information will shed some light.

Version: 1.12.1
Deployment Mode: Application
Deployment Type: Standalone, Docker on Kubernetes using the Lyft
Flink operator https://github.com/lyft/flinkk8soperator

I did place the setting under the flinkConfig section:

apiVersion: flink.k8s.io/v1beta1
....
spec:
  flinkConfig:
    cluster.evenly-spread-out-slots: true
    high-availability: zookeeper
    ...
    state.backend: filesystem
    ...
  jobManagerConfig:
    envConfig:
        ....

Would you explain how the setting ends up evenly distributing active
Kafka consumers? Is it a result of just assigning tasks to TM1, TM2,
TM3 ... TM18 in order and starting again? In my case I have 36
partitions and 18 task managers, so after the second pass of
assignment I would end up with 2 subtasks from the consumer group on
each TM, and any subsequent passes would result in inactive consumers.
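To make sure I'm describing my mental model clearly, here is a toy
sketch of the round-robin assignment I have in mind (plain Python,
purely illustrative; the function name is mine and this is not Flink's
actual scheduler code):

```python
# Toy model of round-robin assignment: deal source subtasks out to
# task managers in order (TM1, TM2, ..., TM18), wrapping around.
# This models my assumption, not Flink's real scheduling logic.

def round_robin(num_subtasks, num_tms):
    """Return {tm_index: [subtask indices]} after dealing subtasks in order."""
    assignment = {tm: [] for tm in range(num_tms)}
    for subtask in range(num_subtasks):
        assignment[subtask % num_tms].append(subtask)
    return assignment

# 36 Kafka partitions -> 36 active source subtasks across 18 TMs.
assignment = round_robin(36, 18)

# After two passes, every TM holds exactly 2 active consumers.
assert all(len(subtasks) == 2 for subtasks in assignment.values())

# If source parallelism exceeded the partition count (say 72 subtasks
# for 36 partitions), the extra subtasks would own no partitions and
# sit as inactive consumers.
```

Under that model the distribution would be perfectly even, which is
why the skew I'm seeing is surprising.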


Thank you,
Aeden

On Thu, Mar 11, 2021 at 5:26 AM Arvid Heise <ar...@apache.org> wrote:
>
> Hi Aeden,
>
> the option that you mentioned should have actually caused your desired 
> behavior. Can you double-check that it's set for the job (you can look at the 
> config in the Flink UI to be 100% sure).
>
> Another option is to simply give all task managers 2 slots. In that way, the 
> scheduler can only evenly distribute.
>
> On Wed, Mar 10, 2021 at 7:21 PM Aeden Jameson <aeden.jame...@gmail.com> wrote:
>>
>>     I have a cluster with 18 task managers 4 task slots each running a
>> job whose source/sink(s) are declared with FlinkSQL using the Kafka
>> connector. The topic being read has 36 partitions. The problem I'm
>> observing is that the subtasks for the sources are not evenly
>> distributed. For example, 1 task manager will have 4 active source
>> subtasks and other TM's none. Is there a way to force  each task
>> manager to have 2 active source subtasks.  I tried using the setting
>> cluster.evenly-spread-out-slots: true , but that didn't have the
>> desired effect.
>>
>> --
>> Thank you,
>> Aeden
