Hi

Can anybody please help me with this?

Regards and Thanks
Deepak Raghav



On Tue, May 19, 2020 at 1:37 PM Deepak Raghav <deepakragha...@gmail.com>
wrote:

> Hi Team
>
> We have two worker nodes in a cluster and two connectors with 10 tasks
> each.
>
> Now, suppose two Kafka Connect processes, W1 (port 8080) and W2 (port
> 8078), are already started in distributed mode and we then register the
> connectors. The 10 tasks of each connector are divided equally between
> the two workers, i.e. the first task of connector A goes to worker W1
> and the second task of connector A goes to worker W2; similarly, the
> first task of connector B goes to W1 and the second task of connector B
> goes to W2.
>
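> For reference, the connectors are registered through the Connect REST
> API. A minimal sketch of such a registration call is below; the
> connector class, topic and other config values are placeholders, not
> our actual configuration:
>
>   # Sketch only: the connector class and topic below are placeholders.
>   curl -X POST http://10.0.0.4:8080/connectors \
>     -H "Content-Type: application/json" \
>     -d '{
>           "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>           "config": {
>             "connector.class": "<sink connector class>",
>             "tasks.max": "10",
>             "topics": "<topic>"
>           }
>         }'
>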
> e.g. the status of the two connectors after registration:
> #First Connector:
> {
>   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>   "connector": {
>     "state": "RUNNING",
>     "worker_id": "10.0.0.4:*8080*"
>   },
>   "tasks": [
>     {
>       "id": 0,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:*8078*"
>     },
>     {
>       "id": 1,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 2,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 3,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 4,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 5,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 6,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 7,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 8,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 9,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     }
>   ],
>   "type": "sink"
> }
>
>
> #Second Connector:
>
> {
>   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
>   "connector": {
>     "state": "RUNNING",
>     "worker_id": "10.0.0.4:8078"
>   },
>   "tasks": [
>     {
>       "id": 0,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 1,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 2,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 3,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 4,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 5,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 6,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 7,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 8,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 9,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     }
>   ],
>   "type": "sink"
> }
>
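> (The status output above is from the Connect REST status endpoint; the
> call is roughly the following, here with the first connector's name:)
>
>   curl http://10.0.0.4:8080/connectors/REGION_CODE_UPPER-Cdb_Dchchargeableevent/status
>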
> But I have seen some strange behavior: when I just shut down the W2
> worker node and start it again, the tasks are divided in a different
> way, i.e. all the tasks of connector A end up on the W1 node and all
> the tasks of connector B on the W2 node.
>
> Can you please have a look at this?
>
> #First Connector:
>
> {
>   "name": "REGION_CODE_UPPER-Cdb_Dchchargeableevent",
>   "connector": {
>     "state": "RUNNING",
>     "worker_id": "10.0.0.4:8080"
>   },
>   "tasks": [
>     {
>       "id": 0,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 1,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 2,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 3,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 4,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 5,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 6,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 7,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 8,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     },
>     {
>       "id": 9,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8080"
>     }
>   ],
>   "type": "sink"
> }
>
> #Second Connector:
>
> {
>   "name": "REGION_CODE_UPPER-Cdb_Neatransaction",
>   "connector": {
>     "state": "RUNNING",
>     "worker_id": "10.0.0.4:8078"
>   },
>   "tasks": [
>     {
>       "id": 0,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 1,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 2,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 3,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 4,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 5,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 6,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 7,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 8,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     },
>     {
>       "id": 9,
>       "state": "RUNNING",
>       "worker_id": "10.0.0.4:8078"
>     }
>   ],
>   "type": "sink"
> }
>
>
> Regards and Thanks
> Deepak Raghav
>
>
