jadami10 opened a new issue, #12180:
URL: https://github.com/apache/pinot/issues/12180

   I've seen this a few times, but caught it pretty clearly today. It seems segment rebalance reports completion after the last IdealState update happens but before the ExternalView has converged.
   
   In our logs, the last IdealState update happens at 14:30 UTC, and both the "rebalance status" API and the logs report that the rebalance is done.
   ```
   [2023-12-19 14:30:54.838965] INFO [TableRebalancer] 
[jersey-server-managed-async-executor-7:345] For rebalanceId: 
8e3f8c3f-ddad-476d-9d31-4082a44f55f9, successfully updated the IdealState for 
table: <table_name>
   
   [2023-12-19 14:30:54.847158] INFO [TableRebalancer] 
[jersey-server-managed-async-executor-7:345] For rebalanceId: 
8e3f8c3f-ddad-476d-9d31-4082a44f55f9, finished rebalancing table: <table_name> 
with minAvailableReplicas: 1, enableStrictReplicaGroup: false, bestEfforts: 
false in 1733682 ms.
   ```
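   
   For what it's worth, a more reliable "done" signal would be ExternalView convergence rather than the last IdealState write. A minimal sketch of such a check, assuming direct Helix access via `ZKHelixAdmin` (the ZK address, cluster, and table names below are placeholders):
   ```java
   import java.util.Map;
   import java.util.Objects;
   
   import org.apache.helix.HelixAdmin;
   import org.apache.helix.manager.zk.ZKHelixAdmin;
   import org.apache.helix.model.ExternalView;
   import org.apache.helix.model.IdealState;
   
   public class RebalanceConvergenceCheck {
     public static void main(String[] args) throws InterruptedException {
       // Placeholders: substitute the real ZK address, cluster, and table name
       HelixAdmin admin = new ZKHelixAdmin("localhost:2181");
       String cluster = "PinotCluster";
       String table = "myTable_REALTIME";
   
       while (true) {
         IdealState is = admin.getResourceIdealState(cluster, table);
         ExternalView ev = admin.getResourceExternalView(cluster, table);
         Map<String, Map<String, String>> want = is.getRecord().getMapFields();
         Map<String, Map<String, String>> have =
             ev == null ? null : ev.getRecord().getMapFields();
   
         // "Done" only when every segment's instance->state map in the
         // ExternalView matches the IdealState, not merely when the last
         // IdealState write has landed.
         if (Objects.equals(want, have)) {
           System.out.println("ExternalView has converged to IdealState");
           break;
         }
         Thread.sleep(10_000L);
       }
     }
   }
   ```
   (A real check would also need to special-case `ERROR` states and CONSUMING segments; this is only to illustrate the IdealState-vs-ExternalView gap that the logs above report as "finished".)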
   
   At this point, I could clearly see in the table UI that 1 of the 3 servers we had added to the replica group still didn't have any segments, and our metrics clearly showed disk usage still moving around across hosts.
   ![Disk usage moving across hosts](https://github.com/apache/pinot/assets/4760722/7d1d7d92-aed1-4b2e-98d6-6f784d23d31e)
   
   I was also surprised to see 1 of the hosts run out of disk. This was a table with 3 replica groups of 2 instances each, consuming from a 3-partition Kafka topic, meaning 1 instance in each replica group had 2 partitions and the other instance had 1. With `minimizeDataMovement`, I would have expected the rebalance to simply "move" 1 partition from the instance with 2 partitions over to the new instance (see the sketch after the config below). Instead, whatever the rebalance algorithm is doing causes both existing instances to first grab new segments and then delete others.
   ```
   "instanceAssignmentConfigMap": {
     "CONSUMING": {
       "tagPoolConfig": {
         "tag": "<tenant>_REALTIME",
         "poolBased": true,
         "numPools": 0
       },
       "replicaGroupPartitionConfig": {
         "replicaGroupBased": true,
         "numInstances": 0,
         "numReplicaGroups": 3,
         "numInstancesPerReplicaGroup": 0,
         "numPartitions": 0,
         "numInstancesPerPartition": 0,
         "minimizeDataMovement": true
       },
       "partitionSelector": "INSTANCE_REPLICA_GROUP_PARTITION_SELECTOR"
     }
   },
   ```
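   
   For reference, here is what I mean by "simply move 1 partition", as a hypothetical before/after sketch (instance and partition names are made up; this illustrates the minimal-movement expectation, not what the actual assignment code computes):
   ```java
   import java.util.Map;
   
   public class MinimizeDataMovementExpectation {
     public static void main(String[] args) {
       // Hypothetical names. One replica group, 3 Kafka partitions, 2 instances:
       // one instance holds 2 partitions, the other holds 1.
       Map<Integer, String> before = Map.of(
           0, "server-1",
           1, "server-2",
           2, "server-1");
   
       // After adding server-3 to the group, the minimal-movement result
       // would move exactly one partition off the doubled-up instance:
       Map<Integer, String> after = Map.of(
           0, "server-1",
           1, "server-2",
           2, "server-3");
   
       // Exactly 1 partition's worth of segments should be downloaded by
       // server-3 and dropped from server-1; nothing else should move.
       long moved = before.keySet().stream()
           .filter(p -> !before.get(p).equals(after.get(p)))
           .count();
       System.out.println("Partitions moved: " + moved);  // prints 1
     }
   }
   ```
   Instead, the disk metrics above suggest both existing instances were downloading new segments before dropping old ones, which is what blew out the disk.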
   
   cc @Jackie-Jiang. I know we added the "delete segments first" feature, but this isn't even an upsert table, where we know `minimizeDataMovement` has a bug.

