praveenc7 commented on code in PR #15109:
URL: https://github.com/apache/pinot/pull/15109#discussion_r2112729335


##########
pinot-controller/src/main/java/org/apache/pinot/controller/workload/scheme/PropagationUtils.java:
##########
@@ -0,0 +1,208 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.controller.workload.scheme;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumMap;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.helix.model.InstanceConfig;
+import org.apache.pinot.common.utils.config.TagNameUtils;
+import org.apache.pinot.controller.helix.core.PinotHelixResourceManager;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.config.table.TenantConfig;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.PropagationScheme;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+
+
+/**
+ * This class provides utility methods for workload propagation.
+ */
+public class PropagationUtils {
+
+  private PropagationUtils() {
+  }
+
+  /**
+   * Get the mapping tableNameWithType → {NON_LEAF_NODE→brokerTag, LEAF_NODE→(serverTag + overrides)}
+   * 1. Get all table configs from the PinotHelixResourceManager
+   * 2. For each table config, extract the tenant config
+   * 3. For each tenant config, get the broker and server tags
+   * 4. Populate the helix tags for NON_LEAF_NODE and LEAF_NODE separately
+   */
+  public static Map<String, Map<NodeConfig.Type, Set<String>>> getTableToHelixTags(
+          PinotHelixResourceManager pinotResourceManager) {
+    Map<String, Map<NodeConfig.Type, Set<String>>> tableToTags = new HashMap<>();
+    for (TableConfig tableConfig : pinotResourceManager.getAllTableConfigs()) {
+      TenantConfig tenantConfig = tableConfig.getTenantConfig();
+      TableType tableType = tableConfig.getTableType();
+
+      // Gather all relevant tags for this tenant
+      Set<String> tenantTags = new HashSet<>();
+      collectHelixTagsForTable(tenantTags, tenantConfig, tableType);

Review Comment:
   Added logic to support that.
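For reference, the mapping described in the javadoc above can be sketched as follows. This is a minimal self-contained illustration of the shape `getTableToHelixTags` returns, not the actual implementation: `NodeType` stands in for `NodeConfig.Type`, and the `_BROKER`/`_OFFLINE` suffixes follow Pinot's usual Helix tag conventions for a hypothetical tenant.

```java
import java.util.EnumMap;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class TagMappingSketch {
  // Stand-in for NodeConfig.Type from the PR
  enum NodeType { NON_LEAF_NODE, LEAF_NODE }

  // Builds the tableNameWithType -> {nodeType -> helixTags} shape that
  // getTableToHelixTags returns, for one hypothetical table and tenant pair.
  static Map<String, Map<NodeType, Set<String>>> buildMapping(
      String tableNameWithType, String brokerTenant, String serverTenant) {
    Map<NodeType, Set<String>> tagsByNodeType = new EnumMap<>(NodeType.class);
    // Brokers (non-leaf) resolve to the broker tenant tag, servers (leaf) to the server tenant tag
    tagsByNodeType.put(NodeType.NON_LEAF_NODE, Set.of(brokerTenant + "_BROKER"));
    tagsByNodeType.put(NodeType.LEAF_NODE, Set.of(serverTenant + "_OFFLINE"));
    Map<String, Map<NodeType, Set<String>>> tableToTags = new HashMap<>();
    tableToTags.put(tableNameWithType, tagsByNodeType);
    return tableToTags;
  }
}
```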



##########
pinot-common/src/main/java/org/apache/pinot/common/messages/QueryWorkloadRefreshMessage.java:
##########
@@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.messages;
+
+import java.util.UUID;
+import org.apache.helix.model.Message;
+import org.apache.pinot.common.utils.config.QueryWorkloadConfigUtils;
+import org.apache.pinot.spi.config.workload.InstanceCost;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+
+
+/**
+ * Message to refresh the query workload on the instances.
+ * This message include the host level cost for each instance.
+ */
+public class QueryWorkloadRefreshMessage extends Message {
+  public static final String REFRESH_QUERY_WORKLOAD_MSG_SUB_TYPE = "REFRESH_QUERY_WORKLOAD";
+  public static final String INSTANCE_COST = "instanceCost";
+
+  /**
+   * Constructor for the sender.
+   */
+  public QueryWorkloadRefreshMessage(String queryWorkloadName, InstanceCost instanceCost) {
+    super(Message.MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
+    setMsgSubType(REFRESH_QUERY_WORKLOAD_MSG_SUB_TYPE);
+    // Give it infinite time to process the message, as long as session is alive
+    setExecutionTimeout(-1);
+    QueryWorkloadConfigUtils.updateZNRecordWithInstanceCost(getRecord(), queryWorkloadName, instanceCost);
+  }
+
+  /**
+   * Constructor for the receiver.
+   */
+  public QueryWorkloadRefreshMessage(Message message) {
+    super(message.getRecord());
+    if (!message.getMsgSubType().equals(REFRESH_QUERY_WORKLOAD_MSG_SUB_TYPE)) {
+      throw new IllegalArgumentException("Invalid message subtype:" + message.getMsgSubType());

Review Comment:
   We already handle the case where a server receives a message subtype it doesn't know about: it logs it with a warning.
   
   https://github.com/apache/pinot/blob/cc3eb2e9c296761aa3db7e7e6a9fa7d66bd213bc/pinot-server/src/main/java/org/apache/pinot/server/starter/helix/SegmentMessageHandlerFactory.java#L78
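The behavior referenced above (warn-and-skip on unknown subtypes) can be sketched as a subtype dispatch. This is a self-contained toy illustration, not the actual `SegmentMessageHandlerFactory` code; handler registration is simplified to a plain map, and the returned strings stand in for logging.

```java
import java.util.Map;
import java.util.function.Consumer;

public class MessageDispatchSketch {
  static final String REFRESH_QUERY_WORKLOAD = "REFRESH_QUERY_WORKLOAD";

  // Dispatch a message by subtype. Unknown subtypes are logged and skipped
  // instead of throwing, so mixed-version clusters keep working during upgrades.
  static String dispatch(String msgSubType, Map<String, Consumer<String>> handlers) {
    Consumer<String> handler = handlers.get(msgSubType);
    if (handler == null) {
      // Unknown subtype: warn and ignore rather than fail the message
      return "WARN: unsupported message subtype: " + msgSubType;
    }
    handler.accept(msgSubType);
    return "HANDLED: " + msgSubType;
  }
}
```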



##########
pinot-controller/src/main/java/org/apache/pinot/controller/workload/scheme/PropagationUtils.java:
##########
@@ -0,0 +1,208 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.controller.workload.scheme;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.EnumMap;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.helix.model.InstanceConfig;
+import org.apache.pinot.common.utils.config.TagNameUtils;
+import org.apache.pinot.controller.helix.core.PinotHelixResourceManager;
+import org.apache.pinot.spi.config.table.TableConfig;
+import org.apache.pinot.spi.config.table.TableType;
+import org.apache.pinot.spi.config.table.TenantConfig;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.PropagationScheme;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.builder.TableNameBuilder;
+
+
+/**
+ * This class provides utility methods for workload propagation.
+ */
+public class PropagationUtils {
+
+  private PropagationUtils() {
+  }
+
+  /**
+   * Get the mapping tableNameWithType → {NON_LEAF_NODE→brokerTag, LEAF_NODE→(serverTag + overrides)}
+   * 1. Get all table configs from the PinotHelixResourceManager
+   * 2. For each table config, extract the tenant config
+   * 3. For each tenant config, get the broker and server tags
+   * 4. Populate the helix tags for NON_LEAF_NODE and LEAF_NODE separately
+   */
+  public static Map<String, Map<NodeConfig.Type, Set<String>>> getTableToHelixTags(
+          PinotHelixResourceManager pinotResourceManager) {
+    Map<String, Map<NodeConfig.Type, Set<String>>> tableToTags = new HashMap<>();
+    for (TableConfig tableConfig : pinotResourceManager.getAllTableConfigs()) {
+      TenantConfig tenantConfig = tableConfig.getTenantConfig();
+      TableType tableType = tableConfig.getTableType();
+
+      // Gather all relevant tags for this tenant
+      Set<String> tenantTags = new HashSet<>();
+      collectHelixTagsForTable(tenantTags, tenantConfig, tableType);
+
+      // Populate the helix tags for NON_LEAF_NODE and LEAF_NODE separately to provide flexibility
+      // in workload propagation to either leaf nodes or non-leaf nodes
+      String brokerTag = TagNameUtils.getBrokerTagForTenant(tenantConfig.getBroker());

Review Comment:
   Added a high-level TODO to handle MSE.



##########
pinot-common/src/main/java/org/apache/pinot/common/utils/config/QueryWorkloadConfigUtils.java:
##########
@@ -0,0 +1,237 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.utils.config;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.base.Preconditions;
+import java.net.URI;
+import java.util.Collections;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.concurrent.atomic.AtomicReference;
+import org.apache.hc.core5.http.ClassicHttpRequest;
+import org.apache.hc.core5.http.HttpHeaders;
+import org.apache.hc.core5.http.HttpStatus;
+import org.apache.hc.core5.http.HttpVersion;
+import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
+import org.apache.helix.zookeeper.datamodel.ZNRecord;
+import org.apache.pinot.common.messages.QueryWorkloadRefreshMessage;
+import org.apache.pinot.common.utils.SimpleHttpResponse;
+import org.apache.pinot.common.utils.http.HttpClient;
+import org.apache.pinot.common.utils.http.HttpClientConfig;
+import org.apache.pinot.common.utils.tls.TlsUtils;
+import org.apache.pinot.spi.config.workload.EnforcementProfile;
+import org.apache.pinot.spi.config.workload.InstanceCost;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.PropagationScheme;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.JsonUtils;
+import org.apache.pinot.spi.utils.retry.RetryPolicies;
+import org.apache.pinot.spi.utils.retry.RetryPolicy;
+import org.slf4j.Logger;
+
+
+public class QueryWorkloadConfigUtils {
+  private QueryWorkloadConfigUtils() {
+  }
+
+  private static final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(QueryWorkloadConfigUtils.class);
+  private static final HttpClient HTTP_CLIENT = new HttpClient(HttpClientConfig.DEFAULT_HTTP_CLIENT_CONFIG,
+          TlsUtils.getSslContext());
+
+  /**
+   * Converts a ZNRecord into a QueryWorkloadConfig object by extracting mapFields.
+   *
+   * @param znRecord The ZNRecord containing workload config data.
+   * @return A QueryWorkloadConfig object.
+   */
+  public static QueryWorkloadConfig fromZNRecord(ZNRecord znRecord) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    String queryWorkloadName = znRecord.getSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME);
+    Preconditions.checkNotNull(queryWorkloadName, "queryWorkloadName cannot be null");
+    String nodeConfigsJson = znRecord.getSimpleField(QueryWorkloadConfig.NODE_CONFIGS);
+    Preconditions.checkNotNull(nodeConfigsJson, "nodeConfigs cannot be null");
+    try {
+      List<NodeConfig> nodeConfigs = JsonUtils.stringToObject(nodeConfigsJson, new TypeReference<>() { });
+      return new QueryWorkloadConfig(queryWorkloadName, nodeConfigs);
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert ZNRecord : %s to QueryWorkloadConfig", znRecord);

Review Comment:
   It does log the record name, as part of logging the full ZNRecord.



##########
pinot-spi/src/main/java/org/apache/pinot/spi/config/workload/NodeConfig.java:
##########
@@ -0,0 +1,96 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.spi.config.workload;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonPropertyDescription;
+import com.fasterxml.jackson.annotation.JsonValue;
+import javax.annotation.Nullable;
+import org.apache.pinot.spi.config.BaseJsonConfig;
+
+
+public class NodeConfig extends BaseJsonConfig {
+
+  public enum Type {
+    LEAF_NODE("leafNode"),
+    NON_LEAF_NODE("nonLeafNode");

Review Comment:
   Added a high-level TODO to handle MSE.



##########
pinot-spi/src/main/java/org/apache/pinot/spi/config/workload/QueryWorkloadConfig.java:
##########
@@ -0,0 +1,57 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.spi.config.workload;
+
+import com.fasterxml.jackson.annotation.JsonCreator;
+import com.fasterxml.jackson.annotation.JsonProperty;
+import com.fasterxml.jackson.annotation.JsonPropertyDescription;
+import java.util.List;
+import org.apache.pinot.spi.config.BaseJsonConfig;
+
+/**
+ * Class to represent the query workload configuration.
+ * A QueryWorkload is applied to a collection of queries at the Helix Cluster Level, queries specify the workload they
+ * belong to by specifying the query workload name in the query options
+ */
+public class QueryWorkloadConfig extends BaseJsonConfig {
+
+  public static final String QUERY_WORKLOAD_NAME = "queryWorkloadName";

Review Comment:
   Added comments



##########
pinot-controller/src/main/java/org/apache/pinot/controller/helix/core/PinotHelixResourceManager.java:
##########
@@ -4764,6 +4777,48 @@ public Map<String, Integer> minimumInstancesRequiredForTags() {
     return tagMinInstanceMap;
   }
 
+  @Nullable
+  public List<QueryWorkloadConfig> getAllQueryWorkloadConfigs() {
+    return ZKMetadataProvider.getAllQueryWorkloadConfigs(_propertyStore);
+  }
+
+  @Nullable
+  public QueryWorkloadConfig getQueryWorkloadConfig(String queryWorkloadName) {
+    return ZKMetadataProvider.getQueryWorkloadConfig(_propertyStore, queryWorkloadName);
+  }
+
+  public void setQueryWorkloadConfig(QueryWorkloadConfig queryWorkloadConfig) {
+    if (!ZKMetadataProvider.setQueryWorkloadConfig(_propertyStore, queryWorkloadConfig)) {
+      throw new RuntimeException("Failed to set workload config for queryWorkloadName: "
+          + queryWorkloadConfig.getQueryWorkloadName());
+    }
+    _queryWorkloadManager.propagateWorkload(queryWorkloadConfig);
+  }
+
+  public void sendQueryWorkloadRefreshMessage(Map<String, QueryWorkloadRefreshMessage> instanceToRefreshMessageMap) {
+    instanceToRefreshMessageMap.forEach((instance, message) -> {
+      Criteria criteria = new Criteria();
+      criteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
+      criteria.setInstanceName(instance);
+      criteria.setSessionSpecific(true);
+
+      int numMessagesSent = _helixZkManager.getMessagingService().send(criteria, message, null, -1);
+      if (numMessagesSent > 0) {
+        LOGGER.info("Sent {} query workload config refresh messages to instance: {}", numMessagesSent, instance);
+      } else {
+        LOGGER.warn("No query workload config refresh message sent to instance: {}", instance);
+      }
+    });
+  }
+
+  public void deleteQueryWorkloadConfig(String workload) {
+    ZKMetadataProvider.deleteQueryWorkloadConfig(_propertyStore, workload);

Review Comment:
   I was thinking we could initially rely on restarts to get rid of the local state on the host. But that could be expensive, so let me send a refresh message instead.
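A refresh-on-delete flow might look roughly like this sketch. It is a self-contained stand-in, not the PR's actual code: the per-instance map mimics host-side workload state, and the message strings are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WorkloadDeleteSketch {
  // instance -> (workload -> cost); a stand-in for per-host local workload state
  final Map<String, Map<String, Long>> _localStateByInstance = new HashMap<>();
  // records the (hypothetical) refresh messages the controller would send
  final List<String> _messagesSent = new ArrayList<>();

  // On delete, notify every instance that held state for the workload so it can
  // drop it immediately, instead of relying on a restart to clear local state.
  void deleteWorkload(String workload) {
    for (Map.Entry<String, Map<String, Long>> entry : _localStateByInstance.entrySet()) {
      if (entry.getValue().remove(workload) != null) {
        _messagesSent.add("DELETE_QUERY_WORKLOAD:" + workload + "->" + entry.getKey());
      }
    }
  }
}
```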



##########
pinot-common/src/main/java/org/apache/pinot/common/utils/config/QueryWorkloadConfigUtils.java:
##########
@@ -0,0 +1,237 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.utils.config;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.base.Preconditions;
+import java.net.URI;
+import java.util.Collections;
+import java.util.List;
+import java.util.ArrayList;
+import java.util.concurrent.atomic.AtomicReference;
+import org.apache.hc.core5.http.ClassicHttpRequest;
+import org.apache.hc.core5.http.HttpHeaders;
+import org.apache.hc.core5.http.HttpStatus;
+import org.apache.hc.core5.http.HttpVersion;
+import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
+import org.apache.helix.zookeeper.datamodel.ZNRecord;
+import org.apache.pinot.common.messages.QueryWorkloadRefreshMessage;
+import org.apache.pinot.common.utils.SimpleHttpResponse;
+import org.apache.pinot.common.utils.http.HttpClient;
+import org.apache.pinot.common.utils.http.HttpClientConfig;
+import org.apache.pinot.common.utils.tls.TlsUtils;
+import org.apache.pinot.spi.config.workload.EnforcementProfile;
+import org.apache.pinot.spi.config.workload.InstanceCost;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.PropagationScheme;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.JsonUtils;
+import org.apache.pinot.spi.utils.retry.RetryPolicies;
+import org.apache.pinot.spi.utils.retry.RetryPolicy;
+import org.slf4j.Logger;
+
+
+public class QueryWorkloadConfigUtils {
+  private QueryWorkloadConfigUtils() {
+  }
+
+  private static final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(QueryWorkloadConfigUtils.class);
+  private static final HttpClient HTTP_CLIENT = new HttpClient(HttpClientConfig.DEFAULT_HTTP_CLIENT_CONFIG,
+          TlsUtils.getSslContext());
+
+  /**
+   * Converts a ZNRecord into a QueryWorkloadConfig object by extracting mapFields.
+   *
+   * @param znRecord The ZNRecord containing workload config data.
+   * @return A QueryWorkloadConfig object.
+   */
+  public static QueryWorkloadConfig fromZNRecord(ZNRecord znRecord) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    String queryWorkloadName = znRecord.getSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME);
+    Preconditions.checkNotNull(queryWorkloadName, "queryWorkloadName cannot be null");
+    String nodeConfigsJson = znRecord.getSimpleField(QueryWorkloadConfig.NODE_CONFIGS);
+    Preconditions.checkNotNull(nodeConfigsJson, "nodeConfigs cannot be null");
+    try {
+      List<NodeConfig> nodeConfigs = JsonUtils.stringToObject(nodeConfigsJson, new TypeReference<>() { });
+      return new QueryWorkloadConfig(queryWorkloadName, nodeConfigs);
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert ZNRecord : %s to QueryWorkloadConfig", znRecord);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  /**
+   * Updates a ZNRecord with the fields from a WorkloadConfig object.
+   *
+   * @param queryWorkloadConfig The QueryWorkloadConfig object to convert.
+   * @param znRecord The ZNRecord to update.
+   */
+  public static void updateZNRecordWithWorkloadConfig(ZNRecord znRecord, QueryWorkloadConfig queryWorkloadConfig) {
+    znRecord.setSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME, queryWorkloadConfig.getQueryWorkloadName());
+    try {
+      znRecord.setSimpleField(QueryWorkloadConfig.NODE_CONFIGS,
+          JsonUtils.objectToString(queryWorkloadConfig.getNodeConfigs()));
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert QueryWorkloadConfig : %s to ZNRecord",
+          queryWorkloadConfig);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  public static void updateZNRecordWithInstanceCost(ZNRecord znRecord, String queryWorkloadName,
+      InstanceCost instanceCost) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    Preconditions.checkNotNull(instanceCost, "InstanceCost cannot be null");
+    try {
+      znRecord.setSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME, queryWorkloadName);
+      znRecord.setSimpleField(QueryWorkloadRefreshMessage.INSTANCE_COST, JsonUtils.objectToString(instanceCost));
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert InstanceCost : %s to ZNRecord",
+          instanceCost);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  public static InstanceCost getInstanceCostFromZNRecord(ZNRecord znRecord) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    String instanceCostJson = znRecord.getSimpleField(QueryWorkloadRefreshMessage.INSTANCE_COST);
+    Preconditions.checkNotNull(instanceCostJson, "InstanceCost cannot be null");
+    try {
+      return JsonUtils.stringToObject(instanceCostJson, InstanceCost.class);
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert ZNRecord : %s to InstanceCost", znRecord);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  public static List<QueryWorkloadConfig> getQueryWorkloadConfigsFromController(String controllerUrl, String instanceId,
+                                                                                NodeConfig.Type nodeType) {
+    try {
+      if (controllerUrl == null || controllerUrl.isEmpty()) {
+        LOGGER.warn("Controller URL is empty, cannot fetch query workload configs for instance: {}", instanceId);
+        return Collections.emptyList();
+      }
+      URI queryWorkloadURI = new URI(controllerUrl + "/queryWorkloadConfigs/instance/" + instanceId + "?nodeType="
+              + nodeType);
+      ClassicHttpRequest request = ClassicRequestBuilder.get(queryWorkloadURI)
+              .setVersion(HttpVersion.HTTP_1_1)
+              .setHeader(HttpHeaders.CONTENT_TYPE, HttpClient.JSON_CONTENT_TYPE)
+              .build();
+      AtomicReference<List<QueryWorkloadConfig>> workloadConfigs = new AtomicReference<>(null);
+      RetryPolicy retryPolicy = RetryPolicies.exponentialBackoffRetryPolicy(3, 3000L, 1.2f);
+      retryPolicy.attempt(() -> {
+        try {
+          SimpleHttpResponse response = HttpClient.wrapAndThrowHttpException(
+                  HTTP_CLIENT.sendRequest(request, HttpClient.DEFAULT_SOCKET_TIMEOUT_MS)
+          );
+          if (response.getStatusCode() == HttpStatus.SC_OK) {
+            workloadConfigs.set(QueryWorkloadConfigUtils.getQueryWorkloadConfigs(response.getResponse()));
+            LOGGER.info("Successfully fetched query workload configs from controller: {}, Instance: {}",
+                    controllerUrl, instanceId);
+            return true;
+          } else if (response.getStatusCode() == HttpStatus.SC_NOT_FOUND) {
+            LOGGER.info("No query workload configs found for controller: {}, Instance: {}", controllerUrl, instanceId);
+            workloadConfigs.set(Collections.emptyList());
+            return true;
+          } else {
+            LOGGER.warn("Failed to fetch query workload configs from controller: {}, Instance: {}, Status: {}",
+                    controllerUrl, instanceId, response.getStatusCode());
+            return false;
+          }
+        } catch (Exception e) {
+          LOGGER.warn("Failed to fetch query workload configs from controller: {}, Instance: {}",
+                  controllerUrl, instanceId, e);
+          return false;
+        }
+      });
+      return workloadConfigs.get();
+    } catch (Exception e) {
+      LOGGER.warn("Failed to fetch query workload configs from controller: {}, Instance: {}",
+              controllerUrl, instanceId, e);
+      return Collections.emptyList();
+    }
+  }
+
+  public static List<QueryWorkloadConfig> getQueryWorkloadConfigs(String queryWorkloadConfigsJson) {
+    Preconditions.checkNotNull(queryWorkloadConfigsJson, "Query workload configs JSON cannot be null");
+    try {
+      return JsonUtils.stringToObject(queryWorkloadConfigsJson, new TypeReference<>() { });
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert query workload configs: %s to list of QueryWorkloadConfig",
+          queryWorkloadConfigsJson);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  /**
+   * Validates the given QueryWorkloadConfig and returns a list of validation error messages.
+   *
+   * @param config the QueryWorkloadConfig to validate
+   * @return a list of validation errors; empty if config is valid
+   */
+  public static List<String> validateQueryWorkloadConfig(QueryWorkloadConfig config) {

Review Comment:
   I thought about this. Since `QueryWorkloadConfig` is only ever consumed by the controller, and never directly by leaf or non-leaf nodes, we might not need versioning.
   
   We can safely add any future properties as optional fields; existing clients will simply ignore unknown entries. We use the `InstanceCost` field to expose instance-level cost to the leaf/non-leaf nodes; if down the road we need to expose additional fields, we can repurpose this class.
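The forward-compatibility argument above can be illustrated with a small sketch: an older reader keeps only the fields it knows (here the PR's actual `queryWorkloadName`/`nodeConfigs` simple-field names) and drops anything a newer controller may add. This is a toy stand-in for Jackson's unknown-property handling, not the actual deserialization path.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ForwardCompatSketch {
  // Field names this client version knows about (matching the PR's simple fields)
  static final Set<String> KNOWN_FIELDS = Set.of("queryWorkloadName", "nodeConfigs");

  // Keep only known fields and silently drop the rest, so newer controllers can
  // add optional fields without breaking older readers.
  static Map<String, String> deserializeKnown(Map<String, String> rawFields) {
    Map<String, String> known = new HashMap<>();
    for (Map.Entry<String, String> e : rawFields.entrySet()) {
      if (KNOWN_FIELDS.contains(e.getKey())) {
        known.put(e.getKey(), e.getValue());
      }
    }
    return known;
  }
}
```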



##########
pinot-controller/src/main/java/org/apache/pinot/controller/api/resources/PinotQueryWorkloadConfigRestletResource.java:
##########
@@ -0,0 +1,296 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.controller.api.resources;
+
+import io.swagger.annotations.Api;
+import io.swagger.annotations.ApiKeyAuthDefinition;
+import io.swagger.annotations.ApiOperation;
+import io.swagger.annotations.Authorization;
+import io.swagger.annotations.SecurityDefinition;
+import io.swagger.annotations.SwaggerDefinition;
+import java.util.List;
+import java.util.Map;
+import javax.inject.Inject;
+import javax.ws.rs.DELETE;
+import javax.ws.rs.GET;
+import javax.ws.rs.POST;
+import javax.ws.rs.Path;
+import javax.ws.rs.PathParam;
+import javax.ws.rs.Produces;
+import javax.ws.rs.QueryParam;
+import javax.ws.rs.core.Context;
+import javax.ws.rs.core.HttpHeaders;
+import javax.ws.rs.core.MediaType;
+import javax.ws.rs.core.Response;
+
+import org.apache.pinot.common.utils.config.QueryWorkloadConfigUtils;
+import org.apache.pinot.controller.api.access.AccessType;
+import org.apache.pinot.controller.api.access.Authenticate;
+import org.apache.pinot.controller.api.exception.ControllerApplicationException;
+import org.apache.pinot.controller.helix.core.PinotHelixResourceManager;
+import org.apache.pinot.core.auth.Actions;
+import org.apache.pinot.core.auth.Authorize;
+import org.apache.pinot.core.auth.TargetType;
+import org.apache.pinot.spi.config.workload.InstanceCost;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.CommonConstants;
+import org.apache.pinot.spi.utils.JsonUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.pinot.spi.utils.CommonConstants.SWAGGER_AUTHORIZATION_KEY;
+
+@Api(tags = Constants.QUERY_WORKLOAD_TAG, authorizations = {@Authorization(value = SWAGGER_AUTHORIZATION_KEY)})
+@SwaggerDefinition(securityDefinition = @SecurityDefinition(apiKeyAuthDefinitions = {
+    @ApiKeyAuthDefinition(name = HttpHeaders.AUTHORIZATION, in = ApiKeyAuthDefinition.ApiKeyLocation.HEADER, key =
+        SWAGGER_AUTHORIZATION_KEY, description =
+        "The format of the key is  ```\"Basic <token>\" or \"Bearer "
+            + "<token>\"```"), @ApiKeyAuthDefinition(name = CommonConstants.QUERY_WORKLOAD, in =
+    ApiKeyAuthDefinition.ApiKeyLocation.HEADER, key = CommonConstants.QUERY_WORKLOAD, description =
+    "Workload context passed through http header. If no context is provided 'default' workload "
+        + "context will be considered.")
+}))
+@Path("/")
+public class PinotQueryWorkloadConfigRestletResource {
+  public static final Logger LOGGER = LoggerFactory.getLogger(PinotQueryWorkloadConfigRestletResource.class);
+
+  @Inject
+  PinotHelixResourceManager _pinotHelixResourceManager;
+
+  @GET
+  @Produces(MediaType.APPLICATION_JSON)
+  @Path("/queryWorkloadConfigs")
+  @Authorize(targetType = TargetType.CLUSTER, action = Actions.Cluster.GET_QUERY_WORKLOAD_CONFIG)
+  @Authenticate(AccessType.READ)
+  @ApiOperation(value = "Get all query workload configs", notes = "Get all workload configs")
+  public String getQueryWorkloadConfigs(@Context HttpHeaders httpHeaders) {
+    try {
+      LOGGER.info("Received request to get all queryWorkloadConfigs");
+      List<QueryWorkloadConfig> queryWorkloadConfigs = _pinotHelixResourceManager.getAllQueryWorkloadConfigs();
+      if (queryWorkloadConfigs == null || queryWorkloadConfigs.isEmpty()) {
+        throw new ControllerApplicationException(LOGGER, "No workload configs found", Response.Status.NOT_FOUND, null);

Review Comment:
   You are right, it makes more sense to return an empty array (`[]`) rather than a 404 when there simply aren't any items. A 404 might imply that the endpoint itself doesn't exist.
   
   By contrast, for lookups of a specific resource (e.g. `GET /queryWorkloadConfigs/{name}`), returning a 404 should be fine. That way clients get a clear signal that they asked for something that isn't there.
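   A minimal, self-contained sketch of the empty-result shape (the helper below is hypothetical; the real resource would serialize the list via `JsonUtils.objectToString` and let JAX-RS return it with HTTP 200):

```java
import java.util.List;

public class WorkloadConfigResponse {
  // Hypothetical helper illustrating the design choice discussed above:
  // an empty (or absent) collection serializes to "[]" under a 200 response,
  // rather than surfacing a 404 to the client.
  static String toJsonArray(List<String> names) {
    if (names == null || names.isEmpty()) {
      return "[]"; // an empty result set is still a successful lookup
    }
    StringBuilder sb = new StringBuilder("[");
    for (int i = 0; i < names.size(); i++) {
      if (i > 0) {
        sb.append(',');
      }
      sb.append('"').append(names.get(i)).append('"');
    }
    return sb.append(']').toString();
  }
}
```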



##########
pinot-common/src/main/java/org/apache/pinot/common/utils/config/QueryWorkloadConfigUtils.java:
##########
@@ -0,0 +1,237 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.pinot.common.utils.config;
+
+import com.fasterxml.jackson.core.type.TypeReference;
+import com.google.common.base.Preconditions;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.concurrent.atomic.AtomicReference;
+import org.apache.hc.core5.http.ClassicHttpRequest;
+import org.apache.hc.core5.http.HttpHeaders;
+import org.apache.hc.core5.http.HttpStatus;
+import org.apache.hc.core5.http.HttpVersion;
+import org.apache.hc.core5.http.io.support.ClassicRequestBuilder;
+import org.apache.helix.zookeeper.datamodel.ZNRecord;
+import org.apache.pinot.common.messages.QueryWorkloadRefreshMessage;
+import org.apache.pinot.common.utils.SimpleHttpResponse;
+import org.apache.pinot.common.utils.http.HttpClient;
+import org.apache.pinot.common.utils.http.HttpClientConfig;
+import org.apache.pinot.common.utils.tls.TlsUtils;
+import org.apache.pinot.spi.config.workload.EnforcementProfile;
+import org.apache.pinot.spi.config.workload.InstanceCost;
+import org.apache.pinot.spi.config.workload.NodeConfig;
+import org.apache.pinot.spi.config.workload.PropagationScheme;
+import org.apache.pinot.spi.config.workload.QueryWorkloadConfig;
+import org.apache.pinot.spi.utils.JsonUtils;
+import org.apache.pinot.spi.utils.retry.RetryPolicies;
+import org.apache.pinot.spi.utils.retry.RetryPolicy;
+import org.slf4j.Logger;
+
+
+public class QueryWorkloadConfigUtils {
+  private QueryWorkloadConfigUtils() {
+  }
+
+  private static final Logger LOGGER = org.slf4j.LoggerFactory.getLogger(QueryWorkloadConfigUtils.class);
+  private static final HttpClient HTTP_CLIENT = new HttpClient(HttpClientConfig.DEFAULT_HTTP_CLIENT_CONFIG,
+          TlsUtils.getSslContext());
+
+  /**
+   * Converts a ZNRecord into a QueryWorkloadConfig object by extracting mapFields.
+   *
+   * @param znRecord The ZNRecord containing workload config data.
+   * @return A QueryWorkloadConfig object.
+   */
+  public static QueryWorkloadConfig fromZNRecord(ZNRecord znRecord) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    String queryWorkloadName = znRecord.getSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME);
+    Preconditions.checkNotNull(queryWorkloadName, "queryWorkloadName cannot be null");
+    String nodeConfigsJson = znRecord.getSimpleField(QueryWorkloadConfig.NODE_CONFIGS);
+    Preconditions.checkNotNull(nodeConfigsJson, "nodeConfigs cannot be null");
+    try {
+      List<NodeConfig> nodeConfigs = JsonUtils.stringToObject(nodeConfigsJson, new TypeReference<>() { });
+      return new QueryWorkloadConfig(queryWorkloadName, nodeConfigs);
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert ZNRecord : %s to 
QueryWorkloadConfig", znRecord);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  /**
+   * Updates a ZNRecord with the fields from a WorkloadConfig object.
+   *
+   * @param queryWorkloadConfig The QueryWorkloadConfig object to convert.
+   * @param znRecord The ZNRecord to update.
+   */
+  public static void updateZNRecordWithWorkloadConfig(ZNRecord znRecord, QueryWorkloadConfig queryWorkloadConfig) {
+    znRecord.setSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME, queryWorkloadConfig.getQueryWorkloadName());
+    try {
+      znRecord.setSimpleField(QueryWorkloadConfig.NODE_CONFIGS,
+          JsonUtils.objectToString(queryWorkloadConfig.getNodeConfigs()));
+    } catch (Exception e) {
+      String errorMessage = String.format("Failed to convert 
QueryWorkloadConfig : %s to ZNRecord",
+          queryWorkloadConfig);
+      throw new RuntimeException(errorMessage, e);
+    }
+  }
+
+  public static void updateZNRecordWithInstanceCost(ZNRecord znRecord, String queryWorkloadName,
+      InstanceCost instanceCost) {
+    Preconditions.checkNotNull(znRecord, "ZNRecord cannot be null");
+    Preconditions.checkNotNull(instanceCost, "InstanceCost cannot be null");
+    try {
+      znRecord.setSimpleField(QueryWorkloadConfig.QUERY_WORKLOAD_NAME, queryWorkloadName);
+      znRecord.setSimpleField(QueryWorkloadRefreshMessage.INSTANCE_COST, JsonUtils.objectToString(instanceCost));

Review Comment:
   You are right, the keys should ideally be in one place. Since this parsing is for `QueryWorkloadRefreshMessage`, I moved the constant there.
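   A sketch of what co-locating the key looks like (hypothetical shape and field value; the real class lives in `org.apache.pinot.common.messages`):

```java
public class QueryWorkloadRefreshMessage {
  // Single source of truth for the ZNRecord simple-field key, kept on the
  // message class that reads it back. The literal value here is an assumption.
  public static final String INSTANCE_COST = "instanceCost";
}

// The writer side then references the same constant instead of a local duplicate:
//   znRecord.setSimpleField(QueryWorkloadRefreshMessage.INSTANCE_COST, instanceCostJson);
```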



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

