This is an automated email from the ASF dual-hosted git repository.

yzheng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/polaris.git


The following commit(s) were added to refs/heads/main by this push:
     new ae2afc933 Helm: add support for topologySpreadConstraints (#3216)
ae2afc933 is described below

commit ae2afc933c7414cca3e93f7acccbfefa36219d0a
Author: Yong Zheng <[email protected]>
AuthorDate: Mon Dec 8 19:46:47 2025 -0600

    Helm: add support for topologySpreadConstraints (#3216)
---
 CHANGELOG.md                            |  1 +
 helm/polaris/README.md                  |  1 +
 helm/polaris/templates/deployment.yaml  | 11 +++++++++++
 helm/polaris/tests/deployment_test.yaml | 25 +++++++++++++++++++++++++
 helm/polaris/values.yaml                |  6 ++++++
 site/content/in-dev/unreleased/helm.md  |  1 +
 6 files changed, 45 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index e5bbc8868..2819b0615 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -57,6 +57,7 @@ request adding CHANGELOG notes for breaking (!) changes and possibly other sections
 - Added `--no-sts` flag to CLI to support S3-compatible storage systems that do not have Security Token Service available.
 - Support credential vending for federated catalogs. `ALLOW_FEDERATED_CATALOGS_CREDENTIAL_VENDING` (default: true) was added to toggle this feature.
 - Enhanced catalog federation with SigV4 authentication support, additional authentication types for credential vending, and location-based access restrictions to block credential vending for remote tables outside allowed location lists.
+- Added `topologySpreadConstraints` support in Helm chart.
 
 ### Changes
 
diff --git a/helm/polaris/README.md b/helm/polaris/README.md
index f6c8eb1b3..e1f7fd7db 100644
--- a/helm/polaris/README.md
+++ b/helm/polaris/README.md
@@ -437,6 +437,7 @@ ct install --namespace polaris --charts ./helm/polaris
 | tasks.maxConcurrentTasks | string | `nil` | The maximum number of concurrent tasks that can be executed at the same time. The default is the number of available cores. |
 | tasks.maxQueuedTasks | string | `nil` | The maximum number of tasks that can be queued up for execution. The default is Integer.MAX_VALUE. |
 | tolerations | list | `[]` | A list of tolerations to apply to polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/. |
+| topologySpreadConstraints | list | `[]` | Topology spread constraints for polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#topologyspreadconstraints-field. |
 | tracing.attributes | object | `{}` | Resource attributes to identify the polaris service among other tracing sources. See https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#service. If left empty, traces will be attached to a service named "Apache Polaris"; to change this, provide a service.name attribute here. |
 | tracing.enabled | bool | `false` | Specifies whether tracing for the polaris server should be enabled. |
 | tracing.endpoint | string | `"http://otlp-collector:4317"` | The collector endpoint URL to connect to (required). The endpoint URL must have either the http:// or the https:// scheme. The collector must talk the OpenTelemetry protocol (OTLP) and the port must be its gRPC port (by default 4317). See https://quarkus.io/guides/opentelemetry for more information. |
diff --git a/helm/polaris/templates/deployment.yaml b/helm/polaris/templates/deployment.yaml
index 9ee0a1892..c4b02efcb 100644
--- a/helm/polaris/templates/deployment.yaml
+++ b/helm/polaris/templates/deployment.yaml
@@ -139,6 +139,17 @@ spec:
         {{- if .Values.extraVolumes }}
         {{- tpl (toYaml .Values.extraVolumes) . | nindent 8 }}
         {{- end }}
+      {{- if .Values.topologySpreadConstraints }}
+      topologySpreadConstraints:
+      {{- range .Values.topologySpreadConstraints }}
+        - maxSkew: {{ .maxSkew }}
+          topologyKey: {{ .topologyKey }}
+          whenUnsatisfiable: {{ .whenUnsatisfiable }}
+          labelSelector:
+            matchLabels:
+              {{- include "polaris.selectorLabels" $ | nindent 14 }}
+      {{- end }}
+      {{- end }}
       {{- if .Values.nodeSelector }}
       nodeSelector:
         {{- tpl (toYaml .Values.nodeSelector) . | nindent 8 }}
diff --git a/helm/polaris/tests/deployment_test.yaml b/helm/polaris/tests/deployment_test.yaml
index df16d0a15..5b5fea0c1 100644
--- a/helm/polaris/tests/deployment_test.yaml
+++ b/helm/polaris/tests/deployment_test.yaml
@@ -1303,3 +1303,28 @@ tests:
               secretKeyRef:
                 name: polaris-oidc-secret
                 key: client-secret
+
+  - it: should not set topologySpreadConstraints by default
+    template: deployment.yaml
+    asserts:
+      - notExists:
+          path: spec.template.spec.topologySpreadConstraints
+
+  - it: should set topologySpreadConstraints and inject label selector
+    template: deployment.yaml
+    set:
+      topologySpreadConstraints:
+        - maxSkew: 1
+          topologyKey: "kubernetes.io/hostname"
+          whenUnsatisfiable: DoNotSchedule
+    asserts:
+      - equal:
+          path: spec.template.spec.topologySpreadConstraints
+          value:
+            - maxSkew: 1
+              topologyKey: "kubernetes.io/hostname"
+              whenUnsatisfiable: DoNotSchedule
+              labelSelector:
+                matchLabels:
+                  app.kubernetes.io/name: polaris
+                  app.kubernetes.io/instance: polaris-release
diff --git a/helm/polaris/values.yaml b/helm/polaris/values.yaml
index db0211b9b..f3ba7ee6f 100644
--- a/helm/polaris/values.yaml
+++ b/helm/polaris/values.yaml
@@ -294,6 +294,12 @@ affinity: {}
 #                values:
 #                  - polaris
 
+# -- Topology spread constraints for polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#topologyspreadconstraints-field.
+topologySpreadConstraints: []
+  # - maxSkew: 1
+  #   topologyKey: topology.kubernetes.io/zone
+  #   whenUnsatisfiable: DoNotSchedule
+
 # -- Configures the liveness probe for polaris pods.
 livenessProbe:
   # -- Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0.
diff --git a/site/content/in-dev/unreleased/helm.md b/site/content/in-dev/unreleased/helm.md
index a6b82448e..5e89609cf 100644
--- a/site/content/in-dev/unreleased/helm.md
+++ b/site/content/in-dev/unreleased/helm.md
@@ -423,6 +423,7 @@ ct install --namespace polaris --charts ./helm/polaris
 | tasks.maxConcurrentTasks | string | `nil` | The maximum number of concurrent tasks that can be executed at the same time. The default is the number of available cores. |
 | tasks.maxQueuedTasks | string | `nil` | The maximum number of tasks that can be queued up for execution. The default is Integer.MAX_VALUE. |
 | tolerations | list | `[]` | A list of tolerations to apply to polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/. |
+| topologySpreadConstraints | list | `[]` | Topology spread constraints for polaris pods. See https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/#topologyspreadconstraints-field. |
 | tracing.attributes | object | `{}` | Resource attributes to identify the polaris service among other tracing sources. See https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/#service. If left empty, traces will be attached to a service named "Apache Polaris"; to change this, provide a service.name attribute here. |
 | tracing.enabled | bool | `false` | Specifies whether tracing for the polaris server should be enabled. |
 | tracing.endpoint | string | `"http://otlp-collector:4317"` | The collector endpoint URL to connect to (required). The endpoint URL must have either the http:// or the https:// scheme. The collector must talk the OpenTelemetry protocol (OTLP) and the port must be its gRPC port (by default 4317). See https://quarkus.io/guides/opentelemetry for more information. |
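
As a usage sketch of the new chart value (the file name and release name below are illustrative, not part of the commit), an override file only needs the three scheduling fields; the template injects the labelSelector from `polaris.selectorLabels` automatically:

```yaml
# values-override.yaml (hypothetical file name)
# Spread polaris pods evenly across availability zones.
# labelSelector is added by the chart template and must NOT be set here.
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone
    whenUnsatisfiable: DoNotSchedule
```

It could then be applied with something like `helm upgrade --install polaris ./helm/polaris -f values-override.yaml`.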
