dongjoon-hyun commented on code in PR #113:
URL: https://github.com/apache/spark-kubernetes-operator/pull/113#discussion_r1744843093
##########
docs/operations.md:
##########

@@ -0,0 +1,132 @@

<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->

### Compatibility

- JDK 17
- The operator uses the fabric8 Kubernetes client, which is expected to be compatible with all current Kubernetes versions. However, the status subresource requires Kubernetes 1.14 or above.
- Spark 3.4 or above

## Manage Your Spark Operator

The operator installation is managed by a Helm chart. To install it, run:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```

Alternatively, to install the operator (and the Helm chart resources) into a specific namespace:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/ \
  --namespace spark-system --create-namespace
```

Note that in this case you will need to update the namespace in the examples accordingly.

### Spark Application Namespaces

By default, Spark applications are created in the same namespace as the operator deployment.
You may also configure the chart deployment to add the RBAC resources that applications
need in order to run in additional namespaces.

## Overriding configuration parameters during Helm install

Helm provides different ways to override the default installation parameters (contained
in `values.yaml`) for the Helm chart.

To override single parameters you can use `--set`, for example:

```
helm install --set image.repository=<my_registry>/spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```

You can also provide multiple custom values files by using the `-f` flag repeatedly; the
last file takes the highest precedence:

```
helm install spark-kubernetes-operator \
  -f build-tools/helm/spark-kubernetes-operator/values.yaml \
  -f my_values.yaml \
  build-tools/helm/spark-kubernetes-operator/
```
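As an illustration, `my_values.yaml` could look like the following minimal sketch. The keys are taken from the parameter table below; the repository, tag, and JVM heap values are placeholders for this example, not values shipped with the chart:

```
# my_values.yaml -- a minimal sketch of custom overrides.
# Keys come from the chart's parameter table below;
# repository/tag values here are placeholders.
image:
  repository: my-registry.example.com/spark-kubernetes-operator
  tag: my-tag
  pullPolicy: IfNotPresent
operatorDeployment:
  operatorContainer:
    # Override the default JVM arguments (see the table below for the default).
    jvmArgs: "-XX:+UseG1GC -Xms2G -Xmx2G -Dfile.encoding=UTF8"
```

Because this file is passed last in the command above, its entries override the matching defaults in `values.yaml`.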
The configurable parameters of the Helm chart and their default values are detailed in the
following table:

| Parameters | Description | Default value |
|---|---|---|
| image.repository | The image repository of spark-kubernetes-operator. | spark-kubernetes-operator |
| image.pullPolicy | The image pull policy of spark-kubernetes-operator. | IfNotPresent |
| image.tag | The image tag of spark-kubernetes-operator. | |
| image.digest | The image digest of spark-kubernetes-operator. If set, it takes precedence and the image tag will be ignored. | |
| imagePullSecrets | The image pull secrets of spark-kubernetes-operator. | |
| operatorDeployment.replica | Operator replica count. Must be 1 unless leader election is configured. | 1 |
| operatorDeployment.strategy.type | Operator pod upgrade strategy. Must be Recreate unless leader election is configured. | Recreate |
| operatorDeployment.operatorPod.annotations | Custom annotations to be added to the operator pod. | |
| operatorDeployment.operatorPod.labels | Custom labels to be added to the operator pod. | |
| operatorDeployment.operatorPod.nodeSelector | Custom nodeSelector to be added to the operator pod. | |
| operatorDeployment.operatorPod.topologySpreadConstraints | Custom topologySpreadConstraints to be added to the operator pod. | |
| operatorDeployment.operatorPod.dnsConfig | DNS configuration to be used by the operator pod. | |
| operatorDeployment.operatorPod.volumes | Additional volumes to be added to the operator pod. | |
| operatorDeployment.operatorPod.priorityClassName | Priority class name to be used for the operator pod. | |
| operatorDeployment.operatorPod.securityContext | Security context overrides for the operator pod. | |
| operatorDeployment.operatorContainer.jvmArgs | JVM argument overrides for the operator container. | `-XX:+UseG1GC -Xms3G -Xmx3G -Dfile.encoding=UTF8` |
| operatorDeployment.operatorContainer.env | Custom env to be added to the operator container. | |
| operatorDeployment.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for the downward API. | |
| operatorDeployment.operatorContainer.probes | Probe config for the operator container. | |
| operatorDeployment.operatorContainer.securityContext | Security context overrides for the operator container. | Run as non-root for baseline security standard compliance |
| operatorDeployment.operatorContainer.resources | Resources for the operator container. | Memory 4Gi, ephemeral storage 2Gi, and 1 CPU |
| operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecars. | |
| operatorRbac.serviceAccount.create | Whether to create a service account for the operator to use. | |
| operatorRbac.clusterRole.create | Whether to create a ClusterRole for the operator to use. | true |
| operatorRbac.clusterRoleBinding.create | Whether to create a ClusterRoleBinding for the operator to use. | true |
| operatorRbac.role.create | Whether to create a Role for the operator to use. At least one of `clusterRole.create` or `role.create` should be enabled. | true |
| operatorRbac.roleBinding.create | Whether to create a RoleBinding for the operator to use. At least one of `clusterRoleBinding.create` or `roleBinding.create` should be enabled. | true |
| operatorRbac.clusterRole.configManagement.roleName | Role name for operator configuration management (hot property loading and leader election). | `spark-operator-config-role` |
| appResources.namespaces.create | Whether to create dedicated namespaces for Spark apps. | `spark-operator-config-role-binding` |

Review Comment:
   Shall we add `clusterResources` first before adding this document? It looks a little weird because the document is missing one of the parts while Apache Spark Operator supports both `SparkApp` CRD and `SparkCluster` CRD.