This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/spark-kubernetes-operator.git


The following commit(s) were added to refs/heads/main by this push:
     new db7e640  [SPARK-53679] Fix typos in Spark Kubernetes Operator documentation
db7e640 is described below

commit db7e6403fd14ab86c302532423e26d3799a6e4c8
Author: Peter Toth <[email protected]>
AuthorDate: Tue Sep 23 11:59:17 2025 -0700

    [SPARK-53679] Fix typos in Spark Kubernetes Operator documentation
    
    ### What changes were proposed in this pull request?
    
    This PR fixes a few typos in the documentation:
    - Fix grammar error in config property description ("to for" → "for")
    - Correct property name consistency in configuration.md
    - Fix spelling errors: registory→registry, secuirty→security, etc.
    - Update cluster state diagram reference to use correct image
    - Fix table header typo and various other spelling corrections
    
    ### Why are the changes needed?
    To have better documentation.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    This PR changes only documentation, no test is needed.
    
    ### Was this patch authored or co-authored using generative AI tooling?
    Yes, this PR was generated with `claude-sonnet-4` and was reviewed manually.
    
    Closes #334 from peter-toth/fix-typos.
    
    Authored-by: Peter Toth <[email protected]>
    Signed-off-by: Dongjoon Hyun <[email protected]>
---
 docs/architecture.md                                              | 2 +-
 docs/config_properties.md                                         | 2 +-
 docs/configuration.md                                             | 2 +-
 docs/operations.md                                                | 6 +++---
 docs/spark_custom_resources.md                                    | 8 ++++----
 .../org/apache/spark/k8s/operator/config/SparkOperatorConf.java   | 2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/docs/architecture.md b/docs/architecture.md
index 040db9f..477a0f8 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -74,7 +74,7 @@ launching Spark deployments and submitting jobs under the hood. It also uses
 
 ## Cluster State Transition
 
-[![Cluster State Transition](resources/application_state_machine.png)](resources/application_state_machine.png)
+[![Cluster State Transition](resources/cluster_state_machine.png)](resources/cluster_state_machine.png)
 
 * Spark clusters are expected to be always running after submitted.
 * Similar to Spark applications, K8s resources created for a cluster would be deleted as the final
diff --git a/docs/config_properties.md b/docs/config_properties.md
index 55369e7..1c166c2 100644
--- a/docs/config_properties.md
+++ b/docs/config_properties.md
@@ -8,7 +8,7 @@
 | spark.kubernetes.operator.terminateOnInformerFailureEnabled | Boolean | false | false | Enable to indicate informer errors should stop operator startup. If disabled, operator startup will ignore recoverable errors, caused for example by RBAC issues and will retry periodically. |
 | spark.kubernetes.operator.reconciler.terminationTimeoutSeconds | Integer | 30 | false | Grace period for operator shutdown before reconciliation threads are killed. |
 | spark.kubernetes.operator.reconciler.parallelism | Integer | 50 | false | Thread pool size for Spark Operator reconcilers. Unbounded pool would be used if set to non-positive number. |
- | spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds | Long | 30 | true | Timeout (in seconds) to for requests made to API server. This applies only to foreground requests. |
+ | spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds | Long | 30 | true | Timeout (in seconds) for requests made to API server. This applies only to foreground requests. |
 | spark.kubernetes.operator.reconciler.intervalSeconds | Long | 120 | true | Interval (in seconds, non-negative) to reconcile Spark applications. Note that reconciliation is always expected to be triggered when app spec / status is updated. This interval controls the reconcile behavior of operator reconciliation even when there's no update on SparkApplication, e.g. to determine whether a hanging app needs to be proactively terminated. Thus this is recommended to set to above 2 minutes t [...]
 | spark.kubernetes.operator.reconciler.trimStateTransitionHistoryEnabled | Boolean | true | true | When enabled, operator would trim state transition history when a new attempt starts, keeping previous attempt summary only. |
 | spark.kubernetes.operator.reconciler.appStatusListenerClassNames | String |  | false | Comma-separated names of SparkAppStatusListener class implementations |
diff --git a/docs/configuration.md b/docs/configuration.md
index f471b02..1cedd1a 100644
--- a/docs/configuration.md
+++ b/docs/configuration.md
@@ -45,7 +45,7 @@ To enable hot properties loading, update the **helm chart values file** with
 ```yaml
 operatorConfiguration:
   spark-operator.properties: |+
-    spark.operator.dynamic.config.enabled=true
+    spark.kubernetes.operator.dynamicConfig.enabled=true
     # ... all other config overides...
   dynamicConfig:
     create: true
diff --git a/docs/operations.md b/docs/operations.md
index 433e127..6577aae 100644
--- a/docs/operations.md
+++ b/docs/operations.md
@@ -41,7 +41,7 @@ in `values.yaml`) for the Helm chart.
 To override single parameters you can use `--set`, for example:
 
 ```bash
-helm install --set image.repository=<my_registory>/spark-kubernetes-operator \
+helm install --set image.repository=<my_registry>/spark-kubernetes-operator \
    -f build-tools/helm/spark-kubernetes-operator/values.yaml \
   build-tools/helm/spark-kubernetes-operator/
 ```
@@ -80,7 +80,7 @@ following table:
 | operatorDeployment.operatorPod.operatorContainer.env | Custom env to be added to the operator container. | |
 | operatorDeployment.operatorPod.operatorContainer.envFrom | Custom envFrom to be added to the operator container, e.g. for downward API. | |
 | operatorDeployment.operatorPod.operatorContainer.probes | Probe config for the operator container. | |
-| operatorDeployment.operatorPod.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline secuirty standard compliance |
+| operatorDeployment.operatorPod.operatorContainer.securityContext | Security context overrides for the operator container. | run as non root for baseline security standard compliance |
 | operatorDeployment.operatorPod.operatorContainer.resources | Resources for the operator container. | memory 2Gi, ephemeral storage 2Gi and 1 cpu |
 | operatorDeployment.additionalContainers | Additional containers to be added to the operator pod, e.g. sidecar. | |
 | operatorRbac.serviceAccount.create | Whether to create service account for operator to use. | true |
@@ -125,7 +125,7 @@ following table:
 For more information check the [Helm documentation](https://helm.sh/docs/helm/helm_install/).
 
 __Notice__: The pod resources should be set as your workload in different environments to
-archive a matched K8s pod QoS. See
+achieve a matched K8s pod QoS. See
 also [Pod Quality of Service Classes](https://kubernetes.io/docs/concepts/workloads/pods/pod-qos/#quality-of-service-classes).
 
 ## Operator Health(Liveness) Probe with Sentinel Resource
diff --git a/docs/spark_custom_resources.md b/docs/spark_custom_resources.md
index 1f39ba5..569503c 100644
--- a/docs/spark_custom_resources.md
+++ b/docs/spark_custom_resources.md
@@ -225,9 +225,9 @@ sample restart config snippet:
 
 ``` yaml
 restartConfig:
-  # accptable values are 'Never', 'Always', 'OnFailure' and 'OnInfrastructureFailure'
+  # acceptable values are 'Never', 'Always', 'OnFailure' and 'OnInfrastructureFailure'
   restartPolicy: Never
-  # operator would retry the application if configured. All resources from current attepmt
+  # operator would retry the application if configured. All resources from current attempt
   # would be deleted before starting next attempt
   maxRestartAttempts: 3
   # backoff time (in millis) that operator would wait before next attempt
@@ -239,7 +239,7 @@ restartConfig:
 It's possible to configure applications to be proactively terminated and resubmitted in particular
 cases to avoid resource deadlock.
 
-| Field | Type | Default Value | Descritpion |
+| Field | Type | Default Value | Description |
 |-------|------|---------------|-------------|
 | .spec.applicationTolerations.applicationTimeoutConfig.driverStartTimeoutMillis | integer | 300000 | Time to wait for driver reaches running state after requested driver. |
 | .spec.applicationTolerations.applicationTimeoutConfig.executorStartTimeoutMillis | integer | 300000 | Time to wait for driver to acquire minimal number of running executors. |
@@ -270,7 +270,7 @@ sparkConf:
 Spark would try to bring up 10 executors as defined in SparkConf. In addition, from
 operator perspective,
 
-* If Spark app acquires less than 5 executors in given tine window (.spec.
+* If Spark app acquires less than 5 executors in given time window (.spec.
   applicationTolerations.applicationTimeoutConfig.executorStartTimeoutMillis) after
   submitted, it would be shut down proactively in order to avoid resource deadlock.
 * Spark app would be marked as 'RunningWithBelowThresholdExecutors' if it loses executors after
diff --git a/spark-operator/src/main/java/org/apache/spark/k8s/operator/config/SparkOperatorConf.java b/spark-operator/src/main/java/org/apache/spark/k8s/operator/config/SparkOperatorConf.java
index 0ce8fa8..953c84c 100644
--- a/spark-operator/src/main/java/org/apache/spark/k8s/operator/config/SparkOperatorConf.java
+++ b/spark-operator/src/main/java/org/apache/spark/k8s/operator/config/SparkOperatorConf.java
@@ -120,7 +120,7 @@ public final class SparkOperatorConf {
           .key("spark.kubernetes.operator.reconciler.foregroundRequestTimeoutSeconds")
           .enableDynamicOverride(true)
           .description(
-              "Timeout (in seconds) to for requests made to API server. This "
+              "Timeout (in seconds) for requests made to API server. This "
                   + "applies only to foreground requests.")
           .typeParameterClass(Long.class)
           .defaultValue(30L)


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
