rickchengx commented on code in PR #11860:
URL: https://github.com/apache/dolphinscheduler/pull/11860#discussion_r973835351


##########
docs/docs/en/guide/task/spark.md:
##########
@@ -15,34 +15,33 @@ Spark task type for executing Spark application. When executing the Spark task,
 
 ## Task Parameters
 
-| **Parameter** | **Description** |
-| ------- | ---------- |
-| Node Name | Set the name of the task. Node names within a workflow definition are unique. |
-| Run flag | Indicates whether the node can be scheduled normally. If it is not necessary to execute, you can turn on the prohibiting execution switch. |
-| Description | Describes the function of this node. |
-| Task priority | When the number of worker threads is insufficient, they are executed in order from high to low according to the priority, and they are executed according to the first-in, first-out principle when the priority is the same. |
-| Worker group | The task is assigned to the machines in the worker group for execution. If Default is selected, a worker machine will be randomly selected for execution. |
-| Task group name | The group in Resources, if not configured, it will not be used. |
-| Environment Name | Configure the environment in which to run the script. |
-| Number of failed retries | The number of times the task is resubmitted after failure. It supports drop-down and manual filling. |
-| Failure Retry Interval | The time interval for resubmitting the task if the task fails. It supports drop-down and manual filling. |
-| Timeout alarm | Check Timeout Alarm and Timeout Failure. When the task exceeds the "timeout duration", an alarm email will be sent and the task execution will fail. |
-| Program type | Supports Java, Scala, Python, and SQL. |
-| Spark version | Support Spark1 and Spark2. |
-| The class of main function | The **full path** of Main Class, the entry point of the Spark program. |
-| Main jar package | The Spark jar package (upload by Resource Center). |
-| SQL scripts | SQL statements in .sql files that Spark sql runs. |
-| Deployment mode | <ul><li>spark submit supports three modes: yarn-clusetr, yarn-client and local.</li><li>spark sql supports yarn-client and local modes.</li></ul> |
-| Task name | Spark task name. |
-| Driver core number | Set the number of Driver core, which can be set according to the actual production environment. |
-| Driver memory size | Set the size of Driver memories, which can be set according to the actual production environment. |
-| Number of Executor | Set the number of Executor, which can be set according to the actual production environment. |
-| Executor memory size | Set the size of Executor memories, which can be set according to the actual production environment. |
-| Main program parameters | Set the input parameters of the Spark program and support the substitution of custom parameter variables. |
-| Optional parameters | Support `--jars`, `--files`,` --archives`, `--conf` format. |
-| Resource | Appoint resource files in the `Resource` if parameters refer to them. |
-| Custom parameter | It is a local user-defined parameter for Spark, and will replace the content with `${variable}` in the script. |
-| Predecessor task | Selecting a predecessor task for the current task, will set the selected predecessor task as upstream of the current task. |
+|       **Parameter**        | **Description** |
+|----------------------------|-----------------|
+| Node Name                  | Set the name of the task. Node names within a workflow definition are unique. |
+| Run flag                   | Indicates whether the node can be scheduled normally. If it is not necessary to execute, you can turn on the prohibiting execution switch. |
+| Description                | Describes the function of this node. |
+| Task priority              | When the number of worker threads is insufficient, they are executed in order from high to low according to the priority, and they are executed according to the first-in, first-out principle when the priority is the same. |
+| Worker group               | The task is assigned to the machines in the worker group for execution. If Default is selected, a worker machine will be randomly selected for execution. |
+| Task group name            | The group in Resources, if not configured, it will not be used. |
+| Environment Name           | Configure the environment in which to run the script. |
+| Number of failed retries   | The number of times the task is resubmitted after failure. It supports drop-down and manual filling. |
+| Failure Retry Interval     | The time interval for resubmitting the task if the task fails. It supports drop-down and manual filling. |
+| Timeout alarm              | Check Timeout Alarm and Timeout Failure. When the task exceeds the "timeout duration", an alarm email will be sent and the task execution will fail. |

Review Comment:
   Sure.
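
   As background for the parameters in the table above, the sketch below shows the kind of application entry point the "Main jar package" and "The class of main function" fields point at. The package, object name, and input path are illustrative assumptions, not taken from the DolphinScheduler docs: "The class of main function" would be the fully qualified name `org.example.WordCount`, the jar built from it is what gets uploaded to the Resource Center, and anything entered in "Main program parameters" reaches the program as ordinary `args`. The driver/executor sizing fields and "Optional parameters" correspond roughly to the usual `spark-submit` flags such as `--driver-memory`, `--num-executors`, and `--conf`.

   ```scala
   package org.example

   import org.apache.spark.sql.SparkSession

   // Illustrative main class: "The class of main function" would be set to
   // org.example.WordCount, and the jar built from it uploaded to the
   // Resource Center as the "Main jar package".
   object WordCount {
     def main(args: Array[String]): Unit = {
       // "Main program parameters" are passed through as ordinary arguments.
       val inputPath = if (args.nonEmpty) args(0) else "hdfs:///tmp/input.txt"

       val spark = SparkSession.builder()
         .appName("word-count") // shown as the application name in the YARN / Spark UI
         .getOrCreate()

       import spark.implicits._

       // Count words in the input file and show the first ten counts.
       spark.read.textFile(inputPath)
         .flatMap(_.split("\\s+"))
         .filter(_.nonEmpty)
         .groupBy("value")
         .count()
         .show(10)

       spark.stop()
     }
   }
   ```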


