This is an automated email from the ASF dual-hosted git repository.

zhongjiajie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/dolphinscheduler-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4ca367e  Fix dev directory document headlines (#714)
4ca367e is described below

commit 4ca367ec16e2196e413b729bdd769dff890d489f
Author: Tq <[email protected]>
AuthorDate: Fri Mar 4 17:57:05 2022 +0800

    Fix dev directory document headlines (#714)
    
    According to rules below:
    1. Use Headline-style capitalization in all headlines.
    2. Use document name as lvl.1(#) heading.
    3. Use ascend count of # to the sub-headings.
    4. Use a blank line under and below each headings.
    5. Delete number counters in the sub-headings.(like '1.1.x')
    6. replace &->and.
    7. docker-compose->Docker Compose.(except code)
    8. zookeeper->ZooKeeper.(except code)
    9. /->or.
---
 .../About_DolphinScheduler.md                      | 15 ++--
 docs/en-us/dev/user_doc/architecture/cache.md      | 14 ++--
 .../dev/user_doc/architecture/configuration.md     | 79 +++++++++++++---------
 docs/en-us/dev/user_doc/architecture/design.md     | 71 ++++++++++---------
 .../dev/user_doc/architecture/load-balance.md      | 22 +++---
 docs/en-us/dev/user_doc/architecture/metadata.md   | 47 +++++++++----
 .../dev/user_doc/architecture/task-structure.md    | 63 +++++++----------
 .../guide/alert/alert_plugin_user_guide.md         |  4 +-
 docs/en-us/dev/user_doc/guide/alert/dingtalk.md    |  2 +-
 .../user_doc/guide/alert/enterprise-webexteams.md  |  7 +-
 docs/en-us/dev/user_doc/guide/alert/telegram.md    |  5 +-
 docs/en-us/dev/user_doc/guide/datasource/hive.md   |  2 +-
 .../dev/user_doc/guide/datasource/introduction.md  |  1 -
 docs/en-us/dev/user_doc/guide/datasource/mysql.md  |  1 -
 .../dev/user_doc/guide/datasource/postgresql.md    |  2 +-
 .../dev/user_doc/guide/expansion-reduction.md      | 20 +++---
 docs/en-us/dev/user_doc/guide/flink-call.md        | 54 ++++-----------
 docs/en-us/dev/user_doc/guide/homepage.md          |  2 +-
 .../dev/user_doc/guide/installation/cluster.md     | 12 ++--
 .../dev/user_doc/guide/installation/docker.md      | 72 ++++++++++----------
 .../dev/user_doc/guide/installation/hardware.md    | 10 +--
 .../dev/user_doc/guide/installation/kubernetes.md  | 42 ++++++------
 .../user_doc/guide/installation/pseudo-cluster.md  | 20 +++---
 .../guide/installation/skywalking-agent.md         | 12 ++--
 .../dev/user_doc/guide/installation/standalone.md  |  6 +-
 docs/en-us/dev/user_doc/guide/monitor.md           | 17 +++--
 docs/en-us/dev/user_doc/guide/open-api.md          | 10 +--
 .../en-us/dev/user_doc/guide/parameter/built-in.md |  2 +-
 docs/en-us/dev/user_doc/guide/parameter/context.md |  4 +-
 .../en-us/dev/user_doc/guide/parameter/priority.md |  2 +-
 .../dev/user_doc/guide/project/project-list.md     |  4 +-
 .../dev/user_doc/guide/project/task-instance.md    |  2 +-
 .../user_doc/guide/project/workflow-definition.md  | 12 ++--
 .../user_doc/guide/project/workflow-instance.md    | 12 ++--
 docs/en-us/dev/user_doc/guide/resource.md          | 26 +++----
 docs/en-us/dev/user_doc/guide/security.md          | 15 ++--
 docs/en-us/dev/user_doc/guide/task/conditions.md   |  2 +-
 docs/en-us/dev/user_doc/guide/task/datax.md        |  3 +-
 docs/en-us/dev/user_doc/guide/task/dependent.md    |  2 +-
 docs/en-us/dev/user_doc/guide/task/emr.md          |  1 +
 docs/en-us/dev/user_doc/guide/task/flink.md        | 10 +--
 docs/en-us/dev/user_doc/guide/task/http.md         |  1 -
 docs/en-us/dev/user_doc/guide/task/map-reduce.md   |  9 +--
 docs/en-us/dev/user_doc/guide/task/pigeon.md       |  2 +-
 docs/en-us/dev/user_doc/guide/task/spark.md        | 10 +--
 docs/en-us/dev/user_doc/guide/task/sql.md          |  6 +-
 docs/en-us/dev/user_doc/guide/upgrade.md           | 30 ++++----
 47 files changed, 383 insertions(+), 384 deletions(-)

diff --git 
a/docs/en-us/dev/user_doc/About_DolphinScheduler/About_DolphinScheduler.md 
b/docs/en-us/dev/user_doc/About_DolphinScheduler/About_DolphinScheduler.md
index 5f1cb64..aafcca1 100644
--- a/docs/en-us/dev/user_doc/About_DolphinScheduler/About_DolphinScheduler.md
+++ b/docs/en-us/dev/user_doc/About_DolphinScheduler/About_DolphinScheduler.md
@@ -2,11 +2,18 @@
 
 Apache DolphinScheduler is a cloud-native visual Big Data workflow scheduler 
system, committed to “solving complex big-data task dependencies and triggering 
relationships in data OPS orchestration so that various types of big data tasks 
can be used out of the box”.
 
-# High Reliability
+## High Reliability
+
 - Decentralized multi-master and multi-worker, HA is supported by itself, 
overload processing
-# User-Friendly
+
+## User-Friendly
+
 - All process definition operations are visualized, process definitions show key information at a glance, one-click deployment
-# Rich Scenarios
+
+## Rich Scenarios
+
 - Support multi-tenant. Support many task types, e.g., Spark, Flink, Hive, MR, Shell, Python, Sub_process
-# High Expansibility
+
+## High Expansibility
+
 - Support custom task types and distributed scheduling; the overall scheduling capability increases linearly with the scale of the cluster
diff --git a/docs/en-us/dev/user_doc/architecture/cache.md 
b/docs/en-us/dev/user_doc/architecture/cache.md
index 6a7359d..a07190d 100644
--- a/docs/en-us/dev/user_doc/architecture/cache.md
+++ b/docs/en-us/dev/user_doc/architecture/cache.md
@@ -1,12 +1,12 @@
-### Cache
+# Cache
 
-#### Purpose
+## Purpose
 
-Due to the master-server scheduling process, there will be a large number of 
database read operations, such as `tenant`, `user`, `processDefinition`, etc. 
On the one hand, it will put a lot of pressure on the DB, and on the other 
hand, it will slow down the entire core scheduling process. 
+Due to the master-server scheduling process, there will be a large number of 
database read operations, such as `tenant`, `user`, `processDefinition`, etc. 
On the one hand, it will put a lot of pressure on the DB, and on the other 
hand, it will slow down the entire core scheduling process.
 
 Considering that this part of the business data is a scenario where more reads 
and less writes are performed, a cache module is introduced to reduce the DB 
read pressure and speed up the core scheduling process;
 
-#### Cache settings
+## Cache Settings
 
 ```yaml
 spring:
@@ -27,13 +27,13 @@ The cache-module use 
[spring-cache](https://spring.io/guides/gs/caching/), so yo
 
 With the config of [caffeine](https://github.com/ben-manes/caffeine), you can 
set the cache size, expire time, etc.
 
-#### Cache Read
+## Cache Read
 
 The cache adopts the annotation `@Cacheable` of spring-cache and is configured 
in the mapper layer. For example: `TenantMapper`.
 
-#### Cache Evict
+## Cache Evict
 
-The business data update comes from the api-server, and the cache end is in 
the master-server. So it is necessary to monitor the data update of the 
api-server (aspect intercept `@CacheEvict`), and the master-server will be 
notified when the cache eviction is required. 
+The business data update comes from the api-server, and the cache end is in 
the master-server. So it is necessary to monitor the data update of the 
api-server (aspect intercept `@CacheEvict`), and the master-server will be 
notified when the cache eviction is required.
 
 It should be noted that the final strategy for cache update comes from the 
user's expiration strategy configuration in caffeine, so please configure it in 
conjunction with the business;
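The read-through pattern cache.md describes (DB reads cached via `@Cacheable`, invalidated via `@CacheEvict`) can be sketched with a plain map standing in for spring-cache and caffeine; all class and field names below are illustrative, not the real DolphinScheduler code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative read-through cache: go to the DB only on a miss, and let the
// api-server's update path trigger eviction (what @CacheEvict does via aspect
// interception in the real module). Not thread-safe; sketch only.
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> dbLoader; // stands in for a mapper such as TenantMapper
    int dbReads = 0;                       // counts actual DB hits, for illustration

    ReadThroughCache(Function<K, V> dbLoader) {
        this.dbLoader = dbLoader;
    }

    // @Cacheable-style read: load from the DB only when the key is absent.
    V get(K key) {
        return cache.computeIfAbsent(key, k -> {
            dbReads++;
            return dbLoader.apply(k);
        });
    }

    // @CacheEvict-style invalidation, triggered by a business data update.
    void evict(K key) {
        cache.remove(key);
    }
}
```

Repeated reads of the same `tenant` row then cost a single DB hit until an update evicts the entry; in the real module, expiry additionally follows the caffeine policy configured above.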
 
diff --git a/docs/en-us/dev/user_doc/architecture/configuration.md 
b/docs/en-us/dev/user_doc/architecture/configuration.md
index 5dbfc5a..37063ea 100644
--- a/docs/en-us/dev/user_doc/architecture/configuration.md
+++ b/docs/en-us/dev/user_doc/architecture/configuration.md
@@ -1,14 +1,17 @@
 <!-- markdown-link-check-disable -->
+# Configuration
+
+## Preface
 
-# Preface
 This document explains the DolphinScheduler application configurations 
according to DolphinScheduler-1.3.x versions.
 
-# Directory Structure
+## Directory Structure
+
Currently, all the configuration files are under the [conf] directory. Please check the following simplified DolphinScheduler installation directory to get a direct view of the position of the [conf] directory and the configuration files inside it. This document only describes DolphinScheduler configurations; other modules are not covered.
 
 [Note: DolphinScheduler (hereinafter called 'DS').]
-```
 
+```
 ├─bin                               DS application commands directory
 │  ├─dolphinscheduler-daemon.sh         startup/shutdown DS application 
│  ├─start-all.sh                  startup all DS services with 
configurations
@@ -16,7 +19,7 @@ Currently, all the configuration files are under [conf ] 
directory. Please check
 ├─conf                              configurations directory
 │  ├─application-api.properties         API-service config properties
 │  ├─datasource.properties              datasource config properties
-│  ├─zookeeper.properties               zookeeper config properties
+│  ├─zookeeper.properties               ZooKeeper config properties
 │  ├─master.properties                  master config properties
 │  ├─worker.properties                  worker config properties
 │  ├─quartz.properties                  quartz config properties
@@ -43,22 +46,19 @@ Currently, all the configuration files are under [conf ] 
directory. Please check
 │  ├─upgrade-dolphinscheduler.sh        DS database upgrade script
 │  ├─monitor-server.sh                  DS monitor-server start script       
 │  ├─scp-hosts.sh                       transfer installation files script     
                                
-│  ├─remove-zk-node.sh                  cleanup zookeeper caches script       
+│  ├─remove-zk-node.sh                  cleanup ZooKeeper caches script       
 ├─ui                                front-end web resources directory
 ├─lib                               DS .jar dependencies directory
 ├─install.sh                        auto-setup DS services script
-
-
 ```
 
-
-# Configurations in Details
+## Configurations in Details
 
 serial number| service classification| config file|
 |--|--|--|
 1|startup/shutdown DS application|dolphinscheduler-daemon.sh
 2|datasource config properties| datasource.properties
-3|zookeeper config properties|zookeeper.properties
+3|ZooKeeper config properties|zookeeper.properties
 4|common-service[storage] config properties|common.properties
 5|API-service config properties|application-api.properties
 6|master config properties|master.properties
@@ -67,11 +67,12 @@ serial number| service classification| config file|
 9|quartz config properties|quartz.properties
 10|DS environment variables configuration script[install/start 
DS]|install_config.conf
 11|load environment variables configs <br /> [eg: JAVA_HOME,HADOOP_HOME, 
HIVE_HOME ...]|dolphinscheduler_env.sh
-12|services log config files|API-service log config : logback-api.xml  <br /> 
master-service log config  : logback-master.xml    <br /> worker-service log 
config : logback-worker.xml  <br /> alert-service log config : 
logback-alert.xml 
+12|services log config files|API-service log config : logback-api.xml  <br /> 
master-service log config  : logback-master.xml    <br /> worker-service log 
config : logback-worker.xml  <br /> alert-service log config : logback-alert.xml
+
 
+### dolphinscheduler-daemon.sh [startup/shutdown DS application]
 
-## 1.dolphinscheduler-daemon.sh [startup/shutdown DS application]
-dolphinscheduler-daemon.sh is responsible for DS startup & shutdown. 
+dolphinscheduler-daemon.sh is responsible for DS startup and shutdown.
 Essentially, start-all.sh/stop-all.sh startup/shutdown the cluster via 
dolphinscheduler-daemon.sh.
 Currently, DS just makes a basic config, please config further JVM options 
based on your practical situation of resources.
 
@@ -90,9 +91,10 @@ export DOLPHINSCHEDULER_OPTS="
 "
 ```
 
-> "-XX:DisableExplicitGC" is not recommended due to may lead to memory link 
(DS dependent on Netty to communicate). 
+> "-XX:DisableExplicitGC" is not recommended because it may lead to a memory leak (DS depends on Netty for communication).
+
+### datasource.properties [datasource config properties]
 
-## 2.datasource.properties [datasource config properties]
 DS uses Druid to manage database connections and default simplified configs 
are:
 |Parameters | Default value| Description|
 |--|--|--|
@@ -118,11 +120,12 @@ spring.datasource.poolPreparedStatements|true| Open 
PSCache
 spring.datasource.maxPoolPreparedStatementPerConnectionSize|20| specify the 
size of PSCache on each connection
 
 
-## 3.zookeeper.properties [zookeeper config properties]
+### zookeeper.properties [ZooKeeper config properties]
+
 |Parameters | Default value| Description|
 |--|--|--|
-zookeeper.quorum|localhost:2181| zookeeper cluster connection info
-zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under 
zookeeper root directory
+zookeeper.quorum|localhost:2181| ZooKeeper cluster connection info
+zookeeper.dolphinscheduler.root|/dolphinscheduler| DS is stored under 
ZooKeeper root directory
 zookeeper.session.timeout|60000|  session timeout
 zookeeper.connection.timeout|30000| connection timeout
 zookeeper.retry.base.sleep|100| time to wait between subsequent retries
@@ -130,8 +133,9 @@ zookeeper.retry.max.sleep|30000| maximum time to wait 
between subsequent retries
 zookeeper.retry.maxtime|10| maximum retry times
 
 
-## 4.common.properties [hadoop、s3、yarn config properties]
-Currently, common.properties mainly configures hadoop/s3a related 
configurations. 
+### common.properties [hadoop, s3, yarn config properties]
+
+Currently, common.properties mainly contains hadoop and s3a related configurations.
 |Parameters | Default value| Description|
 |--|--|--|
 data.basedir.path|/tmp/dolphinscheduler| local directory used to store temp 
files
@@ -154,7 +158,8 @@ dolphinscheduler.env.path|env/dolphinscheduler_env.sh|load 
environment variables
 development.state|false| specify whether in development state
 
 
-## 5.application-api.properties [API-service log config]
+### application-api.properties [API-service config properties]
+
 |Parameters | Default value| Description|
 |--|--|--|
 server.port|12345|api service communication port
@@ -169,7 +174,8 @@ spring.messages.basename|i18n/messages| i18n config
 security.authentication.type|PASSWORD| authentication type
 
 
-## 6.master.properties [master-service log config]
+### master.properties [master-service config properties]
+
 |Parameters | Default value| Description|
 |--|--|--|
 master.listen.port|5678|master listen port
@@ -184,7 +190,8 @@ master.max.cpuload.avg|-1|master max CPU load avg, only 
higher than the system C
 master.reserved.memory|0.3|master reserved memory, only lower than system 
available memory, master server can schedule. default value 0.3, the unit is G
 
 
-## 7.worker.properties [worker-service log config]
+### worker.properties [worker-service config properties]
+
 |Parameters | Default value| Description|
 |--|--|--|
 worker.listen.port|1234|worker listen port
@@ -195,7 +202,8 @@ worker.reserved.memory|0.3|worker reserved memory, only 
lower than system availa
 worker.groups|default|worker groups separated by comma, like 
'worker.groups=default,test' <br> worker will join corresponding group 
according to this config when startup
 
 
-## 8.alert.properties [alert-service log config]
+### alert.properties [alert-service config properties]
+
 |Parameters | Default value| Description|
 |--|--|--|
 alert.type|EMAIL|alter type|
@@ -222,7 +230,8 @@ enterprise.wechat.team.send.msg||group message format
 plugin.dir|/Users/xx/your/path/to/plugin/dir|plugin directory
 
 
-## 9.quartz.properties [quartz config properties]
+### quartz.properties [quartz config properties]
+
 This part describes quartz configs and please configure them based on your 
practical situation and resources.
 |Parameters | Default value| Description|
 |--|--|--|
@@ -246,18 +255,20 @@ org.quartz.jobStore.dataSource | myDs
 org.quartz.dataSource.myDs.connectionProvider.class | 
org.apache.dolphinscheduler.service.quartz.DruidConnectionProvider
 
 
-## 10.install_config.conf [DS environment variables configuration 
script[install/start DS]]
+### install_config.conf [DS environment variables configuration 
script[install/start DS]]
+
 install_config.conf is a bit complicated and is mainly used in the following 
two places.
-* 1.DS cluster auto installation
+* DS Cluster Auto Installation
 
 > System will load configs in the install_config.conf and auto-configure files 
 > below, based on the file content when executing 'install.sh'.
 > Files such as 
 > dolphinscheduler-daemon.sh、datasource.properties、zookeeper.properties、common.properties、application-api.properties、master.properties、worker.properties、alert.properties、quartz.properties
 >  and etc.
 
 
-* 2.Startup/shutdown DS cluster
+* Startup/Shutdown DS Cluster
 > The system will load masters, workers, alertServer, apiServers and other 
 > parameters inside the file to startup/shutdown DS cluster.
 
-File content as follows:
+#### File Content
+
 ```bash
 
 # Note:  please escape the character if the file contains special characters 
such as `.*[]^${}\+?|()@#&`.
@@ -266,7 +277,7 @@ File content as follows:
 # Database type (DS currently only supports PostgreSQL and MySQL)
 dbtype="mysql"
 
-# Database url & port
+# Database url and port
 dbhost="192.168.xx.xx:3306"
 
 # Database name
@@ -279,7 +290,7 @@ username="xx"
 # Database password
 password="xx"
 
-# Zookeeper url
+# ZooKeeper url
 zkQuorum="192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181"
 
 # DS installation path, such as '/data1_1T/dolphinscheduler'
@@ -381,7 +392,8 @@ alertServer="ds3"
 apiServers="ds1"
 ```
 
-## 11.dolphinscheduler_env.sh [load environment variables configs]
+### dolphinscheduler_env.sh [load environment variables configs]
+
 When using shell to commit tasks, DS will load environment variables inside 
dolphinscheduler_env.sh into the host.
Types of tasks involved are: Shell task, Python task, Spark task, Flink task, DataX task, etc.
 ```bash
@@ -399,7 +411,8 @@ export 
PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME:$JAV
 
 ```
 
-## 12. Services logback configs
+### Services Logback Configs
+
 Services name| logback config name |
 --|--|
 API-service logback config |logback-api.xml|
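The overload parameters described above for master.properties (master.max.cpuload.avg, master.reserved.memory) imply a simple scheduling guard. The sketch below shows that check under the assumption that scheduling requires load average below the CPU threshold and available memory above the reserve; class and method names are illustrative, not the real implementation:

```java
// Hypothetical overload guard matching the description of
// master.max.cpuload.avg and master.reserved.memory: the master can schedule
// only while the CPU load average is below the configured maximum and the
// available memory (in G) is above the configured reserve.
class OverloadGuard {
    static boolean canSchedule(double loadAvg, double maxCpuLoadAvg,
                               double availableMemoryG, double reservedMemoryG) {
        return loadAvg < maxCpuLoadAvg && availableMemoryG > reservedMemoryG;
    }
}
```

The same shape of check applies to the worker parameters worker.max.cpuload.avg and worker.reserved.memory.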
diff --git a/docs/en-us/dev/user_doc/architecture/design.md 
b/docs/en-us/dev/user_doc/architecture/design.md
index 21677ca..396e29c 100644
--- a/docs/en-us/dev/user_doc/architecture/design.md
+++ b/docs/en-us/dev/user_doc/architecture/design.md
@@ -1,7 +1,8 @@
-## System Architecture Design
+# System Architecture Design
+
 Before explaining the architecture of the scheduling system, let's first 
understand the commonly used terms of the scheduling system
 
-### 1.Glossary
+## Glossary
 **DAG:** The full name is Directed Acyclic Graph, referred to as DAG. Tasks in the workflow are assembled in the form of a directed acyclic graph, and topological traversal is performed from nodes with zero in-degree until there are no subsequent nodes. Examples are as follows:
 
 <p align="center">
@@ -33,9 +34,10 @@ Before explaining the architecture of the scheduling system, 
let's first underst
 
 **Complement**: Supplement historical data, supports two complement modes: **interval parallel and serial**
 
-### 2.System Structure
+## System Structure
+
+### System Architecture Diagram
 
-#### 2.1 System architecture diagram
 <p align="center">
   <img src="/img/architecture-1.3.0.jpg" alt="System architecture diagram"  
width="70%" />
   <p align="center">
@@ -43,7 +45,8 @@ Before explaining the architecture of the scheduling system, 
let's first underst
   </p>
 </p>
 
-#### 2.2 Start process activity diagram
+### Start Process Activity Diagram
+
 <p align="center">
   <img src="/img/process-start-flow-1.3.0.png" alt="Start process activity 
diagram"  width="70%" />
   <p align="center">
@@ -51,15 +54,15 @@ Before explaining the architecture of the scheduling 
system, let's first underst
   </p>
 </p>
 
-#### 2.3 Architecture description
+### Architecture Description
 
 * **MasterServer** 
 
     MasterServer adopts a distributed and centerless design concept. 
MasterServer is mainly responsible for DAG task segmentation, task submission 
monitoring, and monitoring the health status of other MasterServer and 
WorkerServer at the same time.
-    When the MasterServer service starts, register a temporary node with 
Zookeeper, and perform fault tolerance by monitoring changes in the temporary 
node of Zookeeper.
+    When the MasterServer service starts, register a temporary node with 
ZooKeeper, and perform fault tolerance by monitoring changes in the temporary 
node of ZooKeeper.
     MasterServer provides monitoring services based on netty.
 
-    ##### The service mainly includes:
+    #### The Service Mainly Includes:
 
     - **Distributed Quartz** distributed scheduling component, which is mainly 
responsible for the start and stop operations of scheduled tasks. When Quartz 
starts the task, there will be a thread pool inside the Master that is 
specifically responsible for the follow-up operation of the processing task
 
@@ -73,9 +76,11 @@ Before explaining the architecture of the scheduling system, 
let's first underst
 
      WorkerServer also adopts a distributed and decentralized design concept. 
WorkerServer is mainly responsible for task execution and providing log 
services.
 
-     When the WorkerServer service starts, register a temporary node with 
Zookeeper and maintain a heartbeat.
+     When the WorkerServer service starts, register a temporary node with 
ZooKeeper and maintain a heartbeat.
     WorkerServer provides monitoring services based on netty.
-     ##### The service mainly includes:
+  
+     #### The Service Mainly Includes:
+  
     - **Fetch TaskThread** is mainly responsible for continuously getting tasks from the **Task Queue**, and calling the corresponding **TaskScheduleThread** executor according to the task type.
 
 * **ZooKeeper** 
@@ -86,7 +91,7 @@ Before explaining the architecture of the scheduling system, 
let's first underst
 
 * **Task Queue** 
 
-    Provide task queue operation, the current queue is also implemented based 
on Zookeeper. Because there is less information stored in the queue, there is 
no need to worry about too much data in the queue. In fact, we have tested the 
millions of data storage queues, which has no impact on system stability and 
performance.
+    Provide task queue operation, the current queue is also implemented based on ZooKeeper. Since little information is stored per queue entry, there is no need to worry about too much data in the queue; in fact, we have tested queues storing millions of entries with no impact on system stability or performance.
 
 * **Alert** 
 
@@ -101,11 +106,11 @@ Before explaining the architecture of the scheduling 
system, let's first underst
   The front-end page of the system provides various visual operation 
interfaces of the system, see more
  at [Introduction to Functions](../guide/homepage.md) section.
 
-#### 2.3 Architecture design ideas
+### Architecture Design Ideas
 
-##### One、Decentralization VS centralization 
+#### Decentralization VS Centralization
 
-###### Centralized thinking
+##### Centralized Thinking
 
 The centralized design concept is relatively simple. The nodes in the 
distributed cluster are divided into roles according to roles, which are 
roughly divided into two roles:
 <p align="center">
@@ -115,16 +120,13 @@ The centralized design concept is relatively simple. The 
nodes in the distribute
 - The role of the master is mainly responsible for task distribution and 
monitoring the health status of the slave, and can dynamically balance the task 
to the slave, so that the slave node will not be in a "busy dead" or "idle 
dead" state.
 - The role of Worker is mainly responsible for task execution and maintenance 
and Master's heartbeat, so that Master can assign tasks to Slave.
 
-
-
 Problems in centralized thought design:
 
- Once there is a problem with the Master, the cluster is left without a leader and will collapse. To solve this problem, most Master/Slave architectures adopt an active/standby Master design (hot or cold standby, with automatic or manual switching), and more and more new systems are gaining the ability to automatically elect and switch the Master to improve availability.
 - Another problem is that if the Scheduler is on the Master, although it can 
support different tasks in a DAG running on different machines, it will cause 
the Master to be overloaded. If the Scheduler is on the slave, all tasks in a 
DAG can only submit jobs on a certain machine. When there are more parallel 
tasks, the pressure on the slave may be greater.
 
+##### Decentralized
 
-
-###### Decentralized
  <p align="center">
    <img 
src="https://analysys.github.io/easyscheduler_docs_cn/images/decentralization.png"
 alt="Decentralization"  width="50%" />
  </p>
@@ -135,9 +137,9 @@ Problems in centralized thought design:
 
 
 
-- The decentralization of DolphinScheduler is that the Master/Worker is 
registered in Zookeeper, and the Master cluster and Worker cluster are 
centerless, and the Zookeeper distributed lock is used to elect one of the 
Master or Worker as the "manager" to perform the task.
+- The decentralization of DolphinScheduler is that the Master/Worker is 
registered in ZooKeeper, and the Master cluster and Worker cluster are 
centerless, and the ZooKeeper distributed lock is used to elect one of the 
Master or Worker as the "manager" to perform the task.
 
-##### Two、Distributed lock practice
+#### Distributed Lock Practice
 
 DolphinScheduler uses ZooKeeper distributed lock to realize that only one 
Master executes Scheduler at the same time, or only one Worker executes the 
submission of tasks.
 1. The core process algorithm for acquiring distributed locks is as follows:
@@ -151,7 +153,7 @@ DolphinScheduler uses ZooKeeper distributed lock to realize 
that only one Master
  </p>
 
 
-##### Three、Insufficient thread loop waiting problem
+#### Insufficient Thread Loop Waiting Problem
 
 -  If there is no sub-process in a DAG, if the number of data in the Command 
is greater than the threshold set by the thread pool, the process directly 
waits or fails.
 -  If many sub-processes are nested in a large DAG, the following figure will 
produce a "dead" state:
@@ -172,10 +174,10 @@ note: The Master Scheduler thread is executed by FIFO 
when acquiring the Command
 So we chose the third way to solve the problem of insufficient threads.
 
 
-##### Four、Fault-tolerant design
+#### Fault-Tolerant Design
 Fault tolerance is divided into service downtime fault tolerance and task 
retry, and service downtime fault tolerance is divided into master fault 
tolerance and worker fault tolerance.
 
-###### 1. Downtime fault tolerance
+##### Downtime Fault Tolerance
 
 The service fault-tolerance design relies on ZooKeeper's Watcher mechanism, 
and the implementation principle is shown in the figure:
 
@@ -212,7 +214,7 @@ Fault-tolerant post-processing: Once the Master Scheduler 
thread finds that the
 
  Note: Due to "network jitter", the node may lose its heartbeat with ZooKeeper 
in a short period of time, and the node's remove event may occur. For this 
situation, we use the simplest way, that is, once the node and ZooKeeper 
timeout connection occurs, then directly stop the Master or Worker service.
 
-###### 2.Task failed and try again
+##### Task Failed and Try Again
 
 Here we must first distinguish the concepts of task failure retry, process 
failure recovery, and process failure rerun:
 
@@ -220,8 +222,6 @@ Here we must first distinguish the concepts of task failure 
retry, process failu
 - Process failure recovery is at the process level and is performed manually. 
Recovery can only be performed **from the failed node** or **from the current 
node**
 - Process failure rerun is also at the process level and is performed 
manually, rerun is performed from the start node
 
-
-
Back to the topic, we divide the task nodes in the workflow into two types.
 
 - One is a business node, which corresponds to an actual script or processing 
statement, such as Shell node, MR node, Spark node, and dependent node.
@@ -232,9 +232,8 @@ Each **business node** can be configured with the number of 
failed retries. When
 
 If there is a task failure in the workflow that reaches the maximum number of 
retries, the workflow will fail to stop, and the failed workflow can be 
manually rerun or process recovery operation
 
+#### Task Priority Design
 
-
-##### Five、Task priority design
 In the early scheduling design, if there is no priority design and the fair 
scheduling design is used, the task submitted first may be completed at the 
same time as the task submitted later, and the process or task priority cannot 
be set, so We have redesigned this, and our current design is as follows:
 
-  According to **priority of different process instances**, then **priority of the same process instance**, then **priority of tasks within the same process**, then **submission order within the same process**, tasks are processed from high priority to low.
@@ -250,8 +249,7 @@ In the early scheduling design, if there is no priority 
design and the fair sche
                <img 
src="https://user-images.githubusercontent.com/10797147/146744830-5eac611f-5933-4f53-a0c6-31613c283708.png"
 alt="Task priority configuration"  width="35%" />
              </p>
 
-
-##### Six、Logback and netty implement log access
+#### Logback and Netty Implement Log Access
 
 -  Since Web (UI) and Worker are not necessarily on the same machine, viewing 
the log cannot be like querying a local file. There are two options:
   -  Put logs on the ES search engine
@@ -263,11 +261,10 @@ In the early scheduling design, if there is no priority 
design and the fair sche
   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/grpc.png" 
alt="grpc remote access"  width="50%" />
  </p>
 
-
 - We use the FileAppender and Filter functions of the custom Logback to 
realize that each task instance generates a log file.
 - FileAppender is mainly implemented as follows:
 
- ```java
+```java
  /**
   * task log appender
   */
@@ -291,7 +288,7 @@ In the early scheduling design, if there is no priority 
design and the fair sche
         super.subAppend(event);
     }
 }
-
+```
 
 Generate logs in the form of /process definition id/process instance id/task 
instance id.log
 
@@ -313,8 +310,10 @@ public class TaskLogFilter extends Filter<ILoggingEvent> {
         return FilterReply.DENY;
     }
 }
+```
+
+## Module Introduction
 
-### 3.Module introduction
 - dolphinscheduler-alert alarm module, providing AlertServer service.
 
 - dolphinscheduler-api web application module, providing ApiServer service.
@@ -327,10 +326,10 @@ public class TaskLogFilter extends Filter<ILoggingEvent> {
 
 - dolphinscheduler-server MasterServer and WorkerServer services
 
-- dolphinscheduler-service service module, including Quartz, Zookeeper, log 
client access service, easy to call server module and api module
+- dolphinscheduler-service service module, including Quartz, ZooKeeper, log 
client access service, easy to call server module and api module
 
 - dolphinscheduler-ui front-end module
 
-### Sum up
+## Sum Up
 From the perspective of scheduling, this article preliminarily introduces the 
architecture principles and implementation ideas of the big data distributed 
workflow scheduling system-DolphinScheduler. To be continued
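The tiered ordering in "Task Priority Design" above (process-instance priority first, then task priority within the process, then submission order) can be sketched as a comparator feeding a priority queue. The class below and the "lower value = higher priority" convention are illustrative assumptions, not the real DolphinScheduler types:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Illustrative task wrapper: lower numeric value means higher priority here.
class PrioritizedTask {
    final int processInstancePriority; // priority of the owning process instance
    final int taskPriority;            // priority within the same process
    final long submitOrder;            // FIFO tie-breaker (submission order)

    PrioritizedTask(int processInstancePriority, int taskPriority, long submitOrder) {
        this.processInstancePriority = processInstancePriority;
        this.taskPriority = taskPriority;
        this.submitOrder = submitOrder;
    }

    // Tiered ordering: process-instance priority, then task priority,
    // then submission order, as described in the section above.
    static final Comparator<PrioritizedTask> ORDER = Comparator
            .comparingInt((PrioritizedTask t) -> t.processInstancePriority)
            .thenComparingInt(t -> t.taskPriority)
            .thenComparingLong(t -> t.submitOrder);
}
```

A `PriorityQueue` built over this comparator dequeues tasks from the highest tier down, with earlier submissions winning ties.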
 
diff --git a/docs/en-us/dev/user_doc/architecture/load-balance.md 
b/docs/en-us/dev/user_doc/architecture/load-balance.md
index 33a8330..e21abba 100644
--- a/docs/en-us/dev/user_doc/architecture/load-balance.md
+++ b/docs/en-us/dev/user_doc/architecture/load-balance.md
@@ -1,10 +1,8 @@
-### Load Balance
+# Load Balance
 
 Load balancing refers to the reasonable allocation of server pressure through 
routing algorithms (usually in cluster environments) to achieve the maximum 
optimization of server performance.
 
-
-
-### DolphinScheduler-Worker load balancing algorithms
+## DolphinScheduler-Worker Load Balancing Algorithms
 
 DolphinScheduler-Master allocates tasks to workers, and by default provides 
three algorithms:
 
@@ -18,35 +16,35 @@ The default configuration is the linear load.
 
 As the routing is done on the client side (the master service), you can change master.host.selector in master.properties to configure the algorithm you want.
 
-eg: master.host.selector = random (case-insensitive)
+e.g. master.host.selector = random (case-insensitive)
 
-### Worker load balancing configuration
+## Worker Load Balancing Configuration
 
 The configuration file is worker.properties
 
-#### weight
+### Weight
 
 All of the above load-balancing algorithms are weighted, and the weights affect the outcome of task distribution. You can set different weights for different machines by modifying the worker.weight value.
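
For illustration, the weight could be set per worker in worker.properties like this (the values below are assumptions for the sketch, not shipped defaults):

```properties
# worker.properties on a 16-core machine
worker.weight=200

# worker.properties on an 8-core machine
worker.weight=100
```

With this configuration the first worker would, on average, receive twice as many tasks as the second once both have finished preheating.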
 
-####  Preheating
+### Preheating
 
 With JIT optimisation in mind, we will let the worker run at low power for a 
period of time after startup so that it can gradually reach its optimal state, 
a process we call preheating. If you are interested, you can read some articles 
about JIT.
 
 So the worker will gradually reach its maximum weight over time after it 
starts (by default ten minutes, we don't provide a configuration item, you can 
change it and submit a PR if needed).
 
-### Load balancing algorithm breakdown
+## Load Balancing Algorithm Breakdown
 
-#### Random (weighted)
+### Random (Weighted)
 
 This algorithm is relatively simple: one of the matched workers is selected at random (a worker's weight affects its probability of being selected).
 
-#### Smoothed polling (weighted)
+### Smoothed Polling (Weighted)
 
 The weighted polling algorithm has an obvious drawback: under certain weights it generates an uneven sequence of instances, and this unsmoothed load may cause some instances to experience transient high load, leading to a risk of system downtime. To address this scheduling flaw, we provide a smooth weighted polling algorithm.
 
 Each worker is given two weights: weight (which remains constant after warm-up is complete) and current_weight (which changes dynamically). For each route, every worker's current_weight is increased by its weight, the weights of all the workers are summed as total_weight, and the worker with the largest current_weight is selected for this task; the selected worker's current_weight is then decreased by total_weight.
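
The update rule above is the classic smooth weighted round-robin scheme; a minimal sketch follows, with illustrative class and method names rather than the actual DolphinScheduler implementation:

```java
import java.util.Arrays;

/**
 * Minimal sketch of smooth weighted round-robin selection as described
 * above (names here are hypothetical, not DolphinScheduler classes).
 */
public class SmoothWeightedSelector {

    private final int[] weights;        // fixed weight per worker (after warm-up)
    private final int[] currentWeights; // dynamic weight per worker

    public SmoothWeightedSelector(int[] weights) {
        this.weights = weights.clone();
        this.currentWeights = new int[weights.length];
    }

    /** Pick the index of the worker that receives the next task. */
    public int select() {
        int totalWeight = 0;
        int best = 0;
        for (int i = 0; i < weights.length; i++) {
            currentWeights[i] += weights[i];  // current_weight += weight
            totalWeight += weights[i];
            if (currentWeights[i] > currentWeights[best]) {
                best = i;
            }
        }
        currentWeights[best] -= totalWeight;  // winner is decreased by total_weight
        return best;
    }

    public static void main(String[] args) {
        SmoothWeightedSelector selector = new SmoothWeightedSelector(new int[]{5, 1, 1});
        int[] picks = new int[7];
        for (int i = 0; i < 7; i++) {
            picks[i] = selector.select();
        }
        System.out.println(Arrays.toString(picks)); // [0, 0, 1, 0, 2, 0, 0]
    }
}
```

With weights {5, 1, 1}, the first seven picks spread out as 0, 0, 1, 0, 2, 0, 0 rather than hitting the heaviest worker five times in a row, which is exactly the smoothing effect described above.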
 
-#### Linear weighting (default algorithm)
+### Linear Weighting (Default Algorithm)
 
 The algorithm reports its own load information to the registry at regular 
intervals. We base our judgement on two main pieces of information
 
diff --git a/docs/en-us/dev/user_doc/architecture/metadata.md 
b/docs/en-us/dev/user_doc/architecture/metadata.md
index 50f115e..616e50f 100644
--- a/docs/en-us/dev/user_doc/architecture/metadata.md
+++ b/docs/en-us/dev/user_doc/architecture/metadata.md
@@ -1,7 +1,9 @@
-# Dolphin Scheduler 1.3 MetaData
+# MetaData
 
 <a name="V5KOl"></a>
-### Dolphin Scheduler 1.2 DB Table Overview
+
+## DolphinScheduler DB Table Overview
+
 | Table Name | Comment |
 | :---: | :---: |
 | t_ds_access_token | token for access ds backend |
@@ -33,16 +35,22 @@
 ---
 
 <a name="XCLy1"></a>
-### E-R Diagram
+
+## E-R Diagram
+
 <a name="5hWWZ"></a>
-#### User Queue DataSource
+
+### User Queue DataSource
+
 ![image.png](/img/metadata-erd/user-queue-datasource.png)
 
 - Multiple users can belong to one tenant
 - The queue field in the t_ds_user table stores the queue_name information in 
the t_ds_queue table, but t_ds_tenant stores queue information using queue_id. 
During the execution of the process definition, the user queue has the highest 
priority. If the user queue is empty, the tenant queue is used.
 - The user_id field in the t_ds_datasource table indicates the user who 
created the data source. The user_id in t_ds_relation_datasource_user indicates 
the user who has permission to the data source.
 <a name="7euSN"></a>
-#### Project Resource Alert
+  
+### Project Resource Alert
+
 ![image.png](/img/metadata-erd/project-resource-alert.png)
 
 - User can have multiple projects, User project authorization completes the 
relationship binding using project_id and user_id in t_ds_relation_project_user 
table
@@ -50,7 +58,9 @@
 - The user_id in the t_ds_resources table represents the user who created the 
resource, and the user_id in t_ds_relation_resources_user represents the user 
who has permissions to the resource
 - The user_id in the t_ds_udfs table represents the user who created the UDF, 
and the user_id in the t_ds_relation_udfs_user table represents a user who has 
permission to the UDF
 <a name="JEw4v"></a>
-#### Command Process Task
+  
+### Command Process Task
+
 ![image.png](/img/metadata-erd/command.png)<br 
/>![image.png](/img/metadata-erd/process-task.png)
 
 - A project has multiple process definitions, a process definition can 
generate multiple process instances, and a process instance can generate 
multiple task instances
@@ -61,9 +71,13 @@
 ---
 
 <a name="yd79T"></a>
-### Core Table Schema
+
+## Core Table Schema
+
 <a name="6bVhH"></a>
-#### t_ds_process_definition
+
+### t_ds_process_definition
+
 | Field | Type | Comment |
 | --- | --- | --- |
 | id | int | primary key |
@@ -86,7 +100,9 @@
 | update_time | datetime | update time |
 
 <a name="t5uxM"></a>
-#### t_ds_process_instance
+
+### t_ds_process_instance
+
 | Field | Type | Comment |
 | --- | --- | --- |
 | id | int | primary key |
@@ -123,7 +139,9 @@
 | tenant_id | int | tenant id |
 
 <a name="tHZsY"></a>
-#### t_ds_task_instance
+
+### t_ds_task_instance
+
 | Field | Type | Comment |
 | --- | --- | --- |
 | id | int | primary key |
@@ -150,7 +168,9 @@
 | worker_group_id | int | worker group id |
 
 <a name="gLGtm"></a>
-#### t_ds_command
+
+### t_ds_command
+
 | Field | Type | Comment |
 | --- | --- | --- |
 | id | int | primary key |
@@ -167,7 +187,4 @@
 | dependence | varchar | dependence |
 | update_time | datetime | update time |
 | process_instance_priority | int | process instance priority: 0 Highest,1 
High,2 Medium,3 Low,4 Lowest |
-| worker_group_id | int | worker group id |
-
-
-
+| worker_group_id | int | worker group id |
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/architecture/task-structure.md 
b/docs/en-us/dev/user_doc/architecture/task-structure.md
index a62f58d..0b1c28a 100644
--- a/docs/en-us/dev/user_doc/architecture/task-structure.md
+++ b/docs/en-us/dev/user_doc/architecture/task-structure.md
@@ -1,10 +1,11 @@
+# Task Structure
+
+## Overall Tasks Storage Structure
 
-# Overall Tasks Storage Structure
 All tasks created in DolphinScheduler are saved in the t_ds_process_definition 
table.
 
 The following shows the 't_ds_process_definition' table structure:
 
-
 No. | field  | type  |  description
 -------- | ---------| -------- | ---------
 1|id|int(11)|primary key
@@ -55,9 +56,10 @@ Data example:
 }
 ```
 
-# The Detailed Explanation of The Storage Structure of Each Task Type
+## The Detailed Explanation of the Storage Structure of Each Task Type
+
+### Shell Nodes
 
-## Shell Nodes
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -81,7 +83,6 @@ No.|parameter name||type|description |notes
 18|workerGroup | |String |Worker group| |
 19|preTasks | |Array|preposition tasks | |
 
-
 **Node data example:**
 
 ```bash
@@ -128,11 +129,10 @@ No.|parameter name||type|description |notes
 
     ]
 }
-
 ```
 
+### SQL Node
 
-## SQL Node
 Perform data query and update operations on the specified datasource through 
SQL.
 
 **The node data structure is as follows:**
@@ -168,7 +168,6 @@ No.|parameter name||type|description |note
 28|workerGroup | |String |Worker group| |
 29|preTasks | |Array|preposition tasks | |
 
-
 **Node data example:**
 
 ```bash
@@ -230,12 +229,13 @@ No.|parameter name||type|description |note
 }
 ```
 
+### Procedure [Stored Procedures] Node
 
-## PROCEDURE [stored procedures] Node
 **The node data structure is as follows:**
 **Node data example:**
 
-## SPARK Node
+### Spark Node
+
 **The node data structure is as follows:**
 
 No.|parameter name||type|description |notes
@@ -271,7 +271,6 @@ No.|parameter name||type|description |notes
 29|workerGroup | |String |Worker group| |
 30|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -333,9 +332,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
+### MapReduce(MR) Node
 
-
-## MapReduce(MR) Node
 **The node data structure is as follows:**
 
 No.|parameter name||type|description |notes
@@ -364,8 +362,6 @@ No.|parameter name||type|description |notes
 22|workerGroup | |String |Worker group| |
 23|preTasks | |Array|preposition tasks| |
 
-
-
 **Node data example:**
 
 ```bash
@@ -420,8 +416,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
+### Python Node
 
-## Python Node
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -445,7 +441,6 @@ No.|parameter name||type|description |notes
 18|workerGroup | |String |Worker group| |
 19|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -494,10 +489,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
+### Flink Node
 
-
-
-## Flink Node
 **The node data structure is as follows:**
 
 No.|parameter name||type|description |notes
@@ -531,7 +524,6 @@ No.|parameter name||type|description |notes
 27|workerGroup | |String |Worker group| |
 38|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -592,7 +584,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
-## HTTP Node
+### HTTP Node
+
 **The node data structure is as follows:**
 
 No.|parameter name||type|description |notes
@@ -620,7 +613,6 @@ No.|parameter name||type|description |notes
 21|workerGroup | |String |Worker group| |
 22|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -677,9 +669,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
+### DataX Node
 
-
-## DataX Node
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -713,11 +704,8 @@ No.|parameter name||type|description |notes
 28|workerGroup | |String |Worker group| |
 29|preTasks | |Array|preposition tasks| |
 
-
-
 **Node data example:**
 
-
 ```bash
 {
     "type":"DATAX",
@@ -768,7 +756,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
-## Sqoop Node
+### Sqoop Node
+
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -796,9 +785,6 @@ No.|parameter name||type|description |notes
 22|workerGroup | |String |Worker group| |
 23|preTasks | |Array|preposition tasks| |
 
-
-
-
 **Node data example:**
 
 ```bash
@@ -845,7 +831,8 @@ No.|parameter name||type|description |notes
         }
 ```
 
-## Condition Branch Node
+### Condition Branch Node
+
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -866,7 +853,6 @@ No.|parameter name||type|description |notes
 15|workerGroup | |String |Worker group| |
 16|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -909,8 +895,8 @@ No.|parameter name||type|description |notes
 }
 ```
 
+### Subprocess Node
 
-## Subprocess Node
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -932,7 +918,6 @@ No.|parameter name||type|description |notes
 16|workerGroup | |String |Worker group| |
 17|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -969,9 +954,8 @@ No.|parameter name||type|description |notes
         }
 ```
 
+### Dependent Node
 
-
-## DEPENDENT Node
 **The node data structure is as follows:**
 No.|parameter name||type|description |notes
 -------- | ---------| ---------| -------- | --------- | ---------
@@ -997,7 +981,6 @@ No.|parameter name||type|description |notes
 20|workerGroup | |String |Worker group| |
 21|preTasks | |Array|preposition tasks| |
 
-
 **Node data example:**
 
 ```bash
@@ -1128,4 +1111,4 @@ No.|parameter name||type|description |notes
 
             ]
         }
-```
+```
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/alert/alert_plugin_user_guide.md 
b/docs/en-us/dev/user_doc/guide/alert/alert_plugin_user_guide.md
index a6f44e7..83db24d 100644
--- a/docs/en-us/dev/user_doc/guide/alert/alert_plugin_user_guide.md
+++ b/docs/en-us/dev/user_doc/guide/alert/alert_plugin_user_guide.md
@@ -1,4 +1,6 @@
-## How to create alert plugins and alert groups
+# Alert Component User Guide
+
+## How to Create Alert Plugins and Alert Groups
 
 In version 2.0.0, users need to create alert instances, and then associate 
them with alert groups, and an alert group can use multiple alert instances, 
and we will notify them one by one.
 
diff --git a/docs/en-us/dev/user_doc/guide/alert/dingtalk.md 
b/docs/en-us/dev/user_doc/guide/alert/dingtalk.md
index 18263f9..9944973 100644
--- a/docs/en-us/dev/user_doc/guide/alert/dingtalk.md
+++ b/docs/en-us/dev/user_doc/guide/alert/dingtalk.md
@@ -4,7 +4,7 @@ If you need to use DingTalk for alerting, please create an 
alert instance in the
 
 ![dingtalk-plugin](/img/alert/dingtalk-plugin.png)
 
-parameter configuration
+## Parameter Configuration
 
 * Webhook
   > The format is as follows: 
https://oapi.dingtalk.com/robot/send?access_token=XXXXXX
diff --git a/docs/en-us/dev/user_doc/guide/alert/enterprise-webexteams.md 
b/docs/en-us/dev/user_doc/guide/alert/enterprise-webexteams.md
index 8252fb7..f731f99 100644
--- a/docs/en-us/dev/user_doc/guide/alert/enterprise-webexteams.md
+++ b/docs/en-us/dev/user_doc/guide/alert/enterprise-webexteams.md
@@ -1,11 +1,10 @@
-# WebexTeams
+# Webex Teams
 
-If you need to use WebexTeams to alert, please create an alarm Instance in 
warning instance manage, and then choose the WebexTeams plugin. The 
configuration example of enterprise WebexTeams is as follows:
+If you need to use Webex Teams to alert, please create an alert instance in the warning instance manage page, and then choose the WebexTeams plugin. The configuration example of enterprise Webex Teams is as follows:
 
 ![enterprise-webexteams-plugin](/img/alert/enterprise-webexteams-plugin.png)
 
-
-parameter configuration
+## Parameter Configuration
 
 * botAccessToken
   > The robot's access token you were given
diff --git a/docs/en-us/dev/user_doc/guide/alert/telegram.md 
b/docs/en-us/dev/user_doc/guide/alert/telegram.md
index c5dc231..d0d7f7b 100644
--- a/docs/en-us/dev/user_doc/guide/alert/telegram.md
+++ b/docs/en-us/dev/user_doc/guide/alert/telegram.md
@@ -1,11 +1,12 @@
 # Telegram
+
 If you need `Telegram` to alert, please create an alarm instance in the warning instance manage dashboard, and choose the `Telegram` plugin.
 
 The configuration example of `Telegram` is as follows:
 
 ![telegram-plugin](/img/alert/telegram-plugin.png)
 
-params config:
+## Parameter Configuration
 
 * WebHook:
   > Telegram open api
@@ -26,7 +27,7 @@ params config:
 * Password
   > Authentication(Password) for Proxy-Server
 
-P.S.:
+References:
 - [Telegram Application Bot Guide](https://core.telegram.org/bots)
 - [Telegram Bots Api](https://core.telegram.org/bots/api)
 - [Telegram SendMessage Api](https://core.telegram.org/bots/api#sendmessage)
diff --git a/docs/en-us/dev/user_doc/guide/datasource/hive.md 
b/docs/en-us/dev/user_doc/guide/datasource/hive.md
index 20d86d8..25f2106 100644
--- a/docs/en-us/dev/user_doc/guide/datasource/hive.md
+++ b/docs/en-us/dev/user_doc/guide/datasource/hive.md
@@ -20,7 +20,7 @@
 > configure `common.properties`. It is helpful when you try to set env before 
 > running HIVE SQL. Parameter
 > `support.hive.oneSession` default value is `false`, and SQL statements run in different sessions if there are more than one.
 
-## Use HiveServer2 HA Zookeeper
+## Use HiveServer2 HA ZooKeeper
 
  <p align="center">
     <img src="/img/hive1-en.png" width="80%" />
diff --git a/docs/en-us/dev/user_doc/guide/datasource/introduction.md 
b/docs/en-us/dev/user_doc/guide/datasource/introduction.md
index c112812..fc387cb 100644
--- a/docs/en-us/dev/user_doc/guide/datasource/introduction.md
+++ b/docs/en-us/dev/user_doc/guide/datasource/introduction.md
@@ -1,4 +1,3 @@
-
 # Data Source
 
 Data source center supports MySQL, POSTGRESQL, HIVE/IMPALA, SPARK, CLICKHOUSE, 
ORACLE, SQLSERVER and other data sources
diff --git a/docs/en-us/dev/user_doc/guide/datasource/mysql.md 
b/docs/en-us/dev/user_doc/guide/datasource/mysql.md
index 7807a00..fdd84d0 100644
--- a/docs/en-us/dev/user_doc/guide/datasource/mysql.md
+++ b/docs/en-us/dev/user_doc/guide/datasource/mysql.md
@@ -1,6 +1,5 @@
 # MySQL
 
-
 - Data source: select MYSQL
 - Data source name: enter the name of the data source
 - Description: Enter a description of the data source
diff --git a/docs/en-us/dev/user_doc/guide/datasource/postgresql.md 
b/docs/en-us/dev/user_doc/guide/datasource/postgresql.md
index 77a4fd7..6b616f8 100644
--- a/docs/en-us/dev/user_doc/guide/datasource/postgresql.md
+++ b/docs/en-us/dev/user_doc/guide/datasource/postgresql.md
@@ -1,4 +1,4 @@
-# POSTGRESQL
+# PostgreSQL
 
 - Data source: select POSTGRESQL
 - Data source name: enter the name of the data source
diff --git a/docs/en-us/dev/user_doc/guide/expansion-reduction.md 
b/docs/en-us/dev/user_doc/guide/expansion-reduction.md
index a9b6a13..62fbd20 100644
--- a/docs/en-us/dev/user_doc/guide/expansion-reduction.md
+++ b/docs/en-us/dev/user_doc/guide/expansion-reduction.md
@@ -1,13 +1,13 @@
 # DolphinScheduler Expansion and Reduction
 
-## 1. Expansion 
+## Expansion 
 This article describes how to add a new master service or worker service to an 
existing DolphinScheduler cluster.
 ```
  Attention: There cannot be more than one master service process or worker 
service process on a physical machine.
        If the physical machine where the expansion master or worker node is located has already installed the scheduled service, skip to [Modify Configuration]: edit the configuration file `conf/config/install_config.conf` on **all** nodes, add the masters or workers parameter, and restart the scheduling cluster.
 ```
 
-### 1.1 Basic software installation (please install the mandatory items 
yourself)
+### Basic Software Installation
 
 * [required] [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+): must be installed; please install and configure the JAVA_HOME and PATH variables under /etc/profile
 * [optional] If the expansion is a worker node, you need to consider whether 
to install an external client, such as Hadoop, Hive, Spark Client.
@@ -17,7 +17,7 @@ This article describes how to add a new master service or 
worker service to an e
  Attention: DolphinScheduler itself does not depend on Hadoop, Hive, Spark, 
but will only call their Client for the corresponding task submission.
 ```
 
-### 1.2 Get installation package
+### Get Installation Package
 - Check which version of DolphinScheduler is used in your existing 
environment, and get the installation package of the corresponding version, if 
the versions are different, there may be compatibility problems.
 - Confirm the unified installation directory of other nodes, this article 
assumes that DolphinScheduler is installed in /opt/ directory, and the full 
path is /opt/dolphinscheduler.
 - Please download the corresponding version of the installation package to the 
server installation directory, uncompress it and rename it to dolphinscheduler 
and store it in the /opt directory. 
@@ -36,7 +36,7 @@ mv apache-dolphinscheduler-1.3.8-bin  dolphinscheduler
  Attention: The installation package can be copied directly from an existing 
environment to an expanded physical machine for use.
 ```
 
-### 1.3 Create Deployment Users
+### Create Deployment Users
 
 - Create deployment users on **all** expansion machines, and be sure to 
configure sudo-free. If we plan to deploy scheduling on four expansion 
machines, ds1, ds2, ds3, and ds4, we first need to create deployment users on 
each machine
 
@@ -60,7 +60,7 @@ sed -i 's/Defaults    requirett/#Defaults    requirett/g' 
/etc/sudoers
  - If resource uploads are used, you also need to assign read and write 
permissions to the deployment user on `HDFS or MinIO`.
 ```
 
-### 1.4 Modify configuration
+### Modify Configuration
 
 - From an existing node such as Master/Worker, copy the conf directory 
directly to replace the conf directory in the new node. After copying, check if 
the configuration items are correct.
     
@@ -124,7 +124,7 @@ workers="existing worker01:default,existing 
worker02:default,ds3:default,ds4:def
 sudo chown -R dolphinscheduler:dolphinscheduler dolphinscheduler
 ```
 
-### 1.4. Restart the cluster & verify
+### Restart the Cluster and Verify
 
 - restart the cluster
 
@@ -176,11 +176,13 @@ If the above services are started normally and the 
scheduling system page is nor
 
 -----------------------------------------------------------------------------
 
-## 2. Reduction
+## Reduction
+
 The reduction is to reduce the master or worker services for the existing 
DolphinScheduler cluster.
 There are two steps for shrinking. After performing the following two steps, 
the shrinking operation can be completed.
 
-### 2.1 Stop the service on the scaled-down node
+### Stop the Service on the Scaled-Down Node
+
  * If you are scaling down the master node, identify the physical machine 
where the master service is located, and stop the master service on the 
physical machine.
  * If the worker node is scaled down, determine the physical machine where the 
worker service is to be scaled down and stop the worker services on the 
physical machine.
  
@@ -219,7 +221,7 @@ sh bin/dolphinscheduler-daemon.sh start alert-server  # 
start alert  service
 If the corresponding master service or worker service does not exist, then the 
master/worker service is successfully shut down.
 
 
-### 2.2 Modify the configuration file
+### Modify the Configuration File
 
 - Modify the configuration file `conf/config/install_config.conf` on **all** nodes, synchronizing the following configuration.
     
diff --git a/docs/en-us/dev/user_doc/guide/flink-call.md 
b/docs/en-us/dev/user_doc/guide/flink-call.md
index 2b86d7c..d6f22d1 100644
--- a/docs/en-us/dev/user_doc/guide/flink-call.md
+++ b/docs/en-us/dev/user_doc/guide/flink-call.md
@@ -1,6 +1,6 @@
-# Flink Calls Operating steps
+# Flink Calls Operating Steps
 
-### Create a queue
+## Create a Queue
 
 1. Log in to the scheduling system, click "Security", then click "Queue 
manage" on the left, and click "Create queue" to create a queue.
 2. Fill in the name and value of the queue, and click "Submit" 
@@ -9,10 +9,7 @@
    <img src="/img/api/create_queue.png" width="80%" />
  </p>
 
-
-
-
-### Create a tenant 
+## Create a Tenant 
 
 ```
 1. The tenant corresponds to a Linux user, which the user worker uses to 
submit jobs. If Linux OS environment does not have this user, the worker will 
create this user when executing the script.
@@ -24,19 +21,13 @@
    <img src="/img/api/create_tenant.png" width="80%" />
  </p>
 
-
-
-
-### Create a user
+## Create a User
 
 <p align="center">
    <img src="/img/api/create_user.png" width="80%" />
  </p>
 
-
-
-
-### Create a token
+## Create a Token
 
 1. Log in to the scheduling system, click "Security", then click "Token 
manage" on the left, and click "Create token" to create a token.
 
@@ -51,8 +42,7 @@
    <img src="/img/create-token-en1.png" width="80%" />
  </p>
 
-
-### Use token
+## Use Token
 
 1. Open the API documentation page
 
@@ -78,18 +68,13 @@
    <img src="/img/test-api.png" width="80%" />
  </p>  
 
-
-
-### User authorization
+## User Authorization
 
 <p align="center">
    <img src="/img/api/user_authorization.png" width="80%" />
  </p>
 
-
-
-
-### User login
+## User Login
 
 ```
 http://192.168.1.163:12345/dolphinscheduler/ui/#/monitor/servers/master
@@ -99,19 +84,13 @@ 
http://192.168.1.163:12345/dolphinscheduler/ui/#/monitor/servers/master
    <img src="/img/api/user_login.png" width="80%" />
  </p>
 
-
-
-
-### Upload the resource
+## Upload the Resource
 
 <p align="center">
    <img src="/img/api/upload_resource.png" width="80%" />
  </p>
 
-
-
-
-### Create a workflow
+## Create a Workflow
 
 <p align="center">
    <img src="/img/api/create_workflow1.png" width="80%" />
@@ -132,21 +111,14 @@ 
http://192.168.1.163:12345/dolphinscheduler/ui/#/monitor/servers/master
    <img src="/img/api/create_workflow4.png" width="80%" />
  </p>
 
-
-
-
-### View the execution result
+## View the Execution Result
 
 <p align="center">
    <img src="/img/api/execution_result.png" width="80%" />
  </p>
 
-
-
-
-### View log
+## View Log
 
 <p align="center">
    <img src="/img/api/log.png" width="80%" />
- </p>
-
+ </p>
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/homepage.md 
b/docs/en-us/dev/user_doc/guide/homepage.md
index 7c7a19d..16353e3 100644
--- a/docs/en-us/dev/user_doc/guide/homepage.md
+++ b/docs/en-us/dev/user_doc/guide/homepage.md
@@ -1,4 +1,4 @@
-# Home
+# Home Page
 
 The home page contains task status statistics, process status statistics, and 
workflow definition statistics for all projects of the user.
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/cluster.md 
b/docs/en-us/dev/user_doc/guide/installation/cluster.md
index 0c3b390..dc97ba1 100644
--- a/docs/en-us/dev/user_doc/guide/installation/cluster.md
+++ b/docs/en-us/dev/user_doc/guide/installation/cluster.md
@@ -8,11 +8,11 @@ If you are a green hand and want to experience 
DolphinScheduler, we recommended
 
 Cluster deployment uses the same scripts and configuration files as [pseudo-cluster deployment](pseudo-cluster.md), so the preparation and requirements are the same. The difference is that [pseudo-cluster deployment](pseudo-cluster.md) is for one machine, while cluster deployment is for multiple machines, and the "Modify configuration" steps differ considerably between the two.
 
-### Prepare && DolphinScheduler startup environment
+### Prepare and DolphinScheduler Startup Environment
 
-Because of cluster deployment for multiple machine, so you have to run you 
"Prepare" and "startup" in every machine in 
[pseudo-cluster.md](pseudo-cluster.md), except section "Configure machine SSH 
password-free login", "Start zookeeper", "Initialize the database", which is 
only for deployment or just need an single server
+Because cluster deployment is for multiple machines, you have to run the "Prepare" and "Startup" steps of [pseudo-cluster.md](pseudo-cluster.md) on every machine, except for the sections "Configure machine SSH password-free login", "Start ZooKeeper", and "Initialize the database", which only need to be done on a single server.
 
-### Modify configuration
+### Modify Configuration
 
 This is a step that is quite different from 
[pseudo-cluster.md](pseudo-cluster.md), because the deployment script will 
transfer the resources required for installation machine to each deployment 
machine using `scp`. And we have to declare all machine we want to install 
DolphinScheduler and then run script `install.sh`. The configuration file is 
under the path `conf/config/install_config.conf`, here we only need to modify 
section **INSTALL MACHINE**, **DolphinScheduler ENV, Database, Regi [...]
 
@@ -30,6 +30,10 @@ alertServer="ds4"
 apiServers="ds5"
 ```
 
-## Start DolphinScheduler && Login DolphinScheduler && Server Start And Stop
+## Start and Log in to DolphinScheduler
 
 Same as [pseudo-cluster deployment](pseudo-cluster.md)
+
+## Start and Stop Server
+
+Same as [pseudo-cluster deployment](pseudo-cluster.md)
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/installation/docker.md 
b/docs/en-us/dev/user_doc/guide/installation/docker.md
index 9815eef..7bb8c0c 100644
--- a/docs/en-us/dev/user_doc/guide/installation/docker.md
+++ b/docs/en-us/dev/user_doc/guide/installation/docker.md
@@ -5,17 +5,17 @@
  - [Docker](https://docs.docker.com/engine/install/) 1.13.1+
  - [Docker Compose](https://docs.docker.com/compose/) 1.11.0+
 
-## How to use this Docker image
+## How to Use this Docker Image
 
 Here are three ways to quickly install DolphinScheduler:
 
-### The First Way: Start a DolphinScheduler by docker-compose (recommended)
+### The First Way: Start a DolphinScheduler by Docker Compose (Recommended)
 
 In this way, you need to install 
[docker-compose](https://docs.docker.com/compose/) as a prerequisite, please 
install it yourself according to the rich docker-compose installation guidance 
on the Internet
 
 For Windows 7-10, you can install [Docker 
Toolbox](https://github.com/docker/toolbox/releases). For Windows 10 64-bit, 
you can install [Docker 
Desktop](https://docs.docker.com/docker-for-windows/install/), and pay 
attention to the [system 
requirements](https://docs.docker.com/docker-for-windows/install/#system-requirements)
 
-#### 0. Configure memory not less than 4GB
+#### Configure Memory Not Less Than 4GB
 
 For Mac user, click `Docker Desktop -> Preferences -> Resources -> Memory`
 
@@ -28,11 +28,11 @@ For Windows Docker Desktop user
  - **Hyper-V mode**: Click `Docker Desktop -> Settings -> Resources -> Memory`
  - **WSL 2 mode**: Refer to [WSL 2 utility 
VM](https://docs.microsoft.com/en-us/windows/wsl/wsl-config#configure-global-options-with-wslconfig)
 
-#### 1. Download the Source Code Package
+#### Download the Source Code Package
 
 Please download the source code package 
apache-dolphinscheduler-1.3.8-src.tar.gz, download address: 
[download](/en-us/download/download.html)
 
-#### 2. Pull Image and Start the Service
+#### Pull Image and Start the Service
 
 > For Mac and Linux user, open **Terminal**
 > For Windows Docker Toolbox user, open **Docker Quickstart Terminal**
@@ -50,7 +50,7 @@ $ docker-compose up -d
 
 The **PostgreSQL** (with username `root`, password `root` and database 
`dolphinscheduler`) and **ZooKeeper** services will start by default
 
-#### 3. Login
+#### Login
 
 Visit the Web UI: http://localhost:12345/dolphinscheduler (The local address 
is http://localhost:12345/dolphinscheduler)
 
@@ -62,21 +62,21 @@ The default username is `admin` and the default password is 
`dolphinscheduler123
 
 Please refer to the `Quick Start` in the chapter [Quick 
Start](../quick-start.md) to explore how to use DolphinScheduler
 
-### The Second Way: Start via specifying the existing PostgreSQL and ZooKeeper 
service
+### The Second Way: Start via Specifying the Existing PostgreSQL and ZooKeeper 
Service
 
 In this way, you need to install 
[docker](https://docs.docker.com/engine/install/) as a prerequisite, please 
install it yourself according to the rich docker installation guidance on the 
Internet
 
-#### 1. Basic Required Software (please install by yourself)
+#### Basic Required Software
 
  - [PostgreSQL](https://www.postgresql.org/download/) (8.2.15+)
  - [ZooKeeper](https://zookeeper.apache.org/releases.html) (3.4.6+)
  - [Docker](https://docs.docker.com/engine/install/) (1.13.1+)
 
-#### 2. Please login to the PostgreSQL database and create a database named 
`dolphinscheduler`
+#### Log in to the PostgreSQL Database and Create a Database Named `dolphinscheduler`
 
-#### 3. Initialize the database, import `sql/dolphinscheduler_postgre.sql` to 
create tables and initial data
+#### Initialize the Database and Import `sql/dolphinscheduler_postgre.sql` to Create Tables and Initial Data
 
-#### 4. Download the DolphinScheduler Image
+#### Download the DolphinScheduler Image
 
 We have already uploaded the user-oriented DolphinScheduler image to the Docker repository, so you can pull it from there:
 
@@ -97,11 +97,11 @@ apache/dolphinscheduler:1.3.8 all
 
 Note: the database username test and password test need to be replaced with your actual PostgreSQL username and password, and 192.168.x.x needs to be replaced with your actual PostgreSQL and ZooKeeper host IPs
 
-#### 6. Login
+#### Login
 
 Same as above
 
-### The Third Way: Start a standalone DolphinScheduler server
+### The Third Way: Start a Standalone DolphinScheduler Server
 
 The following services are automatically started when the container starts:
 
@@ -201,7 +201,7 @@ Especially, it can be configured through the environment 
variable configuration
 
 ## FAQ
 
-### How to manage DolphinScheduler by docker-compose?
+### How to Manage DolphinScheduler by Docker Compose?
 
 Start, restart, stop or list containers:
 
@@ -224,7 +224,7 @@ Stop containers and remove containers, networks and volumes:
 docker-compose down -v
 ```
 
-### How to view the logs of a container?
+### How to View the Logs of a Container?
 
 List all running containers:
 
@@ -241,7 +241,7 @@ docker logs -f docker-swarm_dolphinscheduler-api_1 # follow 
log output
 docker logs --tail 10 docker-swarm_dolphinscheduler-api_1 # show last 10 lines 
from the end of the logs
 ```
 
-### How to scale master and worker by docker-compose?
+### How to Scale Master and Worker by Docker Compose?
 
 Scale master to 2 instances:
 
@@ -255,7 +255,7 @@ Scale worker to 3 instances:
 docker-compose up -d --scale dolphinscheduler-worker=3 dolphinscheduler-worker
 ```
 
-### How to deploy DolphinScheduler on Docker Swarm?
+### How to Deploy DolphinScheduler on Docker Swarm?
 
 Assuming that the Docker Swarm cluster has been created (If there is no Docker 
Swarm cluster, please refer to 
[create-swarm](https://docs.docker.com/engine/swarm/swarm-tutorial/create-swarm/))
 
@@ -283,7 +283,7 @@ Remove the volumes of the stack named dolphinscheduler:
 docker volume rm -f $(docker volume ls --format "{{.Name}}" | grep -e 
"^dolphinscheduler")
 ```
 
-### How to scale master and worker on Docker Swarm?
+### How to Scale Master and Worker on Docker Swarm?
 
 Scale master of the stack named dolphinscheduler to 2 instances:
 
@@ -297,9 +297,9 @@ Scale worker of the stack named dolphinscheduler to 3 
instances:
 docker service scale dolphinscheduler_dolphinscheduler-worker=3
 ```
 
-### How to build a Docker image?
+### How to Build a Docker Image?
 
-#### Build from the source code (Require Maven 3.3+ & JDK 1.8+)
+#### Build From the Source Code (Requires Maven 3.3+ and JDK 1.8+)
 
 In Unix-Like, execute in Terminal:
 
@@ -315,7 +315,7 @@ C:\dolphinscheduler-src>.\docker\build\hooks\build.bat
 
 Please read the `./docker/build/hooks/build` and `./docker/build/hooks/build.bat` script files for more details
 
-#### Build from the binary distribution (Not require Maven 3.3+ & JDK 1.8+)
+#### Build From the Binary Distribution (Does Not Require Maven 3.3+ and JDK 1.8+)
 
 Please download the binary distribution package 
apache-dolphinscheduler-1.3.8-bin.tar.gz, download address: 
[download](/en-us/download/download.html). And put 
apache-dolphinscheduler-1.3.8-bin.tar.gz into the 
`apache-dolphinscheduler-1.3.8-src/docker/build` directory, execute in Terminal 
or PowerShell:
 
@@ -326,7 +326,7 @@ $ docker build --build-arg VERSION=1.3.8 -t 
apache/dolphinscheduler:1.3.8 .
 
 > PowerShell should use `cd apache-dolphinscheduler-1.3.8-src/docker/build`
 
-#### Build multi-platform images
+#### Build Multi-Platform Images
 
 Currently supports building images for the `linux/amd64` and `linux/arm64` platform architectures; requirements:
 
@@ -340,7 +340,7 @@ $ docker login # login to push apache/dolphinscheduler
 $ bash ./docker/build/hooks/build
 ```
 
-### How to add an environment variable for Docker?
+### How to Add an Environment Variable for Docker?
 
 If you would like to do additional initialization in an image derived from 
this one, add one or more environment variables under 
`/root/start-init-conf.sh`, and modify template files in 
`/opt/dolphinscheduler/conf/*.tpl`.
 
@@ -367,7 +367,7 @@ EOF
 done
 ```
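The loop above renders each `*.tpl` file with the current environment. A minimal sketch of the same `eval`-heredoc technique (the `FOO_BAR` variable and `/tmp/demo.*` paths are placeholders for illustration, not names DolphinScheduler defines):

```shell
# Render a template the way the loop above does, via an eval'd heredoc.
# FOO_BAR and /tmp/demo.* are placeholders for illustration only.
export FOO_BAR="hello"
echo 'value=${FOO_BAR}' > /tmp/demo.tpl
eval "cat <<EOF
$(cat /tmp/demo.tpl)
EOF" > /tmp/demo.conf
cat /tmp/demo.conf   # prints: value=hello
```

Because the heredoc delimiter is unquoted, every `${VAR}` in the template is substituted with the current environment when `eval` runs.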
 
-### How to use MySQL as the DolphinScheduler's database instead of PostgreSQL?
+### How to Use MySQL as the DolphinScheduler's Database Instead of PostgreSQL?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > MySQL.
 >
@@ -413,7 +413,7 @@ DATABASE_PARAMS=useUnicode=true&characterEncoding=UTF-8
 
 8. Run a DolphinScheduler (See **How to use this docker image**)
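As a sketch of how these `DATABASE_*` variables fit together, the image's startup scripts assemble them into a JDBC URL roughly like this (the host and database values below are placeholders; the exact assembly happens inside the image):

```shell
# Illustrative only: how DATABASE_* settings map onto a JDBC URL.
# 192.168.1.10 is a placeholder host.
DATABASE_TYPE=mysql
DATABASE_HOST=192.168.1.10
DATABASE_PORT=3306
DATABASE_DATABASE=dolphinscheduler
DATABASE_PARAMS='useUnicode=true&characterEncoding=UTF-8'
JDBC_URL="jdbc:${DATABASE_TYPE}://${DATABASE_HOST}:${DATABASE_PORT}/${DATABASE_DATABASE}?${DATABASE_PARAMS}"
echo "$JDBC_URL"
# -> jdbc:mysql://192.168.1.10:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
```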
 
-### How to support MySQL datasource in `Datasource manage`?
+### How to Support MySQL Datasource in `Datasource manage`?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > MySQL.
 >
@@ -442,7 +442,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .
 
 6. Add a MySQL datasource in `Datasource manage`
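The `Dockerfile` mentioned in the steps above typically only needs to layer the driver jar into the image's `lib` directory. A minimal sketch (the jar file name and base image tag are assumptions; match them to your download and release):

```shell
# Write a minimal Dockerfile that adds the MySQL driver jar to the image.
# The jar version (5.1.49) and the base image tag are assumptions.
cat > Dockerfile <<'EOF'
FROM apache/dolphinscheduler:1.3.8
COPY mysql-connector-java-5.1.49.jar /opt/dolphinscheduler/lib
EOF
cat Dockerfile
```

Then build it as in the steps above: `docker build -t apache/dolphinscheduler:mysql-driver .`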
 
-### How to support Oracle datasource in `Datasource manage`?
+### How to Support Oracle Datasource in `Datasource manage`?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > Oracle.
 >
@@ -471,7 +471,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .
 
 6. Add an Oracle datasource in `Datasource manage`
 
-### How to support Python 2 pip and custom requirements.txt?
+### How to Support Python 2 pip and Custom requirements.txt?
 
 1. Create a new `Dockerfile` to install pip:
 
@@ -504,7 +504,7 @@ docker build -t apache/dolphinscheduler:pip .
 
 5. Verify pip under a new Python task
 
-### How to support Python 3?
+### How to Support Python 3?
 
 1. Create a new `Dockerfile` to install Python 3:
 
@@ -537,7 +537,7 @@ docker build -t apache/dolphinscheduler:python3 .
 
 6. Verify Python 3 under a new Python task
 
-### How to support Hadoop, Spark, Flink, Hive or DataX?
+### How to Support Hadoop, Spark, Flink, Hive or DataX?
 
 Take Spark 2.4.7 as an example:
 
@@ -591,7 +591,7 @@ Spark on YARN (Deploy Mode is `cluster` or `client`) 
requires Hadoop support. Si
 
 Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exists
 
-### How to support Spark 3?
+### How to Support Spark 3?
 
 In fact, the way to submit applications with `spark-submit` is the same, 
regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` 
is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set 
`SPARK_HOME2=/path/to/spark3`
 
@@ -628,7 +628,7 @@ $SPARK_HOME2/bin/spark-submit --class 
org.apache.spark.examples.SparkPi $SPARK_H
 
 Check whether the task log contains the output like `Pi is roughly 3.146015`
 
-### How to support shared storage between Master, Worker and Api server?
+### How to Support Shared Storage Between Master, Worker and API Server?
 
 > **Note**: If it is deployed on a single machine by `docker-compose`, step 1 
 > and 2 can be skipped directly, and execute the command like `docker cp 
 > hadoop-3.2.2.tar.gz docker-swarm_dolphinscheduler-worker_1:/opt/soft` to put 
 > Hadoop into the shared directory `/opt/soft` in the container
 
@@ -651,7 +651,7 @@ volumes:
 
 3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
 
-### How to support local file resource storage instead of HDFS and S3?
+### How to Support Local File Resource Storage Instead of HDFS and S3?
 
 > **Note**: If it is deployed on a single machine by `docker-compose`, step 2 
 > can be skipped directly
 
@@ -675,7 +675,7 @@ volumes:
       device: ":/path/to/resource/dir"
 ```
 
-### How to support S3 resource storage like MinIO?
+### How to Support S3 Resource Storage Like MinIO?
 
 Take MinIO as an example: Modify the following environment variables in 
`config.env.sh`
 
@@ -692,7 +692,7 @@ FS_S3A_SECRET_KEY=MINIO_SECRET_KEY
 
 > **Note**: `MINIO_IP` can only use IP instead of the domain name, because 
 > DolphinScheduler currently doesn't support S3 path style access
 
-### How to configure SkyWalking?
+### How to Configure SkyWalking?
 
 Modify SkyWalking environment variables in `config.env.sh`:
 
@@ -759,13 +759,13 @@ This environment variable sets the database for the 
database. The default value
 
 **`ZOOKEEPER_QUORUM`**
 
-This environment variable sets zookeeper quorum. The default value is 
`127.0.0.1:2181`.
+This environment variable sets the ZooKeeper quorum. The default value is `127.0.0.1:2181`.
 
 **Note**: You must specify it when starting a standalone dolphinscheduler 
server. Like `master-server`, `worker-server`, `api-server`.
 
 **`ZOOKEEPER_ROOT`**
 
-This environment variable sets zookeeper root directory for dolphinscheduler. 
The default value is `/dolphinscheduler`.
+This environment variable sets the ZooKeeper root directory for DolphinScheduler. The default value is `/dolphinscheduler`.
 
 ### Common
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/hardware.md 
b/docs/en-us/dev/user_doc/guide/installation/hardware.md
index 0c5df7f..1303276 100644
--- a/docs/en-us/dev/user_doc/guide/installation/hardware.md
+++ b/docs/en-us/dev/user_doc/guide/installation/hardware.md
@@ -2,7 +2,7 @@
 
 DolphinScheduler, as an open-source distributed workflow task scheduling 
system, can be well deployed and run in Intel architecture server environments 
and mainstream virtualization environments, and supports mainstream Linux 
operating system environments.
 
-## 1. Linux Operating System Version Requirements
+## Linux Operating System Version Requirements
 
 | OS       | Version         |
 | :----------------------- | :----------: |
@@ -14,8 +14,10 @@ DolphinScheduler, as an open-source distributed workflow 
task scheduling system,
 > **Attention:**
 >The above Linux operating systems can run on physical servers and mainstream 
 >virtualization environments such as VMware, KVM, and XEN.
 
-## 2. Recommended Server Configuration
+## Recommended Server Configuration
+
 DolphinScheduler supports 64-bit hardware platforms with Intel x86-64 
architecture. The following recommendation is made for server hardware 
configuration in a production environment:
+
 ### Production Environment
 
 | **CPU** | **MEM** | **HD** | **NIC** | **Num** |
@@ -27,7 +29,7 @@ DolphinScheduler supports 64-bit hardware platforms with 
Intel x86-64 architectu
 > - The hard disk size configuration is recommended by more than 50GB. The 
 > system disk and data disk are separated.
 
 
-## 3. Network Requirements
+## Network Requirements
 
 DolphinScheduler provides the following network port configurations for normal 
operation:
 
@@ -41,7 +43,7 @@ DolphinScheduler provides the following network port 
configurations for normal o
 > - MasterServer and WorkerServer do not need to enable communication between the networks, as long as the local ports do not conflict.
 > - Administrators can adjust relevant ports on the network side and host-side 
 > according to the deployment plan of DolphinScheduler components in the 
 > actual environment.
 
-## 4. Browser Requirements
+## Browser Requirements
 
 DolphinScheduler recommends Chrome and the latest browsers using the Chrome kernel to access the front-end visual operator page.
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/kubernetes.md 
b/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
index 5976da0..225e40d 100644
--- a/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
+++ b/docs/en-us/dev/user_doc/guide/installation/kubernetes.md
@@ -10,7 +10,7 @@ If you are a green hand and want to experience 
DolphinScheduler, we recommended
  - [Kubernetes](https://kubernetes.io/) 1.12+
  - PV provisioner support in the underlying infrastructure
 
-## Installing the Chart
+## Install the Chart
 
 Please download the source code package 
apache-dolphinscheduler-1.3.8-src.tar.gz, download address: 
[download](/en-us/download/download.html)
 
@@ -69,7 +69,7 @@ The default username is `admin` and the default password is 
`dolphinscheduler123
 
 Please refer to the `Quick Start` in the chapter [Quick 
Start](../quick-start.md) to explore how to use DolphinScheduler
 
-## Uninstalling the Chart
+## Uninstall the Chart
 
 To uninstall/delete the `dolphinscheduler` deployment:
 
@@ -128,7 +128,7 @@ The configuration file is `values.yaml`, and the 
[Appendix-Configuration](#appen
 
 ## FAQ
 
-### How to view the logs of a pod container?
+### How to View the Logs of a Pod Container?
 
 List all pods (aka `po`):
 
@@ -145,7 +145,7 @@ kubectl logs -f dolphinscheduler-master-0 # follow log 
output
 kubectl logs --tail 10 dolphinscheduler-master-0 -n test # show last 10 lines 
from the end of the logs
 ```
 
-### How to scale api, master and worker on Kubernetes?
+### How to Scale API, Master and Worker on Kubernetes?
 
 List all deployments (aka `deploy`):
 
@@ -182,7 +182,7 @@ kubectl scale --replicas=6 sts dolphinscheduler-worker
 kubectl scale --replicas=6 sts dolphinscheduler-worker -n test # with test 
namespace
 ```
 
-### How to use MySQL as the DolphinScheduler's database instead of PostgreSQL?
+### How to Use MySQL as the DolphinScheduler's Database Instead of PostgreSQL?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > MySQL.
 >
@@ -225,7 +225,7 @@ externalDatabase:
 
 8. Run a DolphinScheduler release in Kubernetes (See **Install the Chart**)
 
-### How to support MySQL datasource in `Datasource manage`?
+### How to Support MySQL Datasource in `Datasource manage`?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > MySQL.
 >
@@ -254,7 +254,7 @@ docker build -t apache/dolphinscheduler:mysql-driver .
 
 7. Add a MySQL datasource in `Datasource manage`
 
-### How to support Oracle datasource in `Datasource manage`?
+### How to Support Oracle Datasource in `Datasource manage`?
 
 > Because of the commercial license, we cannot directly use the driver of 
 > Oracle.
 >
@@ -283,7 +283,7 @@ docker build -t apache/dolphinscheduler:oracle-driver .
 
 7. Add an Oracle datasource in `Datasource manage`
 
-### How to support Python 2 pip and custom requirements.txt?
+### How to Support Python 2 pip and Custom requirements.txt?
 
 1. Create a new `Dockerfile` to install pip:
 
@@ -316,7 +316,7 @@ docker build -t apache/dolphinscheduler:pip .
 
 6. Verify pip under a new Python task
 
-### How to support Python 3?
+### How to Support Python 3?
 
 1. Create a new `Dockerfile` to install Python 3:
 
@@ -349,7 +349,7 @@ docker build -t apache/dolphinscheduler:python3 .
 
 7. Verify Python 3 under a new Python task
 
-### How to support Hadoop, Spark, Flink, Hive or DataX?
+### How to Support Hadoop, Spark, Flink, Hive or DataX?
 
 Take Spark 2.4.7 as an example:
 
@@ -407,7 +407,7 @@ Spark on YARN (Deploy Mode is `cluster` or `client`) 
requires Hadoop support. Si
 
 Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` exists
 
-### How to support Spark 3?
+### How to Support Spark 3?
 
 In fact, the way to submit applications with `spark-submit` is the same, 
regardless of Spark 1, 2 or 3. In other words, the semantics of `SPARK_HOME2` 
is the second `SPARK_HOME` instead of `SPARK2`'s `HOME`, so just set 
`SPARK_HOME2=/path/to/spark3`
 
@@ -448,7 +448,7 @@ $SPARK_HOME2/bin/spark-submit --class 
org.apache.spark.examples.SparkPi $SPARK_H
 
 Check whether the task log contains the output like `Pi is roughly 3.146015`
 
-### How to support shared storage between Master, Worker and Api server?
+### How to Support Shared Storage Between Master, Worker and API Server?
 
 For example, Master, Worker and API server may use Hadoop at the same time
 
@@ -473,7 +473,7 @@ common:
 
 3. Ensure that `$HADOOP_HOME` and `$HADOOP_CONF_DIR` are correct
 
-### How to support local file resource storage instead of HDFS and S3?
+### How to Support Local File Resource Storage Instead of HDFS and S3?
 
 Modify the following configurations in `values.yaml`
 
@@ -495,7 +495,7 @@ common:
 
 > **Note**: `storageClassName` must support the access mode: `ReadWriteMany`
 
-### How to support S3 resource storage like MinIO?
+### How to Support S3 Resource Storage Like MinIO?
 
 Take MinIO as an example: Modify the following configurations in `values.yaml`
 
@@ -514,7 +514,7 @@ common:
 
 > **Note**: `MINIO_IP` can only use IP instead of domain name, because 
 > DolphinScheduler currently doesn't support S3 path style access
 
-### How to configure SkyWalking?
+### How to Configure SkyWalking?
 
 Modify SKYWALKING configurations in `values.yaml`:
 
@@ -554,14 +554,14 @@ common:
 | `externalDatabase.database` | If exists external PostgreSQL, and set `postgresql.enabled` value to false. DolphinScheduler's database will use it | `dolphinscheduler` |
 | `externalDatabase.params`                                                    
     | If exists external PostgreSQL, and set `postgresql.enabled` value to 
false. DolphinScheduler's database params will use it     | 
`characterEncoding=utf8`                              |
 |                                                                              
     |                                                                          
                                                      |                         
                              |
-| `zookeeper.enabled`                                                          
     | If not exists external Zookeeper, by default, the DolphinScheduler will 
use a internal Zookeeper                               | `true`                 
                               |
+| `zookeeper.enabled` | If not exists external ZooKeeper, by default, the DolphinScheduler will use an internal ZooKeeper | `true` |
 | `zookeeper.fourlwCommandsWhitelist`                                          
     | A list of comma separated Four Letter Words commands to use              
                                                      | `srvr,ruok,wchs,cons`   
                              |
-| `zookeeper.persistence.enabled`                                              
     | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for 
internal Zookeeper                                     | `false`                
                               |
+| `zookeeper.persistence.enabled`                                              
     | Set `zookeeper.persistence.enabled` to `true` to mount a new volume for 
internal ZooKeeper                                     | `false`                
                               |
 | `zookeeper.persistence.size`                                                 
     | `PersistentVolumeClaim` size                                             
                                                      | `20Gi`                  
                              |
-| `zookeeper.persistence.storageClass`                                         
     | Zookeeper data persistent volume storage class. If set to "-", 
storageClassName: "", which disables dynamic provisioning       | `-`           
                                        |
-| `zookeeper.zookeeperRoot`                                                    
     | Specify dolphinscheduler root directory in Zookeeper                     
                                                      | `/dolphinscheduler`     
                              |
-| `externalZookeeper.zookeeperQuorum`                                          
     | If exists external Zookeeper, and set `zookeeper.enabled` value to 
false. Specify Zookeeper quorum                             | `127.0.0.1:2181`  
                                    |
-| `externalZookeeper.zookeeperRoot`                                            
     | If exists external Zookeeper, and set `zookeeper.enabled` value to 
false. Specify dolphinscheduler root directory in Zookeeper | 
`/dolphinscheduler`                                   |
+| `zookeeper.persistence.storageClass`                                         
     | ZooKeeper data persistent volume storage class. If set to "-", 
storageClassName: "", which disables dynamic provisioning       | `-`           
                                        |
+| `zookeeper.zookeeperRoot`                                                    
     | Specify dolphinscheduler root directory in ZooKeeper                     
                                                      | `/dolphinscheduler`     
                              |
+| `externalZookeeper.zookeeperQuorum` | If exists external ZooKeeper, and set `zookeeper.enabled` value to false. Specify ZooKeeper quorum | `127.0.0.1:2181` |
+| `externalZookeeper.zookeeperRoot` | If exists external ZooKeeper, and set `zookeeper.enabled` value to false. Specify dolphinscheduler root directory in ZooKeeper | `/dolphinscheduler` |
 |                                                                              
     |                                                                          
                                                      |                         
                              |
 | `common.configmap.DOLPHINSCHEDULER_OPTS`                                     
     | The jvm options for dolphinscheduler, suitable for all servers           
                                                      | `""`                    
                              |
 | `common.configmap.DATA_BASEDIR_PATH`                                         
     | User data directory path, self configuration, please make sure the 
directory exists and have read write permissions            | 
`/tmp/dolphinscheduler`                               |
diff --git a/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md 
b/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
index 1376283..f1d03ea 100644
--- a/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
+++ b/docs/en-us/dev/user_doc/guide/installation/pseudo-cluster.md
@@ -18,9 +18,9 @@ Pseudo-cluster deployment of DolphinScheduler requires 
external software support
 
 > **_Note:_** DolphinScheduler itself does not depend on Hadoop, Hive, Spark, 
 > but if you need to run tasks that depend on them, you need to have the 
 > corresponding environment support
 
-## DolphinScheduler startup environment
+## DolphinScheduler Startup Environment
 
-### Configure user exemption and permissions
+### Configure User Exemption and Permissions
 
 Create a deployment user, and be sure to configure `sudo` without password. Here we take the user dolphinscheduler as an example.
 
@@ -44,7 +44,7 @@ chown -R dolphinscheduler:dolphinscheduler 
apache-dolphinscheduler-*-bin
 > * Because DolphinScheduler's multi-tenant task switch user by command `sudo 
 > -u {linux-user}`, the deployment user needs to have sudo privileges and is 
 > password-free. If novice learners don’t understand, you can ignore this 
 > point for the time being.
 > * If you find the line "Defaults requiretty" in the `/etc/sudoers` file, please comment it out
 
-### Configure machine SSH password-free login
+### Configure Machine SSH Password-Free Login
 
 Since resources need to be sent to different machines during installation, SSH 
password-free login is required between each machine. The steps to configure 
password-free login are as follows
 
@@ -58,12 +58,12 @@ chmod 600 ~/.ssh/authorized_keys
 
 > **_Notice:_** After the configuration is complete, you can run `ssh localhost` to verify that you can log in via SSH without a password.
 
-### Start zookeeper
+### Start ZooKeeper
 
-Go to the zookeeper installation directory, copy configure file 
`zoo_sample.cfg` to `conf/zoo.cfg`, and change value of dataDir in 
`conf/zoo.cfg` to `dataDir=./tmp/zookeeper`
+Go to the ZooKeeper installation directory, copy the configuration file `zoo_sample.cfg` to `conf/zoo.cfg`, and change the value of `dataDir` in `conf/zoo.cfg` to `dataDir=./tmp/zookeeper`
 
 ```shell
-# Start zookeeper
+# Start ZooKeeper
 ./bin/zkServer.sh start
 ```
 
@@ -85,7 +85,7 @@ sh script/create-dolphinscheduler.sh
 ```
 -->
 
-## Modify configuration
+## Modify Configuration
 
 After completing the preparation of the basic environment, you need to modify the configuration file according to your environment. The configuration file is at `conf/config/install_config.conf`. Generally, you just need to modify the **INSTALL MACHINE, DolphinScheduler ENV, Database, Registry Server** parts to complete the deployment; the following describes the parameters that must be modified
 
@@ -126,11 +126,11 @@ dbname="dolphinscheduler"
 # ---------------------------------------------------------
 # Registry Server
 # ---------------------------------------------------------
-# Registration center address, the address of zookeeper service
+# Registration center address, the address of ZooKeeper service
 registryServers="localhost:2181"
 ```
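For a real ZooKeeper ensemble rather than a single local node, `registryServers` takes a comma-separated list of nodes; a sketch (the hostnames are placeholders):

```shell
# Placeholder hostnames; list every node of your ZooKeeper ensemble.
registryServers="zk1:2181,zk2:2181,zk3:2181"
echo "$registryServers"
```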
 
-## Initialize the database
+## Initialize the Database
 
 DolphinScheduler metadata is stored in a relational database. Currently, PostgreSQL and MySQL are supported. If you use MySQL, you need to manually download the [mysql-connector-java driver][mysql] (8.0.16) and move it to the lib directory of DolphinScheduler. Let's take MySQL as an example of how to initialize the database
 
@@ -167,7 +167,7 @@ sh install.sh
 
 Open http://localhost:12345/dolphinscheduler in a browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
 
-## Start or stop server
+## Start or Stop Server
 
 ```shell
 # Stop all DolphinScheduler server
diff --git a/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md 
b/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
index a3c776b..14b5187 100644
--- a/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
+++ b/docs/en-us/dev/user_doc/guide/installation/skywalking-agent.md
@@ -5,11 +5,11 @@ The dolphinscheduler-skywalking module provides 
[SkyWalking](https://skywalking.
 
 This document describes how to enable SkyWalking 8.4+ support with this module 
(recommended to use SkyWalking 8.5.0).
 
-# Installation
+## Installation
 
 The following configuration is used to enable SkyWalking agent.
 
-### Through environment variable configuration (for Docker Compose)
+### Through Environment Variable Configuration (for Docker Compose)
 
 Modify SkyWalking environment variables in `docker/docker-swarm/config.env.sh`:
 
@@ -26,7 +26,7 @@ And run
 $ docker-compose up -d
 ```
 
-### Through environment variable configuration (for Docker)
+### Through Environment Variable Configuration (for Docker)
 
 ```shell
 $ docker run -d --name dolphinscheduler \
@@ -41,7 +41,7 @@ $ docker run -d --name dolphinscheduler \
 apache/dolphinscheduler:1.3.8 all
 ```
 
-### Through install_config.conf configuration (for DolphinScheduler install.sh)
+### Through install_config.conf Configuration (for DolphinScheduler install.sh)
 
 Add the following configurations to 
`${workDir}/conf/config/install_config.conf`.
 
@@ -59,11 +59,11 @@ skywalkingLogReporterPort="11800"
 
 ```
 
-# Usage
+## Usage
 
 ### Import Dashboard
 
-#### Import DolphinScheduler Dashboard to SkyWalking Sever
+#### Import DolphinScheduler Dashboard to SkyWalking Server
 
 Copy the 
`${dolphinscheduler.home}/ext/skywalking-agent/dashboard/dolphinscheduler.yml` 
file into `${skywalking-oap-server.home}/config/ui-initialized-templates/` 
directory, and restart SkyWalking oap-server.
 
diff --git a/docs/en-us/dev/user_doc/guide/installation/standalone.md 
b/docs/en-us/dev/user_doc/guide/installation/standalone.md
index 9ab7b79..143ca65 100644
--- a/docs/en-us/dev/user_doc/guide/installation/standalone.md
+++ b/docs/en-us/dev/user_doc/guide/installation/standalone.md
@@ -4,7 +4,7 @@ Standalone only for quick look for DolphinScheduler.
 
 If you are a green hand and want to experience DolphinScheduler, we recommend the [Standalone](standalone.md) installation. If you want to experience more complete functions or schedule a larger number of tasks, we recommend the [pseudo-cluster deployment](pseudo-cluster.md). If you want to use DolphinScheduler in production, we recommend following the [cluster deployment](cluster.md) or [kubernetes](kubernetes.md) guides
 
-> **_Note:_** Standalone only recommends the use of less than 20 workflows, 
because it uses H2 Database, Zookeeper Testing Server, too many tasks may cause 
instability
+> **_Note:_** Standalone is recommended for fewer than 20 workflows, because it uses an H2 database and a ZooKeeper testing server; too many tasks may cause instability
 
 ## Prepare
 
@@ -13,7 +13,7 @@ If you are a green hand and want to experience 
DolphinScheduler, we recommended
 
 ## Start DolphinScheduler Standalone Server
 
-### Extract and start DolphinScheduler
+### Extract and Start DolphinScheduler
 
 There is a standalone startup script in the binary compressed package, which can be started quickly after extraction. Switch to a user with sudo permission and run the script
 
@@ -28,7 +28,7 @@ sh ./bin/dolphinscheduler-daemon.sh start standalone-server
 
 Open http://localhost:12345/dolphinscheduler in a browser to log in to the DolphinScheduler UI. The default username and password are **admin/dolphinscheduler123**
 
-## start/stop server
+### Start or Stop Server
 
 The script `./bin/dolphinscheduler-daemon.sh` can not only quickly start the standalone server but also stop it. All the commands are as follows
 
diff --git a/docs/en-us/dev/user_doc/guide/monitor.md 
b/docs/en-us/dev/user_doc/guide/monitor.md
index 2bad35e..4606abd 100644
--- a/docs/en-us/dev/user_doc/guide/monitor.md
+++ b/docs/en-us/dev/user_doc/guide/monitor.md
@@ -1,18 +1,17 @@
-
 # Monitor
 
-## Service management
+## Service Management
 
 - Service management mainly monitors and displays the health status and basic information of each service in the system
 
-## master monitoring
+## Monitor Master Server
 
 - Mainly related to master information.
 <p align="center">
    <img src="/img/master-jk-en.png" width="80%" />
  </p>
 
-## worker monitoring
+## Monitor Worker Server
 
 - Mainly related to worker information.
 
@@ -20,7 +19,7 @@
    <img src="/img/worker-jk-en.png" width="80%" />
  </p>
 
-## Zookeeper monitoring
+## Monitor ZooKeeper
 
 - Mainly the related configuration information of each worker and master in ZooKeeper.
 
@@ -28,7 +27,7 @@
    <img src="/img/zookeeper-monitor-en.png" width="80%" />
  </p>
 
-## DB monitoring
+## Monitor DB
 
 - Mainly the health of the DB
 
@@ -36,7 +35,7 @@
    <img src="/img/mysql-jk-en.png" width="80%" />
  </p>
 
-## Statistics management
+## Statistics Management
 
 <p align="center">
    <img src="/img/statistics-en.png" width="80%" />
@@ -44,5 +43,5 @@
 
 - Number of commands to be executed: statistics on the t_ds_command table
 - The number of failed commands: statistics on the t_ds_error_command table
-- Number of tasks to run: Count the data of task_queue in Zookeeper
-- Number of tasks to be killed: Count the data of task_kill in Zookeeper
+- Number of tasks to run: Count the data of task_queue in ZooKeeper
+- Number of tasks to be killed: Count the data of task_kill in ZooKeeper
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/open-api.md 
b/docs/en-us/dev/user_doc/guide/open-api.md
index e93737a..2210724 100644
--- a/docs/en-us/dev/user_doc/guide/open-api.md
+++ b/docs/en-us/dev/user_doc/guide/open-api.md
@@ -5,7 +5,7 @@ Generally, projects and processes are created through pages, 
but integration wit
 
 ## The Operation Steps of DS API Calls
 
-### Create a token
+### Create a Token
 1. Log in to the scheduling system, click "Security", then click "Token 
manage" on the left, and click "Create token" to create a token.
 
 <p align="center">
@@ -18,7 +18,7 @@ Generally, projects and processes are created through pages, 
but integration wit
    <img src="/img/create-token-en1.png" width="80%" />
  </p>
 
-### Use token
+### Use Token
 1. Open the API documentation page
     > Address:http://{api server 
ip}:12345/dolphinscheduler/doc.html?language=en_US&lang=en
 <p align="center">
@@ -36,7 +36,7 @@ Generally, projects and processes are created through pages, 
but integration wit
    <img src="/img/test-api.png" width="80%" />
  </p>  
 
-### Create a project
+### Create a Project
 Here is an example of creating a project named "wudl-flink-test":
 <p align="center">
    <img src="/img/api/create_project1.png" width="80%" />
@@ -52,7 +52,9 @@ Here is an example of creating a project named 
"wudl-flink-test":
 The returned msg information is "success", indicating that we have 
successfully created the project through API.
 
 If you are interested in the source code of the project, please continue to 
read the following:
-### Appendix:The source code of creating a project
+
+### Appendix: The Source Code of Creating a Project
+
 <p align="center">
    <img src="/img/api/create_source1.png" width="80%" />
  </p>
diff --git a/docs/en-us/dev/user_doc/guide/parameter/built-in.md 
b/docs/en-us/dev/user_doc/guide/parameter/built-in.md
index 2c88bed..5e666fe 100644
--- a/docs/en-us/dev/user_doc/guide/parameter/built-in.md
+++ b/docs/en-us/dev/user_doc/guide/parameter/built-in.md
@@ -45,4 +45,4 @@
       * Next N hours:$[HHmmss+N/24]
       * First N hours:$[HHmmss-N/24]
       * Next N minutes:$[HHmmss+N/24/60]
-      * First N minutes:$[HHmmss-N/24/60]
+      * First N minutes:$[HHmmss-N/24/60]
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/parameter/context.md 
b/docs/en-us/dev/user_doc/guide/parameter/context.md
index c125150..6fe9f9a 100644
--- a/docs/en-us/dev/user_doc/guide/parameter/context.md
+++ b/docs/en-us/dev/user_doc/guide/parameter/context.md
@@ -2,7 +2,7 @@
 
 DolphinScheduler provides the ability to refer to each other between 
parameters, including: local parameters refer to global parameters, and 
upstream and downstream parameter transfer. Because of the existence of 
references, it involves the priority of parameters when the parameter names are 
the same. see also [Parameter Priority](priority.md)
 
-## Local task use global parameter
+## Local Task Uses Global Parameter
 
 The premise of local tasks referencing global parameters is that you have 
already defined [Global Parameter](global.md). The usage is similar to the 
usage in [local parameters](local.md), but the value of the parameter needs to 
be configured as the key in the global parameter
 
@@ -10,7 +10,7 @@ The premise of local tasks referencing global parameters is 
that you have alread
 
 As shown in the figure above, `${biz_date}` and `${curdate}` are examples of 
local parameters referencing global parameters. Observe the last line of the 
above figure, local_param_bizdate uses \${global_bizdate} to refer to the 
global parameter. In the shell script, you can use \${local_param_bizdate} to 
refer to the value of the global variable global_bizdate, or set the value of 
local_param_bizdate directly through JDBC. In the same way, local_param refers 
to the global parameters defi [...]
 
-## Pass parameter from upstream task to downstream
+## Pass Parameter From Upstream Task to Downstream
 
 DolphinScheduler Parameter transfer between tasks is allowed, and the current 
transfer direction only supports one-way transfer from upstream to downstream. 
The task types currently supporting this feature are:
 
diff --git a/docs/en-us/dev/user_doc/guide/parameter/priority.md 
b/docs/en-us/dev/user_doc/guide/parameter/priority.md
index e2ae733..008684b 100644
--- a/docs/en-us/dev/user_doc/guide/parameter/priority.md
+++ b/docs/en-us/dev/user_doc/guide/parameter/priority.md
@@ -37,4 +37,4 @@ The definition of the [use_create] node is as follows:
 
 "status" is the own parameters of the node set by the current node. However, 
the user also sets the "status" parameter when saving, assigning its value to 
-1. Then the value of status will be -1 with higher priority when the SQL is 
executed. The value of the node's own variable is discarded.
 
-The "ID" here is the parameter set by the upstream node. The user sets the 
parameters of the same parameter name "ID" for the [createparam1] node and 
[createparam2] node. And the [use_create] node uses the value of [createParam1] 
which is finished first.
+The "ID" here is the parameter set by the upstream node. The user sets the 
parameters of the same parameter name "ID" for the [createparam1] node and 
[createparam2] node. And the [use_create] node uses the value of [createParam1] 
which is finished first.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/project/project-list.md 
b/docs/en-us/dev/user_doc/guide/project/project-list.md
index 37c7b9f..48b3d5e 100644
--- a/docs/en-us/dev/user_doc/guide/project/project-list.md
+++ b/docs/en-us/dev/user_doc/guide/project/project-list.md
@@ -1,6 +1,6 @@
 # Project
 
-## Create project
+## Create Project
 
 - Click "Project Management" to enter the project management page, click the 
"Create Project" button, enter the project name, project description, and click 
"Submit" to create a new project.
 
@@ -8,7 +8,7 @@
       <img src="/img/create_project_en1.png" width="80%" />
   </p>
 
-## Project home
+## Project Home
 
 - Click the project name link on the project management page to enter the 
project home page, as shown in the figure below, the project home page contains 
the task status statistics, process status statistics, and workflow definition 
statistics of the project. The introduction for those metric:
 
diff --git a/docs/en-us/dev/user_doc/guide/project/task-instance.md 
b/docs/en-us/dev/user_doc/guide/project/task-instance.md
index 6d02cdc..a8a465a 100644
--- a/docs/en-us/dev/user_doc/guide/project/task-instance.md
+++ b/docs/en-us/dev/user_doc/guide/project/task-instance.md
@@ -1,5 +1,5 @@
 
-## Task instance
+## Task Instance
 
 - Click Project Management -> Workflow -> Task Instance to enter the task 
instance page, as shown in the figure below, click the name of the workflow 
instance, you can jump to the workflow instance DAG chart to view the task 
status.
      <p align="center">
diff --git a/docs/en-us/dev/user_doc/guide/project/workflow-definition.md 
b/docs/en-us/dev/user_doc/guide/project/workflow-definition.md
index ddb9d2f..485c046 100644
--- a/docs/en-us/dev/user_doc/guide/project/workflow-definition.md
+++ b/docs/en-us/dev/user_doc/guide/project/workflow-definition.md
@@ -1,6 +1,6 @@
-# Workflow definition
+# Workflow Definition
 
-## <span id=creatDag> Create workflow definition</span>
+## <span id=creatDag> Create Workflow Definition</span>
 
 - Click Project Management -> Workflow -> Workflow Definition to enter the 
workflow definition page, and click the "Create Workflow" button to enter the 
**workflow DAG edit** page, as shown in the following figure:
   <p align="center">
@@ -37,7 +37,7 @@
    </p>
 > For other types of tasks, please refer to [Task Node Type and Parameter 
 > Settings](#TaskParamers).
 
-## Workflow definition operation function
+## Workflow Definition Operation Function
 
 Click Project Management -> Workflow -> Workflow Definition to enter the 
workflow definition page, as shown below:
 
@@ -59,7 +59,7 @@ The operation functions of the workflow definition list are 
as follows:
       <img src="/img/tree_en.png" width="80%" />
   </p>
 
-## <span id=runWorkflow>Run the workflow</span>
+## <span id=runWorkflow>Run the Workflow</span>
 
 - Click Project Management -> Workflow -> Workflow Definition to enter the 
workflow definition page, as shown in the figure below, click the "Go Online" 
button <img src="/img/online.png" width="35"/>,Go online workflow.
   <p align="center">
@@ -91,7 +91,7 @@ The operation functions of the workflow definition list are 
as follows:
 
   > Parallel mode: The tasks from May 1 to may 10 are executed simultaneously, 
and 10 process instances are generated on the process instance page.
 
-## <span id=creatTiming>Workflow timing</span>
+## <span id=creatTiming>Workflow Timing</span>
 
 - Create timing: Click Project Management->Workflow->Workflow Definition, 
enter the workflow definition page, go online the workflow, click the "timing" 
button <img src="/img/timing.png" width="35"/> ,The timing parameter setting 
dialog box pops up, as shown in the figure below:
   <p align="center">
@@ -109,6 +109,6 @@ The operation functions of the workflow definition list are 
as follows:
       <img src="/img/time-manage-list-en.png" width="80%" />
   </p>
 
-## Import workflow
+## Import Workflow
 
 Click Project Management -> Workflow -> Workflow Definition to enter the 
workflow definition page, click the "Import Workflow" button to import the 
local workflow file, the workflow definition list displays the imported 
workflow, and the status is offline.
diff --git a/docs/en-us/dev/user_doc/guide/project/workflow-instance.md 
b/docs/en-us/dev/user_doc/guide/project/workflow-instance.md
index ac65ebe..1733e7a 100644
--- a/docs/en-us/dev/user_doc/guide/project/workflow-instance.md
+++ b/docs/en-us/dev/user_doc/guide/project/workflow-instance.md
@@ -1,6 +1,6 @@
-# Workflow instance
+# Workflow Instance
 
-## View workflow instance
+## View Workflow Instance
 
 - Click Project Management -> Workflow -> Workflow Instance to enter the 
Workflow Instance page, as shown in the figure below:
      <p align="center">
@@ -11,7 +11,7 @@
     <img src="/img/instance-runs-en.png" width="80%" />
   </p>
 
-## View task log
+## View Task Log
 
 - Enter the workflow instance page, click the workflow name, enter the DAG 
view page, double-click the task node, as shown in the following figure:
    <p align="center">
@@ -22,7 +22,7 @@
      <img src="/img/task-log-en.png" width="80%" />
    </p>
 
-## View task history
+## View Task History
 
 - Click Project Management -> Workflow -> Workflow Instance to enter the 
workflow instance page, and click the workflow name to enter the workflow DAG 
page;
 - Double-click the task node, as shown in the figure below, click "View 
History" to jump to the task instance page, and display a list of task 
instances running by the workflow instance
@@ -30,7 +30,7 @@
      <img src="/img/task_history_en.png" width="80%" />
    </p>
 
-## View operating parameters
+## View Operating Parameters
 
 - Click Project Management -> Workflow -> Workflow Instance to enter the 
workflow instance page, and click the workflow name to enter the workflow DAG 
page;
 - Click the icon in the upper left corner <img 
src="/img/run_params_button.png" width="35"/>,View the startup parameters of 
the workflow instance; click the icon <img src="/img/global_param.png" 
width="35"/>,View the global and local parameters of the workflow instance, as 
shown in the following figure:
@@ -38,7 +38,7 @@
      <img src="/img/run_params_en.png" width="80%" />
    </p>
 
-## Workflow instance operation function
+## Workflow Instance Operation Function
 
 Click Project Management -> Workflow -> Workflow Instance to enter the 
Workflow Instance page, as shown in the figure below:
 
diff --git a/docs/en-us/dev/user_doc/guide/resource.md 
b/docs/en-us/dev/user_doc/guide/resource.md
index 0c5c230..26e7bf8 100644
--- a/docs/en-us/dev/user_doc/guide/resource.md
+++ b/docs/en-us/dev/user_doc/guide/resource.md
@@ -7,7 +7,7 @@ If you want to use the resource upload function, you can select 
the local file d
 > * If the resource upload function is used, the deployment user in 
 > [installation and deployment](installation/standalone.md) must to have 
 > operation authority
 > * If you using Hadoop cluster with HA, you need to enable HDFS resource 
 > upload, and you need to copy the `core-site.xml` and `hdfs-site.xml` under 
 > the Hadoop cluster to `/opt/dolphinscheduler/conf`, otherwise Skip step
 
-## hdfs resource configuration
+## HDFS Resource Configuration
 
 - Upload resource files and udf functions, all uploaded files and resources 
will be stored on hdfs, so the following configuration items are required:
 
@@ -42,7 +42,7 @@ conf/common/hadoop.properties
 - Only one address needs to be configured for yarn.resourcemanager.ha.rm.ids 
and yarn.application.status.address, and the other address is empty.
 - You need to copy core-site.xml and hdfs-site.xml from the conf directory of 
the Hadoop cluster to the conf directory of the dolphinscheduler project, and 
restart the api-server service.
 
-## File management
+## File Management
 
 > It is the management of various resource files, including creating basic 
 > txt/log/sh/conf/py/java and other files, uploading jar packages and other 
 > types of files, and can do edit, rename, download, delete and other 
 > operations.
 
@@ -95,9 +95,9 @@ conf/common/hadoop.properties
     </p>
 
 
-## UDF management
+## UDF Management
 
-### Resource management
+### Resource Management
 
 > The resource management and file management functions are similar. The 
 > difference is that the resource management is the uploaded UDF function, and 
 > the file management uploads the user program, script and configuration file.
 > Operation function: rename, download, delete.
@@ -105,7 +105,7 @@ conf/common/hadoop.properties
 - Upload udf resources
   > Same as uploading files.
 
-### Function management
+### Function Management
 
 - Create UDF function
   > Click "Create UDF Function", enter the udf function parameters, select the 
udf resource, and click "Submit" to create the udf function.
@@ -120,13 +120,13 @@ conf/common/hadoop.properties
    <img src="/img/udf_edit_en.png" width="80%" />
  </p>
  
- ## Task group settings
+## Task Group Settings
 
 The task group is mainly used to control the concurrency of task instances, 
and is designed to control the pressure of other resources (it can also control 
the pressure of the Hadoop cluster, the cluster will have queue control it). 
When creating a new task definition, you can configure the corresponding task 
group and configure the priority of the task running in the task group. 
 
-### Task group configuration 
+### Task Group Configuration 
 
-#### Create task group 
+#### Create Task Group 
 
 <p align="center">
     <img src="/img/task_group_manage_eng.png" width="80%" />
@@ -146,7 +146,7 @@ You need to enter the information in the picture:
 
 [Resource pool size]: The maximum number of concurrent task instances allowed 
 
-#### View task group queue 
+#### View Task Group Queue 
 
 <p align="center">
     <img src="/img/task_group_conf_eng.png" width="80%" />
@@ -158,7 +158,7 @@ Click the button to view task group usage information
     <img src="/img/task_group_queue_list_eng.png" width="80%" />
 </p>
 
-#### Use of task groups 
+#### Use of Task Groups 
 
 Note: The use of task groups is applicable to tasks executed by workers, such 
as [switch] nodes, [condition] nodes, [sub_process] and other node types 
executed by the master are not controlled by the task group. Let's take the 
shell node as an example: 
 
@@ -173,13 +173,13 @@ Regarding the configuration of the task group, all you 
need to do is to configur
 
 [Priority] : When there is a waiting resource, the task with high priority 
will be distributed to the worker by the master first. The larger the value of 
this part, the higher the priority. 
 
-###  Implementation logic of task group 
+### Implementation Logic of Task Group 
 
-#### Get task group resources: 
+#### Get Task Group Resources
 
 The master judges whether the task is configured with a task group when 
distributing the task. If the task is not configured, it is normally thrown to 
the worker to run; if a task group is configured, it checks whether the 
remaining size of the task group resource pool meets the current task operation 
before throwing it to the worker for execution. , if the resource pool -1 is 
satisfied, continue to run; if not, exit the task distribution and wait for 
other tasks to wake up. 
 
-#### Release and wake up: 
+#### Release and Wake Up
 
 When the task that has obtained the task group resource ends, the task group 
resource will be released. After the release, it will check whether there is a 
task waiting in the current task group. If there is, mark the task with the 
best priority to run, and create a new executable event. . The event stores the 
task id that is marked to obtain the resource, and then obtains the task group 
resource and then runs it. 
 
diff --git a/docs/en-us/dev/user_doc/guide/security.md 
b/docs/en-us/dev/user_doc/guide/security.md
index bbab492..9e20dcb 100644
--- a/docs/en-us/dev/user_doc/guide/security.md
+++ b/docs/en-us/dev/user_doc/guide/security.md
@@ -1,10 +1,9 @@
-
 # Security
 
 * Only the administrator account in the security center has the authority to 
operate. It has functions such as queue management, tenant management, user 
management, alarm group management, worker group management, token management, 
etc. In the user management module, resources, data sources, projects, etc. 
Authorization
 * Administrator login, default user name and password: 
admin/dolphinscheduler123
 
-## Create queue
+## Create Queue
 
 - Queue is used when the "queue" parameter is needed to execute programs such 
as spark and mapreduce.
 - The administrator enters the Security Center->Queue Management page and 
clicks the "Create Queue" button to create a queue.
@@ -12,7 +11,7 @@
    <img src="/img/create-queue-en.png" width="80%" />
  </p>
 
-## Add tenant
+## Add Tenant
 
 - The tenant corresponds to the Linux user, which is used by the worker to 
submit the job. Task will fail if Linux does not exists this user. You can set 
the parameter `worker.tenant.auto.create` as `true` in configuration file 
`worker.properties`. After that DolphinScheduler would create user if not 
exists, The property `worker.tenant.auto.create=true` requests worker run 
`sudo` command without password.
 - Tenant Code: **Tenant Code is the only user on Linux and cannot be repeated**
@@ -22,7 +21,7 @@
     <img src="/img/addtenant-en.png" width="80%" />
   </p>
 
-## Create normal user
+## Create Normal User
 
 - Users are divided into **administrator users** and **normal users**
 
@@ -45,7 +44,7 @@
 - The administrator enters the Security Center->User Management page and 
clicks the "Edit" button. When editing user information, enter the new password 
to modify the user password.
 - After a normal user logs in, click the user information in the user name 
drop-down box to enter the password modification page, enter the password and 
confirm the password and click the "Edit" button, then the password 
modification is successful.
 
-## Create alarm group
+## Create Alarm Group
 
 - The alarm group is a parameter set at startup. After the process ends, the 
status of the process and other information will be sent to the alarm group in 
the form of email.
 
@@ -54,7 +53,7 @@
   <p align="center">
     <img src="/img/mail-en.png" width="80%" />
 
-## Token management
+## Token Management
 
 > Since the back-end interface has login check, token management provides a 
 > way to perform various operations on the system by calling the interface.
 
@@ -102,7 +101,7 @@
     }
 ```
 
-## Granted permission
+## Granted Permission
 
     * Granted permissions include project permissions, resource permissions, 
data source permissions, UDF function permissions.
     * The administrator can authorize the projects, resources, data sources 
and UDF functions not created by ordinary users. Because the authorization 
methods for projects, resources, data sources and UDF functions are the same, 
we take project authorization as an example.
@@ -121,7 +120,7 @@
 
 - Resources, data sources, and UDF function authorization are the same as 
project authorization.
 
-## Worker grouping
+## Worker Grouping
 
 Each worker node will belong to its own worker group, and the default group is 
"default".
 
diff --git a/docs/en-us/dev/user_doc/guide/task/conditions.md 
b/docs/en-us/dev/user_doc/guide/task/conditions.md
index 345bee8..d2e262e 100644
--- a/docs/en-us/dev/user_doc/guide/task/conditions.md
+++ b/docs/en-us/dev/user_doc/guide/task/conditions.md
@@ -31,6 +31,6 @@ Drag in the toolbar<img src="/img/conditions.png" 
width="20"/>The task node to t
   - Add the upstream dependency: Use the first parameter to choose task name, 
and the second parameter for status of the upsteam task.
   - Upstream task relationship: we use `and` and `or` operators to handle 
complex relationship of upstream when multiple upstream tasks for Conditions 
task
 
-## Related task
+## Related Task
 
 [switch](switch.md): [Condition](conditions.md)task mainly executes the 
corresponding branch based on the execution status (success, failure) of the 
upstream node. The [Switch](switch.md) task mainly executes the corresponding 
branch based on the value of the [global parameter](../parameter/global.md) and 
the judgment expression result written by the user.
diff --git a/docs/en-us/dev/user_doc/guide/task/datax.md 
b/docs/en-us/dev/user_doc/guide/task/datax.md
index f6436bc..a13dec7 100644
--- a/docs/en-us/dev/user_doc/guide/task/datax.md
+++ b/docs/en-us/dev/user_doc/guide/task/datax.md
@@ -1,5 +1,4 @@
-
-# DATAX
+# DataX
 
 - Drag in the toolbar<img src="/img/datax.png" width="35"/>Task node into the 
drawing board
 
diff --git a/docs/en-us/dev/user_doc/guide/task/dependent.md 
b/docs/en-us/dev/user_doc/guide/task/dependent.md
index 97c2940..88868c9 100644
--- a/docs/en-us/dev/user_doc/guide/task/dependent.md
+++ b/docs/en-us/dev/user_doc/guide/task/dependent.md
@@ -1,4 +1,4 @@
-# DEPENDENT
+# Dependent
 
 - Dependent nodes are **dependency check nodes**. For example, process A 
depends on the successful execution of process B yesterday, and the dependent 
node will check whether process B has a successful execution yesterday.
 
diff --git a/docs/en-us/dev/user_doc/guide/task/emr.md 
b/docs/en-us/dev/user_doc/guide/task/emr.md
index 6bd314b..e44a599 100644
--- a/docs/en-us/dev/user_doc/guide/task/emr.md
+++ b/docs/en-us/dev/user_doc/guide/task/emr.md
@@ -17,6 +17,7 @@ Amazon EMR task type, for creating EMR clusters on AWS and 
performing computing
 - json: The json corresponding to the 
[RunJobFlowRequest](https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticmapreduce/model/RunJobFlowRequest.html)
 object,can also refer to 
[API_RunJobFlow_Examples](https://docs.aws.amazon.com/emr/latest/APIReference/API_RunJobFlow.html#API_RunJobFlow_Examples)
 
 ## json example
+
 ```json
 {
   "Name": "SparkPi",
diff --git a/docs/en-us/dev/user_doc/guide/task/flink.md 
b/docs/en-us/dev/user_doc/guide/task/flink.md
index 18c15f0..3f7881a 100644
--- a/docs/en-us/dev/user_doc/guide/task/flink.md
+++ b/docs/en-us/dev/user_doc/guide/task/flink.md
@@ -4,7 +4,7 @@
 
 Flink task type for executing Flink programs. For Flink nodes, the worker 
submits the task by using the flink command `flink run`. See [flink 
cli](https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/deployment/cli/)
 for more details.
 
-## Create task
+## Create Task
 
 - Click Project Management -> Project Name -> Workflow Definition, and click 
the "Create Workflow" button to enter the DAG editing page.
 - Drag the <img src="/img/tasks/icons/flink.png" width="15"/> from the toolbar 
to the drawing board.
@@ -42,11 +42,11 @@ Flink task type for executing Flink programs. For Flink 
nodes, the worker submit
 
 ## Task Example
 
-### Execute the WordCount program
+### Execute the WordCount Program
 
 This is a common introductory case in the Big Data ecosystem, which often 
applied to computational frameworks such as MapReduce, Flink and Spark. The 
main purpose is to count the number of identical words in the input text. 
(Flink's releases come with this example job)
 
-#### Uploading the main package
+#### Upload the Main Package
 
 When using the Flink task node, you will need to use the Resource Centre to 
upload the jar package for the executable. Refer to the [resource 
center](../resource.md).
 
@@ -54,7 +54,7 @@ After configuring the Resource Centre, you can upload the 
required target files
 
 ![resource_upload](/img/tasks/demo/upload_flink.png)
 
-#### Configuring Flink nodes
+#### Configure Flink Nodes
 
 Simply configure the required content according to the parameter descriptions 
above.
 
@@ -62,4 +62,4 @@ Simply configure the required content according to the 
parameter descriptions ab
 
 ## Notice
 
- JAVA and Scala are only used for identification, there is no difference, if 
it is Flink developed by Python, there is no class of the main function, the 
others are the same.
+Java and Scala are only used for identification; there is no difference. If the 
Flink task is developed in Python, there is no class of the main function; the 
others are the same.
diff --git a/docs/en-us/dev/user_doc/guide/task/http.md 
b/docs/en-us/dev/user_doc/guide/task/http.md
index 6072e66..d578180 100644
--- a/docs/en-us/dev/user_doc/guide/task/http.md
+++ b/docs/en-us/dev/user_doc/guide/task/http.md
@@ -1,4 +1,3 @@
-
 # HTTP
 
 - Drag in the toolbar<img src="/img/http.png" width="35"/>The task node to the 
drawing board, as shown in the following figure:
diff --git a/docs/en-us/dev/user_doc/guide/task/map-reduce.md 
b/docs/en-us/dev/user_doc/guide/task/map-reduce.md
index 5fa23ab..c98bdd4 100644
--- a/docs/en-us/dev/user_doc/guide/task/map-reduce.md
+++ b/docs/en-us/dev/user_doc/guide/task/map-reduce.md
@@ -8,6 +8,7 @@
 
 - Click Project Management-Project Name-Workflow Definition, and click the 
"Create Workflow" button to enter the DAG editing page.
 - Drag the <img src="/img/tasks/icons/mr.png" width="15"/> from the toolbar to 
the drawing board.
+
 ## Task Parameter
 
 -    **Node name**: The node name in a workflow definition is unique.
@@ -35,7 +36,7 @@
 - **Resource**: If the resource file is referenced in other parameters, you 
need to select and specify in the resource
 - **User-defined parameter**: It is a user-defined parameter of the MapReduce 
part, which will replace the content with \${variable} in the script
 
-## Python program
+## Python Program
 
 - **Program type**: select Python language
 - **Main jar package**: is the Python jar package for running MR
@@ -47,11 +48,11 @@
 
 ## Task Example
 
-### Execute the WordCount program
+### Execute the WordCount Program
 
 This example is a common introductory type of MapReduce application, which is 
designed to count the number of identical words in the input text.
 
-#### Uploading the main package
+#### Upload the Main Package
 
 When using the MapReduce task node, you will need to use the Resource Centre 
to upload the jar package for the executable. Refer to the [resource 
centre](../resource.md).
 
@@ -59,7 +60,7 @@ After configuring the Resource Centre, you can upload the 
required target files
 
 ![resource_upload](/img/tasks/demo/resource_upload.png)
 
-#### Configuring MapReduce nodes
+#### Configure MapReduce Nodes
 
 Simply configure the required content according to the parameter descriptions 
above.
 
diff --git a/docs/en-us/dev/user_doc/guide/task/pigeon.md 
b/docs/en-us/dev/user_doc/guide/task/pigeon.md
index b50e1c1..9ec2430 100644
--- a/docs/en-us/dev/user_doc/guide/task/pigeon.md
+++ b/docs/en-us/dev/user_doc/guide/task/pigeon.md
@@ -16,4 +16,4 @@ Drag in the toolbar<img src="/img/pigeon.png" width="20"/>The 
task node to the d
 - Number of failed retry attempts: The number of times the task failed to be 
resubmitted. It supports drop-down and hand-filling.
 - Failed retry interval: The time interval for resubmitting the task after a 
failed task. It supports drop-down and hand-filling.
 - Timeout alarm: Check the timeout alarm and timeout failure. When the task 
exceeds the "timeout period", an alarm email will be sent and the task 
execution will fail.
-- Target task name: Pigeon websocket service name.
+- Target task name: Pigeon websocket service name.
\ No newline at end of file
diff --git a/docs/en-us/dev/user_doc/guide/task/spark.md 
b/docs/en-us/dev/user_doc/guide/task/spark.md
index 9543d18..99bb2b5 100644
--- a/docs/en-us/dev/user_doc/guide/task/spark.md
+++ b/docs/en-us/dev/user_doc/guide/task/spark.md
@@ -4,7 +4,7 @@
 
 Spark task type for executing Spark programs. For Spark nodes, the worker 
submits the task by using the spark command `spark submit`. See 
[spark-submit](https://spark.apache.org/docs/3.2.1/submitting-applications.html#launching-applications-with-spark-submit)
 for more details.
 
-## Create task
+## Create Task
 
 - Click Project Management -> Project Name -> Workflow Definition, and click 
the "Create Workflow" button to enter the DAG editing page.
 - Drag the <img src="/img/tasks/icons/spark.png" width="15"/> from the toolbar 
to the drawing board.
@@ -39,11 +39,11 @@ Spark task type for executing Spark programs. For Spark 
nodes, the worker submit
 
 ## Task Example
 
-### Execute the WordCount program
+### Execute the WordCount Program
 
 This is a common introductory case in the Big Data ecosystem, which often 
applied to computational frameworks such as MapReduce, Flink and Spark. The 
main purpose is to count the number of identical words in the input text.
 
-#### Uploading the main package
+#### Upload the Main Package
 
 When using the Spark task node, you will need to use the Resource Center to 
upload the jar package for the executable. Refer to the [resource 
center](../resource.md).
 
@@ -51,7 +51,7 @@ After configuring the Resource Center, you can upload the 
required target files
 
 ![resource_upload](/img/tasks/demo/upload_spark.png)
 
-#### Configuring Spark nodes
+#### Configure Spark Nodes
 
 Simply configure the required content according to the parameter descriptions 
above.
 
@@ -59,4 +59,4 @@ Simply configure the required content according to the 
parameter descriptions ab
 
 ## Notice
 
- JAVA and Scala are only used for identification, there is no difference, if 
it is Spark developed by Python, there is no class of the main function, the 
others are the same.
+Java and Scala are only used for identification; there is no difference. If the 
Spark task is developed in Python, there is no class of the main function; the 
others are the same.
diff --git a/docs/en-us/dev/user_doc/guide/task/sql.md 
b/docs/en-us/dev/user_doc/guide/task/sql.md
index 4cbb582..0b6f51c 100644
--- a/docs/en-us/dev/user_doc/guide/task/sql.md
+++ b/docs/en-us/dev/user_doc/guide/task/sql.md
@@ -4,7 +4,7 @@
 
 SQL task, used to connect to database and execute SQL.
 
-## create data source
+## Create Data Source
 
 Refer to [Data Source](../datasource/introduction.md)
 
@@ -26,13 +26,13 @@ Refer to [Data Source](../datasource/introduction.md)
 
 ## Task Example
 
-### Create a temporary table in hive and write data
+### Create a Temporary Table in Hive and Write Data
 
 This example creates a temporary table `tmp_hello_world` in Hive and writes a row of data. To ensure the table does not already exist, we use a custom parameter to obtain the current date as a suffix of the table name on each run, so that this task can run every day. The format of the created table name is: `tmp_hello_world_{yyyyMMdd}`.
 
 ![hive-sql](/img/tasks/demo/hive-sql.png)
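
The day-suffixed naming above can be sketched in shell. Only the `tmp_hello_world_{yyyyMMdd}` name comes from this document; the DDL body and its column are illustrative assumptions:

```shell
# Build the day-suffixed table name the task uses, then render an
# illustrative DDL statement (the 'msg' column is an assumption).
DT=$(date +%Y%m%d)
TABLE="tmp_hello_world_${DT}"
SQL="create table if not exists ${TABLE} as select 'hello world' as msg"
echo "$SQL"
```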
 
-### After running the task successfully, query the results in hive.
+### After Running the Task Successfully, Query the Results in Hive
 
 Log in to the big data cluster and connect to Apache Hive with the `hive` command, `beeline`, JDBC, or other methods to run the query. The query SQL is `select * from tmp_hello_world_{yyyyMMdd}`; replace `{yyyyMMdd}` with the date of the running day. The query screenshot is as follows:
 
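As a sketch, the date substitution can be scripted so the suffix is filled in automatically; the `beeline` invocation in the comment is a placeholder assumption, not taken from this document:

```shell
# Substitute the running day's date into the query from the doc.
DT=$(date +%Y%m%d)
QUERY="select * from tmp_hello_world_${DT}"
echo "$QUERY"
# e.g. run it via beeline (host and port are placeholders):
# beeline -u "jdbc:hive2://<hiveserver2-host>:10000" -e "$QUERY"
```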
diff --git a/docs/en-us/dev/user_doc/guide/upgrade.md 
b/docs/en-us/dev/user_doc/guide/upgrade.md
index a2d2f22..1b49b86 100644
--- a/docs/en-us/dev/user_doc/guide/upgrade.md
+++ b/docs/en-us/dev/user_doc/guide/upgrade.md
@@ -1,18 +1,18 @@
+# DolphinScheduler Upgrade Documentation
 
-# DolphinScheduler upgrade documentation
+## Back Up Previous Version's Files and Database
 
-## 1. Back Up Previous Version's Files and Database.
-
-## 2. Stop All Services of DolphinScheduler.
+## Stop All Services of DolphinScheduler
 
  `sh ./script/stop-all.sh`
 
-## 3. Download the New Version's Installation Package.
+## Download the New Version's Installation Package
 
 - [Download](/en-us/download/download.html) the latest version of the installation package.
 - The following upgrade operations need to be performed in the new version's 
directory.
 
-## 4. Database Upgrade
+## Database Upgrade
+
 - Modify the following properties in conf/datasource.properties.
 
 - If you use MySQL as the database to run DolphinScheduler, please comment out the PostgreSQL-related configurations, add the MySQL connector jar into the lib dir (here we download mysql-connector-java-8.0.16.jar), and then correctly configure the database connection information. You can download the MySQL connector jar [here](https://downloads.MySQL.com/archives/c-j/). Alternatively, if you use PostgreSQL as the database, you just need to comment out the MySQL-related configurations and correctly config database conne [...]
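
As a hedged sketch, the MySQL variant of `conf/datasource.properties` might look like the following; the host, port, database name, and credentials are placeholders, not values from this document:

```properties
# conf/datasource.properties (MySQL; all values below are placeholders)
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dolphinscheduler?useUnicode=true&characterEncoding=UTF-8
spring.datasource.username=<your-user>
spring.datasource.password=<your-password>
```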
@@ -32,29 +32,30 @@
 
     `sh ./script/upgrade-dolphinscheduler.sh`
 
-## 5. Backend Service Upgrade.
+## Backend Service Upgrade
 
-### 5.1 Modify the Content in `conf/config/install_config.conf` File.
+### Modify the Content in `conf/config/install_config.conf` File
 
 - For standalone deployment, please refer to [6, Modify running arguments] in [Standalone-Deployment](./installation/standalone.md).
 - For cluster deployment, please refer to [6, Modify running arguments] in [Cluster-Deployment](./installation/cluster.md).
 
 #### Masters Need Attention
+
 Creating a worker group has a different design in version 1.3.1:
 
 - Before version 1.3.1, worker groups could be created through the UI interface.
 - Since version 1.3.1, worker groups can be created by modifying the worker configuration.
 
-#### When Upgrade from Version Before 1.3.1 to 1.3.2, Below Operations are 
What We Need to Do to Keep Worker Group Config Consist with Previous.
+#### When Upgrading from a Version Before 1.3.1 to 1.3.2, the Operations Below Keep the Worker Group Config Consistent with the Previous Version
 
-1, Go to the backup database, search records in t_ds_worker_group table, 
mainly focus id, name and IP three columns.
+1. Go to the backup database and search the records in the t_ds_worker_group table, focusing mainly on the three columns id, name, and ip_list.
 
 | id | name | ip_list    |
 | :---         |     :---:      |          ---: |
 | 1   | service1     | 192.168.xx.10    |
 | 2   | service2     | 192.168.xx.11,192.168.xx.12      |
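
The lookup for the three columns above can be sketched as follows; the `mysql` client invocation in the comment is an assumption, run it wherever the backup is restored:

```shell
# The query against the backup database, per the doc's three columns.
QUERY="select id, name, ip_list from t_ds_worker_group;"
echo "$QUERY"
# e.g.: mysql -u <user> -p <backup_db> -e "$QUERY"
```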
 
-2、Modify the workers config item in conf/config/install_config.conf file.
+2. Modify the workers config item in conf/config/install_config.conf file.
 
 Imagine the worker services below are to be deployed on these machines:
 | hostname | ip |
@@ -70,10 +71,11 @@ To keep worker group config consistent with the previous 
version, we need to mod
 workers="ds1:service1,ds2:service2,ds3:service2"
 ```
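
To sanity-check such a mapping before deploying, the `workers` value can be expanded into per-host lines; a minimal sketch using the example value above:

```shell
# Expand the comma-separated host:group pairs into readable lines.
workers="ds1:service1,ds2:service2,ds3:service2"
mapping=$(printf '%s\n' "$workers" | tr ',' '\n' | awk -F: '{print $1 " -> " $2}')
echo "$mapping"
```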
 
-#### The Worker Group has Been Enhanced in Version 1.3.2.
+#### The Worker Group Has Been Enhanced in Version 1.3.2
 A worker in 1.3.1 can't belong to more than one worker group; in 1.3.2 this is supported. So `workers="ds1:service1,ds1:service2"` is not supported in 1.3.1 but is supported in 1.3.2.
-  
-### 5.2 Execute Deploy Script.
+
+### Execute Deploy Script
+
 ```shell
 sh install.sh
 ```
