This is an automated email from the ASF dual-hosted git repository.

casion pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/linkis-website.git


The following commit(s) were added to refs/heads/dev by this push:
     new aa84b55fa91 [feat-4687]linkis-1.4.0 release note (#724)
aa84b55fa91 is described below

commit aa84b55fa91af52c2b139fc317527990436ade5a
Author: aiceflower <[email protected]>
AuthorDate: Mon Jul 17 19:56:26 2023 +0800

    [feat-4687]linkis-1.4.0 release note (#724)
    
    * update 1.4.0 feature
    
    * 1.4.0 release note and features
    
    * update version number
---
 docs/engine-usage/impala.md                        | 243 +++++++++++++++++++++
 docs/feature/ec-fix-label.md                       |  25 ---
 docs/feature/eureka-version-metadata.md            |  34 ---
 docs/feature/load-udf-by-udfid.md                  |  47 ----
 docs/feature/overview.md                           |  42 ++--
 docs/feature/remove-dss-support.md                 |  17 --
 docs/feature/spark-submit-jar.md                   |  52 -----
 docs/feature/update-token.md                       |  78 -------
 docs/feature/version-and-branch-intro.md           |  13 ++
 download/release-notes-1.4.0.md                    | 103 +++++++++
 .../current/release-notes-1.4.0.md                 | 104 +++++++++
 .../current/engine-usage/impala.md                 | 243 +++++++++++++++++++++
 .../current/feature/ec-fix-label.md                |  25 ---
 .../current/feature/eureka-version-metadata.md     |  34 ---
 .../current/feature/load-udf-by-udfid.md           |  47 ----
 .../current/feature/overview.md                    |  26 +--
 .../current/feature/remove-dss-support.md          |  17 --
 .../current/feature/spark-submit-jar.md            |  52 -----
 .../current/feature/update-token.md                |  78 -------
 .../current/feature/version-and-branch-intro.md    |  13 ++
 20 files changed, 753 insertions(+), 540 deletions(-)

diff --git a/docs/engine-usage/impala.md b/docs/engine-usage/impala.md
new file mode 100644
index 00000000000..282dd8073d3
--- /dev/null
+++ b/docs/engine-usage/impala.md
@@ -0,0 +1,243 @@
+---
+title: Impala
+sidebar_position: 15
+---
+
+This article mainly introduces the installation, usage and configuration of 
the `Impala` engine plugin in `Linkis`.
+
+
+## 1. Pre-work
+
+### 1.1 Engine installation
+
+If you want to use the `Impala` engine on your `Linkis` service, you need a running Impala service and its connection information, such as the connection address of the Impala cluster and the SASL username and password.
+
+### 1.2 Service Verification
+
+```shell
+# prepare trino-cli
+wget 
https://repo1.maven.org/maven2/io/trino/trino-cli/374/trino-cli-374-executable.jar
+mv trino-cli-374-executable.jar trino-cli
+chmod +x trino-cli
+
+# Execute the task
+./trino-cli --server localhost:8080 --execute 'show tables from system.jdbc'
+
+# Get the following output to indicate that the service is available
+"attributes"
+"catalogs"
+"columns"
+"procedure_columns"
+"procedures"
+"pseudo_columns"
+"schemas"
+"super_tables"
+"super_types"
+"table_types"
+"tables"
+"types"
+"udts"
+```
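+
+If `impala-shell` is available, the Impala service itself can also be checked directly. A minimal sketch, not from the original doc, assuming a local impalad and the default `impala-shell` port `21000` (Linkis itself connects to the HiveServer2 port, e.g. `21050`):
+
+```shell
+# run a trivial query against the impalad to confirm it answers
+impala-shell -i 127.0.0.1:21000 -q 'show databases;'
+```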
+
+## 2. Engine plugin deployment
+
+Before compiling the `Impala` engine plugin separately, the `Linkis` project must be fully compiled. The installation and deployment package released by `Linkis` does not include this engine plugin by default.
+
+### 2.1 Engine plugin preparation (choose one) [non-default 
engine](./overview.md)
+
+Method 1: Download the engine plugin package directly
+
+[Linkis Engine Plugin 
Download](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+Method 2: Compile the engine plugin separately (requires a `maven` environment)
+
+```
+# compile
+cd ${linkis_code_dir}/linkis-engineconn-plugins/impala/
+mvn clean install
+# The compiled engine plug-in package is located in the following directory
+${linkis_code_dir}/linkis-engineconn-plugins/impala/target/out/
+```
+[EngineConnPlugin Engine Plugin 
Installation](../deployment/install-engineconn.md)
+
+### 2.2 Upload and load engine plugins
+
+Upload the engine package in 2.1 to the engine directory of the server
+```bash 
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+The directory structure after uploading is as follows
+```
+linkis-engineconn-plugins/
+├── impala
+│   ├── dist
+│   │   └── 3.4.0
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 3.4.0
+```
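+
+A minimal sketch of the upload step, assuming the package produced in 2.1 is an archive named `impala.zip` (the actual file name may differ):
+
+```shell
+# copy the compiled plugin package to the server and unpack it in place
+cd ${LINKIS_HOME}/lib/linkis-engineplugins
+unzip /path/to/impala.zip
+```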
+
+### 2.3 Engine refresh
+
+#### 2.3.1 Restart and refresh
+Refresh the engine by restarting the `linkis-cg-linkismanager` service
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+### 2.3.2 Check whether the engine is refreshed successfully
+You can check whether the `last_update_time` of the 
`linkis_engine_conn_plugin_bml_resources` table in the database is the time to 
trigger the refresh.
+
+```sql
+-- log in to the `linkis` database
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
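+
+When several engine plugins are installed, a narrower query is easier to read; an illustrative sketch, with the `engine_conn_type` and `version` column names assumed from the standard Linkis schema:
+
+```sql
+select engine_conn_type, version, last_update_time
+from linkis_cg_engine_conn_plugin_bml_resources
+where engine_conn_type = 'impala';
+```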
+
+## 3. Engine usage
+
+### 3.1 Submit tasks through `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -submitUser impala \
+-engineType impala-3.4.0 -code 'select * from default.test limit 10' \
+-runtimeMap linkis.impala.servers=127.0.0.1:21050
+```
+
+For more `Linkis-Cli` command parameters, see [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default Configuration Description
+
+| Configuration | Default | Required | Description |
+| --------------------------------------- | --------------------- | -------- | ------------------------------------------- |
+| linkis.impala.default.limit | 5000 | Yes | Limit on the number of rows returned in the query result set |
+| linkis.impala.engine.user | ${HDFS_ROOT_USER} | Yes | Default engine startup user |
+| linkis.impala.user.isolation.mode | false | Yes | Start the engine in multi-user mode |
+| linkis.impala.servers | 127.0.0.1:21050 | Yes | Impala server addresses, separated by ',' |
+| linkis.impala.maxConnections | 10 | Yes | Maximum number of connections to each Impala server |
+| linkis.impala.ssl.enable | false | Yes | Whether to enable SSL connections |
+| linkis.impala.ssl.keystore.type | JKS | No | SSL Keystore type |
+| linkis.impala.ssl.keystore | null | No | SSL Keystore path |
+| linkis.impala.ssl.keystore.password | null | No | SSL Keystore password |
+| linkis.impala.ssl.truststore.type | JKS | No | SSL Truststore type |
+| linkis.impala.ssl.truststore | null | No | SSL Truststore path |
+| linkis.impala.ssl.truststore.password | null | No | SSL Truststore password |
+| linkis.impala.sasl.enable | false | Yes | Whether to enable SASL authentication |
+| linkis.impala.sasl.mechanism | PLAIN | No | SASL mechanism |
+| linkis.impala.sasl.authorizationId | null | No | SASL authorization ID |
+| linkis.impala.sasl.protocol | LDAP | No | SASL protocol |
+| linkis.impala.sasl.properties | null | No | SASL properties: key1=value1,key2=value2 |
+| linkis.impala.sasl.username | ${impala.engine.user} | No | SASL username |
+| linkis.impala.sasl.password | null | No | SASL password |
+| linkis.impala.sasl.password.cmd | null | No | Command to obtain the SASL password |
+| linkis.impala.heartbeat.seconds | 1 | Yes | Task status update interval (seconds) |
+| linkis.impala.query.timeout.seconds | 0 | No | Task execution timeout (seconds) |
+| linkis.impala.query.batchSize | 1000 | Yes | Result set fetch batch size |
+| linkis.impala.query.options | null | No | Query submission parameters: key1=value1,key2=value2 |
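+
+As a usage sketch (not part of the original table), any of these keys can be passed per task with `-runtimeMap`, the same mechanism as in section 3.1; the SASL values below are placeholder assumptions:
+
+```shell
+sh ./bin/linkis-cli -submitUser impala \
+-engineType impala-3.4.0 -code 'select * from default.test limit 10' \
+-runtimeMap linkis.impala.servers=127.0.0.1:21050 \
+-runtimeMap linkis.impala.sasl.enable=true \
+-runtimeMap linkis.impala.sasl.username=impala \
+-runtimeMap linkis.impala.sasl.password=changeme
+```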
+
+### 4.2 Configuration modification
+
+If the default parameters do not meet your needs, you can configure some basic parameters in the following ways
+
+#### 4.2.1 Management console configuration
+
+![](./images/trino-config.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` for the change to take effect (other tags are similar), for example:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -submitUser hadoop \
+ -engineType impala-3.4.0 -codeType sql \
+ -code 'select * from default.test limit 10'
+```
+
+#### 4.2.2 Task interface configuration
+When submitting a task through the task interface, configure it via the parameter `params.configuration.runtime`
+
+```shell
+Example of HTTP request parameters
+{
+    "executionContent": {"code": "select * from default.test limit 10;", "runType": "sql"},
+    "params": {
+                    "variable": {},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.impala.servers": "127.0.0.1:21050"
+                                }
+                            }
+                    },
+    "labels": {
+        "engineType": "impala-3.4.0",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
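+
+A sketch of sending this request with `curl`; the gateway address `127.0.0.1:9001` and the `Token-Code`/`Token-User` values are assumptions that depend on your deployment's authentication setup (the request body is an abbreviated version of the example above):
+
+```shell
+curl -X POST 'http://127.0.0.1:9001/api/rest_j/v1/entrance/submit' \
+  -H 'Content-Type: application/json' \
+  -H 'Token-Code: your-token' \
+  -H 'Token-User: hadoop' \
+  -d '{"executionContent":{"code":"select * from default.test limit 10;","runType":"sql"},"labels":{"engineType":"impala-3.4.0","userCreator":"hadoop-IDE"}}'
+```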
+
+### 4.3 Engine related data table
+
+`Linkis` manages engines through labels, and the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: keys and default values of the engine's configuration parameters
+linkis_cg_manager_label: engine label, such as impala-3.4.0
+linkis_ps_configuration_category: directory association of the engine configuration
+linkis_ps_configuration_config_value: configuration values displayed for the engine
+linkis_ps_configuration_key_engine_relation: relationship between configuration items and engines
+```
+
+The initial data related to the engine in the table is as follows
+
+
+```sql
+-- set variable
+SET @ENGINE_LABEL="impala-3.4.0";
+SET @ENGINE_IDE=CONCAT('*-IDE,',@ENGINE_LABEL);
+SET @ENGINE_ALL=CONCAT('*-*,',@ENGINE_LABEL);
+SET @ENGINE_NAME="impala";
+
+-- add impala engine to IDE
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_IDE, 'OPTIONAL', 2, now(), now());
+select @label_id := id from `linkis_cg_manager_label` where label_value = @ENGINE_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- insert configuration keys
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.default.limit', 'Limit on the number of rows returned in the query result set', 'Result set limit', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.engine.user', 'Default engine startup user', 'Default startup user', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.user.isolation.mode', 'Start the engine in multi-user mode', 'Multi-user mode', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.servers', 'Impala server addresses', 'Server addresses', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.maxConnections', 'Maximum number of connections to each Impala server', 'Maximum connections', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.enable', 'Whether to enable SSL connections', 'Enable SSL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.type', 'SSL Keystore type', 'SSL Keystore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore', 'SSL Keystore path', 'SSL Keystore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.password', 'SSL Keystore password', 'SSL Keystore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.type', 'SSL Truststore type', 'SSL Truststore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore', 'SSL Truststore path', 'SSL Truststore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.password', 'SSL Truststore password', 'SSL Truststore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.enable', 'Whether to enable SASL authentication', 'Enable SASL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.mechanism', 'SASL mechanism', 'SASL mechanism', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.authorizationId', 'SASL authorization ID', 'SASL authorization ID', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.protocol', 'SASL protocol', 'SASL protocol', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.properties', 'SASL properties: key1=value1,key2=value2', 'SASL properties', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.username', 'SASL username', 'SASL username', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password', 'SASL password', 'SASL password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password.cmd', 'Command to obtain the SASL password', 'SASL password command', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.heartbeat.seconds', 'Task status update interval (seconds)', 'Task status update interval', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.timeout.seconds', 'Task execution timeout (seconds)', 'Task execution timeout', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.batchSize', 'Result set fetch batch size', 'Result set fetch batch size', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.options', 'Query submission parameters: key1=value1,key2=value2', 'Query submission parameters', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Data source configuration');
+-- impala engine -*
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as config_key_id, label.id as engine_type_label_id from `linkis_ps_configuration_config_key` config
+inner join `linkis_cg_manager_label` label on config.engine_conn_type = @ENGINE_NAME and label_value = @ENGINE_ALL);
+-- impala engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select relation.config_key_id as config_key_id, '' as config_value, relation.engine_type_label_id as config_label_id from `linkis_ps_configuration_key_engine_relation` relation
+inner join `linkis_cg_manager_label` label on relation.engine_type_label_id = label.id and label.label_value = @ENGINE_ALL);
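+
+-- Optional sanity check (an illustrative sketch, not part of the original
+-- script): confirm the keys were registered and linked to the engine label
+select k.`key`, k.default_value
+from `linkis_ps_configuration_config_key` k
+inner join `linkis_ps_configuration_key_engine_relation` r on r.config_key_id = k.id
+inner join `linkis_cg_manager_label` l on l.id = r.engine_type_label_id
+where l.label_value = @ENGINE_ALL;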
+```
\ No newline at end of file
diff --git a/docs/feature/ec-fix-label.md b/docs/feature/ec-fix-label.md
deleted file mode 100644
index 240ac83e977..00000000000
--- a/docs/feature/ec-fix-label.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Task Fixed EngineConn Execution
-sidebar_position: 0.3
----
-
-## 1. Requirement Background
-Now when Linkis tasks are submitted, they are created or reused based on the 
tags of EngineConn (hereinafter referred to as EC), and the ECs between 
multiple tasks are random. However, for the existence of multi-tasks that need 
to be able to meet the dependencies of the tasks, execution on the same EC 
cannot be well supported. Add a new EngineConnInstanceLabel to multi-tasks to 
fix the same EC for multiple tasks.
-
-## 2. Instructions for use
-1. The management console adds a specific label, and the adding path is as 
follows: login to the control panel -> ECM management -> click on an ECM 
instance name -> edit the EC to be fixed -> add a label of type 
FixdEngineConnLabel.
-![](/Images/feature/ecm.png)
-![](/Images/feature/ec.png)
-![](/Images/feature/label.png)
-2. To submit the task execution, you need to add: FixdEngineConnLabel label 
and submit it to the fixed instance
-```json
-"labels": {
-    "engineType": "spark-2.4.3",
-    "userCreator": "hadoop-IDE",
-    "fixedEngineConn": "idvalue"
-}
-```
-## 3. Precautions
-1. For the first task, you can choose to obtain the list of EC instances for 
selection, or you can directly submit the task for creation
-
-2. If the EC is not idle and available, a new EC instance will be created to 
execute the task. If you need to avoid this situation, you can call the EC 
instance query interface when the task is submitted to determine whether the 
corresponding EC exists and status before submitting.
\ No newline at end of file
diff --git a/docs/feature/eureka-version-metadata.md 
b/docs/feature/eureka-version-metadata.md
deleted file mode 100644
index 7b417d3cba5..00000000000
--- a/docs/feature/eureka-version-metadata.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Eureka reports version metadata
-sidebar_position: 0.2
----
-
-## 1. Requirement Background
-Eureka metadata adds additional information such as version. Supports reading 
configuration files, which is consistent with the version number of the 
configuration file, and uses a minor version number for the configuration file. 
Consider adding two version information in eureka metadata, one is the 
configuration file version and the other is the program software version. The 
configuration file versions of different services may be different. The 
configuration file versions of the same s [...]
-
-## 2. Instructions for use
-**Program version configuration**
-
-Add the program version configuration to linkis_env.sh to control the program 
version, the addition is as follows:
-```
-linkis.app.version=${version}
-```
-After reporting eureka metadata, the version format version + compilation time 
is as follows: 1.3.2-20230304
-```xml
-<metadata>
-    <linkis.app.version>${appVersion}</linkis.app.version>
-</metadata>
-```
-
-**Service Version Configuration**
-
-Add the service version configuration in the configuration file of each 
service, and add the following content:
-```
-linkis.conf.version=version number
-```
-Version format after reporting eureka metadata
-```xml
-<metadata>
-    <linkis.conf.version>${serviceVersion}</linkis.conf.version>
-</metadata>
-```
\ No newline at end of file
diff --git a/docs/feature/load-udf-by-udfid.md 
b/docs/feature/load-udf-by-udfid.md
deleted file mode 100644
index 368d7e101e6..00000000000
--- a/docs/feature/load-udf-by-udfid.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: Load UDF by UDF ID
-sidebar_position: 0.2
----
-
-## 1. Background
-In some scenarios, UDF is not loaded through visual interfaces such as Scripts 
and DSS, but through code. This needs to provide the function of loading UDF by 
UDF ID.
-
-## 2. Instructions for use
-Parameter Description:
-
-| parameter name | description | default value |
-|--------------|----------------|--------|
-|`linkis.user.udf.all.load` | Whether to load all UDFs selected by the user | 
true |
-|`linkis.user.udf.custom.ids`| UDF ID list, separated by `,` | - |
-
-Submit the task through RestFul, the request example is as follows.
-```json
-POST /api/rest_j/v1/entrance/submit
-Content-Type: application/json
-Token-Code: dss-AUTH
-Token-User: linked
-
-{
-    "executionContent": {
-        "code": "show databases",
-        "runType": "sql"
-    },
-    "params": {
-        "configuration": {
-            "startup": {
-                "linkis.user.udf.all.load": false
-                "linkis.user.udf.custom.ids": "1,2,3"
-            }
-        }
-    },
-    "labels": {
-        "engineType": "spark-2.4.3",                  // 
pattern:engineType-version
-        "userCreator": "linkis-IDE"                   // userCreator: linkis 
is username。IDE is system that be configed in Linkis。
-    }
-}
-```
-
-## 3. Precautions
-1. When `linkis.user.udf.all.load` specifies true, the 
`linkis.user.udf.custom.ids` parameter does not take effect
-
-2. This function is independent of the loading of 
`/udf/isload?udfId=123&isLoad=true` interface
\ No newline at end of file
diff --git a/docs/feature/overview.md b/docs/feature/overview.md
index 05f681ad80e..af894cf8391 100644
--- a/docs/feature/overview.md
+++ b/docs/feature/overview.md
@@ -3,29 +3,29 @@ title: Version Feature
 sidebar_position: 0.1 
 --- 
 
-- [Supports Spark task submission Jar package function](./spark-submit-jar.md) 
-- [Supports loading specific UDF by UDF ID](./load-udf-by-udfid.md) 
-- [Multi-task fixed EC execution](./ec-fix-label.md) 
-- [Eureka version metadata reporting](./eureka-version-metadata.md) 
-- [Remove the dss-gateway-support dependency](./remove-dss-support.md)
-- [Modify the system to initialize the default Token](./update-token.md)
-- [Linkis Integration with 
OceanBase](/blog/2023/03/08/linkis-integration-with-oceanbase) 
-- [version of Release-Notes](/download/release-notes-1.3.2) 
+- [hadoop, spark, hive default version upgraded to 
3.x](./upgrade-base-engine-version.md)
+- [Reduce compatibility issues of different versions of the base 
engine](./base-engine-compatibilty.md)
+- [Hive engine connector supports concurrent 
tasks](./hive-engine-support-concurrent.md)
+- [linkis-storage supports S3 and OSS file 
systems](./storage-add-support-oss.md)
+- [Support more data sources](./spark-etl.md)
+- [Add postgresql database support](/docs/deployment/deploy-quick.md)
+- [Do not kill EC when ECM restarts](./ecm-takes-over-ec.md)
+- [Spark ETL enhancements](./spark-etl.md)
+- [version number and branch modification 
instructions](./version-and-branch-intro.md)
+- [version of Release-Notes](/download/release-notes-1.4.0)
 
+## Parameter changes
 
-
-## Parameter change 
-
-| module name (service name) | type | parameter name | default value | 
description | 
-|------|-----|-------------------------------------|-----|------------------------------|
-| mg-eureka | Add | eureka.instance.metadata-map.linkis.app.version | 
${linkis.app.version} | Eureka metadata report Linkis application version 
information | 
-| mg-eureka | Add | eureka. instance.metadata-map.linkis.conf.version | None | 
Eureka metadata report Linkis service version information | 
-| mg-eureka | Modify | eureka.client.registry-fetch-interval-seconds | 8 | 
Eureka Client pull service registration Information interval time (seconds) | 
-| mg-eureka | New | eureka.instance.lease-renewal-interval-in-seconds | 4 | 
The frequency (seconds) at which eureka client sends heartbeats to the server | 
| mg 
--eureka | New | eureka.instance.lease-expiration-duration-in-seconds | 12 | 
eureka waits for the next heartbeat timeout (seconds) | | 
-EC-shell | modification | wds.linkis.engineconn.support.parallelism | true | 
whether to enable Parallel execution of shell tasks | 
-| EC-shell | Modify | linkis.engineconn.shell.concurrent.limit | 15 | 
Concurrent number of shell tasks | 
+| module name (service name) | type | parameter name | default value | description |
+| -------------------------- | ------ | ----------------------------------------------------- | --------------------- | ------------------------------------------------------------- |
+| mg-eureka | New | eureka.instance.metadata-map.linkis.app.version | ${linkis.app.version} | Eureka metadata reports Linkis application version information |
+| mg-eureka | New | eureka.instance.metadata-map.linkis.conf.version | None | Eureka metadata reports Linkis service version information |
+| mg-eureka | Modify | eureka.client.registry-fetch-interval-seconds | 8 | Interval (seconds) at which the Eureka client pulls service registration information |
+| mg-eureka | New | eureka.instance.lease-renewal-interval-in-seconds | 4 | Frequency (seconds) at which the eureka client sends heartbeats to the server |
+| mg-eureka | New | eureka.instance.lease-expiration-duration-in-seconds | 12 | Timeout (seconds) eureka waits for the next heartbeat |
+| EC-shell | Modify | wds.linkis.engineconn.support.parallelism | true | Whether to enable parallel execution of shell tasks |
+| EC-shell | Modify | linkis.engineconn.shell.concurrent.limit | 15 | Concurrency limit for shell tasks |
 
 
 ## Database table changes
-For details, see the upgrade schema `db/upgrade/1.3.2_schema` file in the 
corresponding branch of the code warehouse (https://github.com/apache/linkis)
\ No newline at end of file
+For details, see the upgrade schema `db/upgrade/1.4.0_schema` file in the corresponding branch of the code repository (https://github.com/apache/linkis)
\ No newline at end of file
diff --git a/docs/feature/remove-dss-support.md 
b/docs/feature/remove-dss-support.md
deleted file mode 100644
index ec3e40e6d7b..00000000000
--- a/docs/feature/remove-dss-support.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Remove DSS Support dependency
-sidebar_position: 0.4
----
-
-## 1. Requirement background
-The Linkis microservice module relies on the dss-gateway-support jar package, 
and jar package conflicts may occur when compiling with versions earlier than 
scala 2.12. So consider removing the dss-gateway-support module dependency.
-
-## 2. Instructions for use
-
-After removing the dss-gateway-support dependency, Linkis will not be affected.
-
-## 3. Precautions
-
-- Linkis >= 1.3.2 version, if you encounter an error related to dss support, 
you can check whether there is a jar package related to dss support in the 
$LINKIS_HOME/lib/linkis-spring-cloud-services/linkis-mg-gateway directory, and 
delete it if so Relevant jar packages, just restart the service.
-
-- The reason for the conflict is that the dss support package is installed 
under linkis-mg-gateway during the one-click installation of dss. The specific 
jar package is dSS-gateway-support-xxx.jar
\ No newline at end of file
diff --git a/docs/feature/spark-submit-jar.md b/docs/feature/spark-submit-jar.md
deleted file mode 100644
index 7b1a1493f85..00000000000
--- a/docs/feature/spark-submit-jar.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Spark-Submit Jar package task
-sidebar_position: 0.2
----
-
-## 1. Background
-In some scenarios, tasks need to be executed in the form of jar packages 
submitted through spark-submit.
-
-## 2. Instructions for use
-Submit the Spark task through the SDK, the code example is as follows.
-```java
-public class SparkOnceJobTest {
-
-    public static void main(String[] args) {
-
-        LinkisJobClient.config().setDefaultServerUrl("http://127.0.0.1:9001";);
-
-        String submitUser = "linkis";
-        String engineType = "spark";
-
-        SubmittableSimpleOnceJob onceJob =
-                // region
-                LinkisJobClient.once().simple().builder()
-                        .setCreateService("Spark-Test")
-                        .setMaxSubmitTime(300000)
-                        .setDescription("SparkTestDescription")
-                        .addExecuteUser(submitUser)
-                        .addJobContent("runType", "jar")
-                        .addJobContent("spark.app.main.class", 
"org.apache.spark.examples.JavaWordCount")
-                        // Parameters obtained by the submitted jar package
-                        .addJobContent("spark.app.args", 
"hdfs:///tmp/test_word_count.txt") // WordCount test file
-                        .addLabel("engineType", engineType + "-2.4.3")
-                        .addLabel("userCreator", submitUser + "-IDE")
-                        .addLabel("engineConnMode", "once")
-                        .addStartupParam("spark.app.name", 
"spark-submit-jar-test-linkis") // Application Name displayed on yarn
-                        .addStartupParam("spark.executor.memory", "1g")
-                        .addStartupParam("spark.driver.memory", "1g")
-                        .addStartupParam("spark.executor.cores", "1")
-                        .addStartupParam("spark.executor.instance", "1")
-                        .addStartupParam("spark.app.resource", 
"hdfs:///tmp/spark/spark-examples_2.11-2.3.0.2.6.5.0-292.jar")
-                        .addSource("jobName", "OnceJobTest")
-                        .build();
-        // endregion
-        onceJob. submit();
-        onceJob.waitForCompleted(); // Temporary network failure will cause 
exceptions. It is recommended to modify the SDK later. For current use, 
exception handling is required
-    }
-}
-```
-## 3. Precautions
-1. The jar package or parameter file used in submitting tasks needs to be 
uploaded to hdfs or a shared directory in advance
-
-2. spark-submit jar only supports Once task
\ No newline at end of file
diff --git a/docs/feature/update-token.md b/docs/feature/update-token.md
deleted file mode 100644
index 89120e74f80..00000000000
--- a/docs/feature/update-token.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Modify the system initialization default Token
-sidebar_position: 0.4
----
-
-## 1. Requirement background
-
-Linkis's original default Token is fixed and the length is too short, posing 
security risks. Therefore, Linkis 1.3.2 changes the original fixed Token to 
random generation, and increases the length of the Token.
-
-Modified Token format: application abbreviation - 32-bit random number, such 
as BML-928a721518014ba4a28735ec2a0da799
-
-Token may be used in the Linkis service itself, such as executing tasks 
through Shell, uploading BML, etc., or it may be used in other applications, 
such as DSS, Qualitis and other applications to access Linkis.
-
-
-## 2. Instructions for use
-
-### Token configuration required when Linkis uploads BML
-When the Linkis service itself uses Token, the Token in the configuration file 
must be consistent with the Token in the database. Match by applying the short 
name prefix.
-
-The token generated in the database can be queried by the following statement:
-
-```sql
-select * from linkis_mg_gateway_auth_token;
-```
-
-**$LINKIS_HOME/conf/linkis.properites file Token configuration**
-
-```
-linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
-
-wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
-
-wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
-wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
-
-wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
-```
-
-### Use the linkis-cli command to execute task Token configuration
-
-Modify $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file Token 
configuration
-```properties
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-```
-
-## 3. Precautions
-
-### Full installation
-
-For the full installation of the new version of Linkis, the install.sh script 
will automatically process the configuration file and keep the database Token 
consistent. Therefore, the Token of the Linkis service itself does not need to 
be modified. Each application can query and use the new token through the 
management console.
-
-### version upgrade
-
-When the version is upgraded, the database Token is not modified, so there is 
no need to modify the configuration file and application Token.
-
-### Token expiration problem
-
-When the Token token is invalid or has expired, query the Token through the 
management console or sql statement. Check whether the Token used by the client 
is consistent with the database. If not, there are two solutions.
-
-1. Modify the client configuration to make the Token settings consistent with 
the database.
-
-2. Modify the Token configuration value of each application in the database. 
The old version database Token configuration reference is as follows
-
-```sql
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('QML-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('BML-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('WS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('dss-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('QUALITIS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('VALIDATOR-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('LINKISCLI-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('DSM-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('LINKIS_CLI_TEST','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-```
\ No newline at end of file
diff --git a/docs/feature/version-and-branch-intro.md 
b/docs/feature/version-and-branch-intro.md
new file mode 100644
index 00000000000..b4ff4c3af7d
--- /dev/null
+++ b/docs/feature/version-and-branch-intro.md
@@ -0,0 +1,13 @@
+---
+title: Version number and branch modification instructions
+sidebar_position: 0.4
+---
+
+## 1. Linkis main version number modification instructions
+
+Starting after version 1.3.2, Linkis will no longer be upgraded by minor version; the next version is 1.4.0, followed by 1.5.0, 1.6.0, and so on. When a released version has a major defect that needs to be fixed, a patch version will be pulled from it, such as 1.4.1.
+
+
+## 2. Linkis code submission master branch instructions
+
+Code changes for Linkis 1.3.2 and earlier versions are merged into the dev branch by default. The development community of Apache Linkis is in fact very active: new features and fixes are submitted to the dev branch, but when users visit the Linkis code base, the master branch is displayed by default. Since a new version is only released every quarter, the community looks less active than it is from the perspective of the master branch. Therefore, [...]
\ No newline at end of file
diff --git a/download/release-notes-1.4.0.md b/download/release-notes-1.4.0.md
new file mode 100644
index 00000000000..dbfe015fcc9
--- /dev/null
+++ b/download/release-notes-1.4.0.md
@@ -0,0 +1,103 @@
+---
+title: Release Notes 1.4.0
+sidebar_position: 0.14
+---
+
+Apache Linkis 1.4.0 includes all of [Project Linkis-1.4.0](https://github.com/apache/linkis/projects/26)
+
+Linkis version 1.4.0 mainly adds the following functions: upgrades the default versions of hadoop, spark, and hive to 3.x; reduces compatibility issues across different versions of the basic engines; supports concurrent submission of tasks by Hive ECs; keeps ECs alive when the ECM service restarts; adds S3 and OSS file system support to linkis-storage; supports more data sources, such as tidb, starrocks, Gaussdb, etc.; adds postgresql database support; and enhances the Spark ETL functions, supports [...]
+
+
+The main functions are as follows:
+
+- Upgrade the default versions of hadoop, spark, and hive to 3.x
+- Reduce the compatibility issues of different versions of the base engine
+- Support Hive EC to execute tasks concurrently
+- Support not killing ECs when the ECM service restarts
+- linkis-storage supports S3 and OSS file systems
+- Support more data sources, such as: tidb, starrocks, Gaussdb, etc.
+- Add postgresql database support
+- Enhancements to Spark ETL
+- Changes to the version numbering rules and to the default merge branch for submitted code
+
+Abbreviations:
+- ORCHESTRATOR: Linkis Orchestrator
+- COMMON: Linkis Common
+- ENTRANCE: Linkis Entrance
+- EC: EngineConn
+- ECM: EngineConnManager
+- ECP: EngineConnPlugin
+- DMS: Data Source Manager Service
+- MDS: MetaData Manager Service
+- LM: Linkis Manager
+- PS: Linkis Public Service
+- PE: Linkis Public Enhancement
+- RPC: Linkis Common RPC
+- CG: Linkis Computation Governance
+- DEPLOY: Linkis Deployment
+- WEB: Linkis Web
+- GATEWAY: Linkis Gateway
+- EP: Engine Plugin
+
+
+## New features
+- \[EC][LINKIS-4263](https://github.com/apache/linkis/pull/4263) upgrade the 
default version of Hadoop, Spark, Hive to 3.x
+- \[EC-Hive][LINKIS-4359](https://github.com/apache/linkis/pull/4359) Hive EC 
supports concurrent tasks
+- \[COMMON][LINKIS-4424](https://github.com/apache/linkis/pull/4424) 
linkis-storage supports OSS file system
+- \[COMMON][LINKIS-4435](https://github.com/apache/linkis/pull/4435) 
linkis-storage supports S3 file system
+- \[EC-Impala][LINKIS-4458](https://github.com/apache/linkis/pull/4458) Add 
Impala EC plugin support
+- \[ECM][LINKIS-4452](https://github.com/apache/linkis/pull/4452) Do not kill 
EC when ECM restarts
+- \[EC][LINKIS-4460](https://github.com/apache/linkis/pull/4460) Linkis 
supports multiple clusters
+- \[COMMON][LINKIS-4524](https://github.com/apache/linkis/pull/4524) supports 
postgresql database
+- \[DMS][LINKIS-4486](https://github.com/apache/linkis/pull/4486) data source module supports Tidb data source
+- \[DMS][LINKIS-4496](https://github.com/apache/linkis/pull/4496) data source 
module supports Starrocks data source
+- \[DMS][LINKIS-4513](https://github.com/apache/linkis/pull/4513) data source module supports Gaussdb data source
+- \[DMS][LINKIS-](https://github.com/apache/linkis/pull/4581) data source module supports OceanBase data source
+- \[EC-Spark][LINKIS-4568](https://github.com/apache/linkis/pull/4568) Spark 
JDBC supports dm and kingbase databases
+- \[EC-Spark][LINKIS-4539](https://github.com/apache/linkis/pull/4539) Spark 
etl supports excel
+- \[EC-Spark][LINKIS-4534](https://github.com/apache/linkis/pull/4534) Spark 
etl supports redis
+- \[EC-Spark][LINKIS-4564](https://github.com/apache/linkis/pull/4564) Spark 
etl supports RocketMQ
+- \[EC-Spark][LINKIS-4560](https://github.com/apache/linkis/pull/4560) Spark 
etl supports mongo and es
+- \[EC-Spark][LINKIS-4569](https://github.com/apache/linkis/pull/4569) Spark 
etl supports solr
+- \[EC-Spark][LINKIS-4563](https://github.com/apache/linkis/pull/4563) Spark 
etl supports kafka
+- \[EC-Spark][LINKIS-4538](https://github.com/apache/linkis/pull/4538) Spark 
etl supports data lake
+
+
+## Enhancement points
+- \[COMMON][LINKIS-4462](https://github.com/apache/linkis/pull/4462) code 
optimization, unified attribute name
+- \[COMMON][LINKIS-4425](https://github.com/apache/linkis/pull/4425) code 
optimization, delete useless code
+- \[COMMON][LINKIS-4368](https://github.com/apache/linkis/pull/4368) code 
optimization, remove json4s dependency
+- \[COMMON][LINKIS-4357](https://github.com/apache/linkis/pull/4357) file 
upload interface optimization
+- \[ECM][LINKIS-4449](https://github.com/apache/linkis/pull/4449) ECM code 
optimization
+- \[EC][LINKIS-4341](https://github.com/apache/linkis/pull/4341) Optimize the 
code logic of CustomerDelimitedJSONSerDe
+- \[EC-Openlookeng][LINKIS-](https://github.com/apache/linkis/pull/4474) 
Openlookeng EC code conversion to Java
+- \[EC-Shell][LINKIS-4473](https://github.com/apache/linkis/pull/4473) Shell 
EC code conversion to Java
+- \[EC-Python][LINKIS-4482](https://github.com/apache/linkis/pull/4482) Python 
EC code conversion to Java
+- \[EC-Trino][LINKIS-4526](https://github.com/apache/linkis/pull/4526) Trino 
EC code conversion to Java
+- \[EC-Presto][LINKIS-4514](https://github.com/apache/linkis/pull/4514) Presto 
EC code conversion to Java
+- \[EC-Elasticsearch][LINKIS-4531](https://github.com/apache/linkis/pull/4531) 
Elasticsearch EC code conversion to Java
+- \[COMMON][LINKIS-4475](https://github.com/apache/linkis/pull/4475) use 
latest mysql DDL in k8s deployment
+- \[EC-Flink][LINKIS-4556](https://github.com/apache/linkis/pull/4556) Flink 
EC adds task interceptor
+- \[GATEWAY][LINKIS-4548](https://github.com/apache/linkis/pull/4548) Clear 
all backend caches on user logout
+- \[COMMON][LINKIS-4554](https://github.com/apache/linkis/pull/4554) Add MDC 
log format in Linkis to track JobID
+- \[CG][LINKIS-4583](https://github.com/apache/linkis/pull/4583) When submitting a once task, the result of engine creation can be obtained
+- \[EC-Spark][LINKIS-4570](https://github.com/apache/linkis/pull/4570) 
Generate Spark sql based on jdbc data source
+- \[COMMON][LINKIS-4601](https://github.com/apache/linkis/pull/4601) supports 
integration test Action
+- \[EC-Seatunnel][LINKIS-4673](https://github.com/apache/linkis/pull/4673) 
Seatunnel version upgrade to 2.3.1
+
+
+## Bug fixes
+- \[EC-Hive][LINKIS-4246](https://github.com/apache/linkis/pull/4246) The Hive 
engine version number supports hyphens, such as hive3.1.2-cdh5.12.0
+- \[COMMON][LINKIS-4438](https://github.com/apache/linkis/pull/4438) fixed 
nohup startup error
+- \[EC][LINKIS-4429](https://github.com/apache/linkis/pull/4429) fix CPU 
average load calculation bug
+- \[PE][LINKIS-4457](https://github.com/apache/linkis/pull/4457) fix parameter 
validation issue configured by admin console
+- \[DMS][LINKIS-4500](https://github.com/apache/linkis/pull/4500) Fixed type 
conversion failure between client and data source
+- \[COMMON][LINKIS-4480](https://github.com/apache/linkis/pull/4480) fixed building the default configuration file with jdk17
+- \[CG][LINKIS-4663](https://github.com/apache/linkis/pull/4663) Fix the 
problem that engine reuse may throw NPE
+- \[LM][LINKIS-4652](https://github.com/apache/linkis/pull/4652) fixed the 
problem of creating engine node throwing NPE
+
+
+## Acknowledgments
+The release of Apache Linkis 1.4.0 would not have been possible without the contributors of the Linkis community. Thanks to all community contributors:
+casionone,MrFengqin,zhangwejun,Zhao,ahaoyao,duhanmin,guoshupei,shixiutao,CharlieYan24,peacewong,GuoPhilipse,aiceflower,waynecookie,jacktao007,chenghuichen,ws00428637,ChengJie1053,dependabot,jackxu2011,sjgllgh,rarexixi,pjfanning,v-kkhuang,binbinCheng,stdnt-xiao,mayinrain.
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
new file mode 100644
index 00000000000..201c22357e6
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs-download/current/release-notes-1.4.0.md
@@ -0,0 +1,104 @@
+---
+title: Release Notes 1.4.0
+sidebar_position: 0.14
+---
+
+Apache Linkis 1.4.0 包括所有 [Project 
Linkis-1.4.0](https://github.com/apache/linkis/projects/26)
+
+Linkis 1.4.0 版本,主要增加了如下功能:将 hadoop、spark、hive 默认版本升级为3.x;减少基础引擎不同版本兼容性问题;Hive 
EC 支持并发提交任务;ECM 服务重启时不 kill EC;linkis-storage 支持 S3 和 OSS 
文件系统;支持更多的数据源,如:tidb、starrocks、Gaussdb等;增加 postgresql 数据库支持;以及对Spark ETL 
功能增强,支持 Excel、Redis、Mongo、Elasticsearch等;同时对版本号升级规则及代码提交默认合并分支做了修改。
+
+
+主要功能如下:
+
+- 将 hadoop、spark、hive 默认版本升级为3.x
+- 减少基础引擎不同版本兼容性问题
+- 支持 Hive EC 并发执行任务
+- 支持 ECM 服务重启时不 kill EC
+- linkis-storage 支持 S3 和 OSS 文件系统
+- 支持更多的数据源,如:tidb、starrocks、Gaussdb等
+- 增加 postgresql 数据库支持
+- 对Spark ETL 功能增强
+- 版本号升级规则及提交代码默认合并分支修改
+
+缩写:
+- ORCHESTRATOR: Linkis Orchestrator
+- COMMON: Linkis Common
+- ENTRANCE: Linkis Entrance
+- EC: Engineconn
+- ECM: EngineConnManager
+- ECP: EngineConnPlugin
+- DMS: Data Source Manager Service
+- MDS: MetaData Manager Service
+- LM: Linkis Manager
+- PS: Linkis Public Service
+- PE: Linkis Public Enhancement
+- RPC: Linkis Common RPC
+- CG: Linkis Computation Governance
+- DEPLOY: Linkis Deployment
+- WEB: Linkis Web
+- GATEWAY: Linkis Gateway
+- EP: Engine Plugin
+
+
+## 新特性
+- \[EC][LINKIS-4263](https://github.com/apache/linkis/pull/4263) 将 
Hadoop、Spark、Hive 默认版本升级为3.x
+- \[EC-Hive][LINKIS-4359](https://github.com/apache/linkis/pull/4359)  Hive EC 
支持并发任务
+- \[COMMON][LINKIS-4424](https://github.com/apache/linkis/pull/4424) 
linkis-storage 支持 OSS 文件系统 
+- \[COMMON][LINKIS-4435](https://github.com/apache/linkis/pull/4435)  
linkis-storage 支持 S3 文件系统
+- \[EC-Impala][LINKIS-4458](https://github.com/apache/linkis/pull/4458) 增加 
Impala EC 插件支持 
+- \[ECM][LINKIS-4452](https://github.com/apache/linkis/pull/4452) ECM 重启时不 
kill EC
+- \[EC][LINKIS-4460](https://github.com/apache/linkis/pull/4460) Linkis 支持多集群
+- \[COMMON][LINKIS-4524](https://github.com/apache/linkis/pull/4524) 支持 postgresql 数据库
+- \[DMS][LINKIS-4486](https://github.com/apache/linkis/pull/4486) 数据源模块支持 Tidb 数据源
+- \[DMS][LINKIS-4496](https://github.com/apache/linkis/pull/4496) 数据源模块支持 Starrocks 数据源
+- \[DMS][LINKIS-4513](https://github.com/apache/linkis/pull/4513) 数据源模块支持 Gaussdb 数据源
+- \[DMS][LINKIS-](https://github.com/apache/linkis/pull/4581) 数据源模块支持 OceanBase 数据源
+- \[EC-Spark][LINKIS-4568](https://github.com/apache/linkis/pull/4568) Spark 
JDBC支持dm和kingbase数据库
+- \[EC-Spark][LINKIS-4539](https://github.com/apache/linkis/pull/4539) Spark 
etl支持excel
+- \[EC-Spark][LINKIS-4534](https://github.com/apache/linkis/pull/4534) Spark 
etl支持redis
+- \[EC-Spark][LINKIS-4564](https://github.com/apache/linkis/pull/4564) Spark 
etl支持RocketMQ
+- \[EC-Spark][LINKIS-4560](https://github.com/apache/linkis/pull/4560) Spark 
etl支持mongo and es
+- \[EC-Spark][LINKIS-4569](https://github.com/apache/linkis/pull/4569) Spark 
etl支持solr
+- \[EC-Spark][LINKIS-4563](https://github.com/apache/linkis/pull/4563) Spark 
etl支持kafka
+- \[EC-Spark][LINKIS-4538](https://github.com/apache/linkis/pull/4538) Spark 
etl 支持数据湖
+
+
+## 增强点
+- \[COMMON][LINKIS-4462](https://github.com/apache/linkis/pull/4462) 
代码优化,统一属性名称
+- \[COMMON][LINKIS-4425](https://github.com/apache/linkis/pull/4425) 
代码优化,删除了无用的代码
+- \[COMMON][LINKIS-4368](https://github.com/apache/linkis/pull/4368) 代码优化,移除 
json4s 依赖
+- \[COMMON][LINKIS-4357](https://github.com/apache/linkis/pull/4357) 文件上传接口优化
+- \[ECM][LINKIS-4449](https://github.com/apache/linkis/pull/4449) ECM 代码优化
+- \[EC][LINKIS-4341](https://github.com/apache/linkis/pull/4341) 优化 
CustomerDelimitedJSONSerDe 代码逻辑
+- \[EC-Openlookeng][LINKIS-](https://github.com/apache/linkis/pull/4474) 
Openlookeng EC 代码转换为 Java
+- \[EC-Shell][LINKIS-4473](https://github.com/apache/linkis/pull/4473) Shell 
EC 代码转换为 Java
+- \[EC-Python][LINKIS-4482](https://github.com/apache/linkis/pull/4482) Python 
EC 代码转换为 Java
+- \[EC-Trino][LINKIS-4526](https://github.com/apache/linkis/pull/4526) Trino 
EC 代码转换为 Java
+- \[EC-Presto][LINKIS-4514](https://github.com/apache/linkis/pull/4514) Presto 
EC 代码转换为 Java
+- \[EC-Elasticsearch][LINKIS-4531](https://github.com/apache/linkis/pull/4531) 
Elasticsearch EC 代码转换为 Java
+- \[COMMON][LINKIS-4475](https://github.com/apache/linkis/pull/4475) 
在k8s部署中使用最新的mysql DDL
+- \[EC-Flink][LINKIS-4556](https://github.com/apache/linkis/pull/4556) Flink 
EC 增加任务拦截器
+- \[GATEWAY][LINKIS-4548](https://github.com/apache/linkis/pull/4548) 
用户注销时清除所有后端缓存
+- \[COMMON][LINKIS-4554](https://github.com/apache/linkis/pull/4554) 
在Linkis中增加MDC日志格式,用于跟踪JobID
+- \[CG][LINKIS-4583](https://github.com/apache/linkis/pull/4583) 提交一个 once 
任务时可以得到创建引擎的结果
+- \[EC-Spark][LINKIS-4570](https://github.com/apache/linkis/pull/4570) 
基于jdbc数据源生成 Spark sql
+- \[COMMON][LINKIS-4601](https://github.com/apache/linkis/pull/4601) 支持集成测试 
Action
+- \[EC-Seatunnel][LINKIS-4673](https://github.com/apache/linkis/pull/4673) 
Seatunnel 版本升级到 2.3.1
+
+
+## 修复功能
+- \[EC-Hive][LINKIS-4246](https://github.com/apache/linkis/pull/4246)  Hive 
引擎版本号支持连字符,如hive3.1.2-cdh5.12.0
+- \[COMMON][LINKIS-4438](https://github.com/apache/linkis/pull/4438) 
修正了nohup启动错误
+- \[EC][LINKIS-4429](https://github.com/apache/linkis/pull/4429)修复 CPU 
平均负载计算bug
+- \[PE][LINKIS-4457](https://github.com/apache/linkis/pull/4457) 
修复由管理控制台配置的参数验证问题
+- \[DMS][LINKIS-4500](https://github.com/apache/linkis/pull/4500) 
修复客户端与数据源之间类型转换失败问题
+- \[COMMON][LINKIS-4480](https://github.com/apache/linkis/pull/4480) 修复了使用 
jdk17 构建默认配置文件的问题
+- \[CG][LINKIS-4663](https://github.com/apache/linkis/pull/4663) 修复引擎复用可能会抛出 
NPE 的问题
+- \[LM][LINKIS-4652](https://github.com/apache/linkis/pull/4652) 修复了创建引擎节点抛出 
NPE 的问题
+
+
+## 致谢
+Apache Linkis 1.4.0 的发布离不开 Linkis 社区的贡献者,感谢所有的社区贡献者,包括但不仅限于以下 Contributors(排名不分先后):
+casionone,MrFengqin,zhangwejun,Zhao,ahaoyao,duhanmin,guoshupei,shixiutao,CharlieYan24,peacewong,GuoPhilipse,aiceflower,waynecookie,jacktao007,chenghuichen,ws00428637,ChengJie1053,dependabot,jackxu2011,sjgllgh,rarexixi,pjfanning,v-kkhuang,binbinCheng,stdnt-xiao,mayinrain。
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/impala.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/impala.md
new file mode 100644
index 00000000000..b91f93bed33
--- /dev/null
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/engine-usage/impala.md
@@ -0,0 +1,243 @@
+---
+title: Impala
+sidebar_position: 12
+---
+
+本文主要介绍在 `Linkis` 中,`Impala` 引擎插件的安装、使用和配置。
+
+
+## 1. 前置工作
+
+### 1.1 引擎安装
+
+如果您希望在您的 `Linkis` 服务上使用 `Impala` 引擎,您需要准备 Impala 服务并提供连接信息,如 Impala 
集群的连接地址、SASL用户名和密码等
+
+### 1.2 服务验证
+
+```shell
+# 准备 trino-cli
+wget 
https://repo1.maven.org/maven2/io/trino/trino-cli/374/trino-cli-374-executable.jar
+mv trino-cli-374-executable.jar trino-cli
+chmod +x trino-cli
+
+#  执行任务
+./trino-cli --server localhost:8080 --execute 'show tables from system.jdbc'
+
+# 得到如下输出代表服务可用
+"attributes"
+"catalogs"
+"columns"
+"procedure_columns"
+"procedures"
+"pseudo_columns"
+"schemas"
+"super_tables"
+"super_types"
+"table_types"
+"tables"
+"types"
+"udts"
+```
+
+## 2. 引擎插件部署
+
+编译 `Impala` 引擎插件之前,需要对 `Linkis` 项目进行全量编译,`Linkis` 默认发布的安装部署包中不包含此引擎插件。
+
+### 2.1 引擎插件准备(二选一)[非默认引擎](./overview.md)
+
+方式一:直接下载引擎插件包
+
+[Linkis 
引擎插件下载](https://linkis.apache.org/zh-CN/blog/2022/04/15/how-to-download-engineconn-plugin)
+
+方式二:单独编译引擎插件(需要有 `maven` 环境)
+
+```
+# 编译
+cd ${linkis_code_dir}/linkis-engineconn-plugins/impala/
+mvn clean install
+# 编译出来的引擎插件包,位于如下目录中
+${linkis_code_dir}/linkis-engineconn-plugins/impala/target/out/
+```
+[EngineConnPlugin 引擎插件安装](../deployment/install-engineconn.md)
+
+### 2.2 引擎插件的上传和加载
+
+将 2.1 中的引擎包上传到服务器的引擎目录下
+```bash 
+${LINKIS_HOME}/lib/linkis-engineplugins
+```
+上传后目录结构如下所示
+```
+linkis-engineconn-plugins/
+├── impala
+│   ├── dist
+│   │   └── 3.4.0
+│   │       ├── conf
+│   │       └── lib
+│   └── plugin
+│       └── 3.4.0
+```
+
+### 2.3 引擎刷新
+
+#### 2.3.1 重启刷新
+通过重启 `linkis-cg-linkismanager` 服务刷新引擎
+```bash
+cd ${LINKIS_HOME}/sbin
+sh linkis-daemon.sh restart cg-linkismanager
+```
+
+### 2.3.2 检查引擎是否刷新成功
+可以查看数据库中的 `linkis_engine_conn_plugin_bml_resources` 这张表的`last_update_time` 
是否为触发刷新的时间。
+
+```sql
+#登陆到 `linkis` 的数据库 
+select * from linkis_cg_engine_conn_plugin_bml_resources;
+```
+
+## 3. Engine usage
+
+### 3.1 Submitting tasks via `Linkis-cli`
+
+```shell
+sh ./bin/linkis-cli -submitUser impala \
+-engineType impala-3.4.0 -codeType sql -code 'select * from default.test limit 10' \
+-runtimeMap linkis.impala.servers=127.0.0.1:21050
+```
+
+For more `Linkis-Cli` command parameters, see: [Linkis-Cli usage](../user-guide/linkiscli-manual.md)
+
+## 4. Engine configuration instructions
+
+### 4.1 Default configuration instructions
+
+| Configuration                         | Default value         | Description                                  | Required |
+| ------------------------------------- | --------------------- | -------------------------------------------- | -------- |
+| linkis.impala.default.limit           | 5000                  | Limit on the number of rows returned by a query result set | Yes |
+| linkis.impala.engine.user             | ${HDFS_ROOT_USER}     | Default engine startup user                  | Yes |
+| linkis.impala.user.isolation.mode     | false                 | Start the engine in multi-user mode          | Yes |
+| linkis.impala.servers                 | 127.0.0.1:21050       | Impala server addresses, separated by ','    | Yes |
+| linkis.impala.maxConnections          | 10                    | Maximum number of connections to each Impala server | Yes |
+| linkis.impala.ssl.enable              | false                 | Whether to enable SSL connections            | Yes |
+| linkis.impala.ssl.keystore.type       | JKS                   | SSL Keystore type                            | No |
+| linkis.impala.ssl.keystore            | null                  | SSL Keystore path                            | No |
+| linkis.impala.ssl.keystore.password   | null                  | SSL Keystore password                        | No |
+| linkis.impala.ssl.truststore.type     | JKS                   | SSL Truststore type                          | No |
+| linkis.impala.ssl.truststore          | null                  | SSL Truststore path                          | No |
+| linkis.impala.ssl.truststore.password | null                  | SSL Truststore password                      | No |
+| linkis.impala.sasl.enable             | false                 | Whether to enable SASL authentication        | Yes |
+| linkis.impala.sasl.mechanism          | PLAIN                 | SASL Mechanism                               | No |
+| linkis.impala.sasl.authorizationId    | null                  | SASL AuthorizationId                         | No |
+| linkis.impala.sasl.protocol           | LDAP                  | SASL Protocol                                | No |
+| linkis.impala.sasl.properties         | null                  | SASL Properties: key1=value1,key2=value2     | No |
+| linkis.impala.sasl.username           | ${impala.engine.user} | SASL Username                                | No |
+| linkis.impala.sasl.password           | null                  | SASL Password                                | No |
+| linkis.impala.sasl.password.cmd       | null                  | Command to obtain the SASL password          | No |
+| linkis.impala.heartbeat.seconds       | 1                     | Task status update interval (seconds)        | Yes |
+| linkis.impala.query.timeout.seconds   | 0                     | Task execution timeout (seconds)             | No |
+| linkis.impala.query.batchSize         | 1000                  | Result set fetch batch size                  | Yes |
+| linkis.impala.query.options           | null                  | Query submission parameters: key1=value1,key2=value2 | No |
+
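+As an illustration of how these parameters combine, a SASL/SSL-protected deployment might set something like the following (a hypothetical sketch; host names, file paths and credentials are placeholders, and only keys from the table above are used):
+
+```properties
+# Hypothetical example: Impala endpoints behind SASL (PLAIN) with SSL
+linkis.impala.servers=impala-node1:21050,impala-node2:21050
+linkis.impala.maxConnections=10
+linkis.impala.ssl.enable=true
+linkis.impala.ssl.truststore.type=JKS
+linkis.impala.ssl.truststore=/etc/impala/truststore.jks
+linkis.impala.ssl.truststore.password=changeit
+linkis.impala.sasl.enable=true
+linkis.impala.sasl.mechanism=PLAIN
+linkis.impala.sasl.username=impala_user
+# Fetch the password from a command instead of storing it in plain text
+linkis.impala.sasl.password.cmd=/usr/local/bin/get-impala-password.sh
+```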
+### 4.2 Configuration modification
+
+If the default parameters are not satisfactory, the basic parameters can be configured in the following ways
+
+#### 4.2.1 Management console configuration
+
+![](./images/trino-config.png)
+
+Note: After modifying the configuration under the `IDE` tag, you need to specify `-creator IDE` for it to take effect (other tags are similar), for example:
+
+```shell
+sh ./bin/linkis-cli -creator IDE -submitUser hadoop \
+ -engineType impala-3.4.0 -codeType sql \
+ -code 'select * from default.test limit 10' 
+```
+
+#### 4.2.2 Task interface configuration
+Submit the task through the task interface and configure it with the `params.configuration.runtime` parameter
+
+Example http request parameters:
+```json
+{
+    "executionContent": {"code": "select * from default.test limit 10;", "runType":  "sql"},
+    "params": {
+                    "variable": {},
+                    "configuration": {
+                            "runtime": {
+                                "linkis.impala.servers": "127.0.0.1:21050"
+                                }
+                            }
+                    },
+    "labels": {
+        "engineType": "impala-3.4.0",
+        "userCreator": "hadoop-IDE"
+    }
+}
+```
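+
+For instance, the request body above can be posted to the entrance submit interface with curl (a sketch; the gateway address and token values are placeholders for your own deployment):
+
+```shell
+# Save the JSON above as impala-task.json, then submit it to the entrance service
+curl -X POST http://127.0.0.1:9001/api/rest_j/v1/entrance/submit \
+  -H "Content-Type: application/json" \
+  -H "Token-Code: <token-value>" \
+  -H "Token-User: hadoop" \
+  -d @impala-task.json
+```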
+
+### 4.3 Engine-related data tables
+
+`Linkis` manages engines through engine labels; the data tables involved are as follows.
+
+```
+linkis_ps_configuration_config_key: the keys and default values of the engine configuration parameters
+linkis_cg_manager_label: engine labels such as impala-3.4.0
+linkis_ps_configuration_category: the directory hierarchy of the engine configuration
+linkis_ps_configuration_config_value: the configurations displayed for the engine
+linkis_ps_configuration_key_engine_relation: the relationship between configuration keys and engines
+```
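+
+A quick way to see which configuration keys are registered for the engine (a minimal check against the tables above):
+
+```sql
+-- List the configuration keys bound to the impala engine type
+select `key`, `description` from linkis_ps_configuration_config_key where engine_conn_type = 'impala';
+```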
+
+The initial engine-related data in the tables is as follows
+
+
+```sql
+-- set variable
+SET @ENGINE_LABEL="impala-3.4.0";
+SET @ENGINE_IDE=CONCAT('*-IDE,',@ENGINE_LABEL);
+SET @ENGINE_ALL=CONCAT('*-*,',@ENGINE_LABEL);
+SET @ENGINE_NAME="impala";
+
+-- add impala engine to IDE
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_ALL, 'OPTIONAL', 2, now(), now());
+insert into `linkis_cg_manager_label` (`label_key`, `label_value`, `label_feature`, `label_value_size`, `update_time`, `create_time`) VALUES ('combined_userCreator_engineType', @ENGINE_IDE, 'OPTIONAL', 2, now(), now());
+select @label_id := id from `linkis_cg_manager_label` where label_value = @ENGINE_IDE;
+insert into `linkis_ps_configuration_category` (`label_id`, `level`) VALUES (@label_id, 2);
+
+-- insert configuration key
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.default.limit', 'Limit on the number of rows returned by a query result set', 'Result set row limit', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.engine.user', 'Default engine startup user', 'Default startup user', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.user.isolation.mode', 'Start the engine in multi-user mode', 'Multi-user mode', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.servers', 'Impala server addresses', 'Server addresses', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.maxConnections', 'Maximum number of connections to each Impala server', 'Maximum connections', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.enable', 'Whether to enable SSL connections', 'Enable SSL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.type', 'SSL Keystore type', 'SSL Keystore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore', 'SSL Keystore path', 'SSL Keystore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.keystore.password', 'SSL Keystore password', 'SSL Keystore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.type', 'SSL Truststore type', 'SSL Truststore type', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore', 'SSL Truststore path', 'SSL Truststore path', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.ssl.truststore.password', 'SSL Truststore password', 'SSL Truststore password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.enable', 'Whether to enable SASL authentication', 'Enable SASL', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.mechanism', 'SASL Mechanism', 'SASL Mechanism', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.authorizationId', 'SASL AuthorizationId', 'SASL AuthorizationId', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.protocol', 'SASL Protocol', 'SASL Protocol', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.properties', 'SASL Properties: key1=value1,key2=value2', 'SASL Properties', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.username', 'SASL Username', 'SASL Username', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password', 'SASL Password', 'SASL Password', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.sasl.password.cmd', 'Command to obtain the SASL password', 'SASL password command', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.heartbeat.seconds', 'Task status update interval', 'Task status update interval', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.timeout.seconds', 'Task execution timeout', 'Task execution timeout', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.batchSize', 'Result set fetch batch size', 'Result set fetch batch size', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+INSERT INTO `linkis_ps_configuration_config_key` (`key`, `description`, `name`, `default_value`, `validate_type`, `validate_range`, `engine_conn_type`, `is_hidden`, `is_advanced`, `level`, `treeName`) VALUES ('linkis.impala.query.options', 'Query submission parameters: key1=value1,key2=value2', 'Query submission parameters', 'null', 'None', '', @ENGINE_NAME, 0, 0, 1, 'Datasource configuration');
+-- impala engine -*
+insert into `linkis_ps_configuration_key_engine_relation` (`config_key_id`, `engine_type_label_id`)
+(select config.id as config_key_id, label.id AS engine_type_label_id FROM `linkis_ps_configuration_config_key` config
+INNER JOIN `linkis_cg_manager_label` label ON config.engine_conn_type = @ENGINE_NAME and label_value = @ENGINE_ALL);
+-- impala engine default configuration
+insert into `linkis_ps_configuration_config_value` (`config_key_id`, `config_value`, `config_label_id`)
+(select relation.config_key_id AS config_key_id, '' AS config_value, relation.engine_type_label_id AS config_label_id FROM `linkis_ps_configuration_key_engine_relation` relation
+INNER JOIN `linkis_cg_manager_label` label ON relation.engine_type_label_id = label.id AND label.label_value = @ENGINE_ALL);
+```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/ec-fix-label.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/ec-fix-label.md
deleted file mode 100644
index 28b101f8ab5..00000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/ec-fix-label.md
+++ /dev/null
@@ -1,25 +0,0 @@
----
-title: Pin tasks to a fixed EngineConn
-sidebar_position: 0.3
---- 
-
-## 1. Background
-Currently, when a Linkis task is submitted, an EngineConn (EC) is created or reused based on labels, and the EC used by different tasks is random. This does not work well when multiple dependent tasks must execute on the same EC. A new EngineConnInstanceLabel is added so that multiple tasks can be pinned to the same EC.
-
-## 2. Usage
-1. Add the specific label in the management console as follows: log in to the console -> ECM management -> click an ECM instance name -> edit the EC to be pinned -> add a label of type FixdEngineConnLabel.
-![](/Images-zh/feature/ecm.png)
-![](/Images-zh/feature/ec.png)
-![](/Images-zh/feature/label.png)
-2. When submitting a task, add the FixdEngineConnLabel label so that the task is submitted to the pinned instance
-```json
-"labels": {
-    "engineType": "spark-2.4.3",
-    "userCreator": "hadoop-IDE",
-    "fixedEngineConn": "idvalue"
-}
-```
-## 3. Notes
-1. For the first task, you can either fetch the EC instance list first and pick an instance, or submit the task directly to create one
-
-2. If the EC is not in an idle, usable state, a new EC instance will be created to execute the task. To avoid this, call the EC instance query interface before submitting the task to check whether the EC exists and what state it is in.
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/eureka-version-metadata.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/eureka-version-metadata.md
deleted file mode 100644
index 5b1832596c8..00000000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/eureka-version-metadata.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Report version metadata to Eureka
-sidebar_position: 0.2
---- 
-
-## 1. Background
-Additional information such as versions is added to the Eureka metadata. Reading from configuration files is supported, aligned with the configuration file version number (configuration files use the minor version number). Two version fields are added to the Eureka metadata: the configuration file version and the application software version. Configuration file versions may differ between services, but the same service within a cluster should use the same configuration file version, and all applications in a cluster should use the same software version.
-
-## 2. Usage
-**Application version configuration**
-
-Add the application version configuration to linkis_env.sh to control the application version, as follows:
-```
-linkis.app.version=${version}
-```
-After reporting to the Eureka metadata, the version format is version + build time, e.g. 1.3.2-20230304
-```xml
-<metadata>
-    <linkis.app.version>${appVersion}</linkis.app.version>
-</metadata>
-```
-
-**Service version configuration**
-
-Add the service version configuration to each service's configuration file, as follows:
-```
-linkis.conf.version=<version number>
-```
-Version format after reporting to the Eureka metadata
-```xml
-<metadata>
-    <linkis.conf.version>${serviceVersion}</linkis.conf.version>
-</metadata>
-```
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/load-udf-by-udfid.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/load-udf-by-udfid.md
deleted file mode 100644
index 16be6c5ea53..00000000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/load-udf-by-udfid.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: Load UDFs by UDF ID
-sidebar_position: 0.2
---- 
-
-## 1. Background
-In some scenarios, UDFs are loaded not through visual interfaces such as Scripts or DSS, but through code. This requires the ability to load UDFs by UDF ID.
-
-## 2. Usage
-Parameter description:
-
-| Parameter                   | Description            | Default |
-|--------------------------- |------------------------|--------|
-|`linkis.user.udf.all.load`  | Whether to load all UDFs selected by the user | true |
-|`linkis.user.udf.custom.ids`| List of UDF IDs, separated by `,` |  -   |
-
-Submit the task via RESTful; an example request follows.
-```json
-POST /api/rest_j/v1/entrance/submit
-Content-Type: application/json
-Token-Code: dss-AUTH
-Token-User: linkis
-
-{
-    "executionContent": {
-        "code": "show databases",
-        "runType": "sql"
-    },
-    "params": {
-        "configuration": {
-            "startup": {
-                "linkis.user.udf.all.load": false,
-                "linkis.user.udf.custom.ids": "1,2,3"
-            }
-        }
-    },
-    "labels": {
-        "engineType": "spark-2.4.3",                  // pattern: engineType-version
-        "userCreator": "linkis-IDE"                   // userCreator: linkis is the username; IDE is the system configured in Linkis
-    }
-}
-```
-
-## 3. Notes
-1. When `linkis.user.udf.all.load` is set to true, the `linkis.user.udf.custom.ids` parameter does not take effect
-
-2. This feature is independent of loading through the `/udf/isload?udfId=123&isLoad=true` interface
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
index 2e96162978f..1a056f8bf98 100644
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
+++ b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/overview.md
@@ -3,16 +3,16 @@ title: 版本总览
 sidebar_position: 0.1
 --- 
 
-- [Support submitting Spark jar tasks](./spark-submit-jar.md)
-- [Support loading specific UDFs by UDF ID](./load-udf-by-udfid.md)
-- [Pin multiple tasks to a fixed EC](./ec-fix-label.md)
-- [Report version metadata to Eureka](./eureka-version-metadata.md)
-- [Remove the dss-gateway-support dependency](./remove-dss-support.md)
-- [Change the default system initialization Tokens](./update-token.md)
-- [Linkis integration with OceanBase](/blog/2023/03/08/linkis-integration-with-oceanbase)
-- [Release notes for this version](/download/release-notes-1.3.2)
-
-
+- [Default Hadoop, Spark and Hive versions upgraded to 3.x](./upgrade-base-engine-version.md)
+- [Reduced compatibility issues across base engine versions](./base-engine-compatibilty.md)
+- [Hive engine connector supports concurrent tasks](./hive-engine-support-concurrent.md)
+- [linkis-storage supports the S3 and OSS file systems](./storage-add-support-oss.md)
+- [Support for more data sources](./spark-etl.md)
+- [Added PostgreSQL database support](/docs/deployment/deploy-quick.md)
+- [ECs are not killed when the ECM restarts](./ecm-takes-over-ec.md)
+- [Enhanced Spark ETL capabilities](./spark-etl.md)
+- [Notes on version numbering and branch changes](./version-and-branch-intro.md)
+- [Release notes for this version](/download/release-notes-1.4.0)
 
 ## Parameter changes 
 
@@ -23,9 +23,9 @@ sidebar_position: 0.1
 | mg-eureka | Modified | eureka.client.registry-fetch-interval-seconds | 8 | Interval (seconds) at which the Eureka client fetches service registry information |
 | mg-eureka | Added | eureka.instance.lease-renewal-interval-in-seconds | 4 | Frequency (seconds) at which the Eureka client sends heartbeats to the server |
 | mg-eureka | Added | eureka.instance.lease-expiration-duration-in-seconds | 12 | Timeout (seconds) for Eureka to wait for the next heartbeat |
-| EC-shell | Modified | wds.linkis.engineconn.support.parallelism | true | Whether to enable parallel execution of shell tasks |
-| EC-shell | Modified | linkis.engineconn.shell.concurrent.limit | 15 | Shell task concurrency |
+| EC-shell  | Modified | wds.linkis.engineconn.support.parallelism | true | Whether to enable parallel execution of shell tasks |
+| EC-shell  | Modified | linkis.engineconn.shell.concurrent.limit | 15 | Shell task concurrency |
 
 
 ## Database table changes 
-For details, see the upgrade schema file `db/upgrade/1.3.2_schema` in the corresponding branch of the code repository (https://github.com/apache/linkis)
+For details, see the upgrade schema file `db/upgrade/1.4.0_schema` in the corresponding branch of the code repository (https://github.com/apache/linkis)
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/remove-dss-support.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/remove-dss-support.md
deleted file mode 100644
index c85ad2f2ec6..00000000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/remove-dss-support.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-title: Remove the DSS Support dependency
-sidebar_position: 0.4
---- 
-
-## 1. Background
-The Linkis microservice modules depended on the dss-gateway-support jar, which caused jar conflicts when compiling with Scala versions earlier than 2.12. The dss-gateway-support module dependency has therefore been removed.
-
-## 2. Usage
-
-Removing the dss-gateway-support dependency does not affect the use of Linkis.
-
-## 3. Notes
-
-- For Linkis >= 1.3.2, if you encounter dss support related errors, check whether there are dss support related jars in the $LINKIS_HOME/lib/linkis-spring-cloud-services/linkis-mg-gateway directory; if so, delete them and restart the service. 
-
-- The conflict occurs because the one-click installation of DSS installs the dss support package (specifically dss-gateway-support-xxx.jar) into linkis-mg-gateway
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-submit-jar.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-submit-jar.md
deleted file mode 100644
index f2ba4da939a..00000000000
--- 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/spark-submit-jar.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Submitting jar tasks via Spark-Submit
-sidebar_position: 0.2
---- 
-
-## 1. Background
-In some scenarios, tasks need to be executed by submitting a jar package through spark-submit. 
-
-## 2. Usage
-Submit the Spark task through the SDK; a code example follows.
-```java
-public class SparkOnceJobTest {
-
-    public static void main(String[] args)  {
-
-        LinkisJobClient.config().setDefaultServerUrl("http://127.0.0.1:9001");
-
-        String submitUser = "linkis";
-        String engineType = "spark";
-
-        SubmittableSimpleOnceJob onceJob =
-                // region
-                LinkisJobClient.once().simple().builder()
-                        .setCreateService("Spark-Test")
-                        .setMaxSubmitTime(300000)
-                        .setDescription("SparkTestDescription")
-                        .addExecuteUser(submitUser)
-                        .addJobContent("runType", "jar")
-                        .addJobContent("spark.app.main.class", "org.apache.spark.examples.JavaWordCount")
-                        // Arguments passed to the submitted jar
-                        .addJobContent("spark.app.args", "hdfs:///tmp/test_word_count.txt") // WordCount test file
-                        .addLabel("engineType", engineType + "-2.4.3")
-                        .addLabel("userCreator", submitUser + "-IDE")
-                        .addLabel("engineConnMode", "once")
-                        .addStartupParam("spark.app.name", "spark-submit-jar-test-linkis") // Application name shown on YARN
-                        .addStartupParam("spark.executor.memory", "1g")
-                        .addStartupParam("spark.driver.memory", "1g")
-                        .addStartupParam("spark.executor.cores", "1")
-                        .addStartupParam("spark.executor.instance", "1")
-                        .addStartupParam("spark.app.resource", "hdfs:///tmp/spark/spark-examples_2.11-2.3.0.2.6.5.0-292.jar")
-                        .addSource("jobName", "OnceJobTest")
-                        .build();
-        // endregion
-        onceJob.submit();
-        onceJob.waitForCompleted(); // A temporary network failure here throws an exception; callers currently need to handle it, and the SDK should be improved later
-    }
-}
-```
-## 3. Notes
-1. The jar packages or parameter files used in the task must be uploaded to HDFS or a shared directory in advance
-
-2. spark-submit jar only supports Once tasks
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/update-token.md 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/update-token.md
deleted file mode 100644
index ea5b2eee684..00000000000
--- a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/update-token.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Change the default system initialization Tokens
-sidebar_position: 0.4
---- 
-
-## 1. Background
-
-The original default Tokens in Linkis were fixed and too short, which posed a security risk. Linkis 1.3.2 therefore replaces the fixed Tokens with randomly generated ones and increases the Token length.
-
-New Token format: application abbreviation plus a 32-character random string, e.g. BML-928a721518014ba4a28735ec2a0da799
-
-Tokens may be used by the Linkis services themselves, e.g. when executing tasks via shell or uploading to BML, or by other applications such as DSS and Qualitis when accessing Linkis.
-
-
-## 2. Usage
-
-### Token configuration required when Linkis uploads to BML
-When the Linkis services themselves use Tokens, the Tokens in the configuration files must match those in the database; they are matched by application abbreviation prefix.
-
-The tokens generated in the database can be queried with the following statement:
-
-```sql
-select * from linkis_mg_gateway_auth_token;
-```
-
-**Token configuration in the $LINKIS_HOME/conf/linkis.properties file**
-
-```
-linkis.configuration.linkisclient.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.bml.auth.token.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.context.client.auth.value=BML-928a721518014ba4a28735ec2a0da799
-wds.linkis.errorcode.auth.token=BML-928a721518014ba4a28735ec2a0da799
-
-wds.linkis.client.test.common.tokenValue=LINKIS_CLI-215af9e265ae437ca1f070b17d6a540d
-
-wds.linkis.filesystem.token.value=WS-52bce72ed51741c7a2a9544812b45725
-wds.linkis.gateway.access.token=WS-52bce72ed51741c7a2a9544812b45725
-
-wds.linkis.server.dsm.auth.token.value=DSM-65169e8e1b564c0d8a04ee861ca7df6e
-```
-
-### Token configuration for executing tasks with the linkis-cli command
-
-Modify the Token configuration in the $LINKIS_HOME/conf/linkis-cli/linkis-cli.properties file
-```properties
-wds.linkis.client.common.tokenValue=BML-928a721518014ba4a28735ec2a0da799
-```
-
-## 3. Notes
-
-### Full installation
-
-For a full installation of a new Linkis version, the install.sh script automatically keeps the Tokens in the configuration files and in the database consistent, so the Linkis services' own Tokens need no changes. Each application can query and use the new Tokens through the management console.
-
-### Version upgrade
-
-During a version upgrade the database Tokens are not changed, so neither the configuration files nor the application Tokens need to be modified.
-
-### Token expiration issues
-
-When you encounter "invalid or expired Token" errors, query the Token through the management console or with a SQL statement, and check whether the Token used by the client matches the database. If they do not match, there are two solutions.
-
-1. Modify the client configuration so that the Token matches the database.
-
-2. Modify the Token values of each application in the database. The old-version database Token configuration is as follows for reference
-
-```sql
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('QML-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('BML-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('WS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('dss-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('QUALITIS-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('VALIDATOR-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('LINKISCLI-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('DSM-AUTH','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-INSERT INTO 
`linkis_mg_gateway_auth_token`(`token_name`,`legal_users`,`legal_hosts`,`business_owner`,`create_time`,`update_time`,`elapse_day`,`update_by`)
 VALUES ('LINKIS_CLI_TEST','*','*','BDP',curdate(),curdate(),-1,'LINKIS');
-```
\ No newline at end of file
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/version-and-branch-intro.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/version-and-branch-intro.md
new file mode 100644
index 00000000000..0ec0391b69f
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/feature/version-and-branch-intro.md
@@ -0,0 +1,13 @@
+---
+title: Notes on version numbering and branch changes
+sidebar_position: 0.4
+--- 
+
+## 1. Changes to the Linkis major version number
+
+After version 1.3.2, Linkis will no longer release upgrades as patch versions; the next version is 1.4.0, followed by 1.5.0, 1.6.0, and so on. When a released version has a critical defect that must be fixed, a patch version such as 1.4.1 will be cut to fix it.
+
+
+## 2. Changes to the main branch for Linkis code contributions
+
+For Linkis 1.3.2 and earlier, code changes were merged into the dev branch by default. The Apache Linkis development community is in fact very active, and newly developed features and fixes are all submitted to the dev branch, but the master branch is what is shown by default when users visit the Linkis repository. Since we only release a new version once a quarter, the community looks less active than it is when judged from the master branch. We therefore decided that, starting with version 1.4.0, code submitted by developers is merged into the master branch by default.
\ No newline at end of file


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]

