This is an automated email from the ASF dual-hosted git repository.

kinghao pushed a commit to branch dev-special-logic
in repository https://gitbox.apache.org/repos/asf/linkis-website.git

commit 0b53e68d747c7da8e87e4c98beaf9eb69531e988
Author: kinghao <[email protected]>
AuthorDate: Fri Dec 19 14:48:04 2025 +0800

    add special logic
---
 docs/development/special-logic/_category_.json     |   4 +
 .../special-logic/engine-config-priority.md        | 674 +++++++++++++++++++++
 .../special-logic/engine-max-job-config-guide.md   | 292 +++++++++
 .../special-logic/engine-reuse-logic.md            | 369 +++++++++++
 .../insert-engine-max-job-config-simple.sql        |  42 ++
 .../special-logic/insert-engine-max-job-config.sql | 205 +++++++
 docs/development/special-logic/overview.md         |  38 ++
 .../development/special-logic/_category_.json      |   4 +
 .../special-logic/engine-config-priority.md        | 674 +++++++++++++++++++++
 .../special-logic/engine-max-job-config-guide.md   | 292 +++++++++
 .../special-logic/engine-reuse-logic.md            | 368 +++++++++++
 .../current/development/special-logic/overview.md  |  38 ++
 12 files changed, 3000 insertions(+)

diff --git a/docs/development/special-logic/_category_.json b/docs/development/special-logic/_category_.json
new file mode 100644
index 00000000000..13a6613e601
--- /dev/null
+++ b/docs/development/special-logic/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "Special Logic",
+  "position": 12.0
+}
diff --git a/docs/development/special-logic/engine-config-priority.md b/docs/development/special-logic/engine-config-priority.md
new file mode 100644
index 00000000000..800f1ba27f7
--- /dev/null
+++ b/docs/development/special-logic/engine-config-priority.md
@@ -0,0 +1,674 @@
+---
+title: Linkis Engine Configuration Parameter Priority Analysis
+sidebar_position: 1
+---
+
+# Linkis Engine Configuration Parameter Priority and Effective Logic Analysis
+
+## Overview
+
+This document provides a detailed analysis of the configuration parameter 
effective logic, priority mechanism, and database table relationships in Linkis 
during task submission and engine creation process.
+
+## 1. Configuration Parameter Levels
+
+Linkis configuration parameters are divided into the following 4 levels:
+
+### 1.1 Global Default Configuration
+- **Label**: `*-*,*-*`
+- **Description**: Global default configuration applicable to all users, all 
applications, and all engines
+- **Priority**: Lowest
+- **Example**: System-level resource limits, queue configurations, etc.
+
+### 1.2 Engine Default Configuration
+- **Label**: `*-*,{engineType}-{version}`
+- **Description**: Default configuration for a specific engine type, 
applicable to all users
+- **Priority**: Low
+- **Example**: `*-*,spark-3.2.1` represents default configuration for Spark 
3.2.1 engine
+
+### 1.3 Creator Default Configuration
+- **Label**: `*-{creator},*-*` or `{user}-*,*-*`
+- **Description**: Default configuration for a specific creator (e.g., IDE, 
scheduler) or specific user
+- **Priority**: Medium
+- **Example**: `*-IDE,*-*` represents default configuration for all tasks 
submitted via IDE
+
+### 1.4 User Specific Configuration
+- **Label**: `{user}-{creator},{engineType}-{version}`
+- **Description**: Personalized configuration for a specific user, creator, 
and engine
+- **Priority**: High
+- **Example**: `hadoop-IDE,spark-3.2.1` represents hadoop user's configuration 
for using Spark 3.2.1 via IDE
+
+### 1.5 Runtime Parameters
+- **Source**: `params` parameter passed in the API when user submits a task
+- **Priority**: **Highest**
+- **Description**: Runtime-specified parameters that override all 
configuration levels
+
+## 2. Configuration Parameter Priority
+
+### 2.1 Priority Order
+
+```text
+Runtime Parameters                              [Priority: 1 - Highest]
+    ↓
+User Specific Configuration                     [Priority: 2]
+    ↓
+Creator Default Configuration                   [Priority: 3]
+    ↓
+Engine Default Configuration                    [Priority: 4]
+    ↓
+Global Default Configuration                    [Priority: 5 - Lowest]
+```
+
+### 2.2 Priority Rules
+
+Based on code analysis (`ConfigurationService.scala:289-475`):
+
+1. **Cascaded Query**: System queries user configuration, creator 
configuration, user general configuration, engine configuration, and global 
configuration in sequence
+2. **Value Override**: Higher priority configuration values override lower 
priority configurations with the same name
+3. **Parameter Merge**: Non-duplicate configuration items from different 
priorities are merged
+4. **Runtime First**: Parameters passed when user submits a task have the 
highest priority and will not be overridden by any configuration
+
+### 2.3 Core Code Logic
+
+#### Configuration Cascaded Query (ConfigurationService.scala)
+
+```scala
+// Location: ConfigurationService.scala:366-419
+def getConfigsByLabelList(
+    labelList: java.util.List[Label[_]],
+    useDefaultConfig: Boolean = true,
+    language: String
+): (util.List[ConfigKeyValue], util.List[ConfigKeyValue]) = {
+
+    // 1. Get user-specific configuration (user-creator,engineType-version)
+    val configs: util.List[ConfigKeyValue] = getConfigByLabelId(label.getId, language)
+
+    // 2. Get creator default configuration (*-creator,*-*)
+    val defaultCreatorConfigs = getConfigByLabelId(defaultCreatorLabel.getId, language)
+
+    // 3. Get user general default configuration (user-*,*-*)
+    val defaultUserConfigs = getConfigByLabelId(defaultUserLabel.getId, language)
+
+    // 4. Get engine general default configuration (*-*,engineType-version)
+    val defaultEngineConfigs = getConfigByLabelId(defaultEngineLabel.getId, language)
+
+    // 5. Configuration merge: creator config > engine config
+    if (Configuration.USE_CREATOR_DEFAULE_VALUE && userCreatorLabel.getCreator != "*") {
+      replaceCreatorToEngine(defaultCreatorConfigs, defaultEngineConfigs)
+    }
+
+    // 6. Configuration merge: user config > engine config
+    if (Configuration.USE_USER_DEFAULE_VALUE && userCreatorLabel.getUser != "*") {
+      replaceCreatorToEngine(defaultUserConfigs, defaultEngineConfigs)
+    }
+
+    return (configs, defaultEngineConfigs)
+}
+```
+
+#### Configuration Tree Building (ConfigurationService.scala)
+
+```scala
+// Location: ConfigurationService.scala:294-333
+// Priority: configs > defaultConfigs
+def buildTreeResult(
+    configs: util.List[ConfigKeyValue],
+    defaultConfigs: util.List[ConfigKeyValue]
+): util.ArrayList[ConfigTree] = {
+
+    // Iterate through default configurations
+    defaultConfigs.asScala.foreach(defaultConfig => {
+        defaultConfig.setIsUserDefined(false)
+
+        // User configuration overrides default configuration
+        configs.asScala.foreach(config => {
+          if (config.getKey.equals(defaultConfig.getKey)) {
+            defaultConfig.setConfigValue(config.getConfigValue)  // Value override
+            defaultConfig.setIsUserDefined(true)
+          }
+        })
+    })
+
+    return resultConfigsTree
+}
+```
+
+#### Parameter Merge During Engine Creation (DefaultEngineCreateService.scala)
+
+```scala
+// Location: DefaultEngineCreateService.scala:371-397
+def generateResource(
+    props: util.Map[String, String],           // User-submitted parameters
+    user: String,
+    labelList: util.List[Label[_]],
+    timeout: Long
+): NodeResource = {
+
+    // Get console configuration from configuration service (already multi-level merged)
+    val configProp = engineConnConfigurationService.getConsoleConfiguration(labelList)
+
+    // Key: User-submitted parameters have the highest priority
+    if (null != configProp && configProp.asScala.nonEmpty) {
+      configProp.asScala.foreach(keyValue => {
+        if (!props.containsKey(keyValue._1)) {  // Only use config when user didn't specify
+          props.put(keyValue._1, keyValue._2)
+        }
+      })
+    }
+
+    // Continue processing special logic like cross-queue configuration
+    // ...
+}
+```
+
+**Key Point**: The check at line `380`, `if (!props.containsKey(keyValue._1))`, ensures that **user-submitted parameters are never overridden by configuration service values**.
+
+## 3. Database Table Structure and Relationships
+
+### 3.1 Core Configuration Tables
+
+#### Table 1: `linkis_ps_configuration_config_key` (Configuration Key Definition Table)
+
+| Field | Type | Description |
+|-------|------|-------------|
+| id | bigint | Primary key, configuration key ID |
+| key | varchar(50) | Configuration parameter name (e.g., `wds.linkis.rm.yarnqueue`) |
+| name | varchar(50) | Configuration display name |
+| description | varchar(200) | Configuration description (Chinese) |
+| en_name | varchar(100) | English display name |
+| en_description | varchar(200) | English description |
+| engine_conn_type | varchar(50) | Engine type (e.g., `spark`, `hive`) |
+| default_value | varchar(200) | Default value |
+| validate_type | varchar(50) | Validation type (`None`, `NumInterval`, `Regex`, etc.) |
+| validate_range | varchar(150) | Validation range |
+| is_hidden | tinyint(1) | Whether hidden |
+| is_advanced | tinyint(1) | Whether advanced configuration |
+| level | tinyint(1) | Configuration level |
+| treeName | varchar(20) | Configuration category tree name |
+| boundary_type | tinyint | Boundary type |
+| template_required | tinyint(1) | Whether template is required |
+
+**Purpose**: Define metadata for all available configuration items.
+
+#### Table 2: `linkis_ps_configuration_config_value` (Configuration Value Storage Table)
+
+| Field | Type | Description |
+|-------|------|-------------|
+| id | bigint | Primary key, configuration value ID |
+| config_key_id | bigint | Foreign key, references `config_key.id` |
+| config_value | varchar(500) | Actual value of the configuration |
+| config_label_id | int | Foreign key, references `cg_manager_label.id` |
+| create_time | datetime | Creation time |
+| update_time | datetime | Update time |
+
+**Purpose**: Store configuration values under different labels.
+**Unique Index**: `(config_key_id, config_label_id)` ensures only one value per configuration key under the same label.
+
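+Because of this unique index, an upsert is a safe way to set a value without first checking whether a row already exists. A minimal MySQL sketch (the IDs are illustrative and must be looked up in your environment):
+
+```sql
+-- Upsert a configuration value; the unique index on
+-- (config_key_id, config_label_id) turns a duplicate insert into an update.
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES (112, '30', 5, NOW(), NOW())
+ON DUPLICATE KEY UPDATE
+    config_value = VALUES(config_value),
+    update_time = NOW();
+```
+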
+#### Table 3: `linkis_cg_manager_label` (Label Table)
+
+| Field | Type | Description |
+|-------|------|-------------|
+| id | int | Primary key, label ID |
+| label_key | varchar(32) | Label key (e.g., `combined_userCreator_engineType`) |
+| label_value | varchar(128) | Label value (e.g., `hadoop-IDE,spark-3.2.1`) |
+| label_feature | varchar(16) | Label feature (`OPTIONAL`, `CORE`, etc.) |
+| label_value_size | int | Number of label value dimensions |
+| create_time | datetime | Creation time |
+| update_time | datetime | Update time |
+
+**Purpose**: Define multi-dimensional labels, supporting combinations of user, 
creator, engine type, version, etc.
+
+**Label Value Examples**:
+- `*-*,*-*`: Global default
+- `*-*,spark-3.2.1`: Spark engine default configuration
+- `*-IDE,*-*`: IDE creator default configuration
+- `hadoop-IDE,spark-3.2.1`: hadoop user's configuration for using Spark via IDE
+
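+To find the label row (and its `id`) behind one of these values, a direct lookup works; the label value below is illustrative:
+
+```sql
+-- Look up the label id for a combined user/creator/engine label
+SELECT id, label_value
+FROM linkis_cg_manager_label
+WHERE label_key = 'combined_userCreator_engineType'
+  AND label_value = 'hadoop-IDE,spark-3.2.1';
+```
+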
+#### Table 4: `linkis_ps_configuration_key_limit_for_user` (User Configuration Limit Table)
+
+| Field | Type | Description |
+|-------|------|-------------|
+| id | bigint | Primary key |
+| user_name | varchar(50) | Username |
+| combined_label_value | varchar(128) | Combined label value |
+| key_id | bigint | Configuration key ID |
+| config_value | varchar(200) | Configuration value |
+| max_value | varchar(50) | Maximum value limit |
+| min_value | varchar(50) | Minimum value limit |
+| is_valid | varchar(2) | Whether effective (`Y`/`N`) |
+| create_by | varchar(50) | Creator |
+| create_time | datetime | Creation time |
+| update_by | varchar(50) | Updater |
+| update_time | datetime | Update time |
+
+**Purpose**: Set upper and lower limits for configuration values for specific 
users, preventing users from configuring beyond administrator-allowed ranges.
+
+### 3.2 Table Relationships
+
+```text
+┌─────────────────────────────────────────────────────────────────────┐
+│                    Configuration Parameter Relationships              │
+└─────────────────────────────────────────────────────────────────────┘
+
+┌──────────────────────────────┐
+│  linkis_cg_manager_label     │
+│  (Label Table)               │
+├──────────────────────────────┤
+│  id (PK)                     │◄──────────┐
+│  label_key                   │           │
+│  label_value                 │           │ N:1
+│  - *-*,*-*                   │           │
+│  - *-*,spark-3.2.1           │           │
+│  - hadoop-IDE,spark-3.2.1    │           │
+└──────────────────────────────┘           │
+                                           │
+        ┌──────────────────────────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └──┤  linkis_ps_configuration_config_value  │
+           │  (Configuration Value Table)           │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  config_key_id (FK) ──────────┐        │
+           │  config_value                 │        │
+           │  config_label_id (FK)         │        │
+           └────────────────────────────────────────┘
+                                           │
+                                           │ N:1
+                                           │
+        ┌──────────────────────────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └─►│  linkis_ps_configuration_config_key    │
+           │  (Configuration Key Definition Table)  │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  key                                   │
+           │  name                                  │
+           │  description                           │
+           │  engine_conn_type                      │
+           │  default_value                         │
+           │  validate_type                         │
+           │  validate_range                        │
+           │  level                                 │
+           └────────────────────────────────────────┘
+                      │
+                      │ 1:N
+                      │
+        ┌─────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └─►│ linkis_ps_configuration_key_limit_     │
+           │ for_user (User Config Limit Table)     │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  user_name                             │
+           │  combined_label_value                  │
+           │  key_id (FK)                           │
+           │  max_value                             │
+           │  min_value                             │
+           └────────────────────────────────────────┘
+```
+
+### 3.3 SQL Query Examples
+
+#### Query User's Complete Configuration (with Priority Merge)
+
+```sql
+-- Query hadoop user's configuration for using Spark 3.2.1 via IDE
+-- Result will include merged user config, creator config, and engine config
+
+SELECT
+    k.key,
+    k.name,
+    k.engine_conn_type,
+    k.default_value,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value LIKE 'hadoop-IDE,spark-3.2.1' THEN 'User Config'
+        WHEN l.label_value LIKE '*-IDE,*-*' THEN 'Creator Default'
+        WHEN l.label_value LIKE '*-*,spark-3.2.1' THEN 'Engine Default'
+        WHEN l.label_value LIKE '*-*,*-*' THEN 'Global Default'
+        ELSE 'Other'
+    END AS config_level
+FROM
+    linkis_ps_configuration_config_key k
+LEFT JOIN
+    linkis_ps_configuration_config_value v ON k.id = v.config_key_id
+LEFT JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    l.label_value IN (
+        'hadoop-IDE,spark-3.2.1',    -- User config
+        '*-IDE,*-*',                 -- Creator default
+        '*-*,spark-3.2.1',           -- Engine default
+        '*-*,*-*'                    -- Global default
+    )
+ORDER BY
+    k.key,
+    FIELD(l.label_value, 'hadoop-IDE,spark-3.2.1', '*-IDE,*-*', '*-*,spark-3.2.1', '*-*,*-*');
+```
+
+#### Query User Configuration Limits
+
+```sql
+-- Query configuration limits for hadoop user
+SELECT
+    u.user_name,
+    u.combined_label_value,
+    k.key,
+    k.name,
+    u.max_value,
+    u.min_value,
+    u.is_valid
+FROM
+    linkis_ps_configuration_key_limit_for_user u
+JOIN
+    linkis_ps_configuration_config_key k ON u.key_id = k.id
+WHERE
+    u.user_name = 'hadoop'
+    AND u.is_valid = 'Y';
+```
+
+## 4. Complete Engine Creation Parameter Effective Flow
+
+### 4.1 Flow Diagram
+
+```text
+┌─────────────────────────────────────────────────────────────────────────────┐
+│              Linkis Engine Creation Configuration Parameter Flow             │
+└─────────────────────────────────────────────────────────────────────────────┘
+
+[1] Frontend/SDK submits task
+    │
+    ├─ params: { "spark.executor.memory": "4g", ... }
+    ├─ labels: ["hadoop-IDE", "spark-3.2.1"]
+    └─ executionContent: "select * from table"
+    │
+    ▼
+[2] EntranceParser.parseToTask()
+    │ (Parse request, extract params, labels)
+    │
+    ▼
+[3] EntranceJob (Job object)
+    │ - jobRequest.params
+    │ - jobRequest.labels
+    │
+    ▼
+[4] Orchestrator
+    │ - JobReqParamCheckRuler (Parameter validation)
+    │
+    ▼
+[5] DefaultEngineCreateService.createEngine()
+    │
+    ├──► [5.1] buildLabel(labels, user)
+    │     └─ Build label list: UserCreatorLabel + EngineTypeLabel
+    │
+    ├──► [5.2] selectECM(request, labelList)
+    │     └─ Select suitable ECM node
+    │
+    ├──► [5.3] generateResource(props, user, labelList, timeout)
+    │     │
+    │     ├─ engineConnConfigurationService.getConsoleConfiguration(labelList)
+    │     │   │
+    │     │   ├─► ConfigurationMapCache.engineMapCache.get(labelList)
+    │     │   │   │
+    │     │   │   ├─► [Cache miss] RPC call ConfigurationService
+    │     │   │   │
+    │     │   │   └─► ConfigurationService.getConfigsByLabelList()
+    │     │   │       │
+    │     │   │       ├─ Query user config (hadoop-IDE,spark-3.2.1)
+    │     │   │       ├─ Query creator config (*-IDE,*-*)
+    │     │   │       ├─ Query user general config (hadoop-*,*-*)
+    │     │   │       ├─ Query engine config (*-*,spark-3.2.1)
+    │     │   │       ├─ Query global config (*-*,*-*)
+    │     │   │       │
+    │     │   │       └─► replaceCreatorToEngine() (Config merge)
+    │     │   │           └─ Creator config > Engine config
+    │     │   │           └─ User config > Engine config
+    │     │   │
+    │     │   └─ Return Map<String, String> configProp
+    │     │
+    │     └─ Parameter merge logic:
+    │         for (entry : configProp) {
+    │             if (!props.containsKey(entry.key)) {  ◄── Key check
+    │                 props.put(entry.key, entry.value)
+    │             }
+    │         }
+    │         └─ **User-submitted params (props) will not be overridden**
+    │
+    ├──► [5.4] resourceManager.requestResource()
+    │     └─ Resource request
+    │
+    ├──► [5.5] createEngineNode()
+    │     └─ Build engine node request
+    │
+    ├──► [5.6] emService.createEngine(engineBuildRequest, emNode)
+    │     └─ Call ECM to create engine
+    │
+    └──► [5.7] Engine starts with merged parameters
+          └─ Final effective params = User submitted params + Config service params (deduplicated)
+```
+
+### 4.2 Key Step Explanation
+
+#### Step 5.3: Parameter Merge Logic (generateResource)
+
+**Code Location**: `DefaultEngineCreateService.scala:371-397`
+
+```scala
+def generateResource(
+    props: util.Map[String, String],           // User-submitted parameters
+    user: String,
+    labelList: util.List[Label[_]],
+    timeout: Long
+): NodeResource = {
+    // 1. Get parameters from configuration service (already multi-level merged)
+    val configProp = engineConnConfigurationService.getConsoleConfiguration(labelList)
+
+    // 2. Parameter merge: User-submitted parameters take priority
+    if (null != configProp && configProp.asScala.nonEmpty) {
+      configProp.asScala.foreach(keyValue => {
+        if (!props.containsKey(keyValue._1)) {  // ◄── Only use config when user didn't specify
+          props.put(keyValue._1, keyValue._2)
+        }
+      })
+    }
+
+    // 3. Handle cross-queue configuration
+    val crossQueue = props.get(AMConfiguration.CROSS_QUEUE)
+    if (StringUtils.isNotBlank(crossQueue)) {
+      val queueName = props.getOrDefault(AMConfiguration.YARN_QUEUE_NAME_CONFIG_KEY, "default")
+      props.put(AMConfiguration.YARN_QUEUE_NAME_CONFIG_KEY, crossQueue)
+    }
+
+    // 4. Create resource request
+    val timeoutEngineResourceRequest = TimeoutEngineResourceRequest(timeout, user, labelList, props)
+    engineConnResourceFactoryService.createEngineResource(timeoutEngineResourceRequest)
+}
+```
+
+**Key Points**:
+1. `configProp` is already the result of multi-level configuration merge (user 
config > creator config > engine config > global config)
+2. `if (!props.containsKey(keyValue._1))` ensures user-submitted parameters 
are not overridden
+3. Final `props` contains complete parameter set, passed to the engine
+
+### 4.3 Configuration Cache Mechanism
+
+**Code Location**: `ConfigurationMapCache.java`
+
+```java
+// Global configuration cache (by user dimension)
+static RPCMapCache<UserCreatorLabel, String, String> globalMapCache
+
+// Engine configuration cache (by user + engine dimension)
+static RPCMapCache<Tuple2<UserCreatorLabel, EngineTypeLabel>, String, String> engineMapCache
+```
+
+**Cache Working Mechanism**:
+1. Uses RPC cache to reduce repeated queries
+2. Cache Key: `(UserCreatorLabel, EngineTypeLabel)` combination
+3. Cache Value: `Map<String, String>` (configuration key-value pairs)
+4. Cache invalidation: Automatically invalidated after configuration update
+
+## 5. Configuration Parameter Validation Mechanism
+
+### 5.1 Validation Types
+
+Linkis supports multiple parameter validation types (defined in 
`validate_type` field):
+
+| Validation Type | Description | Example |
+|----------------|-------------|---------|
+| None | No validation | - |
+| NumInterval | Numeric interval validation | `[1,100]` means value must be between 1 and 100 |
+| FloatInterval | Float interval validation | `[0.0,1.0]` |
+| Regex | Regular expression validation | `^[a-zA-Z0-9_]+$` |
+| Json | JSON format validation | Validate if it's valid JSON |
+| OFT | OneOf type validation | `queue1,queue2,queue3` (must choose one) |
+| Contain | Contains validation | Validate if value contains specified string |
+
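+As an example of how these fields work together, an administrator can tighten validation on an existing key by updating its metadata. A minimal sketch (the key and range are illustrative):
+
+```sql
+-- Restrict wds.linkis.engine.running.job.max to the interval [1, 40]
+UPDATE linkis_ps_configuration_config_key
+SET validate_type = 'NumInterval',
+    validate_range = '[1,40]'
+WHERE `key` = 'wds.linkis.engine.running.job.max';
+```
+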
+### 5.2 Validator Implementation
+
+**Code Location**: `linkis-configuration/src/main/scala/org/apache/linkis/configuration/validate/`
+
+- `ValidatorManager`: Validator manager
+- `NumericalValidator`: Numeric validator
+- `RegexValidator`: Regex validator
+- `JsonValidator`: JSON validator
+- `OneOfValidator`: Enum validator
+
+### 5.3 User Configuration Limits
+
+Through the `linkis_ps_configuration_key_limit_for_user` table, administrators 
can set upper and lower limits for specific users:
+
+```sql
+-- Example: Limit hadoop user's executor memory to no more than 8G
+INSERT INTO linkis_ps_configuration_key_limit_for_user
+(user_name, combined_label_value, key_id, max_value, is_valid, create_by)
+VALUES
+('hadoop', 'hadoop-*,spark-*',
+ (SELECT id FROM linkis_ps_configuration_config_key WHERE `key`='spark.executor.memory'),
+ '8G', 'Y', 'admin');
+```
+
+**Effective Logic** (Code Location: `ConfigurationService.scala:422-442`):
+
+```scala
+// Add special configuration limit information
+val limitList = configKeyLimitForUserMapper.selectByLabelAndKeyIds(
+    combinedLabel.getStringValue, keyIdList
+)
+
+defaultEngineConfigs.asScala.foreach(entity => {
+  val keyId = entity.getId
+  val res = limitList.asScala.filter(v => v.getKeyId == keyId).toList.asJava
+  if (res.size() > 0) {
+    val specialMap = new util.HashMap[String, String]()
+    val maxValue = res.get(0).getMaxValue
+    if (StringUtils.isNotBlank(maxValue)) {
+      specialMap.put("maxValue", maxValue)
+      entity.setSpecialLimit(specialMap)  // Set special limit
+    }
+  }
+})
+```
+
+## 6. Practical Case Analysis
+
+### 6.1 Scenario Description
+
+User `hadoop` submits a Spark 3.2.1 task via IDE; we trace how the `spark.executor.memory` parameter takes effect.
+
+### 6.2 Database Configuration
+
+```sql
+-- Global default configuration (label_id=5: *-*,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id)
+VALUES (101, 10, '2G', 5);  -- spark.executor.memory = 2G
+
+-- Spark engine default configuration (label_id=20: *-*,spark-3.2.1)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id)
+VALUES (102, 10, '4G', 20);  -- spark.executor.memory = 4G
+
+-- IDE creator default configuration (label_id=30: *-IDE,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id)
+VALUES (103, 10, '6G', 30);  -- spark.executor.memory = 6G
+
+-- hadoop user configuration (label_id=40: hadoop-IDE,spark-3.2.1)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id)
+VALUES (104, 10, '8G', 40);  -- spark.executor.memory = 8G
+```
+
+### 6.3 Task Submission Parameters
+
+```json
+{
+  "params": {
+    "spark.executor.memory": "10G",
+    "spark.executor.cores": "4"
+  },
+  "labels": {
+    "userCreator": "hadoop-IDE",
+    "engineType": "spark-3.2.1"
+  }
+}
+```
+
+### 6.4 Parameter Effective Process
+
+| Step | Operation | Current Value | Description |
+|------|-----------|---------------|-------------|
+| 1 | Config service query | - | Start querying configuration |
+| 2 | Query user config (40) | `8G` | Found `hadoop-IDE,spark-3.2.1` configuration |
+| 3 | Query creator config (30) | `6G` | Found `*-IDE,*-*` configuration |
+| 4 | Query engine config (20) | `4G` | Found `*-*,spark-3.2.1` configuration |
+| 5 | Query global config (5) | `2G` | Found `*-*,*-*` configuration |
+| 6 | Configuration merge | `8G` | User config overrides all default configs |
+| 7 | Merge with task params | `10G` | User submitted params > Config service params |
+| 8 | Final effective | **`10G`** | **Task submission params take effect** |
+
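+The configuration-service side of this resolution can be approximated in SQL by ordering the candidate rows by label priority and taking the first one (runtime parameters, which win overall, exist only in the request and never reach these tables):
+
+```sql
+-- Effective config-service value of spark.executor.memory for hadoop-IDE on
+-- Spark 3.2.1; given the data in section 6.2 this returns '8G'.
+SELECT v.config_value
+FROM linkis_ps_configuration_config_value v
+JOIN linkis_ps_configuration_config_key k ON v.config_key_id = k.id
+JOIN linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE k.`key` = 'spark.executor.memory'
+  AND l.label_value IN ('hadoop-IDE,spark-3.2.1', '*-IDE,*-*', '*-*,spark-3.2.1', '*-*,*-*')
+ORDER BY FIELD(l.label_value, 'hadoop-IDE,spark-3.2.1', '*-IDE,*-*', '*-*,spark-3.2.1', '*-*,*-*')
+LIMIT 1;
+```
+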
+### 6.5 Final Parameter Set
+
+```json
+{
+  "spark.executor.memory": "10G",        // From task submission params (highest priority)
+  "spark.executor.cores": "4",           // From task submission params
+  "wds.linkis.rm.yarnqueue": "default",  // From config service (user didn't specify)
+  "spark.driver.memory": "1G"            // From config service (user didn't specify)
+}
+```
+
+## 7. Summary
+
+### 7.1 Key Points
+
+1. **Clear Priority Mechanism**: Task params > User config > Creator config > 
Engine config > Global config
+2. **Cascaded Configuration Query**: Supports multi-level default 
configurations, automatic merge
+3. **User Parameters First**: User-submitted parameters are never overridden 
by configuration service
+4. **Label-Driven**: Multi-dimensional configuration management through 
combined labels
+5. **Validation and Limits**: Supports parameter validation and user-level 
configuration limits
+
+### 7.2 Best Practice Recommendations
+
+1. **Set Reasonable Defaults**: Configure reasonable default values at global 
and engine levels
+2. **Configure On-Demand**: Only create personalized configurations for users 
when necessary
+3. **Use Limit Features**: Use `key_limit_for_user` table to prevent users 
from exceeding configuration limits
+4. **Parameter Validation**: Use `validate_type` and `validate_range` to 
ensure parameter validity
+5. **Monitor Configuration Changes**: Pay attention to `update_time` field to 
track configuration modification history
+
+### 7.3 Related Code File Index
+
+| Module | File Path |
+|--------|-----------|
+| Configuration Service | `/linkis-public-enhancements/linkis-configuration/src/main/scala/org/apache/linkis/configuration/service/ConfigurationService.scala` |
+| Engine Creation | `/linkis-computation-governance/linkis-manager/linkis-application-manager/src/main/scala/org/apache/linkis/manager/am/service/engine/DefaultEngineCreateService.scala` |
+| Configuration Cache | `/linkis-computation-governance/linkis-manager/linkis-application-manager/src/main/java/org/apache/linkis/manager/am/conf/ConfigurationMapCache.java` |
+| Data Access | `/linkis-public-enhancements/linkis-configuration/src/main/java/org/apache/linkis/configuration/dao/ConfigMapper.java` |
+| Mapper XML | `/linkis-public-enhancements/linkis-configuration/src/main/resources/mapper/common/ConfigMapper.xml` |
+| Parameter Validation | `/linkis-public-enhancements/linkis-configuration/src/main/scala/org/apache/linkis/configuration/validate/` |
+
+---
+
+**Document Version**: 1.0
+**Last Updated**: 2025-11-23
+**Analysis Based On**: Linkis Project at /data/workspace/linkis
diff --git a/docs/development/special-logic/engine-max-job-config-guide.md b/docs/development/special-logic/engine-max-job-config-guide.md
new file mode 100644
index 00000000000..ca31e403eb9
--- /dev/null
+++ b/docs/development/special-logic/engine-max-job-config-guide.md
@@ -0,0 +1,292 @@
+---
+title: Engine Max Running Job Configuration Guide
+sidebar_position: 2
+---
+
+# Engine Max Running Job Configuration Guide
+
+## Overview
+
+This guide explains how to configure the maximum number of concurrent running 
jobs for Linkis engines using the `wds.linkis.engine.running.job.max` parameter.
+
+## Configuration Details
+
+### Parameter Information
+
+| Property | Value |
+|----------|-------|
+| Configuration Key | `wds.linkis.engine.running.job.max` |
+| Display Name | 引擎运行最大任务数 (Engine Max Running Jobs) |
+| Description | Maximum number of concurrent jobs that can run in an engine instance |
+| Default Value | 30 |
+| Validation Type | NumInterval |
+| Applicable Engines | All engines (shell, hive, spark, python, etc.) |
+
+### Configuration Levels
+
+This guide demonstrates how to set the configuration at two levels:
+
+1. **Global Default** (`*-*,*-*`)
+   - Applies to all engines
+   - Lowest priority
+   - Fallback when no engine-specific config exists
+
+2. **Hive Engine Default** (`*-*,hive-3.1.3`)
+   - Applies specifically to Hive 3.1.3 engine
+   - Higher priority than global default
+   - Overrides global default for Hive engine
+
+## Prerequisites
+
+Before executing the SQL scripts, verify the following:
+
+### 1. Check Configuration Key Exists
+
+```sql
+SELECT id, `key`, name, engine_conn_type, default_value
+FROM linkis_ps_configuration_config_key
+WHERE `key` = 'wds.linkis.engine.running.job.max';
+```
+
+**Expected Result:**
+- `id`: 112 (may vary in your environment)
+- `key`: wds.linkis.engine.running.job.max
+- `name`: 引擎运行最大任务数
+
+### 2. Check Label IDs
+
+```sql
+SELECT id, label_key, label_value
+FROM linkis_cg_manager_label
+WHERE label_key = 'combined_userCreator_engineType'
+  AND label_value IN ('*-*,*-*', '*-*,hive-3.1.3')
+ORDER BY label_value;
+```
+
+**Expected Results:**
+- `id=5`, `label_value='*-*,*-*'` (Global Default)
+- `id=7`, `label_value='*-*,hive-3.1.3'` (Hive Engine Default)
+
+:::warning Important
+The `config_key_id` and `config_label_id` values in the SQL scripts are based 
on a standard Linkis installation. **You must verify these IDs in your own 
database** and update the scripts accordingly if they differ.
+:::
+
+## SQL Scripts
+
+### Quick Execution Script
+
+Use this script for direct execution:
+
+```sql
+-- Insert Global Default Configuration (*-*,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW());
+
+-- Insert Hive Engine Default Configuration (*-*,hive-3.1.3)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 7, NOW(), NOW());
+```
+
+### Idempotent Script (Safe for Re-execution)
+
+Use `REPLACE INTO` if you want to run the script multiple times safely:
+
+```sql
+REPLACE INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW()),  -- Global default
+(112, '30', 7, NOW(), NOW());  -- Hive engine default
+```
+
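+### ID-Independent Variant
+
+If you prefer not to hard-code the numeric IDs, they can be resolved with subqueries. A sketch of the same insert for the global default label; repeat with `'*-*,hive-3.1.3'` for the Hive row (this assumes the key name is unique in `linkis_ps_configuration_config_key`):
+
+```sql
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+SELECT k.id, '30', l.id, NOW(), NOW()
+FROM linkis_ps_configuration_config_key k
+JOIN linkis_cg_manager_label l
+  ON l.label_key = 'combined_userCreator_engineType'
+ AND l.label_value = '*-*,*-*'
+WHERE k.`key` = 'wds.linkis.engine.running.job.max';
+```
+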
+### Update Existing Configuration
+
+If the configuration already exists and you want to update it:
+
+```sql
+-- Update Global Default
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 5;
+
+-- Update Hive Engine Default
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 7;
+```
+
+## Verification
+
+After executing the SQL scripts, verify the configuration:
+
+```sql
+SELECT
+    v.id AS value_id,
+    k.key AS config_key,
+    k.name AS config_name,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value = '*-*,*-*' THEN 'Global Default (Priority: 5)'
+        WHEN l.label_value = '*-*,hive-3.1.3' THEN 'Hive Engine Default (Priority: 4)'
+        ELSE 'Other'
+    END AS config_level,
+    v.create_time,
+    v.update_time
+FROM
+    linkis_ps_configuration_config_value v
+JOIN
+    linkis_ps_configuration_config_key k ON v.config_key_id = k.id
+JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    k.key = 'wds.linkis.engine.running.job.max'
+    AND v.config_label_id IN (5, 7)
+ORDER BY
+    l.label_value;
+```
+
+**Expected Output:**
+
+| value_id | config_key | config_name | config_value | label_value | config_level | create_time | update_time |
+|----------|------------|-------------|--------------|-------------|--------------|-------------|-------------|
+| xxx | wds.linkis.engine.running.job.max | 引擎运行最大任务数 | 30 | *-*,*-* | Global Default (Priority: 5) | ... | ... |
+| xxx | wds.linkis.engine.running.job.max | 引擎运行最大任务数 | 30 | *-*,hive-3.1.3 | Hive Engine Default (Priority: 4) | ... | ... |
+
+## Priority Explanation
+
+When a Hive job is submitted, the effective configuration value follows this 
priority:
+
+```text
+1. Runtime Parameters (highest)         ← User can override via API
+   ↓
+2. User Specific Configuration
+   Example: 'hadoop-IDE,hive-3.1.3'
+   ↓
+3. Creator Default Configuration
+   Example: '*-IDE,*-*'
+   ↓
+4. Engine Default Configuration         ← Created by this script
+   >>> '*-*,hive-3.1.3' = 30
+   ↓
+5. Global Default Configuration (lowest) ← Created by this script
+   >>> '*-*,*-*' = 30
+```
+
+### Example Scenarios
+
+#### Scenario 1: Hive Job Without User Config
+- User: `hadoop`, Creator: `IDE`, Engine: `hive-3.1.3`
+- No user-specific or creator config exists
+- **Effective value**: `30` (from Hive engine default `*-*,hive-3.1.3`)
+
+#### Scenario 2: Spark Job Without User Config
+- User: `hadoop`, Creator: `IDE`, Engine: `spark-3.2.1`
+- No Spark engine default config exists
+- **Effective value**: `30` (falls back to global default `*-*,*-*`)
+
+#### Scenario 3: User Submits with Runtime Parameter
+```json
+{
+  "params": {
+    "wds.linkis.engine.running.job.max": "50"
+  },
+  "labels": {
+    "userCreator": "hadoop-IDE",
+    "engineType": "hive-3.1.3"
+  }
+}
+```
+- **Effective value**: `50` (runtime parameter overrides all configs)
+
+## Configuration for Other Engines
+
+To add the same configuration for other engines (e.g., Spark, Python), follow 
the same pattern:
+
+### 1. Find the Engine's Label ID
+
+```sql
+SELECT id, label_value
+FROM linkis_cg_manager_label
+WHERE label_key = 'combined_userCreator_engineType'
+  AND label_value LIKE '*-*,spark%'  -- For Spark
+ORDER BY label_value;
+```
+
+### 2. Insert Configuration
+
+```sql
+-- Example: Spark 3.2.1 engine default (assuming label_id = 8)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 8, NOW(), NOW());  -- Adjust label_id as needed
+```
+
+## Troubleshooting
+
+### Issue: Insert Fails with Duplicate Key Error
+
+**Cause**: Configuration value already exists for the label.
+
+**Solution**: Use `REPLACE INTO` instead of `INSERT INTO`, or update the 
existing value:
+
+```sql
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 5;
+```
+
+### Issue: Configuration Not Taking Effect
+
+**Possible Causes**:
+1. Configuration cache not invalidated
+2. Higher priority configuration exists
+3. Runtime parameter overriding the config
+
+**Solutions**:
+1. Restart Linkis services to clear cache
+2. Check for user-specific or creator configurations:
+   ```sql
+   SELECT v.*, l.label_value
+   FROM linkis_ps_configuration_config_value v
+   JOIN linkis_cg_manager_label l ON v.config_label_id = l.id
+   WHERE v.config_key_id = 112
+   ORDER BY l.label_value;
+   ```
+3. Check job submission parameters in logs
+
+## Cleanup
+
+To remove the configurations created by this guide:
+
+```sql
+DELETE FROM linkis_ps_configuration_config_value
+WHERE config_key_id = 112
+  AND config_label_id IN (5, 7);
+```
+
+## Related Documentation
+
+- [Linkis Engine Configuration Parameter Priority Analysis](./engine-config-priority.md)
+- [Linkis Configuration Management Guide](https://linkis.apache.org/docs/latest/configuration/)
+
+## References
+
+- **Database Tables**:
+  - `linkis_ps_configuration_config_key`: Configuration key definitions
+  - `linkis_ps_configuration_config_value`: Configuration values
+  - `linkis_cg_manager_label`: Label definitions
+
+- **Code Files**:
+  - `ConfigurationService.scala`: Configuration service implementation
+  - `DefaultEngineCreateService.scala`: Engine creation service
+  - `ConfigMapper.xml`: Database mapper definitions
+
+---
+
+**Last Updated**: 2025-11-23
diff --git a/docs/development/special-logic/engine-reuse-logic.md b/docs/development/special-logic/engine-reuse-logic.md
new file mode 100644
index 00000000000..32fc23fb225
--- /dev/null
+++ b/docs/development/special-logic/engine-reuse-logic.md
@@ -0,0 +1,369 @@
+---
+title: Engine Reuse Logic
+sidebar_position: 1
+---
+
+# Engine Reuse Logic
+
+## Overview
+
+Engine reuse is a critical performance optimization mechanism in Linkis. When 
a user submits a task, the system first attempts to reuse an existing idle 
engine instead of creating a new one. This significantly reduces engine startup 
overhead and improves task response time.
+
+## Core Workflow
+
+```
+┌─────────────────────────────────────────────────────────────────────────┐
+│                        Engine Ask Request                                │
+└─────────────────────────────────────────────────────────────────────────┘
+                                    │
+                                    ▼
+                    ┌───────────────────────────────┐
+                    │ Check EXECUTE_ONCE_KEY Label  │
+                    └───────────────────────────────┘
+                                    │
+                    ┌───────────────┴───────────────┐
+                    │                               │
+                    ▼                               ▼
+           Has EXECUTE_ONCE            No EXECUTE_ONCE
+                    │                               │
+                    ▼                               ▼
+           Create New Engine           ┌─────────────────────┐
+                                       │  Try Engine Reuse   │
+                                       └─────────────────────┘
+                                                   │
+                                                   ▼
+                                       ┌─────────────────────┐
+                                       │ Build Label Filter  │
+                                       │  - EngineNodeLabel  │
+                                       │  - UserCreatorLabel │
+                                       │  - EngineTypeLabel  │
+                                       └─────────────────────┘
+                                                   │
+                                                   ▼
+                                       ┌─────────────────────┐
+                                       │ Check Exclusion     │
+                                       │ Label               │
+                                       └─────────────────────┘
+                                                   │
+                              ┌────────────────────┴────────────────────┐
+                              │                                         │
+                              ▼                                         ▼
+                    Wildcard (*) Exclusion          Specific Instance Exclusion
+                              │                                         │
+                              ▼                                         ▼
+                    Return null (No Reuse)           Remove Excluded Instances
+                                                                       │
+                                                                       ▼
+                                                       ┌─────────────────────┐
+                                                       │ Apply Label Choosers│
+                                                       │ (Multi-User Engine) │
+                                                       └─────────────────────┘
+                                                                       │
+                                                                       ▼
+                                                       ┌─────────────────────┐
+                                                       │ Get Engine Instances│
+                                                       │ from Label Service  │
+                                                       └─────────────────────┘
+                                                                       │
+                                                                       ▼
+                                                       ┌─────────────────────┐
+                                                       │ Optional Filters:   │
+                                                       │ - Template Name     │
+                                                       │ - Resource Match    │
+                                                       │ - Python Version    │
+                                                       └─────────────────────┘
+                                                                       │
+                                                                       ▼
+                                                       ┌─────────────────────┐
+                                                       │   Node Selector     │
+                                                       │  (Choose Best Node) │
+                                                       └─────────────────────┘
+                                                                       │
+                                                                       ▼
+                                                       ┌─────────────────────┐
+                                                       │ Try Lock Engine     │
+                                                       └─────────────────────┘
+                                                                       │
+                                              ┌────────────────────────┴────────────────────────┐
+                                              │                                                 │
+                                              ▼                                                 ▼
+                                        Lock Success                                      Lock Failed
+                                              │                                                 │
+                                              ▼                                                 ▼
+                                        Return Engine                              Retry (up to limit)
+                                                                                                │
+                                                                                                ▼
+                                                                                  ┌─────────────────────┐
+                                                                                  │ Exceeded Retry Limit│
+                                                                                  │ or Timeout          │
+                                                                                  └─────────────────────┘
+                                                                                                │
+                                                                                                ▼
+                                                                                  Throw LinkisRetryException
+```
+
+## Key Components
+
+### Core Classes
+
+| Class | Location | Description |
+|-------|----------|-------------|
+| `EngineReuseService` | `linkis-application-manager/.../service/engine/EngineReuseService.scala` | Service interface for engine reuse |
+| `DefaultEngineReuseService` | `linkis-application-manager/.../service/engine/DefaultEngineReuseService.scala` | Core implementation of reuse logic |
+| `EngineReuseLabelChooser` | `linkis-application-manager/.../label/EngineReuseLabelChooser.java` | Interface for label selection during reuse |
+| `MultiUserEngineReuseLabelChooser` | `linkis-application-manager/.../label/MultiUserEngineReuseLabelChooser.java` | Handles multi-user engine label selection |
+| `ReuseExclusionLabel` | `linkis-label-common/.../label/entity/engine/ReuseExclusionLabel.java` | Label to exclude specific instances from reuse |
+| `EngineReuseRequest` | `linkis-manager-common/.../protocol/engine/EngineReuseRequest.java` | Request protocol for engine reuse |
+| `DefaultEngineNodeManager` | `linkis-application-manager/.../manager/DefaultEngineNodeManager.java` | Manages engine node operations including locking |
+
+### Source Code Locations
+
+```
+linkis-computation-governance/linkis-manager/
+├── linkis-application-manager/src/main/
+│   ├── scala/org/apache/linkis/manager/am/service/engine/
+│   │   ├── EngineReuseService.scala              # Interface
+│   │   ├── DefaultEngineReuseService.scala       # Core implementation
+│   │   └── DefaultEngineAskEngineService.scala   # Caller service
+│   └── java/org/apache/linkis/manager/am/
+│       ├── label/
+│       │   ├── EngineReuseLabelChooser.java
+│       │   └── MultiUserEngineReuseLabelChooser.java
+│       ├── manager/
+│       │   └── DefaultEngineNodeManager.java
+│       └── conf/
+│           └── AMConfiguration.java              # Configuration
+├── linkis-manager-common/src/main/java/.../protocol/engine/
+│   └── EngineReuseRequest.java
+└── linkis-label-common/src/main/java/.../label/entity/engine/
+    └── ReuseExclusionLabel.java
+```
+
+## Reuse Conditions
+
+### 1. Basic Trigger Condition
+
+Engine reuse is attempted when the request does NOT contain the 
`EXECUTE_ONCE_KEY` label:
+
+```scala
+if (!engineAskRequest.getLabels.containsKey(LabelKeyConstant.EXECUTE_ONCE_KEY)) {
+  // Attempt engine reuse
+  val reuseNode = engineReuseService.reuseEngine(engineReuseRequest, sender)
+}
+```
+
+### 2. Label Matching
+
+The system filters available engines based on:
+
+- **EngineNodeLabel**: Matches engine node type
+- **UserCreatorLabel**: Matches user and creator application
+- **EngineTypeLabel**: Matches engine type (spark, hive, python, etc.)
+- **AliasServiceInstanceLabel**: Filters by service instance alias
+
+### 3. Exclusion Rules
+
+#### ReuseExclusionLabel
+
+This label allows excluding specific engine instances from reuse:
+
+```java
+// Exclude all engines (wildcard)
+ReuseExclusionLabel label = new ReuseExclusionLabel();
+label.setInstances("*");
+
+// Exclude specific instances
+label.setInstances("instance1;instance2;instance3");
+```
+
+When wildcard `*` is set, no engine will be reused for that request.
+
+### 4. Engine Status Check
+
+An engine can only be reused if:
+
+- **Status is Unlock**: The engine is not currently locked by another task
+- **Status is Available**: The engine is in a healthy, available state
+
+```java
+@Override
+public EngineNode reuseEngine(EngineNode engineNode) {
+  EngineNode node = getEngineNodeInfo(engineNode);
+  if (node == null || !NodeStatus.isAvailable(node.getNodeStatus())) {
+    return null;
+  }
+  if (!NodeStatus.isLocked(node.getNodeStatus())) {
+    Optional<String> lockStr = engineLocker.lockEngine(node, timeout);
+    if (!lockStr.isPresent()) {
+      throw new LinkisRetryException(...);
+    }
+    node.setLock(lockStr.get());
+    return node;
+  }
+  return null;
+}
+```
+
+## Multi-User Engine Support
+
+Some engine types support multi-user sharing, where engines can be reused 
across different users:
+
+### Supported Multi-User Engine Types
+
+```
+es, presto, io_file, appconn, openlookeng, trino, jobserver, nebula, hbase, 
doris
+```
+
+### How It Works
+
+For multi-user engines, the `UserCreatorLabel` is modified to use an admin 
user:
+
+```java
+public List<Label<?>> chooseLabels(List<Label<?>> labelList) {
+  // Check if engine type is multi-user
+  if (isMultiUserEngine(engineTypeLabel)) {
+    String userAdmin = getAdminUser(engineTypeLabel.getEngineType());
+    userCreatorLabel.setUser(userAdmin);
+  }
+  return labelList;
+}
+
+This allows different users to share the same engine instance.
+
+## Optional Filtering Rules
+
+### Template Name Matching
+
+When enabled, only engines with matching template names will be reused:
+
+```properties
+linkis.ec.reuse.with.template.rule.enable=true
+```
+
+The system checks the `ec.resource.name` property in engine parameters.
+
+### Resource Matching
+
+When enabled, the system ensures the engine has sufficient resources:
+
+```properties
+linkis.ec.reuse.with.resource.rule.enable=true
+linkis.ec.reuse.with.resource.with.ecs=spark,hive,shell,python
+```
+
+Checks performed:
+1. Available/locked resources >= requested resources
+2. Python version compatibility (for Python/PySpark engines)
+
+## Caching Strategy
+
+The system supports caching engine instances to improve reuse performance:
+
+### Cache Configuration
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| `wds.linkis.manager.am.engine.reuse.enable.cache` | `false` | Enable instance caching |
+| `wds.linkis.manager.am.engine.reuse.cache.expire.time` | `5s` | Cache expiration time |
+| `wds.linkis.manager.am.engine.reuse.cache.max.size` | `1000` | Maximum cache entries |
+| `wds.linkis.manager.am.engine.reuse.cache.support.engines` | `shell` | Engine types supporting cache |
+
+### Cache Key Format
+
+```scala
+val cacheKey = userCreatorLabel.getStringValue + "_" + engineTypeLabel.getEngineType
+// Example: "hadoop-IDE_spark"
+```
+
+## Configuration Parameters
+
+### Core Reuse Parameters
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| `wds.linkis.manager.am.engine.reuse.max.time` | `5m` | Maximum wait time for reuse |
+| `wds.linkis.manager.am.engine.reuse.count.limit` | `2` | Maximum reuse retry count |
+| `wds.linkis.manager.am.engine.locker.max.time` | `5m` | Maximum engine lock time |
+
+### Multi-User Engine Parameters
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| `wds.linkis.multi.user.engine.types` | `es,presto,...` | Multi-user engine type list |
+| `wds.linkis.multi.user.engine.user` | JSON config | Admin user mapping for each engine type |
+
+### Optional Filter Parameters
+
+| Parameter | Default | Description |
+|-----------|---------|-------------|
+| `linkis.ec.reuse.with.template.rule.enable` | `false` | Enable template name matching |
+| `linkis.ec.reuse.with.resource.rule.enable` | `false` | Enable resource matching |
+| `linkis.ec.reuse.with.resource.with.ecs` | `spark,hive,shell,python` | Engine types for resource matching |
+
+## Retry and Timeout Handling
+
+### Retry Logic
+
+```scala
+val reuseLimit = if (engineReuseRequest.getReuseCount <= 0)
+                   AMConfiguration.ENGINE_REUSE_COUNT_LIMIT  // Default: 2
+                 else engineReuseRequest.getReuseCount
+
+def selectEngineToReuse: Boolean = {
+  if (count > reuseLimit) {
+    throw new LinkisRetryException(...)
+  }
+  // Try to reuse selected engine
+  engine = Utils.tryCatch(getEngineNodeManager.reuseEngine(engineNode)) { t =>
+    // On failure, remove from candidates and retry
+    count = count + 1
+    engineScoreList = engineScoreList.filter(!_.equals(choseNode.get))
+    null
+  }
+  engine != null
+}
+```
+
+### Timeout Handling
+
+- If reuse times out, the system will asynchronously stop the problematic 
engine
+- After timeout, control returns to allow engine creation as fallback
+
+```scala
+if (ExceptionUtils.getRootCause(t).isInstanceOf[TimeoutException]) {
+  val stopEngineRequest = new EngineStopRequest(engineNode.getServiceInstance, ...)
+  engineStopService.asyncStopEngine(stopEngineRequest)
+}
+```
+
+## Best Practices
+
+1. **Enable caching for frequently used engines**: For engine types like 
`shell` that are used frequently with short execution times, enable caching to 
improve reuse efficiency.
+
+2. **Configure appropriate timeout values**: Set `engine.reuse.max.time` based 
on your cluster's network latency and engine response times.
+
+3. **Use ReuseExclusionLabel when needed**: If certain tasks require isolated 
engines, use `ReuseExclusionLabel` to prevent unwanted reuse.
+
+4. **Monitor reuse metrics**: Track engine reuse success rates to identify 
potential issues with specific engine types or configurations.
+
+5. **Consider multi-user engines**: For read-only query engines like Presto or 
Trino, consider configuring them as multi-user engines to maximize resource 
utilization.
+
+## Troubleshooting
+
+### Common Issues
+
+1. **Engine reuse always fails**
+   - Check if engines are in `Unlock` status
+   - Verify label matching (user, creator, engine type); see the query after this list
+   - Check for `ReuseExclusionLabel` in requests
+
+2. **Reuse timeout errors**
+   - Increase `wds.linkis.manager.am.engine.reuse.max.time`
+   - Check network connectivity between manager and engine
+   - Review engine logs for lock acquisition issues
+
+3. **Wrong engine reused**
+   - Verify label configuration
+   - Check if template name matching is enabled when needed
+   - Review multi-user engine configuration
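+
+For the label-matching check in issue 1, the instances currently attached to a combined label can be listed from the database. A minimal sketch, assuming the standard Linkis DDL in which `linkis_cg_manager_label_service_instance` maps label ids to engine instances; verify the table and column names in your deployment before relying on it:
+
+```sql
+-- List engine instances registered under a combined label
+SELECT l.label_value, si.service_instance
+FROM linkis_cg_manager_label l
+JOIN linkis_cg_manager_label_service_instance si ON si.label_id = l.id
+WHERE l.label_key = 'combined_userCreator_engineType'
+  AND l.label_value = 'hadoop-IDE,spark-3.2.1';  -- adjust to your user/creator/engine
+```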
diff --git a/docs/development/special-logic/insert-engine-max-job-config-simple.sql b/docs/development/special-logic/insert-engine-max-job-config-simple.sql
new file mode 100644
index 00000000000..db1a39ec787
--- /dev/null
+++ b/docs/development/special-logic/insert-engine-max-job-config-simple.sql
@@ -0,0 +1,42 @@
+-- =====================================================================
+-- Linkis Engine Max Running Job Configuration Insert Script (Simplified)
+-- =====================================================================
+-- Configuration Key: wds.linkis.engine.running.job.max (Engine Max Running Jobs)
+-- Configuration Value: 30
+-- =====================================================================
+
+-- Insert global default configuration (*-*,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW());
+
+-- Insert Hive engine default configuration (*-*,hive-3.1.3)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 7, NOW(), NOW());
+
+-- Verify the inserted rows
+SELECT
+    v.id AS value_id,
+    k.key AS config_key,
+    k.name AS config_name,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value = '*-*,*-*' THEN 'Global Default'
+        WHEN l.label_value = '*-*,hive-3.1.3' THEN 'Hive Engine Default'
+        ELSE 'Other'
+    END AS config_level
+FROM
+    linkis_ps_configuration_config_value v
+JOIN
+    linkis_ps_configuration_config_key k ON v.config_key_id = k.id
+JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    k.key = 'wds.linkis.engine.running.job.max'
+    AND v.config_label_id IN (5, 7)
+ORDER BY
+    l.label_value;
diff --git a/docs/development/special-logic/insert-engine-max-job-config.sql b/docs/development/special-logic/insert-engine-max-job-config.sql
new file mode 100644
index 00000000000..7d5dac5b9a7
--- /dev/null
+++ b/docs/development/special-logic/insert-engine-max-job-config.sql
@@ -0,0 +1,205 @@
+-- =====================================================================
+-- Linkis Configuration: Insert Engine Max Running Job Configuration
+-- =====================================================================
+-- Description: Insert configuration for maximum running jobs for engines
+-- Configuration Key: wds.linkis.engine.running.job.max
+-- Default Value: 30 (for both global and hive engine)
+-- =====================================================================
+
+-- Step 1: Check if configuration key exists
+-- The configuration key 'wds.linkis.engine.running.job.max' should already exist.
+-- If not, you need to create it first in the linkis_ps_configuration_config_key table.
+
+SELECT
+    id,
+    `key`,
+    name,
+    engine_conn_type,
+    default_value
+FROM
+    linkis_ps_configuration_config_key
+WHERE
+    `key` = 'wds.linkis.engine.running.job.max';
+
+-- Expected result: id = 112 (may vary in your environment)
+
+-- Step 2: Get label IDs for global and hive engine configurations
+SELECT
+    id,
+    label_key,
+    label_value,
+    CASE
+        WHEN label_value = '*-*,*-*' THEN 'Global Default'
+        WHEN label_value = '*-*,hive-3.1.3' THEN 'Hive Engine Default'
+        ELSE 'Other'
+    END AS label_type
+FROM
+    linkis_cg_manager_label
+WHERE
+    label_key = 'combined_userCreator_engineType'
+    AND label_value IN ('*-*,*-*', '*-*,hive-3.1.3')
+ORDER BY
+    label_value;
+
+-- Expected results:
+-- id=5,  label_value='*-*,*-*'         (Global Default)
+-- id=7,  label_value='*-*,hive-3.1.3' (Hive Engine Default)
+
+-- =====================================================================
+-- Step 3: Insert Configuration Values
+-- =====================================================================
+
+-- 3.1 Insert Global Default Configuration
+-- Sets maximum running jobs to 30 for all engines (global default)
+INSERT INTO linkis_ps_configuration_config_value
+(
+    config_key_id,      -- References config_key.id (112)
+    config_value,       -- Value: 30
+    config_label_id,    -- References label.id (5 for global '*-*,*-*')
+    create_time,
+    update_time
+)
+VALUES
+(
+    112,                -- config_key_id for 'wds.linkis.engine.running.job.max'
+    '30',               -- Maximum 30 concurrent jobs (global default)
+    5,                  -- label_id for '*-*,*-*' (global default)
+    NOW(),
+    NOW()
+);
+
+-- 3.2 Insert Hive Engine Default Configuration
+-- Sets maximum running jobs to 30 specifically for Hive engine
+INSERT INTO linkis_ps_configuration_config_value
+(
+    config_key_id,      -- References config_key.id (112)
+    config_value,       -- Value: 30
+    config_label_id,    -- References label.id (7 for hive '*-*,hive-3.1.3')
+    create_time,
+    update_time
+)
+VALUES
+(
+    112,                -- config_key_id for 'wds.linkis.engine.running.job.max'
+    '30',               -- Maximum 30 concurrent jobs (Hive default)
+    7,                  -- label_id for '*-*,hive-3.1.3' (Hive engine default)
+    NOW(),
+    NOW()
+);
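+
+-- Optional sketch: resolve the ids at insert time instead of hardcoding
+-- 112 / 5 / 7. This assumes exactly one key row and the two label rows
+-- exist in your environment; verify with the Step 1 and Step 2 queries
+-- before running it.
+/*
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+SELECT k.id, '30', l.id, NOW(), NOW()
+FROM linkis_ps_configuration_config_key k
+JOIN linkis_cg_manager_label l
+    ON l.label_key = 'combined_userCreator_engineType'
+    AND l.label_value IN ('*-*,*-*', '*-*,hive-3.1.3')
+WHERE k.`key` = 'wds.linkis.engine.running.job.max';
+*/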
+
+-- =====================================================================
+-- Step 4: Verify Insertions
+-- =====================================================================
+
+-- Verify the inserted configuration values
+SELECT
+    v.id AS value_id,
+    k.key AS config_key,
+    k.name AS config_name,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value = '*-*,*-*' THEN 'Global Default (Priority: 5 - Lowest)'
+        WHEN l.label_value = '*-*,hive-3.1.3' THEN 'Hive Engine Default (Priority: 4)'
+        ELSE 'Other'
+    END AS config_level,
+    v.create_time,
+    v.update_time
+FROM
+    linkis_ps_configuration_config_value v
+JOIN
+    linkis_ps_configuration_config_key k ON v.config_key_id = k.id
+JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    k.key = 'wds.linkis.engine.running.job.max'
+    AND v.config_label_id IN (5, 7)
+ORDER BY
+    l.label_value;
+
+-- =====================================================================
+-- Priority Explanation
+-- =====================================================================
+/*
+Configuration Priority (from highest to lowest):
+
+1. Runtime Parameters                    [Priority: 1 - Highest]
+   User can override via API params when submitting job
+
+2. User Specific Configuration           [Priority: 2]
+   Example: 'hadoop-IDE,hive-3.1.3'
+   (Not created in this script)
+
+3. Creator Default Configuration         [Priority: 3]
+   Example: '*-IDE,*-*'
+   (Not created in this script)
+
+4. Engine Default Configuration          [Priority: 4]
+   >>> '*-*,hive-3.1.3' = 30  (CREATED IN THIS SCRIPT)
+   Applied to all users using Hive engine
+
+5. Global Default Configuration          [Priority: 5 - Lowest]
+   >>> '*-*,*-*' = 30  (CREATED IN THIS SCRIPT)
+   Applied to all users, all engines
+
+When a Hive job is submitted:
+- If user provides runtime param: Use runtime param value
+- Else if user-specific config exists: Use user config value
+- Else if creator config exists: Use creator config value
+- Else if engine default exists: Use Hive engine default (30) ← Created in this script
+- Else: Fall back to global default (30) ← Created in this script
+
+For other engines (non-Hive):
+- Will fallback to global default (30)
+*/
+
+-- =====================================================================
+-- Alternative: Use REPLACE INTO for Idempotent Execution
+-- =====================================================================
+-- If you want to make the script idempotent (can run multiple times),
+-- use REPLACE INTO instead of INSERT INTO:
+
+/*
+REPLACE INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW()),  -- Global default
+(112, '30', 7, NOW(), NOW());  -- Hive engine default
+*/
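+
+-- Another idempotent variant: INSERT ... ON DUPLICATE KEY UPDATE. It relies
+-- on the unique index on (config_key_id, config_label_id) and, unlike
+-- REPLACE INTO, updates the row in place instead of deleting and
+-- re-inserting it:
+/*
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW()),  -- Global default
+(112, '30', 7, NOW(), NOW())   -- Hive engine default
+ON DUPLICATE KEY UPDATE
+    config_value = VALUES(config_value),
+    update_time = NOW();
+*/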
+
+-- =====================================================================
+-- Update Existing Configuration (if needed)
+-- =====================================================================
+-- If the configuration already exists and you want to update it:
+
+/*
+UPDATE linkis_ps_configuration_config_value
+SET
+    config_value = '30',
+    update_time = NOW()
+WHERE
+    config_key_id = 112
+    AND config_label_id = 5;  -- Global default
+
+UPDATE linkis_ps_configuration_config_value
+SET
+    config_value = '30',
+    update_time = NOW()
+WHERE
+    config_key_id = 112
+    AND config_label_id = 7;  -- Hive engine default
+*/
+
+-- =====================================================================
+-- Cleanup (if needed to remove these configurations)
+-- =====================================================================
+/*
+DELETE FROM linkis_ps_configuration_config_value
+WHERE
+    config_key_id = 112
+    AND config_label_id IN (5, 7);
+*/
+
+-- =====================================================================
+-- End of Script
+-- =====================================================================
diff --git a/docs/development/special-logic/overview.md 
b/docs/development/special-logic/overview.md
new file mode 100644
index 00000000000..9b2367cbb6b
--- /dev/null
+++ b/docs/development/special-logic/overview.md
@@ -0,0 +1,38 @@
+---
+title: Special Logic Overview
+sidebar_position: 0
+---
+
+# Special Logic Overview
+
+This section documents the special logic implementations in Linkis that are 
important for developers to understand when contributing to or debugging the 
project. These are core mechanisms that handle complex scenarios in the Linkis 
architecture.
+
+## Document List
+
+| Document | Description |
+|----------|-------------|
+| [Engine Configuration Parameter Priority Analysis](./engine-config-priority.md) | Analyzes how configuration parameters take effect during task submission and engine creation, including the priority mechanism and the related database tables |
+| [Engine Max Job Configuration Guide](./engine-max-job-config-guide.md) | Shows how to configure the maximum number of concurrent jobs per engine via `wds.linkis.engine.running.job.max` |
+| [Engine Reuse Logic](./engine-reuse-logic.md) | Explains the engine reuse mechanism during engine startup, including matching rules, filtering conditions, and configuration parameters |
+
+## Purpose
+
+Understanding these special logic implementations is crucial for:
+
+1. **Debugging Issues**: When troubleshooting problems related to engine 
management, task scheduling, or resource allocation
+2. **Performance Optimization**: Understanding how these mechanisms work helps 
in tuning the system for better performance
+3. **Feature Development**: When developing new features, understanding 
existing special logic helps avoid conflicts and ensures proper integration
+4. **Code Review**: Reviewers can better evaluate changes that affect these 
critical code paths
+
+## How to Contribute
+
+If you discover other important special logic in Linkis that should be 
documented:
+
+1. Create a new markdown file in this directory following the naming 
convention: `<feature-name>-logic.md`
+2. Use the existing documents as templates for structure and format
+3. Include:
+   - Overview of the logic
+   - Core workflow/flowchart
+   - Key classes and code locations
+   - Configuration parameters
+   - Examples if applicable
+4. Update this overview page to include your new document
+5. Submit a PR following the [contribution 
guidelines](/community/how-to-contribute)
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/_category_.json
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/_category_.json
new file mode 100644
index 00000000000..961d117f028
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/_category_.json
@@ -0,0 +1,4 @@
+{
+  "label": "特殊逻辑说明",
+  "position": 12.0
+}
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-config-priority.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-config-priority.md
new file mode 100644
index 00000000000..12b0a4b9254
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-config-priority.md
@@ -0,0 +1,674 @@
+---
+title: Linkis 引擎配置参数优先级分析
+sidebar_position: 1
+---
+
+# Linkis 引擎创建时配置参数生效逻辑与优先级分析
+
+## 概述
+
+本文档详细分析 Linkis 项目在任务提交及引擎创建流程中,配置参数的生效逻辑、优先级机制以及数据库表关联关系。
+
+## 一、配置参数级别定义
+
+Linkis 配置参数分为以下 4 个级别:
+
+### 1.1 全局默认配置 (Global Default Configuration)
+- **标签**: `*-*,*-*`
+- **说明**: 适用于所有用户、所有应用、所有引擎的全局默认配置
+- **优先级**: 最低
+- **示例**: 系统级别的资源限制、队列配置等
+
+### 1.2 引擎默认配置 (Engine Default Configuration)
+- **标签**: `*-*,{engineType}-{version}`
+- **说明**: 特定引擎类型的默认配置,适用于所有用户
+- **优先级**: 低
+- **示例**: `*-*,spark-3.2.1` 表示 Spark 3.2.1 引擎的默认配置
+
+### 1.3 创建者默认配置 (Creator Default Configuration)
+- **标签**: `*-{creator},*-*` 或 `{user}-*,*-*`
+- **说明**: 特定创建者(如 IDE、调度系统)或特定用户的默认配置
+- **优先级**: 中
+- **示例**: `*-IDE,*-*` 表示所有通过 IDE 提交任务的默认配置
+
+### 1.4 用户特定配置 (User Configuration)
+- **标签**: `{user}-{creator},{engineType}-{version}`
+- **说明**: 特定用户、特定创建者、特定引擎的个性化配置
+- **优先级**: 高
+- **示例**: `hadoop-IDE,spark-3.2.1` 表示 hadoop 用户通过 IDE 使用 Spark 3.2.1 的配置
+
+### 1.5 任务提交参数 (Runtime Parameters)
+- **来源**: 用户提交任务时在 API 中传递的 `params` 参数
+- **优先级**: **最高**
+- **说明**: 运行时动态指定的参数,会覆盖所有配置级别
+
+## 二、配置参数优先级
+
+### 2.1 优先级排序
+
+```text
+任务提交参数 (Runtime Parameters)           [优先级: 1 - 最高]
+    ↓
+用户特定配置 (User Configuration)           [优先级: 2]
+    ↓
+创建者默认配置 (Creator Default)            [优先级: 3]
+    ↓
+引擎默认配置 (Engine Default)               [优先级: 4]
+    ↓
+全局默认配置 (Global Default)               [优先级: 5 - 最低]
+```
+
+### 2.2 优先级生效规则
+
+根据代码分析 (`ConfigurationService.scala:289-475`):
+
+1. **级联查询**: 系统会依次查询用户配置、创建者配置、用户通用配置、引擎配置、全局配置
+2. **值覆盖**: 高优先级的配置值会覆盖低优先级的同名配置
+3. **参数合并**: 不同优先级中不重复的配置项会被合并
+4. **运行时优先**: 用户提交任务时传递的参数具有最高优先级,不会被任何配置覆盖
+
+### 2.3 核心代码逻辑
+
+#### 配置级联查询 (ConfigurationService.scala)
+
+```scala
+// 位置: ConfigurationService.scala:366-419
+def getConfigsByLabelList(
+    labelList: java.util.List[Label[_]],
+    useDefaultConfig: Boolean = true,
+    language: String
+): (util.List[ConfigKeyValue], util.List[ConfigKeyValue]) = {
+
+    // 1. 获取用户特定配置 (user-creator,engineType-version)
+    val configs: util.List[ConfigKeyValue] = getConfigByLabelId(label.getId, language)
+
+    // 2. 获取创建者默认配置 (*-creator,*-*)
+    val defaultCreatorConfigs = getConfigByLabelId(defaultCreatorLabel.getId, language)
+
+    // 3. 获取用户通用默认配置 (user-*,*-*)
+    val defaultUserConfigs = getConfigByLabelId(defaultUserLabel.getId, language)
+
+    // 4. 获取引擎通用默认配置 (*-*,engineType-version)
+    val defaultEngineConfigs = getConfigByLabelId(defaultEngineLabel.getId, language)
+
+    // 5. 配置合并:创建者配置 > 引擎配置
+    if (Configuration.USE_CREATOR_DEFAULE_VALUE && userCreatorLabel.getCreator != "*") {
+      replaceCreatorToEngine(defaultCreatorConfigs, defaultEngineConfigs)
+    }
+
+    // 6. 配置合并:用户配置 > 引擎配置
+    if (Configuration.USE_USER_DEFAULE_VALUE && userCreatorLabel.getUser != "*") {
+      replaceCreatorToEngine(defaultUserConfigs, defaultEngineConfigs)
+    }
+
+    return (configs, defaultEngineConfigs)
+}
+```
+
+#### 配置树构建 (ConfigurationService.scala)
+
+```scala
+// 位置: ConfigurationService.scala:294-333
+// 优先级: configs > defaultConfigs
+def buildTreeResult(
+    configs: util.List[ConfigKeyValue],
+    defaultConfigs: util.List[ConfigKeyValue]
+): util.ArrayList[ConfigTree] = {
+
+    // 遍历默认配置
+    defaultConfigs.asScala.foreach(defaultConfig => {
+        defaultConfig.setIsUserDefined(false)
+
+        // 用户配置覆盖默认配置
+        configs.asScala.foreach(config => {
+          if (config.getKey.equals(defaultConfig.getKey)) {
+            defaultConfig.setConfigValue(config.getConfigValue)  // 值覆盖
+            defaultConfig.setIsUserDefined(true)
+          }
+        })
+    })
+
+    return resultConfigsTree
+}
+```
+
+#### 引擎创建时参数合并 (DefaultEngineCreateService.scala)
+
+```scala
+// 位置: DefaultEngineCreateService.scala:371-397
+def generateResource(
+    props: util.Map[String, String],           // 用户提交的参数
+    user: String,
+    labelList: util.List[Label[_]],
+    timeout: Long
+): NodeResource = {
+
+    // 从配置服务获取控制台配置(已完成多级合并)
+    val configProp = engineConnConfigurationService.getConsoleConfiguration(labelList)
+
+    // 关键:用户提交参数优先级最高
+    if (null != configProp && configProp.asScala.nonEmpty) {
+      configProp.asScala.foreach(keyValue => {
+        if (!props.containsKey(keyValue._1)) {  // 只在用户未指定时才使用配置
+          props.put(keyValue._1, keyValue._2)
+        }
+      })
+    }
+
+    // 继续处理跨队列配置等特殊逻辑
+    // ...
+}
+```
+
+**关键点**: 第 `380` 行的判断 `if (!props.containsKey(keyValue._1))` 
确保了**用户提交的参数永远不会被配置服务的值覆盖**。
+
+## 三、数据库表结构与关联关系
+
+### 3.1 核心配置表
+
+#### 表 1: `linkis_ps_configuration_config_key` (配置键定义表)
+
+| 字段名 | 类型 | 说明 |
+|--------|------|------|
+| id | bigint | 主键,配置键ID |
+| key | varchar(50) | 配置参数名 (如 `wds.linkis.rm.yarnqueue`) |
+| name | varchar(50) | 配置显示名称 |
+| description | varchar(200) | 配置描述 (中文) |
+| en_name | varchar(100) | 英文显示名称 |
+| en_description | varchar(200) | 英文描述 |
+| engine_conn_type | varchar(50) | 引擎类型 (如 `spark`, `hive`) |
+| default_value | varchar(200) | 默认值 |
+| validate_type | varchar(50) | 验证类型 (`None`, `NumInterval`, `Regex` 等) |
+| validate_range | varchar(150) | 验证范围 |
+| is_hidden | tinyint(1) | 是否隐藏 |
+| is_advanced | tinyint(1) | 是否高级配置 |
+| level | tinyint(1) | 配置级别 |
+| treeName | varchar(20) | 配置分类树名称 |
+| boundary_type | tinyint | 边界类型 |
+| template_required | tinyint(1) | 模板是否必填 |
+
+**作用**: 定义所有可用的配置项元数据。
+
+#### 表 2: `linkis_ps_configuration_config_value` (配置值存储表)
+
+| 字段名 | 类型 | 说明 |
+|--------|------|------|
+| id | bigint | 主键,配置值ID |
+| config_key_id | bigint | 外键,关联 `config_key.id` |
+| config_value | varchar(500) | 配置的实际值 |
+| config_label_id | int | 外键,关联 `cg_manager_label.id` |
+| create_time | datetime | 创建时间 |
+| update_time | datetime | 更新时间 |
+
+**作用**: 存储不同标签下的配置值。
+**唯一索引**: `(config_key_id, config_label_id)` 确保同一标签下同一配置键只有一个值。
+
+#### 表 3: `linkis_cg_manager_label` (标签表)
+
+| 字段名 | 类型 | 说明 |
+|--------|------|------|
+| id | int | 主键,标签ID |
+| label_key | varchar(32) | 标签键 (如 `combined_userCreator_engineType`) |
+| label_value | varchar(128) | 标签值 (如 `hadoop-IDE,spark-3.2.1`) |
+| label_feature | varchar(16) | 标签特性 (`OPTIONAL`, `CORE` 等) |
+| label_value_size | int | 标签值维度数量 |
+| create_time | datetime | 创建时间 |
+| update_time | datetime | 更新时间 |
+
+**作用**: 定义多维度标签,支持用户、创建者、引擎类型、版本等组合。
+
+**标签值示例**:
+- `*-*,*-*`: 全局默认
+- `*-*,spark-3.2.1`: Spark 引擎默认配置
+- `*-IDE,*-*`: IDE 创建者默认配置
+- `hadoop-IDE,spark-3.2.1`: hadoop 用户通过 IDE 使用 Spark 的配置
+
+#### 表 4: `linkis_ps_configuration_key_limit_for_user` (用户配置限制表)
+
+| 字段名 | 类型 | 说明 |
+|--------|------|------|
+| id | bigint | 主键 |
+| user_name | varchar(50) | 用户名 |
+| combined_label_value | varchar(128) | 组合标签值 |
+| key_id | bigint | 配置键ID |
+| config_value | varchar(200) | 配置值 |
+| max_value | varchar(50) | 最大值限制 |
+| min_value | varchar(50) | 最小值限制 |
+| is_valid | varchar(2) | 是否生效 (`Y`/`N`) |
+| create_by | varchar(50) | 创建人 |
+| create_time | datetime | 创建时间 |
+| update_by | varchar(50) | 更新人 |
+| update_time | datetime | 更新时间 |
+
+**作用**: 为特定用户设置配置值的上下限,防止用户配置超出管理员允许的范围。
+
+### 3.2 表关联关系
+
+```text
+┌─────────────────────────────────────────────────────────────────────┐
+│                         配置参数关联关系图                            │
+└─────────────────────────────────────────────────────────────────────┘
+
+┌──────────────────────────────┐
+│  linkis_cg_manager_label     │
+│  (标签表)                     │
+├──────────────────────────────┤
+│  id (PK)                     │◄──────────┐
+│  label_key                   │           │
+│  label_value                 │           │ N:1
+│  - *-*,*-*                   │           │
+│  - *-*,spark-3.2.1           │           │
+│  - hadoop-IDE,spark-3.2.1    │           │
+└──────────────────────────────┘           │
+                                           │
+        ┌──────────────────────────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └──┤  linkis_ps_configuration_config_value  │
+           │  (配置值表)                             │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  config_key_id (FK) ──────────┐        │
+           │  config_value                 │        │
+           │  config_label_id (FK)         │        │
+           └────────────────────────────────────────┘
+                                           │
+                                           │ N:1
+                                           │
+        ┌──────────────────────────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └─►│  linkis_ps_configuration_config_key    │
+           │  (配置键定义表)                         │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  key                                   │
+           │  name                                  │
+           │  description                           │
+           │  engine_conn_type                      │
+           │  default_value                         │
+           │  validate_type                         │
+           │  validate_range                        │
+           │  level                                 │
+           └────────────────────────────────────────┘
+                      │
+                      │ 1:N
+                      │
+        ┌─────────────┘
+        │
+        │  ┌────────────────────────────────────────┐
+        └─►│ linkis_ps_configuration_key_limit_     │
+           │ for_user (用户配置限制表)               │
+           ├────────────────────────────────────────┤
+           │  id (PK)                               │
+           │  user_name                             │
+           │  combined_label_value                  │
+           │  key_id (FK)                           │
+           │  max_value                             │
+           │  min_value                             │
+           └────────────────────────────────────────┘
+```
+
+### 3.3 SQL 查询示例
+
+#### 查询用户的完整配置 (含优先级合并)
+
+```sql
+-- 查询 hadoop 用户通过 IDE 使用 Spark 3.2.1 的配置
+-- 结果会包含用户配置、创建者配置、引擎配置的合并结果
+
+SELECT
+    k.key,
+    k.name,
+    k.engine_conn_type,
+    k.default_value,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value LIKE 'hadoop-IDE,spark-3.2.1' THEN '用户配置'
+        WHEN l.label_value LIKE '*-IDE,*-*' THEN '创建者默认'
+        WHEN l.label_value LIKE '*-*,spark-3.2.1' THEN '引擎默认'
+        WHEN l.label_value LIKE '*-*,*-*' THEN '全局默认'
+        ELSE '其他'
+    END AS config_level
+FROM
+    linkis_ps_configuration_config_key k
+LEFT JOIN
+    linkis_ps_configuration_config_value v ON k.id = v.config_key_id
+LEFT JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    l.label_value IN (
+        'hadoop-IDE,spark-3.2.1',    -- 用户配置
+        '*-IDE,*-*',                 -- 创建者默认
+        '*-*,spark-3.2.1',           -- 引擎默认
+        '*-*,*-*'                    -- 全局默认
+    )
+ORDER BY
+    k.key,
+    FIELD(l.label_value, 'hadoop-IDE,spark-3.2.1', '*-IDE,*-*', 
'*-*,spark-3.2.1', '*-*,*-*');
+```
+
+#### 查询用户配置的限制信息
+
+```sql
+-- 查询 hadoop 用户的配置限制
+SELECT
+    u.user_name,
+    u.combined_label_value,
+    k.key,
+    k.name,
+    u.max_value,
+    u.min_value,
+    u.is_valid
+FROM
+    linkis_ps_configuration_key_limit_for_user u
+JOIN
+    linkis_ps_configuration_config_key k ON u.key_id = k.id
+WHERE
+    u.user_name = 'hadoop'
+    AND u.is_valid = 'Y';
+```
+
+## 四、引擎创建时参数生效完整流程
+
+### 4.1 流程图
+
+```text
+┌─────────────────────────────────────────────────────────────────────────────┐
+│                    Linkis 引擎创建配置参数生效流程                            │
+└─────────────────────────────────────────────────────────────────────────────┘
+
+[1] 前端/SDK 提交任务
+    │
+    ├─ params: { "spark.executor.memory": "4g", ... }
+    ├─ labels: ["hadoop-IDE", "spark-3.2.1"]
+    └─ executionContent: "select * from table"
+    │
+    ▼
+[2] EntranceParser.parseToTask()
+    │ (解析请求,提取 params, labels)
+    │
+    ▼
+[3] EntranceJob (任务对象)
+    │ - jobRequest.params
+    │ - jobRequest.labels
+    │
+    ▼
+[4] Orchestrator 编排器
+    │ - JobReqParamCheckRuler (参数验证)
+    │
+    ▼
+[5] DefaultEngineCreateService.createEngine()
+    │
+    ├──► [5.1] buildLabel(labels, user)
+    │     └─ 构建标签列表: UserCreatorLabel + EngineTypeLabel
+    │
+    ├──► [5.2] selectECM(request, labelList)
+    │     └─ 选择合适的 ECM 节点
+    │
+    ├──► [5.3] generateResource(props, user, labelList, timeout)
+    │     │
+    │     ├─ engineConnConfigurationService.getConsoleConfiguration(labelList)
+    │     │   │
+    │     │   ├─► ConfigurationMapCache.engineMapCache.get(labelList)
+    │     │   │   │
+    │     │   │   ├─► [缓存未命中] RPC 调用 ConfigurationService
+    │     │   │   │
+    │     │   │   └─► ConfigurationService.getConfigsByLabelList()
+    │     │   │       │
+    │     │   │       ├─ 查询用户配置 (hadoop-IDE,spark-3.2.1)
+    │     │   │       ├─ 查询创建者配置 (*-IDE,*-*)
+    │     │   │       ├─ 查询用户通用配置 (hadoop-*,*-*)
+    │     │   │       ├─ 查询引擎配置 (*-*,spark-3.2.1)
+    │     │   │       ├─ 查询全局配置 (*-*,*-*)
+    │     │   │       │
+    │     │   │       └─► replaceCreatorToEngine() (配置合并)
+    │     │   │           └─ 创建者配置 > 引擎配置
+    │     │   │           └─ 用户配置 > 引擎配置
+    │     │   │
+    │     │   └─ 返回 Map<String, String> configProp
+    │     │
+    │     └─ 参数合并逻辑:
+    │         for (entry : configProp) {
+    │             if (!props.containsKey(entry.key)) {  ◄── 关键判断
+    │                 props.put(entry.key, entry.value)
+    │             }
+    │         }
+    │         └─ **用户提交参数(props)不会被覆盖**
+    │
+    ├──► [5.4] resourceManager.requestResource()
+    │     └─ 资源申请
+    │
+    ├──► [5.5] createEngineNode()
+    │     └─ 构建引擎节点请求
+    │
+    ├──► [5.6] emService.createEngine(engineBuildRequest, emNode)
+    │     └─ 调用 ECM 创建引擎
+    │
+    └──► [5.7] 引擎启动,使用合并后的参数
+          └─ 最终生效参数 = 用户提交参数 + 配置服务参数(去重)
+```
+
+### 4.2 关键步骤说明
+
+#### 步骤 5.3: 参数合并逻辑 (generateResource)
+
+**代码位置**: `DefaultEngineCreateService.scala:371-397`
+
+```scala
+def generateResource(
+    props: util.Map[String, String],           // 用户提交的参数
+    user: String,
+    labelList: util.List[Label[_]],
+    timeout: Long
+): NodeResource = {
+    // 1. 获取配置服务的参数 (已完成多级配置合并)
+    val configProp = engineConnConfigurationService.getConsoleConfiguration(labelList)
+
+    // 2. 参数合并: 用户提交参数优先
+    if (null != configProp && configProp.asScala.nonEmpty) {
+      configProp.asScala.foreach(keyValue => {
+        if (!props.containsKey(keyValue._1)) {  // ◄── 只在用户未指定时才使用配置
+          props.put(keyValue._1, keyValue._2)
+        }
+      })
+    }
+
+    // 3. 处理跨队列配置
+    val crossQueue = props.get(AMConfiguration.CROSS_QUEUE)
+    if (StringUtils.isNotBlank(crossQueue)) {
+      val queueName = props.getOrDefault(AMConfiguration.YARN_QUEUE_NAME_CONFIG_KEY, "default")
+      props.put(AMConfiguration.YARN_QUEUE_NAME_CONFIG_KEY, crossQueue)
+    }
+
+    // 4. 创建资源请求
+    val timeoutEngineResourceRequest = TimeoutEngineResourceRequest(timeout, user, labelList, props)
+    engineConnResourceFactoryService.createEngineResource(timeoutEngineResourceRequest)
+}
+```
+
+**关键点**:
+1. `configProp` 已经是多级配置合并的结果 (用户配置 > 创建者配置 > 引擎配置 > 全局配置)
+2. 通过 `if (!props.containsKey(keyValue._1))` 确保用户提交的参数不会被覆盖
+3. 最终 `props` 包含完整的参数集合,传递给引擎
+
+### 4.3 配置缓存机制
+
+**代码位置**: `ConfigurationMapCache.java`
+
+```java
+// 全局配置缓存 (按用户维度)
+static RPCMapCache<UserCreatorLabel, String, String> globalMapCache
+
+// 引擎配置缓存 (按用户+引擎维度)
+static RPCMapCache<Tuple2<UserCreatorLabel, EngineTypeLabel>, String, String> engineMapCache
+```
+
+**缓存工作机制**:
+1. 使用 RPC 缓存,减少重复查询
+2. 缓存 Key: `(UserCreatorLabel, EngineTypeLabel)` 组合
+3. 缓存 Value: `Map<String, String>` (配置键值对)
+4. 缓存失效: 配置更新后自动失效
+
+## 五、配置参数验证机制
+
+### 5.1 验证类型
+
+Linkis 支持多种参数验证类型 (定义在 `validate_type` 字段):
+
+| 验证类型 | 说明 | 示例 |
+|---------|------|------|
+| None | 无验证 | - |
+| NumInterval | 数值区间验证 | `[1,100]` 表示值必须在 1-100 之间 |
+| FloatInterval | 浮点区间验证 | `[0.0,1.0]` |
+| Regex | 正则表达式验证 | `^[a-zA-Z0-9_]+$` |
+| Json | JSON 格式验证 | 验证是否为合法 JSON |
+| OFT | OneOf 类型验证 | `queue1,queue2,queue3` (只能选其一) |
+| Contain | 包含验证 | 验证值是否包含指定字符串 |
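+
+下面是一个示意 SQL,演示如何为某个配置键设置 `NumInterval` 校验(键名与区间仅为示例,请按实际需要调整):
+
+```sql
+-- 示例:将该配置的取值限制在 1~100 之间
+UPDATE linkis_ps_configuration_config_key
+SET validate_type = 'NumInterval',
+    validate_range = '[1,100]'
+WHERE `key` = 'wds.linkis.engine.running.job.max';
+```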
+
+### 5.2 验证器实现
+
+**代码位置**: `linkis-configuration/src/main/scala/org/apache/linkis/configuration/validate/`
+
+- `ValidatorManager`: 验证器管理器
+- `NumericalValidator`: 数值验证器
+- `RegexValidator`: 正则验证器
+- `JsonValidator`: JSON 验证器
+- `OneOfValidator`: 枚举验证器
+
+### 5.3 用户配置限制
+
+通过 `linkis_ps_configuration_key_limit_for_user` 表,管理员可以为特定用户设置配置上下限:
+
+```sql
+-- 示例:限制 hadoop 用户的 executor 内存不超过 8G
+INSERT INTO linkis_ps_configuration_key_limit_for_user
+(user_name, combined_label_value, key_id, max_value, is_valid, create_by)
+VALUES
+('hadoop', 'hadoop-*,spark-*',
+ (SELECT id FROM linkis_ps_configuration_config_key WHERE `key` = 'spark.executor.memory'),
+ '8G', 'Y', 'admin');
+```
+
+**生效逻辑** (代码位置: `ConfigurationService.scala:422-442`):
+
+```scala
+// 添加特殊配置限制信息
+val limitList = configKeyLimitForUserMapper.selectByLabelAndKeyIds(
+    combinedLabel.getStringValue, keyIdList
+)
+
+defaultEngineConfigs.asScala.foreach(entity => {
+  val keyId = entity.getId
+  val res = limitList.asScala.filter(v => v.getKeyId == keyId).toList.asJava
+  if (res.size() > 0) {
+    val specialMap = new util.HashMap[String, String]()
+    val maxValue = res.get(0).getMaxValue
+    if (StringUtils.isNotBlank(maxValue)) {
+      specialMap.put("maxValue", maxValue)
+      entity.setSpecialLimit(specialMap)  // 设置特殊限制
+    }
+  }
+})
+```
+
+## 六、实战案例分析
+
+### 6.1 场景描述
+
+用户 `hadoop` 通过 IDE 提交 Spark 3.2.1 任务,分析参数生效情况。
+
+### 6.2 数据库配置
+
+```sql
+-- 全局默认配置 (label_id=5: *-*,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES (101, 10, '2G', 5, NOW(), NOW());  -- spark.executor.memory = 2G
+
+-- Spark 引擎默认配置 (label_id=20: *-*,spark-3.2.1)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES (102, 10, '4G', 20, NOW(), NOW());  -- spark.executor.memory = 4G
+
+-- IDE 创建者默认配置 (label_id=30: *-IDE,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES (103, 10, '6G', 30, NOW(), NOW());  -- spark.executor.memory = 6G
+
+-- hadoop 用户配置 (label_id=40: hadoop-IDE,spark-3.2.1)
+INSERT INTO linkis_ps_configuration_config_value
+(id, config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES (104, 10, '8G', 40, NOW(), NOW());  -- spark.executor.memory = 8G
+```
+
+### 6.3 任务提交参数
+
+```json
+{
+  "params": {
+    "spark.executor.memory": "10G",
+    "spark.executor.cores": "4"
+  },
+  "labels": {
+    "userCreator": "hadoop-IDE",
+    "engineType": "spark-3.2.1"
+  }
+}
+```
+
+### 6.4 参数生效过程
+
+| 步骤 | 操作 | 当前值 | 说明 |
+|------|------|--------|------|
+| 1 | 配置服务查询 | - | 开始查询配置 |
+| 2 | 查询用户配置 (40) | `8G` | 找到 `hadoop-IDE,spark-3.2.1` 的配置 |
+| 3 | 查询创建者配置 (30) | `6G` | 找到 `*-IDE,*-*` 的配置 |
+| 4 | 查询引擎配置 (20) | `4G` | 找到 `*-*,spark-3.2.1` 的配置 |
+| 5 | 查询全局配置 (5) | `2G` | 找到 `*-*,*-*` 的配置 |
+| 6 | 配置合并 | `8G` | 用户配置覆盖所有默认配置 |
+| 7 | 与任务参数合并 | `10G` | 用户提交参数 > 配置服务参数 |
+| 8 | 最终生效 | **`10G`** | **任务提交参数生效** |
+
+### 6.5 最终参数集合
+
+```json
+{
+  "spark.executor.memory": "10G",        // 来自任务提交参数 (优先级最高)
+  "spark.executor.cores": "4",           // 来自任务提交参数
+  "wds.linkis.rm.yarnqueue": "default",  // 来自配置服务 (用户未指定)
+  "spark.driver.memory": "1G"            // 来自配置服务 (用户未指定)
+}
+```
+
+## 七、总结
+
+### 7.1 核心要点
+
+1. **优先级机制明确**: 任务参数 > 用户配置 > 创建者配置 > 引擎配置 > 全局配置
+2. **配置级联查询**: 支持多级默认配置,自动合并
+3. **用户参数至上**: 用户提交的参数永远不会被配置服务覆盖
+4. **标签驱动**: 通过组合标签实现多维度配置管理
+5. **验证与限制**: 支持参数验证和用户级别的配置限制
+
+### 7.2 最佳实践建议
+
+1. **合理设置默认值**: 在全局和引擎级别设置合理的默认配置
+2. **按需配置**: 只在必要时为用户创建个性化配置
+3. **使用限制功能**: 通过 `key_limit_for_user` 表防止用户配置超限
+4. **参数验证**: 利用 `validate_type` 和 `validate_range` 确保参数合法性
+5. **监控配置变更**: 关注 `update_time` 字段,追踪配置修改历史
+
+### 7.3 相关代码文件索引
+
+| 功能模块 | 文件路径 |
+|---------|---------|
+| 配置服务 | `/linkis-public-enhancements/linkis-configuration/src/main/scala/org/apache/linkis/configuration/service/ConfigurationService.scala` |
+| 引擎创建 | `/linkis-computation-governance/linkis-manager/linkis-application-manager/src/main/scala/org/apache/linkis/manager/am/service/engine/DefaultEngineCreateService.scala` |
+| 配置缓存 | `/linkis-computation-governance/linkis-manager/linkis-application-manager/src/main/java/org/apache/linkis/manager/am/conf/ConfigurationMapCache.java` |
+| 数据访问 | `/linkis-public-enhancements/linkis-configuration/src/main/java/org/apache/linkis/configuration/dao/ConfigMapper.java` |
+| Mapper XML | `/linkis-public-enhancements/linkis-configuration/src/main/resources/mapper/common/ConfigMapper.xml` |
+| 参数验证 | `/linkis-public-enhancements/linkis-configuration/src/main/scala/org/apache/linkis/configuration/validate/` |
+
+---
+
+**文档版本**: 1.0
+**最后更新**: 2025-11-23
+**分析基于**: Linkis 项目 /data/workspace/linkis
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-max-job-config-guide.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-max-job-config-guide.md
new file mode 100644
index 00000000000..6083a977fe6
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-max-job-config-guide.md
@@ -0,0 +1,292 @@
+---
+title: 引擎最大任务数配置指南
+sidebar_position: 2
+---
+
+# 引擎最大任务数配置指南
+
+## 概述
+
+本指南说明如何使用 `wds.linkis.engine.running.job.max` 参数配置 Linkis 引擎的最大并发任务数。
+
+## 配置详情
+
+### 参数信息
+
+| 属性 | 值 |
+|------|-----|
+| 配置键 | `wds.linkis.engine.running.job.max` |
+| 显示名称 | 引擎运行最大任务数 |
+| 描述 | 引擎实例可同时运行的最大任务数量 |
+| 默认值 | 30 |
+| 验证类型 | NumInterval (数值区间) |
+| 适用引擎 | 所有引擎 (shell, hive, spark, python 等) |
+
+### 配置级别
+
+本指南演示如何在两个级别设置配置:
+
+1. **全局默认配置** (`*-*,*-*`)
+   - 适用于所有引擎
+   - 优先级最低
+   - 当不存在引擎特定配置时的后备值
+
+2. **Hive 引擎默认配置** (`*-*,hive-3.1.3`)
+   - 专门适用于 Hive 3.1.3 引擎
+   - 优先级高于全局默认
+   - 覆盖 Hive 引擎的全局默认值
+
+## 前置条件
+
+执行 SQL 脚本前,需要验证以下内容:
+
+### 1. 检查配置键是否存在
+
+```sql
+SELECT id, `key`, name, engine_conn_type, default_value
+FROM linkis_ps_configuration_config_key
+WHERE `key` = 'wds.linkis.engine.running.job.max';
+```
+
+**预期结果:**
+- `id`: 112 (您的环境中可能不同)
+- `key`: wds.linkis.engine.running.job.max
+- `name`: 引擎运行最大任务数
+
+### 2. 检查标签 ID
+
+```sql
+SELECT id, label_key, label_value
+FROM linkis_cg_manager_label
+WHERE label_key = 'combined_userCreator_engineType'
+  AND label_value IN ('*-*,*-*', '*-*,hive-3.1.3')
+ORDER BY label_value;
+```
+
+**预期结果:**
+- `id=5`, `label_value='*-*,*-*'` (全局默认)
+- `id=7`, `label_value='*-*,hive-3.1.3'` (Hive 引擎默认)
+
+:::warning 重要提示
+SQL 脚本中的 `config_key_id` 和 `config_label_id` 值基于标准 Linkis 安装。**您必须在自己的数据库中验证这些 
ID**,如果不同需要相应更新脚本。
+:::
+
+## SQL 脚本
+
+### 快速执行脚本
+
+使用此脚本直接执行:
+
+```sql
+-- 插入全局默认配置 (*-*,*-*)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW());
+
+-- 插入 Hive 引擎默认配置 (*-*,hive-3.1.3)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 7, NOW(), NOW());
+```
+
+### 幂等脚本 (可安全重复执行)
+
+如果希望脚本可以多次安全运行,使用 `REPLACE INTO`:
+
+```sql
+REPLACE INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW()),  -- 全局默认
+(112, '30', 7, NOW(), NOW());  -- Hive 引擎默认
+```
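+
+另一种幂等写法是 `INSERT ... ON DUPLICATE KEY UPDATE`,它依赖 `(config_key_id, config_label_id)` 上的唯一索引,且不会像 `REPLACE INTO` 那样先删除再插入:
+
+```sql
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 5, NOW(), NOW()),  -- 全局默认
+(112, '30', 7, NOW(), NOW())   -- Hive 引擎默认
+ON DUPLICATE KEY UPDATE
+    config_value = VALUES(config_value),
+    update_time = NOW();
+```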
+
+### 更新现有配置
+
+如果配置已存在且需要更新:
+
+```sql
+-- 更新全局默认
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 5;
+
+-- 更新 Hive 引擎默认
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 7;
+```
+
+## 验证
+
+执行 SQL 脚本后,验证配置:
+
+```sql
+SELECT
+    v.id AS value_id,
+    k.key AS config_key,
+    k.name AS config_name,
+    v.config_value,
+    l.label_value,
+    CASE
+        WHEN l.label_value = '*-*,*-*' THEN '全局默认 (优先级: 5)'
+        WHEN l.label_value = '*-*,hive-3.1.3' THEN 'Hive引擎默认 (优先级: 4)'
+        ELSE '其他'
+    END AS config_level,
+    v.create_time,
+    v.update_time
+FROM
+    linkis_ps_configuration_config_value v
+JOIN
+    linkis_ps_configuration_config_key k ON v.config_key_id = k.id
+JOIN
+    linkis_cg_manager_label l ON v.config_label_id = l.id
+WHERE
+    k.key = 'wds.linkis.engine.running.job.max'
+    AND v.config_label_id IN (5, 7)
+ORDER BY
+    l.label_value;
+```
+
+**预期输出:**
+
+| value_id | config_key | config_name | config_value | label_value | config_level | create_time | update_time |
+|----------|------------|-------------|--------------|-------------|--------------|-------------|-------------|
+| xxx | wds.linkis.engine.running.job.max | 引擎运行最大任务数 | 30 | *-*,*-* | 全局默认 (优先级: 5) | ... | ... |
+| xxx | wds.linkis.engine.running.job.max | 引擎运行最大任务数 | 30 | *-*,hive-3.1.3 | Hive引擎默认 (优先级: 4) | ... | ... |
+
+## 优先级说明
+
+提交 Hive 任务时,生效配置值遵循以下优先级:
+
+```text
+1. 任务提交参数 (最高)                 ← 用户可通过 API 覆盖
+   ↓
+2. 用户特定配置
+   示例: 'hadoop-IDE,hive-3.1.3'
+   ↓
+3. 创建者默认配置
+   示例: '*-IDE,*-*'
+   ↓
+4. 引擎默认配置                        ← 本脚本创建
+   >>> '*-*,hive-3.1.3' = 30
+   ↓
+5. 全局默认配置 (最低)                 ← 本脚本创建
+   >>> '*-*,*-*' = 30
+```
+
+### 示例场景
+
+#### 场景 1: 没有用户配置的 Hive 任务
+- 用户: `hadoop`, 创建者: `IDE`, 引擎: `hive-3.1.3`
+- 不存在用户特定或创建者配置
+- **生效值**: `30` (来自 Hive 引擎默认 `*-*,hive-3.1.3`)
+
+#### 场景 2: 没有用户配置的 Spark 任务
+- 用户: `hadoop`, 创建者: `IDE`, 引擎: `spark-3.2.1`
+- 不存在 Spark 引擎默认配置
+- **生效值**: `30` (回退到全局默认 `*-*,*-*`)
+
+#### 场景 3: 用户提交时指定运行时参数
+```json
+{
+  "params": {
+    "wds.linkis.engine.running.job.max": "50"
+  },
+  "labels": {
+    "userCreator": "hadoop-IDE",
+    "engineType": "hive-3.1.3"
+  }
+}
+```
+- **生效值**: `50` (运行时参数覆盖所有配置)
+
+## 为其他引擎配置
+
+要为其他引擎(如 Spark、Python)添加相同配置,遵循相同模式:
+
+### 1. 查找引擎的标签 ID
+
+```sql
+SELECT id, label_value
+FROM linkis_cg_manager_label
+WHERE label_key = 'combined_userCreator_engineType'
+  AND label_value LIKE '*-*,spark%'  -- 查找 Spark
+ORDER BY label_value;
+```
+
+### 2. 插入配置
+
+```sql
+-- 示例: Spark 3.2.1 引擎默认 (假设 label_id = 8)
+INSERT INTO linkis_ps_configuration_config_value
+(config_key_id, config_value, config_label_id, create_time, update_time)
+VALUES
+(112, '30', 8, NOW(), NOW());  -- 根据需要调整 label_id
+```
+
+## 常见问题
+
+### 问题: 插入失败,提示重复键错误
+
+**原因**: 该标签的配置值已存在。
+
+**解决方案**: 使用 `REPLACE INTO` 替代 `INSERT INTO`,或更新现有值:
+
+```sql
+UPDATE linkis_ps_configuration_config_value
+SET config_value = '30', update_time = NOW()
+WHERE config_key_id = 112 AND config_label_id = 5;
+```
+
+### 问题: 配置未生效
+
+**可能原因**:
+1. 配置缓存未失效
+2. 存在更高优先级的配置
+3. 运行时参数覆盖了配置
+
+**解决方案**:
+1. 重启 Linkis 服务以清除缓存
+2. 检查是否存在用户特定或创建者配置:
+   ```sql
+   SELECT v.*, l.label_value
+   FROM linkis_ps_configuration_config_value v
+   JOIN linkis_cg_manager_label l ON v.config_label_id = l.id
+   WHERE v.config_key_id = 112
+   ORDER BY l.label_value;
+   ```
+3. 检查日志中的任务提交参数
+
+## 清理
+
+删除本指南创建的配置:
+
+```sql
+DELETE FROM linkis_ps_configuration_config_value
+WHERE config_key_id = 112
+  AND config_label_id IN (5, 7);
+```
+
+## 相关文档
+
+- [Linkis 引擎配置参数优先级分析](./engine-config-priority.md)
+- [Linkis 配置管理指南](https://linkis.apache.org/zh-CN/docs/latest/configuration/)
+
+## 参考
+
+- **数据库表**:
+  - `linkis_ps_configuration_config_key`: 配置键定义
+  - `linkis_ps_configuration_config_value`: 配置值
+  - `linkis_cg_manager_label`: 标签定义
+
+- **代码文件**:
+  - `ConfigurationService.scala`: 配置服务实现
+  - `DefaultEngineCreateService.scala`: 引擎创建服务
+  - `ConfigMapper.xml`: 数据库映射定义
+
+---
+
+**最后更新**: 2025-11-23
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-reuse-logic.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-reuse-logic.md
new file mode 100644
index 00000000000..d5c1354db1a
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/engine-reuse-logic.md
@@ -0,0 +1,368 @@
+---
+title: 引擎复用逻辑
+sidebar_position: 1
+---
+
+# 引擎复用逻辑
+
+## 概述
+
+引擎复用是 Linkis 
中一个关键的性能优化机制。当用户提交任务时,系统会首先尝试复用现有的空闲引擎,而不是创建新的引擎。这显著减少了引擎启动开销,提高了任务响应速度。
+
+## 核心流程
+
+```
+┌─────────────────────────────────────────────────────────────────────────┐
+│                          引擎请求 (Engine Ask Request)                    │
+└─────────────────────────────────────────────────────────────────────────┘
+                                    │
+                                    ▼
+                    ┌───────────────────────────────┐
+                    │  检查 EXECUTE_ONCE_KEY 标签   │
+                    └───────────────────────────────┘
+                                    │
+                    ┌───────────────┴───────────────┐
+                    │                               │
+                    ▼                               ▼
+             存在 EXECUTE_ONCE              不存在 EXECUTE_ONCE
+                    │                               │
+                    ▼                               ▼
+               创建新引擎                  ┌─────────────────────┐
+                                          │    尝试引擎复用      │
+                                          └─────────────────────┘
+                                                    │
+                                                    ▼
+                                          ┌─────────────────────┐
+                                          │    构建标签过滤器    │
+                                          │  - EngineNodeLabel  │
+                                          │  - UserCreatorLabel │
+                                          │  - EngineTypeLabel  │
+                                          └─────────────────────┘
+                                                    │
+                                                    ▼
+                                          ┌─────────────────────┐
+                                          │   检查排除标签       │
+                                          │ ReuseExclusionLabel │
+                                          └─────────────────────┘
+                                                    │
+                              ┌─────────────────────┴─────────────────────┐
+                              │                                           │
+                              ▼                                           ▼
+                      通配符 (*) 排除                            指定实例排除
+                              │                                           │
+                              ▼                                           ▼
+                  返回 null (不复用)                            从列表中移除排除实例
+                                                                        │
+                                                                        ▼
+                                                             ┌─────────────────────┐
+                                                             │    应用标签选择器    │
+                                                             │  (多用户引擎处理)    │
+                                                             └─────────────────────┘
+                                                                        │
+                                                                        ▼
+                                                             ┌─────────────────────┐
+                                                             │    从标签服务获取    │
+                                                             │     可用引擎实例     │
+                                                             └─────────────────────┘
+                                                                        │
+                                                                        ▼
+                                                             ┌─────────────────────┐
+                                                             │    可选过滤器:       │
+                                                             │   - 模板名称匹配     │
+                                                             │   - 资源匹配         │
+                                                             │   - Python版本匹配   │
+                                                             └─────────────────────┘
+                                                                        │
+                                                                        ▼
+                                                             ┌─────────────────────┐
+                                                             │      节点选择器      │
+                                                             │    (选择最优节点)    │
+                                                             └─────────────────────┘
+                                                                        │
+                                                                        ▼
+                                                             ┌─────────────────────┐
+                                                             │     尝试锁定引擎     │
+                                                             └─────────────────────┘
+                                                                        │
+                                          ┌─────────────────────────────┴─────────────────────────────┐
+                                          │                                                           │
+                                          ▼                                                           ▼
+                                      锁定成功                                                     锁定失败
+                                          │                                                           │
+                                          ▼                                                           ▼
+                                      返回引擎                                              重试 (达到限制前)
+                                                                                                      │
+                                                                                                      ▼
+                                                                                           ┌─────────────────────┐
+                                                                                           │  超过重试限制或超时  │
+                                                                                           └─────────────────────┘
+                                                                                                      │
+                                                                                                      ▼
+                                                                                           抛出 LinkisRetryException
+```
+
+## 关键组件
+
+### 核心类
+
+| 类 | 位置 | 描述 |
+|----|------|------|
+| `EngineReuseService` | `linkis-application-manager/.../service/engine/EngineReuseService.scala` | 引擎复用服务接口 |
+| `DefaultEngineReuseService` | `linkis-application-manager/.../service/engine/DefaultEngineReuseService.scala` | 复用逻辑核心实现 |
+| `EngineReuseLabelChooser` | `linkis-application-manager/.../label/EngineReuseLabelChooser.java` | 复用时标签选择的接口 |
+| `MultiUserEngineReuseLabelChooser` | `linkis-application-manager/.../label/MultiUserEngineReuseLabelChooser.java` | 处理多用户引擎的标签选择 |
+| `ReuseExclusionLabel` | `linkis-label-common/.../label/entity/engine/ReuseExclusionLabel.java` | 排除特定实例复用的标签 |
+| `EngineReuseRequest` | `linkis-manager-common/.../protocol/engine/EngineReuseRequest.java` | 引擎复用请求协议 |
+| `DefaultEngineNodeManager` | `linkis-application-manager/.../manager/DefaultEngineNodeManager.java` | 管理引擎节点操作,包括锁定 |
+
+### 源代码位置
+
+```
+linkis-computation-governance/linkis-manager/
+├── linkis-application-manager/src/main/
+│   ├── scala/org/apache/linkis/manager/am/service/engine/
+│   │   ├── EngineReuseService.scala              # 接口定义
+│   │   ├── DefaultEngineReuseService.scala       # 核心实现
+│   │   └── DefaultEngineAskEngineService.scala   # 调用服务
+│   └── java/org/apache/linkis/manager/am/
+│       ├── label/
+│       │   ├── EngineReuseLabelChooser.java
+│       │   └── MultiUserEngineReuseLabelChooser.java
+│       ├── manager/
+│       │   └── DefaultEngineNodeManager.java
+│       └── conf/
+│           └── AMConfiguration.java              # 配置类
+├── linkis-manager-common/src/main/java/.../protocol/engine/
+│   └── EngineReuseRequest.java
+└── linkis-label-common/src/main/java/.../label/entity/engine/
+    └── ReuseExclusionLabel.java
+```
+
+## 复用条件
+
+### 1. 基本触发条件
+
+当请求中**不包含** `EXECUTE_ONCE_KEY` 标签时,系统会尝试引擎复用:
+
+```scala
+if (!engineAskRequest.getLabels.containsKey(LabelKeyConstant.EXECUTE_ONCE_KEY)) {
+  // 尝试引擎复用
+  val reuseNode = engineReuseService.reuseEngine(engineReuseRequest, sender)
+}
+```
+
+### 2. 标签匹配
+
+系统根据以下标签过滤可用引擎:
+
+- **EngineNodeLabel**:匹配引擎节点类型
+- **UserCreatorLabel**:匹配用户和创建者应用
+- **EngineTypeLabel**:匹配引擎类型(spark、hive、python 等)
+- **AliasServiceInstanceLabel**:按服务实例别名过滤
+
+### 3. 排除规则
+
+#### ReuseExclusionLabel
+
+此标签允许从复用中排除特定引擎实例:
+
+```java
+// 排除所有引擎(通配符)
+ReuseExclusionLabel label = new ReuseExclusionLabel();
+label.setInstances("*");
+
+// 排除特定实例
+label.setInstances("instance1;instance2;instance3");
+```
+
+当设置为通配符 `*` 时,该请求不会复用任何引擎。
+
+### 4. 引擎状态检查
+
+引擎只有在以下条件下才能被复用:
+
+- **状态为 Unlock**:引擎当前未被其他任务锁定
+- **状态为可用**:引擎处于健康、可用状态
+
+```java
+@Override
+public EngineNode reuseEngine(EngineNode engineNode) {
+  EngineNode node = getEngineNodeInfo(engineNode);
+  if (node == null || !NodeStatus.isAvailable(node.getNodeStatus())) {
+    return null;
+  }
+  if (!NodeStatus.isLocked(node.getNodeStatus())) {
+    Optional<String> lockStr = engineLocker.lockEngine(node, timeout);
+    if (!lockStr.isPresent()) {
+      throw new LinkisRetryException(...);
+    }
+    node.setLock(lockStr.get());
+    return node;
+  }
+  return null;
+}
+```
+
+## 多用户引擎支持
+
+某些引擎类型支持多用户共享,即引擎可以在不同用户之间复用:
+
+### 支持的多用户引擎类型
+
+```
+es, presto, io_file, appconn, openlookeng, trino, jobserver, nebula, hbase, doris
+```
+
+### 工作原理
+
+对于多用户引擎,`UserCreatorLabel` 会被修改为使用管理员用户:
+
+```java
+public List<Label<?>> chooseLabels(List<Label<?>> labelList) {
+  // 检查引擎类型是否为多用户
+  if (isMultiUserEngine(engineTypeLabel)) {
+    String userAdmin = getAdminUser(engineTypeLabel.getEngineType());
+    userCreatorLabel.setUser(userAdmin);
+  }
+  return labels;
+}
+```
+
+这允许不同用户共享同一个引擎实例。
+
+## 可选过滤规则
+
+### 模板名称匹配
+
+启用后,只有模板名称匹配的引擎才会被复用:
+
+```properties
+linkis.ec.reuse.with.template.rule.enable=true
+```
+
+系统会检查引擎参数中的 `ec.resource.name` 属性。
+
+### 资源匹配
+
+启用后,系统会确保引擎有足够的资源:
+
+```properties
+linkis.ec.reuse.with.resource.rule.enable=true
+linkis.ec.reuse.with.resource.with.ecs=spark,hive,shell,python
+```
+
+执行的检查:
+1. 可用/锁定资源 >= 请求资源
+2. Python 版本兼容性(针对 Python/PySpark 引擎)
+
+## 缓存策略
+
+系统支持缓存引擎实例以提高复用性能:
+
+### 缓存配置
+
+| 参数 | 默认值 | 描述 |
+|------|--------|------|
+| `wds.linkis.manager.am.engine.reuse.enable.cache` | `false` | 启用实例缓存 |
+| `wds.linkis.manager.am.engine.reuse.cache.expire.time` | `5s` | 缓存过期时间 |
+| `wds.linkis.manager.am.engine.reuse.cache.max.size` | `1000` | 最大缓存条目数 |
+| `wds.linkis.manager.am.engine.reuse.cache.support.engines` | `shell` | 支持缓存的引擎类型 |
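+
+以下为启用缓存的示意配置(写入 `linkis.properties`,取值仅供参考):
+
+```properties
+wds.linkis.manager.am.engine.reuse.enable.cache=true
+wds.linkis.manager.am.engine.reuse.cache.expire.time=5s
+wds.linkis.manager.am.engine.reuse.cache.max.size=1000
+wds.linkis.manager.am.engine.reuse.cache.support.engines=shell
+```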
+
+### 缓存键格式
+
+```scala
+val cacheKey = userCreatorLabel.getStringValue + "_" + 
engineTypeLabel.getEngineType
+// 示例:"hadoop-IDE_spark"
+```
+
+## 配置参数
+
+### 核心复用参数
+
+| 参数 | 默认值 | 描述 |
+|------|--------|------|
+| `wds.linkis.manager.am.engine.reuse.max.time` | `5m` | 复用最大等待时间 |
+| `wds.linkis.manager.am.engine.reuse.count.limit` | `2` | 复用最大重试次数 |
+| `wds.linkis.manager.am.engine.locker.max.time` | `5m` | 引擎最大锁定时间 |
+
+### 多用户引擎参数
+
+| 参数 | 默认值 | 描述 |
+|------|--------|------|
+| `wds.linkis.multi.user.engine.types` | `es,presto,...` | 多用户引擎类型列表 |
+| `wds.linkis.multi.user.engine.user` | JSON 配置 | 每种引擎类型的管理员用户映射 |
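+
+其中 `wds.linkis.multi.user.engine.user` 的值为一个 JSON 字符串,按引擎类型映射管理员用户。以下仅为格式示意(用户名为假设值,请以实际部署为准):
+
+```json
+{
+  "es": "hadoop",
+  "presto": "hadoop",
+  "trino": "hadoop"
+}
+```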
+
+### 可选过滤参数
+
+| 参数 | 默认值 | 描述 |
+|------|--------|------|
+| `linkis.ec.reuse.with.template.rule.enable` | `false` | 启用模板名称匹配 |
+| `linkis.ec.reuse.with.resource.rule.enable` | `false` | 启用资源匹配 |
+| `linkis.ec.reuse.with.resource.with.ecs` | `spark,hive,shell,python` | 需要资源匹配的引擎类型 |
+
+## 重试和超时处理
+
+### 重试逻辑
+
+```scala
+val reuseLimit = if (engineReuseRequest.getReuseCount <= 0)
+                   AMConfiguration.ENGINE_REUSE_COUNT_LIMIT  // 默认: 2
+                 else engineReuseRequest.getReuseCount
+
+def selectEngineToReuse: Boolean = {
+  if (count > reuseLimit) {
+    throw new LinkisRetryException(...)
+  }
+  // 尝试复用选中的引擎
+  engine = Utils.tryCatch(getEngineNodeManager.reuseEngine(engineNode)) { t =>
+    // 失败时,从候选列表中移除并重试
+    count = count + 1
+    engineScoreList = engineScoreList.filter(!_.equals(choseNode.get))
+    null
+  }
+  engine != null
+}
+```
+
+### 超时处理
+
+- 如果复用超时,系统会异步停止问题引擎
+- 超时后,控制权返回以允许作为回退创建新引擎
+
+```scala
+if (ExceptionUtils.getRootCause(t).isInstanceOf[TimeoutException]) {
+  val stopEngineRequest = new EngineStopRequest(engineNode.getServiceInstance, ...)
+  engineStopService.asyncStopEngine(stopEngineRequest)
+}
+```
+
+## 最佳实践
+
+1. **为频繁使用的引擎启用缓存**:对于像 `shell` 这样频繁使用且执行时间短的引擎类型,启用缓存以提高复用效率。
+
+2. **配置适当的超时值**:根据集群的网络延迟和引擎响应时间设置 `engine.reuse.max.time`。
+
+3. **需要时使用 ReuseExclusionLabel**:如果某些任务需要隔离的引擎,使用 `ReuseExclusionLabel` 
防止不必要的复用。
+
+4. **监控复用指标**:跟踪引擎复用成功率,以识别特定引擎类型或配置的潜在问题。
+
+5. **考虑多用户引擎**:对于 Presto 或 Trino 等只读查询引擎,考虑将其配置为多用户引擎以最大化资源利用率。
+
+## 故障排除
+
+### 常见问题
+
+1. **引擎复用总是失败**
+   - 检查引擎是否处于 `Unlock` 状态
+   - 验证标签匹配(用户、创建者、引擎类型)
+   - 检查请求中是否有 `ReuseExclusionLabel`
+
+2. **复用超时错误**
+   - 增加 `wds.linkis.manager.am.engine.reuse.max.time`
+   - 检查管理器和引擎之间的网络连接
+   - 查看引擎日志中的锁获取问题
+
+3. **复用了错误的引擎**
+   - 验证标签配置
+   - 检查是否需要时启用了模板名称匹配
+   - 检查多用户引擎配置
diff --git 
a/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/overview.md
 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/overview.md
new file mode 100644
index 00000000000..294416d5a1c
--- /dev/null
+++ 
b/i18n/zh-CN/docusaurus-plugin-content-docs/current/development/special-logic/overview.md
@@ -0,0 +1,38 @@
+---
+title: 特殊逻辑总览
+sidebar_position: 0
+---
+
+# 特殊逻辑总览
+
+本章节记录了 Linkis 中的特殊逻辑实现,这些内容对于开发者理解项目代码、参与贡献或进行问题排查都非常重要。这些是 Linkis 
架构中处理复杂场景的核心机制。
+
+## 文档列表
+
+| 文档 | 描述 |
+|------|------|
+| [引擎配置参数优先级分析](./engine-config-priority.md) | 分析任务提交及引擎创建过程中配置参数的生效逻辑、优先级机制与数据库表关联关系 |
+| [引擎最大任务数配置指南](./engine-max-job-config-guide.md) | 说明如何通过 `wds.linkis.engine.running.job.max` 配置引擎最大并发任务数 |
+| [引擎复用逻辑](./engine-reuse-logic.md) | 详细说明引擎启动时的复用机制,包括匹配规则、过滤条件和配置参数 |
+
+## 目的
+
+理解这些特殊逻辑实现对以下场景至关重要:
+
+1. **问题排查**:在排查引擎管理、任务调度或资源分配相关问题时
+2. **性能优化**:了解这些机制的工作原理有助于调优系统以获得更好的性能
+3. **功能开发**:开发新功能时,理解现有的特殊逻辑有助于避免冲突并确保正确集成
+4. **代码审查**:审查者可以更好地评估影响这些关键代码路径的变更
+
+## 如何贡献
+
+如果您发现 Linkis 中其他需要记录的重要特殊逻辑:
+
+1. 在本目录下创建新的 markdown 文件,遵循命名规范:`<功能名称>-logic.md`
+2. 使用现有文档作为结构和格式的模板
+3. 包含以下内容:
+   - 逻辑概述
+   - 核心流程/流程图
+   - 关键类和代码位置
+   - 配置参数
+   - 适用的示例
+4. 更新此总览页面以包含您的新文档
+5. 按照[贡献指南](/community/how-to-contribute)提交 PR

