This is an automated email from the ASF dual-hosted git repository.

shenghang pushed a commit to branch dev
in repository https://gitbox.apache.org/repos/asf/seatunnel.git


The following commit(s) were added to refs/heads/dev by this push:
     new e9773de0c5 [Add][Docs] Introduce SeaTunnel MCP and x2SeaTunnel 
documentation on website (#10108)
e9773de0c5 is described below

commit e9773de0c541514e36113f37133679b19173bf06
Author: Adam Wang <[email protected]>
AuthorDate: Tue Nov 25 21:20:55 2025 +0800

    [Add][Docs] Introduce SeaTunnel MCP and x2SeaTunnel documentation on 
website (#10108)
    
    Co-authored-by: wangxiaogang <[email protected]>
---
 docs/en/ecosystem/seatunnel-mcp.md   | 198 ++++++++++++++++++
 docs/en/ecosystem/x2seatunnel.md     | 329 +++++++++++++++++++++++++++++
 docs/images/seatunnel-mcp-logo.png   | Bin 0 -> 390468 bytes
 docs/images/seatunnel-mcp-server.png | Bin 0 -> 143625 bytes
 docs/sidebars.js                     |   8 +
 docs/zh/ecosystem/seatunnel-mcp.md   | 253 +++++++++++++++++++++++
 docs/zh/ecosystem/x2seatunnel.md     | 387 +++++++++++++++++++++++++++++++++++
 7 files changed, 1175 insertions(+)

diff --git a/docs/en/ecosystem/seatunnel-mcp.md 
b/docs/en/ecosystem/seatunnel-mcp.md
new file mode 100644
index 0000000000..006c9e60ac
--- /dev/null
+++ b/docs/en/ecosystem/seatunnel-mcp.md
@@ -0,0 +1,198 @@
+---
+id: seatunnel-mcp
+title: SeaTunnel MCP
+---
+# SeaTunnel MCP Server
+
+A Model Context Protocol (MCP) server for interacting with SeaTunnel through 
LLM interfaces like Claude.
+
+![SeaTunnel MCP Logo](../images/seatunnel-mcp-logo.png)
+
+![SeaTunnel MCP Server](../images/seatunnel-mcp-server.png)
+
+## Related Links
+
+- SeaTunnel MCP on GitHub: 
https://github.com/apache/seatunnel-tools/tree/main/seatunnel-mcp
+
+## Operation Video
+
+To help you better understand the features and usage of SeaTunnel MCP, we 
provide a video demonstration. Please refer to the link below or directly check 
the video file in the project documentation directory.
+
+https://www.youtube.com/watch?v=JaLA8EkZD7Q
+
+[![SeaTunnel MCP demo video](https://img.youtube.com/vi/JaLA8EkZD7Q/0.jpg)](https://www.youtube.com/watch?v=JaLA8EkZD7Q)
+
+
+> **Tip**: If the video does not play directly, make sure your device supports 
MP4 format and try opening it with a modern browser or video player. 
+
+
+## Features
+
+* Job management (submit, stop, monitor)
+* System monitoring and information retrieval
+* REST API interaction with SeaTunnel services
+* Built-in logging and monitoring tools
+* Dynamic connection configuration
+* Comprehensive job information and statistics
+
+## Installation
+
+```bash
+# Clone repository
+git clone https://github.com/apache/seatunnel-tools.git
+cd seatunnel-mcp
+
+# Create virtual environment and install
+python -m venv .venv
+source .venv/bin/activate  # On Windows: .venv\Scripts\activate
+pip install -e .
+```
+
+## Requirements
+
+* Python ≥ 3.12
+* Running SeaTunnel instance
+* Node.js (for testing with MCP Inspector)
+
+## Usage
+
+### Environment Variables
+
+```
+SEATUNNEL_API_URL=http://localhost:8090  # Default SeaTunnel REST API URL
+SEATUNNEL_API_KEY=your_api_key           # Optional: Default SeaTunnel API key
+```
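+
+For example, you might export these variables in your shell before starting the server (a minimal sketch; the values shown are placeholders):
+
+```bash
+# Point the MCP server at a local SeaTunnel REST API (example values)
+export SEATUNNEL_API_URL=http://localhost:8090
+export SEATUNNEL_API_KEY=your_api_key   # optional; omit if no API key is configured
+
+# Start the server with the exported settings
+python -m src.seatunnel_mcp
+```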
+
+### Dynamic Connection Configuration
+
+The server provides tools to view and update connection settings at runtime:
+
+* `get-connection-settings`: View current connection URL and API key status
+* `update-connection-settings`: Update URL and/or API key to connect to a 
different SeaTunnel instance
+
+Example usage through MCP:
+
+```json
+// Get current settings
+{
+  "name": "get-connection-settings"
+}
+
+// Update connection settings
+{
+  "name": "update-connection-settings",
+  "arguments": {
+    "url": "http://new-host:8090";,
+    "api_key": "new-api-key"
+  }
+}
+```
+
+### Job Management
+
+The server provides tools to submit and manage SeaTunnel jobs:
+
+* `submit-job`: Submit a new job with job configuration
+* `submit-jobs`: Submit multiple jobs in batch
+* `stop-job`: Stop a running job
+* `get-job-info`: Get detailed information about a specific job
+* `get-running-jobs`: List all currently running jobs
+* `get-finished-jobs`: List all finished jobs by state (FINISHED, CANCELED, 
FAILED, etc.)
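+
+For example, a `stop-job` call through MCP might look like the following sketch (the argument names shown here are illustrative, not the authoritative tool schema; inspect the tool definitions exposed by the server for the exact fields):
+
+```json
+// Stop a running job, optionally taking a savepoint first (field names assumed)
+{
+  "name": "stop-job",
+  "arguments": {
+    "jobId": "887727746622816257",
+    "isStopWithSavePoint": false
+  }
+}
+```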
+
+### Running the Server
+
+```bash
+python -m src.seatunnel_mcp
+```
+
+### Usage with Claude Desktop
+
+To use this with Claude Desktop, add the following to your 
`claude_desktop_config.json`:
+
+```json
+{
+  "mcpServers": {
+    "seatunnel": {
+      "command": "python",
+      "args": ["-m", "src.seatunnel_mcp"],
+      "cwd": "Project root directory"
+    }
+  }
+}
+```
+
+### Testing with MCP Inspector
+
+```bash
+npx @modelcontextprotocol/inspector python -m src.seatunnel_mcp
+```
+
+## Available Tools
+
+### Connection Management
+
+* `get-connection-settings`: View current SeaTunnel connection URL and API key 
status
+* `update-connection-settings`: Update URL and/or API key to connect to a 
different instance
+
+### Job Management
+
+* `submit-job`: Submit a new job with configuration in HOCON format
+* `submit-job/upload`: Submit a new job by uploading a configuration file
+* `submit-jobs`: Submit multiple jobs in batch, passing the user input directly as the request body
+* `stop-job`: Stop a running job with optional savepoint
+* `get-job-info`: Get detailed information about a specific job
+* `get-running-jobs`: List all currently running jobs
+* `get-running-job`: Get details about a specific running job
+* `get-finished-jobs`: List all finished jobs by state
+
+### System Monitoring
+
+* `get-overview`: Get an overview of the SeaTunnel cluster
+* `get-system-monitoring-information`: Get detailed system monitoring 
information
+
+## Changelog
+
+### v1.2.0 (2025-06-09)
+
+**New Features in v1.2.0**
+- **SSE Support**: Added `st-mcp-sse` for real-time communication with SeaTunnel MCP via Server-Sent Events (SSE). Corresponds to the `sse` branch.
+- **UV/Studio Mode**: Added `st-mcp-uv` (or `st-mcp-studio`) to support running the MCP server with the `uv` tool for improved performance and async support. Corresponds to the `uv` branch.
+
+#### Example `claude_desktop_config.json`:
+
+```json
+{
+  "mcpServers": {
+    "st-mcp-sse": {
+      "url": "http://your-server:18080/sse";
+    },
+    "st-mcp-uv": {
+      "command": "uv",
+      "args": ["run", "seatunnel-mcp"],
+      "env": {
+        "SEATUNNEL_API_URL": "http://127.0.0.1:8080";
+      }
+    }
+  }
+}
+
+```
+
+### v1.1.0 (2025-04-10)
+
+- **New Feature**: Added the `submit-jobs` and `submit-job/upload` tools for batch job submission and file-based job submission
+  - Allows submitting multiple jobs at once with a single API call
+  - Input is passed directly as the request body to the API
+  - Supports JSON format for job configurations
+  - Allows submitting jobs from an uploaded configuration file
+
+### v1.0.0 (Initial Release)
+
+- Initial release with basic SeaTunnel integration capabilities
+- Job management tools (submit, stop, monitor)
+- System monitoring tools
+- Connection configuration utilities
+
+## License
+
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
\ No newline at end of file
diff --git a/docs/en/ecosystem/x2seatunnel.md b/docs/en/ecosystem/x2seatunnel.md
new file mode 100644
index 0000000000..c9e389ed78
--- /dev/null
+++ b/docs/en/ecosystem/x2seatunnel.md
@@ -0,0 +1,329 @@
+---
+id: x2seatunnel
+title: x2SeaTunnel
+---
+# X2SeaTunnel
+
+## Overview
+
+X2SeaTunnel is a tool for converting DataX and other configuration files into SeaTunnel configuration files, designed to help users quickly migrate from other data integration platforms to SeaTunnel. The current implementation only supports converting DataX jobs.
+
+## Related Links
+
+- X2SeaTunnel on GitHub: 
https://github.com/apache/seatunnel-tools/tree/main/x2seatunnel
+
+## 🚀 Quick Start
+
+### Prerequisites
+
+- Java 8 or higher
+
+### Installation
+
+#### Build from Source
+```bash
+# Build x2seatunnel module in this repository
+mvn clean package -pl x2seatunnel -DskipTests
+```
+After compilation, the release package will be at 
`x2seatunnel/target/x2seatunnel-*.zip`.
+
+#### Using Release Package
+```bash
+# Download and extract release package
+unzip x2seatunnel-*.zip
+cd x2seatunnel-*/
+```
+
+### Basic Usage
+
+```bash
+# Standard conversion: use the default template system with built-in common Sources and Sinks
+./bin/x2seatunnel.sh -s examples/source/datax-mysql2hdfs.json -t examples/target/mysql2hdfs-result.conf -r examples/report/mysql2hdfs-report.md
+
+# Custom task: implement customized conversion requirements through custom templates
+# Scenario: MySQL → Hive (DataX doesn't have a HiveWriter)
+# DataX configuration: MySQL → HDFS; custom task: convert it to MySQL → Hive
+./bin/x2seatunnel.sh -s examples/source/datax-mysql2hdfs2hive.json -t examples/target/mysql2hive-result.conf -r examples/report/mysql2hive-report.md -T templates/datax/custom/mysql-to-hive.conf
+
+# YAML configuration method (equivalent to the command line parameters above)
+./bin/x2seatunnel.sh -c examples/yaml/datax-mysql2hdfs2hive.yaml
+
+# Batch conversion mode: process by directory
+./bin/x2seatunnel.sh -d examples/source -o examples/target2 -R examples/report2
+
+# Batch mode supports wildcard filtering
+./bin/x2seatunnel.sh -d examples/source -o examples/target3 -R examples/report3 --pattern "*-full.json" --verbose
+
+# View help
+./bin/x2seatunnel.sh --help
+```
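+
+The `-c` option accepts a YAML file that bundles the same settings as the command line flags. A minimal sketch of such a file might look like this (the key names below simply mirror the option names and are illustrative; see `examples/yaml/` for the authoritative format):
+
+```yaml
+# Hypothetical conversion config, equivalent to passing -s/-t/-r/-T on the command line
+source: examples/source/datax-mysql2hdfs2hive.json
+target: examples/target/mysql2hive-result.conf
+report: examples/report/mysql2hive-report.md
+template: templates/datax/custom/mysql-to-hive.conf
+```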
+
+### Conversion Report
+After the conversion completes, review the generated Markdown report file, which includes:
+- **Basic Information**: Conversion time, source/target file paths, connector 
types, conversion status, etc.
+- **Conversion Statistics**: Counts and percentages of direct mappings, smart 
transformations, default values used, and unmapped fields
+- **Detailed Field Mapping Relationships**: Source values, target values, 
filters used for each field
+- **Default Value Usage**: List of all fields using default values
+- **Unmapped Fields**: Fields present in DataX but not converted
+- **Possible Error and Warning Information**: Issue prompts during conversion 
process
+
+For batch conversions, a batch summary report `summary.md` will be generated 
in the batch report directory, including:
+- **Conversion Overview**: Overall statistics, success rate, duration, etc.
+- **Successful Conversion List**: Complete list of successfully converted files
+- **Failed Conversion List**: Failed files and error messages (if any)
+
+### Log Files
+```bash
+# View log files
+tail -f logs/x2seatunnel.log
+```
+
+## 🎯 Features
+
+- ✅ **Standard Configuration Conversion**: DataX → SeaTunnel configuration 
file conversion
+- ✅ **Custom Template Conversion**: Support for user-defined conversion 
templates
+- ✅ **Detailed Conversion Reports**: Generate Markdown format conversion 
reports
+- ✅ **Regular Expression Variable Extraction**: Extract variables from 
configuration using regex, supporting custom scenarios
+- ✅ **Batch Conversion Mode**: Support directory and file wildcard batch 
conversion, automatic report and summary report generation
+
+## 📁 Directory Structure
+
+```
+x2seatunnel/
+├── bin/                        # Executable files
+│   ├── x2seatunnel.sh         # Startup script
+├── lib/                        # JAR package files
+│   └── x2seatunnel-*.jar      # Core JAR package
+├── config/                     # Configuration files
+│   └── log4j2.xml             # Log configuration
+├── templates/                  # Template files
+│   ├── template-mapping.yaml  # Template mapping configuration
+│   ├── report-template.md     # Report template
+│   └── datax/                 # DataX related templates
+│       ├── custom/            # Custom templates
+│       ├── env/               # Environment configuration templates
+│       ├── sources/           # Data source templates
+│       └── sinks/             # Data target templates
+├── examples/                   # Examples and tests
+│   ├── source/                # Example source files
+│   ├── target/                # Generated target files
+│   └── report/                # Generated reports
+├── logs/                       # Log files
+├── LICENSE                     # License
+└── README.md                   # Usage instructions
+```
+
+## 📖 Usage Instructions
+
+### Basic Syntax
+
+```bash
+x2seatunnel [OPTIONS]
+```
+
+### Command Line Parameters
+
+| Option | Long Option | Description | Required |
+|--------|-------------|-------------|----------|
+| -s | --source | Source configuration file path | Yes |
+| -t | --target | Target configuration file path | Yes |
+| -st | --source-type | Source configuration type (datax, default: datax) | No |
+| -T | --template | Custom template file path | No |
+| -r | --report | Conversion report file path | No |
+| -c | --config | YAML configuration file path, containing source, target, report, template and other settings | No |
+| -d | --directory | Batch conversion source directory | No |
+| -o | --output-dir | Batch conversion output directory | No |
+| -p | --pattern | File wildcard pattern (comma separated, e.g.: *.json,*.xml) | No |
+| -R | --report-dir | Report output directory in batch mode, individual file reports and summary.md will be output to this directory | No |
+| -v | --version | Show version information | No |
+| -h | --help | Show help information | No |
+|  | --verbose | Enable verbose log output | No |
+
+```bash
+# Example: View command line help
+./bin/x2seatunnel.sh --help
+```
+
+### Supported Configuration Types
+
+#### Source Configuration Types
+- **datax**: DataX configuration files (JSON format) - Default type
+
+#### Target Configuration Types
+- **seatunnel**: SeaTunnel configuration files (HOCON format)
+
+## 🎨 Template System
+
+### Design Philosophy
+
+X2SeaTunnel adopts a DSL (Domain Specific Language) based template system that adapts to different data sources and targets through a configuration-driven approach. Core advantages:
+
+- **Configuration-driven**: All conversion logic is defined through YAML 
configuration files, no need to modify Java code
+- **Easy to extend**: Adding new data source types only requires adding 
template files and mapping configurations
+- **Unified syntax**: Uses Jinja2-style template syntax, easy to understand 
and maintain
+- **Intelligent mapping**: Implements complex parameter mapping logic through 
transformers
+
+### Template Syntax
+
+X2SeaTunnel supports a partially Jinja2-compatible template syntax and provides a rich set of filters for handling configuration conversion.
+
+```bash
+# Basic variable reference
+{{ datax.job.content[0].reader.parameter.username }}
+
+# Variables with filters
+{{ datax.job.content[0].reader.parameter.column | join(',') }}
+
+# Chained filters
+{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) | replace('.db','') }}
+```
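+
+These expressions navigate the original DataX job JSON. For orientation, a trimmed DataX configuration that the paths above would resolve against might look like the following (illustrative only; see the complete files under `examples/source/`):
+
+```json
+{
+  "job": {
+    "content": [
+      {
+        "reader": {
+          "name": "mysqlreader",
+          "parameter": {
+            "username": "root",
+            "column": ["id", "name"],
+            "connection": [
+              { "jdbcUrl": ["jdbc:mysql://localhost:3306/test"], "table": ["orders"] }
+            ]
+          }
+        },
+        "writer": {
+          "name": "hdfswriter",
+          "parameter": { "path": "/user/hive/warehouse/test_ods.db/test_table" }
+        }
+      }
+    ]
+  }
+}
+```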
+
+### 2. Filters
+
+| Filter | Syntax | Description | Example |
+|--------|--------|-------------|---------|
+| `join` | `{{ array \| join('separator') }}` | Array join | `{{ columns \| join(',') }}` |
+| `default` | `{{ value \| default('default_value') }}` | Default value | `{{ port \| default(3306) }}` |
+| `upper` | `{{ value \| upper }}` | Uppercase conversion | `{{ name \| upper }}` |
+| `lower` | `{{ value \| lower }}` | Lowercase conversion | `{{ name \| lower }}` |
+| `split` | `{{ string \| split('/') }}` | String split | `'a/b/c' → ['a','b','c']` |
+| `get` | `{{ array \| get(0) }}` | Get array element | `['a','b','c'] → 'a'` |
+| `replace` | `{{ string \| replace('old,new') }}` | String replace | `'hello' → 'hallo'` |
+| `regex_extract` | `{{ string \| regex_extract('pattern') }}` | Regex extract | Extract matching content |
+| `jdbc_driver_mapper` | `{{ jdbcUrl \| jdbc_driver_mapper }}` | JDBC driver mapping | Auto infer driver class |
+
+### 3. Examples
+
+```bash
+# join filter: Array join
+query = "SELECT {{ datax.job.content[0].reader.parameter.column | join(',') }} 
FROM table"
+
+# default filter: Default value
+partition_column = "{{ datax.job.content[0].reader.parameter.splitPk | 
default('') }}"
+fetch_size = {{ datax.job.content[0].reader.parameter.fetchSize | 
default(1024) }}
+
+# String operations
+driver = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | 
upper }}"
+```
+
+```bash
+# Chained filters: String split and get
+{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) | replace('.db','') }}
+
+# Regular expression extraction
+{{ jdbcUrl | regex_extract('jdbc:mysql://([^:]+):') }}
+
+# Transformer call: Intelligent parameter mapping
+driver = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | jdbc_driver_mapper }}"
+```
+
+```bash
+# Intelligent query generation
+query = "{{ datax.job.content[0].reader.parameter.querySql[0] | 
default('SELECT') }} {{ datax.job.content[0].reader.parameter.column | 
join(',') }} FROM {{ 
datax.job.content[0].reader.parameter.connection[0].table[0] }} WHERE {{ 
datax.job.content[0].reader.parameter.where | default('1=1') }}"
+
+# Path intelligent parsing: Extract Hive table name from HDFS path
+# Path: /user/hive/warehouse/test_ods.db/test_table/partition=20240101
+database = "{{ datax.job.content[0].writer.parameter.path | split('/') | get(-3) | replace('.db','') }}"
+table = "{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) }}"
+table_name = "{{ database }}.{{ table }}"
+```
+
+```bash
+# Auto infer database driver
+{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | jdbc_driver_mapper }}
+
+# Mapping relationships (configured in template-mapping.yaml):
+# mysql -> com.mysql.cj.jdbc.Driver
+# postgresql -> org.postgresql.Driver
+# oracle -> oracle.jdbc.driver.OracleDriver
+# sqlserver -> com.microsoft.sqlserver.jdbc.SQLServerDriver
+```
+
+### Custom Transformers
+
+Configure custom transformers through `templates/template-mapping.yaml`:
+
+```yaml
+transformers:
+  # JDBC driver mapping
+  jdbc_driver_mapper:
+    mysql: "com.mysql.cj.jdbc.Driver"
+    postgresql: "org.postgresql.Driver"
+    oracle: "oracle.jdbc.driver.OracleDriver"
+    sqlserver: "com.microsoft.sqlserver.jdbc.SQLServerDriver"
+
+  # File format mapping
+  file_format_mapper:
+    text: "text"
+    orc: "orc"
+    parquet: "parquet"
+    json: "json"
+```
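+
+A transformer defined here can then be used in a template just like a built-in filter. For instance, the `file_format_mapper` above might be applied as follows (the DataX `fileType` field and the SeaTunnel `file_format_type` option are shown for illustration):
+
+```bash
+# Map the DataX writer's file type onto the SeaTunnel sink option
+file_format_type = "{{ datax.job.content[0].writer.parameter.fileType | file_format_mapper }}"
+```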
+
+## Extending New Data Sources
+
+Adding new data source types requires only three steps:
+
+1. **Create template files**: Create new template files under 
`templates/datax/sources/`
+2. **Configure mapping relationships**: Add mapping configurations in 
`template-mapping.yaml`
+3. **Add transformers**: If special processing is needed, add corresponding 
transformer configurations
+
+No need to modify any Java code to support new data source types.
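+
+As a rough sketch, a new source template under `templates/datax/sources/` can reuse the same expression syntax shown above (the file name and option names below are illustrative, modeled on the JDBC examples in this document):
+
+```hocon
+# templates/datax/sources/jdbc-source.conf (illustrative sketch)
+source {
+  Jdbc {
+    url = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] }}"
+    driver = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | jdbc_driver_mapper }}"
+    user = "{{ datax.job.content[0].reader.parameter.username }}"
+    password = "{{ datax.job.content[0].reader.parameter.password }}"
+    query = "SELECT {{ datax.job.content[0].reader.parameter.column | join(',') }} FROM {{ datax.job.content[0].reader.parameter.connection[0].table[0] }}"
+  }
+}
+```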
+
+## 🌐 Supported Data Sources and Targets
+
+### Data Sources (Sources)
+
+| Data Source Type | DataX Reader | Template File | Support Status |
+|------------------|-------------|---------------|----------------|
+| **MySQL** | `mysqlreader` | `mysql-source.conf` | ✅ Support |
+| **PostgreSQL** | `postgresqlreader` | `jdbc-source.conf` | ✅ Support |
+| **Oracle** | `oraclereader` | `jdbc-source.conf` | ✅ Support |
+| **SQL Server** | `sqlserverreader` | `jdbc-source.conf` | ✅ Support |
+| **HDFS** | `hdfsreader` | `hdfs-source.conf` | ✅ Support |
+
+### Data Targets (Sinks)
+
+| Data Target Type | DataX Writer | Template File | Support Status |
+|------------------|-------------|---------------|----------------|
+| **MySQL** | `mysqlwriter` | `jdbc-sink.conf` | ✅ Support |
+| **PostgreSQL** | `postgresqlwriter` | `jdbc-sink.conf` | ✅ Support |
+| **Oracle** | `oraclewriter` | `jdbc-sink.conf` | ✅ Support |
+| **SQL Server** | `sqlserverwriter` | `jdbc-sink.conf` | ✅ Support |
+| **HDFS** | `hdfswriter` | `hdfs-sink.conf` | ✅ Support |
+| **Doris** | `doriswriter` | `doris-sink.conf` | 📋 Planned |
+
+## Development Guide
+
+### Custom Configuration Templates
+
+You can customize configuration templates in the `templates/datax/custom/` 
directory, referring to the format and placeholder syntax of existing templates.
+
+### Code Structure
+
+```
+src/main/java/org/apache/seatunnel/tools/x2seatunnel/
+├── cli/                    # Command line interface
+├── core/                   # Core conversion logic
+├── template/               # Template processing
+├── utils/                  # Utility classes
+└── X2SeaTunnelApplication.java  # Main application class
+```
+
+### Changelog
+
+#### v1.0.0-SNAPSHOT (Current Version)
+- ✅ **Core Features**: Support for basic DataX to SeaTunnel configuration 
conversion
+- ✅ **Template System**: Jinja2-style DSL template language with 
configuration-driven extension support
+- ✅ **Unified JDBC Support**: MySQL, PostgreSQL, Oracle, SQL Server and other 
relational databases
+- ✅ **Intelligent Features**:
+  - Auto driver mapping (infer database driver based on jdbcUrl)
+  - Intelligent query generation (auto-generate SELECT statements based on 
column, table, where)
+  - Auto parameter mapping (splitPk→partition_column, fetchSize→fetch_size, 
etc.)
+- ✅ **Template Syntax**:
+  - Basic variable access: `{{ datax.path.to.value }}`
+  - Filter support: `{{ array | join(',') }}`, `{{ value | default('default') 
}}`
+  - Custom transformers: `{{ url | jdbc_driver_mapper }}`
+- ✅ **Batch Processing**: Support directory-level batch conversion and report 
generation
+- ✅ **Complete Examples**: Complete DataX configuration examples for 4 JDBC 
data sources
+- ✅ **Comprehensive Documentation**: Complete usage instructions and API 
documentation
\ No newline at end of file
diff --git a/docs/images/seatunnel-mcp-logo.png 
b/docs/images/seatunnel-mcp-logo.png
new file mode 100644
index 0000000000..ee33653081
Binary files /dev/null and b/docs/images/seatunnel-mcp-logo.png differ
diff --git a/docs/images/seatunnel-mcp-server.png 
b/docs/images/seatunnel-mcp-server.png
new file mode 100644
index 0000000000..dbc6a4cb39
Binary files /dev/null and b/docs/images/seatunnel-mcp-server.png differ
diff --git a/docs/sidebars.js b/docs/sidebars.js
index c0640c044d..993f222fb9 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -235,6 +235,14 @@ const sidebars = {
                 "other-engine/spark"
             ]
         },
+        {
+            "type": "category",
+            "label": "Ecosystem Projects",
+            "items": [
+                "ecosystem/x2seatunnel",
+                "ecosystem/seatunnel-mcp"
+            ]
+        },
         {
             type: 'category',
             label: 'Contribution',
diff --git a/docs/zh/ecosystem/seatunnel-mcp.md 
b/docs/zh/ecosystem/seatunnel-mcp.md
new file mode 100644
index 0000000000..cdd0bff1f3
--- /dev/null
+++ b/docs/zh/ecosystem/seatunnel-mcp.md
@@ -0,0 +1,253 @@
+---
+id: seatunnel-mcp
+title: SeaTunnel MCP
+---
+
+# SeaTunnel MCP 服务器
+
+SeaTunnel MCP(Model Context Protocol)服务器,提供与大型语言模型(如 Claude)交互的能力,使其能够操作 
SeaTunnel 任务。
+
+![SeaTunnel MCP Logo](../images/seatunnel-mcp-logo.png)
+
+![SeaTunnel MCP Server](../images/seatunnel-mcp-server.png)
+
+## 相关链接
+
+- GitHub 仓库:https://github.com/apache/seatunnel-tools/tree/main/seatunnel-mcp
+
+## 操作视频
+
+为了帮助您更好地了解 SeaTunnel MCP 的功能和使用方法,我们提供了一段操作视频演示。请参考以下链接或直接在项目文档目录中查看视频文件。
+
+https://www.bilibili.com/video/BV1UXZgY8EqS
+
+> **提示**:如果视频无法直接播放,请确保您的设备支持 MP4 格式,并尝试使用现代浏览器或视频播放器打开。
+
+
+## 功能特点
+
+* **作业管理**:提交、停止、监控 SeaTunnel 作业
+* **系统监控**:获取集群概览和详细的系统监控信息
+* **REST API 交互**:与 SeaTunnel 服务进行无缝交互
+* **内置日志和监控工具**:全面的日志和监控功能
+* **动态连接配置**:能够在运行时切换不同的 SeaTunnel 实例
+* **全面的作业信息**:提供详细的作业运行状态和统计数据
+
+## 安装
+
+```bash
+# 克隆仓库
+git clone https://github.com/apache/seatunnel-tools.git
+cd seatunnel-mcp
+
+# 创建虚拟环境并安装
+python -m venv .venv
+source .venv/bin/activate  # Windows 系统: .venv\Scripts\activate
+pip install -e .
+
+# 或者直接使用提供的脚本
+./run.sh  # Linux/Mac
+run.bat   # Windows
+```
+
+## 系统要求
+
+* Python ≥ 3.12
+* 运行中的 SeaTunnel 实例
+* Node.js(用于 MCP Inspector 测试)
+
+## 使用方法
+
+### 环境变量配置
+
+```
+SEATUNNEL_API_URL=http://localhost:8090  # 默认 SeaTunnel REST API URL
+SEATUNNEL_API_KEY=your_api_key           # 可选:默认 SeaTunnel API 密钥
+```
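+
+例如,可以在启动服务器之前在 shell 中导出这些变量(以下取值仅为示例):
+
+```bash
+# 将 MCP 服务器指向本地 SeaTunnel REST API(示例值)
+export SEATUNNEL_API_URL=http://localhost:8090
+export SEATUNNEL_API_KEY=your_api_key   # 可选;未配置 API 密钥时可省略
+
+# 使用上述配置启动服务器
+python -m src.seatunnel_mcp
+```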
+
+### 命令行工具
+
+SeaTunnel MCP 提供了命令行工具,方便启动和配置服务器:
+
+```bash
+# 显示帮助信息
+seatunnel-mcp --help
+
+# 初始化环境配置文件
+seatunnel-mcp init
+
+# 运行 MCP 服务器
+seatunnel-mcp run --api-url http://your-seatunnel:8090
+
+# 为 Claude Desktop 配置 MCP 服务器
+seatunnel-mcp configure-claude
+```
+
+### 动态连接配置
+
+服务器提供了工具来查看和更新运行时的连接设置:
+
+* `get-connection-settings`:查看当前连接 URL 和 API 密钥状态
+* `update-connection-settings`:更新 URL 和/或 API 密钥以连接到不同的 SeaTunnel 实例
+
+MCP 使用示例:
+
+```json
+// 获取当前设置
+{
+  "name": "get-connection-settings"
+}
+
+// 更新连接设置
+{
+  "name": "update-connection-settings",
+  "arguments": {
+    "url": "http://new-host:8090";,
+    "api_key": "new-api-key"
+  }
+}
+```
+
+### 作业管理
+
+服务器提供工具来提交和管理 SeaTunnel 作业:
+
+* `submit-job`:提交新作业
+* `submit-jobs`:批量提交多个作业
+* `stop-job`:停止运行中的作业
+* `get-job-info`:获取特定作业的详细信息
+* `get-running-jobs`:列出所有正在运行的作业
+* `get-running-job`:获取特定运行中作业的详情
+* `get-finished-jobs`:按状态列出已完成的作业
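+
+例如,通过 MCP 调用 `stop-job` 的请求大致如下(此处的参数名仅作示意,并非权威的工具 schema,请以服务器实际暴露的工具定义为准):
+
+```json
+// 停止正在运行的作业,可选择先创建保存点(字段名为假设值)
+{
+  "name": "stop-job",
+  "arguments": {
+    "jobId": "887727746622816257",
+    "isStopWithSavePoint": false
+  }
+}
+```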
+
+### 运行服务器
+
+```bash
+python -m src.seatunnel_mcp
+# 或者使用命令行工具
+seatunnel-mcp run
+```
+
+### 与 Claude Desktop 集成
+
+要在 Claude Desktop 中使用,请在 `claude_desktop_config.json` 中添加以下内容:
+
+```json
+{
+  "mcpServers": {
+    "seatunnel": {
+      "command": "python",
+      "args": ["-m", "src.seatunnel_mcp"], 
+       "cwd": "Project root directory"
+    }
+  }
+}
+```
+
+### 使用 MCP Inspector 测试
+
+```bash
+npx @modelcontextprotocol/inspector python -m src.seatunnel_mcp
+```
+
+## 可用工具
+
+### 连接管理
+
+* `get-connection-settings`:查看当前 SeaTunnel 连接 URL 和 API 密钥状态
+* `update-connection-settings`:更新 URL 和/或 API 密钥以连接到不同实例
+
+### 作业管理
+
+* `submit-job`:提交新作业
+* `submit-jobs`:批量提交多个作业,直接将用户输入作为请求体传递
+* `submit-job/upload`:通过上传配置文件提交作业
+* `stop-job`:停止运行中的作业
+* `get-job-info`:获取特定作业的详细信息
+* `get-running-jobs`:列出所有正在运行的作业
+* `get-running-job`:获取特定运行中作业的详情
+* `get-finished-jobs`:按状态列出已完成的作业
+
+### 系统监控
+
+* `get-overview`:获取 SeaTunnel 集群概览
+* `get-system-monitoring-information`:获取详细的系统监控信息
+
+
+
+## 开发
+
+如果您想为项目贡献代码:
+
+1. 克隆仓库并设置开发环境:
+   ```bash
+   python -m venv .venv
+   source .venv/bin/activate
+   pip install -e ".[dev]"
+   ```
+
+2. 安装预提交钩子:
+   ```bash
+   pip install pre-commit
+   pre-commit install
+   ```
+
+3. 运行测试:
+   ```bash
+   pytest -xvs tests/
+   ```
+
+详细的开发指南请参阅 [开发者指南](docs/DEVELOPER_GUIDE.md)。
+
+## 贡献
+
+1. Fork 仓库
+2. 创建功能分支
+3. 提交变更
+4. 创建 Pull Request
+
+## 更新日志
+
+### v1.2.0 (2025-06-09)
+**v1.2.0 新功能**
+
+- **SSE 实时通信**:新增 `st-mcp-sse`,支持通过 Server-Sent Events(SSE)协议与 SeaTunnel MCP 实现实时数据推送。对应 `sse` 分支。
+- **UV/Studio 模式**:新增 `st-mcp-uv`(或 `st-mcp-studio`),支持通过 `uv` 工具运行 MCP 服务器,提升异步和高性能场景下的运行效率。对应 `uv` 分支。
+
+#### `claude_desktop_config.json` 配置示例:
+
+```json
+{
+  "mcpServers": {
+    "st-mcp-sse": {
+      "url": "http://your-server:18080/sse";
+    },
+    "st-mcp-uv": {
+      "command": "uv",
+      "args": ["run", "seatunnel-mcp"],
+      "env": {
+        "SEATUNNEL_API_URL": "http://127.0.0.1:8080";
+      }
+    }
+  }
+}
+```
+
+### v1.1.0 (2025-04-10)
+
+- **新功能**:添加了 `submit-jobs` 和 `submit-job/upload` 工具,用于批量提交作业和通过文件提交作业
+  - 允许通过单个 API 调用同时提交多个作业
+  - 用户输入直接作为请求体传递给 API
+  - 支持 JSON 格式的作业配置
+  - 允许根据文件提交作业
+
+### v1.0.0 (初始版本)
+
+- 初始版本,具备基本的 SeaTunnel 集成能力
+- 作业管理工具(提交、停止、监控)
+- 系统监控工具
+- 连接配置实用工具
+
+## 许可证
+
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
\ No newline at end of file
diff --git a/docs/zh/ecosystem/x2seatunnel.md b/docs/zh/ecosystem/x2seatunnel.md
new file mode 100644
index 0000000000..661dc1b229
--- /dev/null
+++ b/docs/zh/ecosystem/x2seatunnel.md
@@ -0,0 +1,387 @@
+---
+id: x2seatunnel
+title: x2SeaTunnel
+---
+# X2SeaTunnel
+
+## 概览
+
+X2SeaTunnel 是一个用于将 DataX 等配置文件转换为 SeaTunnel 配置文件的工具,旨在帮助用户快速从其它数据集成平台迁移到 
SeaTunnel。当前的实现只支持DataX任务的转换。
+
+## 相关链接
+
+- GitHub 仓库:https://github.com/apache/seatunnel-tools/tree/main/x2seatunnel
+
+## 🚀 快速开始
+
+### 前置条件
+
+- Java 8 或更高版本
+
+### 安装
+
+#### 从源码编译
+```bash
+# 在本仓库内编译 x2seatunnel 模块
+mvn clean package -pl x2seatunnel -DskipTests
+```
+编译结束后,发布包位于 `x2seatunnel/target/x2seatunnel-*.zip`。
+
+#### 使用发布包
+```bash
+# 下载并解压发布包
+unzip x2seatunnel-*.zip
+cd x2seatunnel-*/
+```
+
+### 基本用法
+
+```bash
+# 标准转换:使用默认模板系统,内置常见的 Source 和 Sink
+./bin/x2seatunnel.sh -s examples/source/datax-mysql2hdfs.json -t examples/target/mysql2hdfs-result.conf -r examples/report/mysql2hdfs-report.md
+
+# 自定义任务:通过自定义模板实现定制化转换需求
+# 场景:MySQL → Hive(DataX 没有 HiveWriter)
+# DataX 配置:MySQL → HDFS;自定义任务:转换为 MySQL → Hive
+./bin/x2seatunnel.sh -s examples/source/datax-mysql2hdfs2hive.json -t examples/target/mysql2hive-result.conf -r examples/report/mysql2hive-report.md -T templates/datax/custom/mysql-to-hive.conf
+
+# YAML 配置方式(等效于上述命令行参数)
+./bin/x2seatunnel.sh -c examples/yaml/datax-mysql2hdfs2hive.yaml
+
+# 批量转换模式:按目录处理
+./bin/x2seatunnel.sh -d examples/source -o examples/target2 -R examples/report2
+
+# 批量模式支持通配符过滤
+./bin/x2seatunnel.sh -d examples/source -o examples/target3 -R examples/report3 --pattern "*-full.json" --verbose
+
+# 查看帮助
+./bin/x2seatunnel.sh --help
+```
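+
+`-c` 选项接受一个 YAML 文件,用于集中描述与命令行参数相同的设置。下面是一个示意性的最小示例(键名与命令行选项同名,仅供参考,权威格式请见 `examples/yaml/`):
+
+```yaml
+# 假设的转换配置,等效于在命令行传入 -s/-t/-r/-T
+source: examples/source/datax-mysql2hdfs2hive.json
+target: examples/target/mysql2hive-result.conf
+report: examples/report/mysql2hive-report.md
+template: templates/datax/custom/mysql-to-hive.conf
+```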
+
+### 转换报告
+转换完成后,查看生成的Markdown报告文件,包含:
+- **基本信息**: 转换时间、源/目标文件路径、连接器类型、转换状态等
+- **转换统计**: 直接映射、智能转换、默认值使用、未映射字段的数量和百分比
+- **详细字段映射关系**: 每个字段的源值、目标值、使用的过滤器等
+- **默认值使用情况**: 列出所有使用默认值的字段
+- **未映射字段**: 显示DataX中存在但未转换的字段
+- **可能的错误和警告信息**: 转换过程中的问题提示
+
+如果是批量转换,则会在批量生成转换报告的文件夹下,生成批量汇总报告 `summary.md`,包含:
+- **转换概览**: 总体统计信息、成功率、耗时等
+- **成功转换列表**: 所有成功转换的文件清单
+- **失败转换列表**: 失败的文件及错误信息(如有)
+
+
+### 日志文件
+```bash
+# 查看日志文件
+tail -f logs/x2seatunnel.log
+```
+
+
+## 🎯 功能特性
+
+- ✅ **标准配置转换**: DataX → SeaTunnel 配置文件转换
+- ✅ **自定义模板转换**: 支持用户自定义转换模板
+- ✅ **详细转换报告**: 生成 Markdown 格式的转换报告
+- ✅ **支持正则表达式变量提取**: 从配置中正则提取变量,支持自定义场景
+- ✅ **批量转换模式**: 支持目录和文件通配符批量转换,自动生成报告和汇总报告
+
+## 📁 目录结构
+
+```
+x2seatunnel/
+├── bin/                        # 可执行文件
+│   ├── x2seatunnel.sh         # 启动脚本
+├── lib/                        # JAR包文件
+│   └── x2seatunnel-*.jar      # 核心JAR包
+├── config/                     # 配置文件
+│   └── log4j2.xml             # 日志配置
+├── templates/                  # 模板文件
+│   ├── template-mapping.yaml  # 模板映射配置
+│   ├── report-template.md     # 报告模板
+│   └── datax/                 # DataX相关模板
+│       ├── custom/            # 自定义模板
+│       ├── env/               # 环境配置模板
+│       ├── sources/           # 数据源模板
+│       └── sinks/             # 数据目标模板
+├── examples/                   # 示例和测试
+│   ├── source/                # 示例源文件
+│   ├── target/                # 生成的目标文件
+│   └── report/                # 生成的报告
+├── logs/                       # 日志文件
+├── LICENSE                     # 许可证
+└── README.md                   # 使用说明
+```
+
+## 📖 使用说明
+
+### 基本语法
+
+```bash
+x2seatunnel [OPTIONS]
+```
+
+### 命令行参数
+
+| 选项 | 长选项 | 描述 | 必需 |
+|------|--------|------|------|
+| -s | --source | 源配置文件路径 | 是 |
+| -t | --target | 目标配置文件路径 | 是 |
+| -st | --source-type | 源配置类型 (datax, 默认: datax) | 否 |
+| -T | --template | 自定义模板文件路径 | 否 |
+| -r | --report | 转换报告文件路径 | 否 |
+| -c | --config | YAML 配置文件路径,包含 source, target, report, template 等设置 | 否 |
+| -d | --directory | 批量转换源目录 | 否 |
+| -o | --output-dir | 批量转换输出目录 | 否 |
+| -p | --pattern | 文件通配符模式(逗号分隔,例如: *.json,*.xml) | 否 |
+| -R | --report-dir | 批量模式下报告输出目录,单文件报告和汇总 summary.md 将输出到该目录 | 否 |
+| -v | --version | 显示版本信息 | 否 |
+| -h | --help | 显示帮助信息 | 否 |
+|  | --verbose | 启用详细日志输出 | 否 |
+
+```bash
+# 示例:查看命令行帮助
+./bin/x2seatunnel.sh --help
+```
+
+### 支持的配置类型
+
+#### 源配置类型
+- **datax**: DataX配置文件(JSON格式)- 默认类型
+
+#### 目标配置类型
+- **seatunnel**: SeaTunnel配置文件(HOCON格式)
+
+## 🎨 模板系统
+
+### 设计理念
+
+X2SeaTunnel 采用基于 DSL (Domain Specific Language) 
的模板系统,通过配置驱动的方式实现不同数据源和目标的快速适配。核心优势:
+
+- **配置驱动**:所有转换逻辑都通过 YAML 配置文件定义,无需修改 Java 代码
+- **易于扩展**:新增数据源类型只需添加模板文件和映射配置
+- **统一语法**:使用 Jinja2 风格的模板语法,易于理解和维护
+- **智能映射**:通过转换器(transformer)实现复杂的参数映射逻辑
+
+### 模板语法
+
+X2SeaTunnel 支持部分兼容 Jinja2 风格模板语法,提供丰富的过滤器功能来处理配置转换。
+
+```bash
+# 基本变量引用
+{{ datax.job.content[0].reader.parameter.username }}
+
+# 带过滤器的变量
+{{ datax.job.content[0].reader.parameter.column | join(',') }}
+
+# 链式过滤器
+{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) | replace('.db','') }}
+```
+
+
+### 2. 过滤器
+
+| 过滤器 | 语法 | 描述 | 示例 |
+|--------|------|------|------|
+| `join` | `{{ array \| join('分隔符') }}` | 数组连接 | `{{ columns \| join(',') }}` |
+| `default` | `{{ value \| default('默认值') }}` | 默认值 | `{{ port \| default(3306) }}` |
+| `upper` | `{{ value \| upper }}` | 大写转换 | `{{ name \| upper }}` |
+| `lower` | `{{ value \| lower }}` | 小写转换 | `{{ name \| lower }}` |
+| `split` | `{{ string \| split('/') }}` | 字符串分割 | `'a/b/c' → ['a','b','c']` |
+| `get` | `{{ array \| get(0) }}` | 获取数组元素 | `['a','b','c'] → 'a'` |
+| `replace` | `{{ string \| replace('old,new') }}` | 字符串替换 | `'hello' → 'hallo'` |
+| `regex_extract` | `{{ string \| regex_extract('pattern') }}` | 正则提取 | 提取匹配的内容 |
+| `jdbc_driver_mapper` | `{{ jdbcUrl \| jdbc_driver_mapper }}` | JDBC 驱动映射 | 自动推断驱动类 |
+
+### 3. 样例
+
+```bash
+# join 过滤器:数组连接
+query = "SELECT {{ datax.job.content[0].reader.parameter.column | join(',') }} 
FROM table"
+
+# default 过滤器:默认值
+partition_column = "{{ datax.job.content[0].reader.parameter.splitPk | 
default('') }}"
+fetch_size = {{ datax.job.content[0].reader.parameter.fetchSize | 
default(1024) }}
+
+# 字符串操作
+driver = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | 
upper }}"
+```
+
+```bash
+# 链式过滤器:字符串分割和获取
+{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) | replace('.db','') }}
+
+# 正则表达式提取
+{{ jdbcUrl | regex_extract('jdbc:mysql://([^:]+):') }}
+
+# 转换器调用:智能参数映射
+driver = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | jdbc_driver_mapper }}"
+```
+
+```bash
+# 智能查询生成
+query = "{{ datax.job.content[0].reader.parameter.querySql[0] | 
default('SELECT') }} {{ datax.job.content[0].reader.parameter.column | 
join(',') }} FROM {{ 
datax.job.content[0].reader.parameter.connection[0].table[0] }} WHERE {{ 
datax.job.content[0].reader.parameter.where | default('1=1') }}"
+
+# 路径智能解析:从 HDFS 路径提取 Hive 表名
+# 路径: /user/hive/warehouse/test_ods.db/test_table/partition=20240101
+database = "{{ datax.job.content[0].writer.parameter.path | split('/') | get(-3) | replace('.db','') }}"
+table = "{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) }}"
+table_name = "{{ database }}.{{ table }}"
+```
+
+```bash
+# 自动推断数据库驱动
+{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] | jdbc_driver_mapper }}
+
+# 映射关系(在 template-mapping.yaml 中配置):
+# mysql -> com.mysql.cj.jdbc.Driver
+# postgresql -> org.postgresql.Driver
+# oracle -> oracle.jdbc.driver.OracleDriver
+# sqlserver -> com.microsoft.sqlserver.jdbc.SQLServerDriver
+```
+
+### 4. 模板配置示例
+
+```hocon
+env {
+  execution.parallelism = {{ datax.job.setting.speed.channel | default(1) }}
+  job.mode = "BATCH"
+}
+
+source {
+  Jdbc {
+    url = "{{ datax.job.content[0].reader.parameter.connection[0].jdbcUrl[0] }}"
+    driver = "com.mysql.cj.jdbc.Driver"
+    user = "{{ datax.job.content[0].reader.parameter.username }}"
+    password = "{{ datax.job.content[0].reader.parameter.password }}"
+    query = "{{ datax.job.content[0].reader.parameter.querySql[0] | default('SELECT') }} {{ datax.job.content[0].reader.parameter.column | join(',') }} FROM {{ datax.job.content[0].reader.parameter.connection[0].table[0] }}"
+    plugin_output = "source_table"
+  }
+}
+
+sink {
+  Hive {
+    # 从路径智能提取 Hive 表名
+    # 使用 split 和 get 过滤器来提取数据库名和表名
+    # 步骤1:分割路径
+    # 步骤2:获取倒数第三个部分作为数据库名,去掉 .db 后缀
+    # 步骤3:获取倒数第二个部分作为表名
+    table_name = "{{ datax.job.content[0].writer.parameter.path | split('/') | get(-3) | replace('.db','') }}.{{ datax.job.content[0].writer.parameter.path | split('/') | get(-2) }}"
+
+    # Hive Metastore配置
+    metastore_uri = "{{ datax.job.content[0].writer.parameter.metastoreUri | default('thrift://localhost:9083') }}"
+    
+    # 压缩配置
+    compress_codec = "{{ datax.job.content[0].writer.parameter.compress | default('none') }}"
+    
+    # Hadoop配置文件路径(可选)
+    # hdfs_site_path = "/etc/hadoop/conf/hdfs-site.xml"
+    # hive_site_path = "/etc/hadoop/conf/hive-site.xml"
+    
+    # Hadoop配置(可选)
+    # hive.hadoop.conf = {
+    #   "fs.defaultFS" = "{{ datax.job.content[0].writer.parameter.defaultFS | 
default('hdfs://localhost:9000') }}"
+    # }
+    
+    # 结果表名
+    plugin_input = "source_table"
+  }
+}
+```
+
+### 自定义转换器
+
+通过 `templates/template-mapping.yaml` 配置自定义转换器:
+
+```yaml
+transformers:
+  # JDBC 驱动映射
+  jdbc_driver_mapper:
+    mysql: "com.mysql.cj.jdbc.Driver"
+    postgresql: "org.postgresql.Driver"
+    oracle: "oracle.jdbc.driver.OracleDriver"
+    sqlserver: "com.microsoft.sqlserver.jdbc.SQLServerDriver"
+  
+  # 文件格式映射
+  file_format_mapper:
+    text: "text"
+    orc: "orc"
+    parquet: "parquet"
+    json: "json"
+```
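+
+在此定义的转换器可以像内置过滤器一样在模板中使用,例如上面的 `file_format_mapper` 可以这样调用(DataX 的 `fileType` 字段与 SeaTunnel 的 `file_format_type` 选项仅作示意):
+
+```bash
+# 将 DataX writer 的文件类型映射为 SeaTunnel sink 选项
+file_format_type = "{{ datax.job.content[0].writer.parameter.fileType | file_format_mapper }}"
+```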
+
+## 扩展新数据源
+
+添加新数据源类型只需三步:
+
+1. **创建模板文件**:在 `templates/datax/sources/` 下创建新的模板文件
+2. **配置映射关系**:在 `template-mapping.yaml` 中添加映射配置
+3. **添加转换器**:如需特殊处理,添加对应的转换器配置
+
+无需修改任何 Java 代码,即可支持新的数据源类型。
+
+
+## 🌐 支持的数据源和目标
+
+### 数据源(Sources)
+
+| 数据源类型 | DataX Reader | 模板文件 | 支持状态 |
+|-----------|-------------|----------|----------|
+| **MySQL** | `mysqlreader` | `mysql-source.conf` | ✅ 支持 |
+| **PostgreSQL** | `postgresqlreader` | `jdbc-source.conf` | ✅ 支持 |
+| **Oracle** | `oraclereader` | `jdbc-source.conf` | ✅ 支持 |
+| **SQL Server** | `sqlserverreader` | `jdbc-source.conf` | ✅ 支持 |
+| **HDFS** | `hdfsreader` | `hdfs-source.conf` | ✅ 支持 |
+
+### 数据目标(Sinks)
+
+| 数据目标类型 | DataX Writer | 模板文件 | 支持状态 |
+|-------------|-------------|----------|----------|
+| **MySQL** | `mysqlwriter` | `jdbc-sink.conf` | ✅ 支持 |
+| **PostgreSQL** | `postgresqlwriter` | `jdbc-sink.conf` | ✅ 支持 |
+| **Oracle** | `oraclewriter` | `jdbc-sink.conf` | ✅ 支持 |
+| **SQL Server** | `sqlserverwriter` | `jdbc-sink.conf` | ✅ 支持 |
+| **HDFS** | `hdfswriter` | `hdfs-sink.conf` | ✅ 支持 |
+
+
+## 开发指南
+### 自定义配置模板
+
+可以在 `templates/datax/custom/` 目录下自定义配置模板,参考现有模板的格式和占位符语法。
+
+### 代码结构
+
+```
+src/main/java/org/apache/seatunnel/tools/x2seatunnel/
+├── cli/                    # 命令行界面
+├── core/                   # 核心转换逻辑
+├── template/               # 模板处理
+├── utils/                  # 工具类
+└── X2SeaTunnelApplication.java  # 主应用类
+```
+
+### 限制和注意事项
+#### 版本兼容性
+- 支持 DataX 主流版本的配置格式
+- 生成的配置兼容 SeaTunnel 2.3.11+ 版本,与旧版本的差异通常不大
+- 模板系统向后兼容
+
+### 更新日志
+
+#### v1.0.0-SNAPSHOT (当前版本)
+- ✅ **核心功能**:支持DataX到SeaTunnel的基础配置转换
+- ✅ **模板系统**:基于Jinja2风格的DSL模板语言,支持配置驱动扩展
+- ✅ **JDBC统一支持**:MySQL、PostgreSQL、Oracle、SQL Server等关系型数据库
+- ✅ **智能特性**:
+  - 自动驱动映射(根据jdbcUrl推断数据库驱动)
+  - 智能查询生成(根据column、table、where自动拼接SELECT语句)
+  - 参数自动映射(splitPk→partition_column、fetchSize→fetch_size等)
+- ✅ **模板语法**:
+  - 基础变量访问:`{{ datax.path.to.value }}`
+  - 过滤器支持:`{{ array | join(',') }}`、`{{ value | default('default') }}`
+  - 自定义转换器:`{{ url | jdbc_driver_mapper }}`
+- ✅ **批量处理**:支持目录级别的批量转换和报告生成
+- ✅ **完整示例**:提供4种JDBC数据源的完整DataX配置样例
+- ✅ **详细文档**:完整的使用说明和API文档
\ No newline at end of file

