This is an automated email from the ASF dual-hosted git repository.

jin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-hugegraph-doc.git


The following commit(s) were added to refs/heads/master by this push:
     new 1e297a2f docs: refactor docs of loader & client for new version(1.7.0) (#415)
1e297a2f is described below

commit 1e297a2f28b6b3d9349518991048fc94426bc325
Author: Duoduo Wang <[email protected]>
AuthorDate: Mon Dec 1 15:38:17 2025 +0800

    docs: refactor docs of loader & client for new version(1.7.0) (#415)
    
    * fixed mvn version to 1.7.0
    added graphspace part for docs of client
    changed client examples to NEWER version
    fixed parameters in loader docs
---
 content/cn/docs/clients/hugegraph-client.md        | 39 +++++++++++++-
 .../cn/docs/quickstart/client/hugegraph-client.md  | 15 ++++--
 .../docs/quickstart/toolchain/hugegraph-loader.md  | 39 +++++++++++---
 content/en/docs/clients/hugegraph-client.md        | 39 +++++++++++++-
 .../en/docs/quickstart/client/hugegraph-client.md  | 14 +++--
 .../docs/quickstart/toolchain/hugegraph-loader.md  | 61 +++++++++++++++-------
 6 files changed, 168 insertions(+), 39 deletions(-)

diff --git a/content/cn/docs/clients/hugegraph-client.md b/content/cn/docs/clients/hugegraph-client.md
index 1a9e1daa..4b51ed72 100644
--- a/content/cn/docs/clients/hugegraph-client.md
+++ b/content/cn/docs/clients/hugegraph-client.md
@@ -13,12 +13,13 @@ weight: 2
 
 HugeGraph-Client 是操作 graph 的总入口,用户必须先创建出 HugeGraph-Client 对象,与 HugeGraph-Server 建立连接(伪连接)后,才能获取到 schema、graph 以及 gremlin 的操作入口对象。
 
-目前 HugeGraph-Client 只允许连接服务端已存在的图,无法自定义图进行创建。其创建方法如下:
+目前 HugeGraph-Client 只允许连接服务端已存在的图,无法自定义图进行创建。1.7.0 版本后,client 支持 graphSpace 设置,默认为 DEFAULT。其创建方法如下:
 
 ```java
 // HugeGraphServer 地址:"http://localhost:8080"
 // 图的名称:"hugegraph"
 HugeClient hugeClient = HugeClient.builder("http://localhost:8080", "hugegraph")
+                                //.builder("http://localhost:8080", "graphSpaceName", "hugegraph")
                                   .configTimeout(20) // 默认 20s 超时
                                   .configUser("**", "**") // 默认未开启用户权限
                                   .build();
@@ -455,6 +456,40 @@ Edge knows1 = marko.addEdge("knows", vadas, "city", "Beijing");
 
 **注意:当 frequency 为 multiple 时必须要设置 sortKeys 对应属性类型的值。**
 
-### 4 简单示例
+### 4 图管理
+client 支持一个物理部署中多个 GraphSpace,每个 GraphSpace 下可以包含多个图(graph)。
+- 兼容:不指定 GraphSpace 时,默认使用 "DEFAULT" 空间
+
+#### 4.1 创建 GraphSpace
+
+```java
+GraphSpaceManager spaceManager = hugeClient.graphSpace();
+
+// 定义 GraphSpace 配置
+GraphSpace graphSpace = new GraphSpace();
+graphSpace.setName("myGraphSpace");
+graphSpace.setDescription("Business data graph space");
+graphSpace.setMaxGraphNumber(10);  // 最大图数量
+graphSpace.setMaxRoleNumber(100);  // 最大角色数量
+
+// 创建 GraphSpace
+spaceManager.createGraphSpace(graphSpace);
+```
+#### 4.2 GraphSpace 接口汇总
+
+| 类别 | 接口 | 描述 |
+|------|------|------|
+| Manager - 查询 | listGraphSpace() | 获取所有 GraphSpace 列表 |
+| | getGraphSpace(String name) | 获取指定 GraphSpace |
+| Manager - 创建/更新 | createGraphSpace(GraphSpace) | 创建 GraphSpace |
+| | updateGraphSpace(String, GraphSpace) | 更新配置 |
+| Manager - 删除 | removeGraphSpace(String) | 删除指定 GraphSpace |
+| GraphSpace - 属性 | getName() / getDescription() | 获取名称/描述 |
+| | getGraphNumber() | 获取图数量 |
+| GraphSpace - 配置 | setDescription(String) | 设置描述 |
+| | setMaxGraphNumber(int) | 设置最大图数量 |
+
+
+### 5 简单示例
 
 简单示例见[HugeGraph-Client](/cn/docs/quickstart/client/hugegraph-client)
diff --git a/content/cn/docs/quickstart/client/hugegraph-client.md b/content/cn/docs/quickstart/client/hugegraph-client.md
index 9d216531..9322aabf 100644
--- a/content/cn/docs/quickstart/client/hugegraph-client.md
+++ b/content/cn/docs/quickstart/client/hugegraph-client.md
@@ -48,7 +48,7 @@ weight: 1
         <groupId>org.apache.hugegraph</groupId>
         <artifactId>hugegraph-client</artifactId>
         <!-- Update to the latest release version -->
-        <version>1.5.0</version>
+        <version>1.7.0</version>
     </dependency>
 </dependencies>
 ```
@@ -79,7 +79,10 @@ public class SingleExample {
     public static void main(String[] args) throws IOException {
         // If connect failed will throw an exception.
         HugeClient hugeClient = HugeClient.builder("http://localhost:8080",
+                                                   "DEFAULT",
                                                    "hugegraph")
+                                          .configUser("username", "password")
+                                          // 这是示例,生产环境需要使用安全的凭证
                                           .build();
 
         SchemaManager schema = hugeClient.schema();
@@ -224,7 +227,10 @@ public class BatchExample {
     public static void main(String[] args) {
         // If connect failed will throw an exception.
         HugeClient hugeClient = HugeClient.builder("http://localhost:8080",
-                                                   "hugegraph")
+                                                   "DEFAULT",
+                                                   "hugegraph")
+                                          .configUser("username", "password")
+                                          // 这是示例,生产环境需要使用安全的凭证
                                           .build();
 
         SchemaManager schema = hugeClient.schema();
@@ -348,12 +354,11 @@ public class BatchExample {
 }
 ```
 
-### 4.4 运行 Example
+#### 4.4 运行 Example
 
 运行 Example 之前需要启动 Server,
 启动过程见[HugeGraph-Server Quick Start](/cn/docs/quickstart/hugegraph-server)
 
-### 4.5 详细 API 说明
+#### 4.5 详细 API 说明
 
 示例说明见[HugeGraph-Client 基本 API 介绍](/cn/docs/clients/hugegraph-client)
-
diff --git a/content/cn/docs/quickstart/toolchain/hugegraph-loader.md b/content/cn/docs/quickstart/toolchain/hugegraph-loader.md
index 0ea2b729..9b088e4c 100644
--- a/content/cn/docs/quickstart/toolchain/hugegraph-loader.md
+++ b/content/cn/docs/quickstart/toolchain/hugegraph-loader.md
@@ -605,7 +605,7 @@ bin/mapping-convert.sh struct.json
 
 ##### 3.3.2 输入源
 
-输入源目前分为四类:FILE、HDFS、JDBC、KAFKA,由`type`节点区分,我们称为本地文件输入源、HDFS 输入源、JDBC 输入源和 KAFKA 输入源,下面分别介绍。
+输入源目前分为五类:FILE、HDFS、JDBC、KAFKA 和 GRAPH,由`type`节点区分,我们称为本地文件输入源、HDFS 输入源、JDBC 输入源、KAFKA 输入源和图输入源,下面分别介绍。
 
 ###### 3.3.2.1 本地文件输入源
 
@@ -709,6 +709,22 @@ schema: 必填
 - skipped_line:想跳过的行,复合结构,目前只能配置要跳过的行的正则表达式,用子节点 regex 描述,默认不跳过任何行,选填;
 - early_stop:某次从 Kafka broker 拉取的记录为空,停止任务,默认为 false,仅用于调试,选填;
 
+###### 3.3.2.5 GRAPH 输入源
+
+- type:输入源类型,必须填 `graph` 或 `GRAPH`,必填;
+- graphspace:源图空间名称,默认为 `DEFAULT`;
+- graph:源图名称,必填;
+- username:HugeGraph 用户名;
+- password:HugeGraph 密码;
+- selected_vertices:要同步的顶点筛选规则;
+- ignored_vertices:要忽略的顶点筛选规则;
+- selected_edges:要同步的边筛选规则;
+- ignored_edges:要忽略的边筛选规则;
+- pd-peers:HugeGraph-PD 节点地址;
+- meta-endpoints:源集群 Meta 服务端点;
+- cluster:源集群名称;
+- batch_size:批量读取源图数据的批次大小,默认为 500;
+
 ##### 3.3.3 顶点和边映射
 
 顶点和边映射的节点(JSON 文件中的一个 key)有很多相同的部分,下面先介绍相同部分,再分别介绍`顶点映射`和`边映射`的特有节点。
@@ -794,20 +810,29 @@ schema: 必填
 | 参数                       | 默认值     | 是否必传 | 描述信息                                                               |
 |---------------------------|-----------|------|-------------------------------------------------------------------|
 | `-f` 或 `--file`           |           | Y    | 配置脚本的路径                                                          |
-| `-g` 或 `--graph`          |           | Y    | 图数据库空间                                                           |
-| `-s` 或 `--schema`         |           | Y    | schema 文件路径                                                      |        |
-| `-h` 或 `--host`           | localhost |      | HugeGraphServer 的地址                                               |
+| `-g` 或 `--graph`          |           | Y    | 图名称                                                              |
+| `-gs` 或 `--graphspace`    | DEFAULT   |      | 图空间                                                              |
+| `-s` 或 `--schema`         |           | Y    | schema 文件路径                                                      |
+| `-h` 或 `--host` 或 `-i`   | localhost |      | HugeGraphServer 的地址                                               |
 | `-p` 或 `--port`           | 8080      |      | HugeGraphServer 的端口号                                             |
 | `--username`              | null      |      | 当 HugeGraphServer 开启了权限认证时,当前图的 username                          |
+| `--password`              | null      |      | 当 HugeGraphServer 开启了权限认证时,当前图的 password                          |
+| `--create-graph`          | false     |      | 是否在图不存在时自动创建                                                     |
 | `--token`                 | null      |      | 当 HugeGraphServer 开启了权限认证时,当前图的 token                             |
 | `--protocol`              | http      |      | 向服务端发请求的协议,可选 http 或 https                                       |
+| `--pd-peers`              |           |      | PD 服务节点地址                                                        |
+| `--pd-token`              |           |      | 访问 PD 服务的 token                                                  |
+| `--meta-endpoints`        |           |      | 元信息存储服务地址                                                        |
+| `--direct`                | false     |      | 是否直连 HugeGraph-Store                                              |
+| `--route-type`            | NODE_PORT |      | 路由选择方式(可选值:NODE_PORT / DDS / BOTH)                               |
+| `--cluster`               | hg        |      | 集群名                                                              |
 | `--trust-store-file`      |           |      | 请求协议为 https 时,客户端的证书文件路径                                         |
 | `--trust-store-password`  |           |      | 请求协议为 https 时,客户端证书密码                                            |
 | `--clear-all-data`        | false     |      | 导入数据前是否清除服务端的原有数据                                                |
 | `--clear-timeout`         | 240       |      | 导入数据前清除服务端的原有数据的超时时间                                             |
-| `--incremental-mode`      | false     |      | 是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导           |
+| `--incremental-mode`      | false     |      | 是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导入          |
 | `--failure-mode`          | false     |      | 失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入             |
-| `--batch-insert-threads`  | CPUs      |      | 批量插入线程池大小 (CPUs 是当前 OS 可用可用**逻辑核**个数)                             |
+| `--batch-insert-threads`  | CPUs      |      | 批量插入线程池大小 (CPUs 是当前 OS 可用**逻辑核**个数)                               |
 | `--single-insert-threads` | 8         |      | 单条插入线程池的大小                                                       |
 | `--max-conn`              | 4 * CPUs  |      | HugeClient 与 HugeGraphServer 的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项     |
 | `--max-conn-per-route`    | 2 * CPUs  |      | HugeClient 与 HugeGraphServer 每个路由的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项 |
@@ -821,7 +846,7 @@ schema: 必填
 | `--check-vertex`          | false     |      | 插入边时是否检查边所连接的顶点是否存在                                              |
 | `--print-progress`        | true      |      | 是否在控制台实时打印导入条数                                                   |
 | `--dry-run`               | false     |      | 打开该模式,只解析不导入,通常用于测试                                              |
-| `--help`                  | false     |      | 打印帮助信息                                                           |
+| `--help`                  | false     |      | 打印帮助信息                                                           |
 
 ##### 3.4.2 断点续导模式
 
diff --git a/content/en/docs/clients/hugegraph-client.md b/content/en/docs/clients/hugegraph-client.md
index 5ae2db27..746418b6 100644
--- a/content/en/docs/clients/hugegraph-client.md
+++ b/content/en/docs/clients/hugegraph-client.md
@@ -12,12 +12,13 @@ The `gremlin(groovy)` written by the user in `HugeGraph-Studio` can refer to the
 
 HugeGraph-Client is the general entry for operating graph. Users must first create a HugeGraph-Client object and establish a connection (pseudo connection) with HugeGraph-Server before they can obtain the operation entry objects of schema, graph and gremlin.
 
-Currently, HugeGraph-Client only allows connections to existing graphs on the server, and cannot create custom graphs. Its creation method is as follows:
+Currently, HugeGraph-Client only allows connections to existing graphs on the server, and cannot create custom graphs. Since version 1.7.0, the client supports setting a graphSpace; the default graphSpace is DEFAULT. Its creation method is as follows:
 
 ```java
 // HugeGraphServer address: "http://localhost:8080"
 // Graph Name: "hugegraph"
 HugeClient hugeClient = HugeClient.builder("http://localhost:8080", "hugegraph")
+                                //.builder("http://localhost:8080", "graphSpaceName", "hugegraph")
                                   .configTimeout(20) // 20s timeout
                                   .configUser("**", "**") // enable auth 
                                   .build();
@@ -444,6 +445,40 @@ Edge knows1 = marko.addEdge("knows", vadas, "city", "Beijing");
 
 **Note: When frequency is multiple, the value of the property type corresponding to sortKeys must be set.**
 
-### 4 Examples
+### 4 GraphSpace
+The client supports multiple GraphSpaces in one physical deployment, and each GraphSpace can contain multiple graphs.
+- Compatibility: When no GraphSpace is specified, the "DEFAULT" space is used by default.
+
+#### 4.1 Create GraphSpace
+
+```java
+GraphSpaceManager spaceManager = hugeClient.graphSpace();
+
+// Define GraphSpace configuration
+GraphSpace graphSpace = new GraphSpace();
+graphSpace.setName("myGraphSpace");
+graphSpace.setDescription("Business data graph space");
+graphSpace.setMaxGraphNumber(10);  // Maximum number of graphs
+graphSpace.setMaxRoleNumber(100);  // Maximum number of roles
+
+// Create GraphSpace
+spaceManager.createGraphSpace(graphSpace);
+```
+
+#### 4.2 GraphSpace Interface Summary
+
+| Category | Interface | Description |
+|----------|-----------|-------------|
+| Manager - Query | listGraphSpace() | Get the list of all GraphSpaces |
+|           | getGraphSpace(String name) | Get the specified GraphSpace |
+| Manager - Create/Update | createGraphSpace(GraphSpace) | Create a GraphSpace |
+|           | updateGraphSpace(String, GraphSpace) | Update configuration |
+| Manager - Delete | removeGraphSpace(String) | Delete the specified GraphSpace |
+| GraphSpace - Properties | getName() / getDescription() | Get name / description |
+|           | getGraphNumber() | Get the number of graphs |
+| GraphSpace - Configuration | setDescription(String) | Set description |
+|           | setMaxGraphNumber(int) | Set the maximum number of graphs |
+
+### 5 Simple Example
 
 Simple examples can reference [HugeGraph-Client](/docs/quickstart/client/hugegraph-client)
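The interface summary table above can be tied together into a short workflow. The sketch below mirrors the snippet style of section 4.1 and uses only the method names listed in the table; it assumes an existing `hugeClient` built against the 1.7.0 client, that `listGraphSpace()` returns an iterable of `GraphSpace` objects, and has not been compiled against the actual API:

```java
// Obtain the GraphSpace manager from an existing client (see 4.1)
GraphSpaceManager spaceManager = hugeClient.graphSpace();

// List all graph spaces and print their graph counts
for (GraphSpace space : spaceManager.listGraphSpace()) {
    System.out.println(space.getName() + ": " + space.getGraphNumber() + " graphs");
}

// Fetch one space, update its configuration, then remove it
GraphSpace mySpace = spaceManager.getGraphSpace("myGraphSpace");
mySpace.setDescription("Updated description");
mySpace.setMaxGraphNumber(20);
spaceManager.updateGraphSpace("myGraphSpace", mySpace);
spaceManager.removeGraphSpace("myGraphSpace");
```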
diff --git a/content/en/docs/quickstart/client/hugegraph-client.md b/content/en/docs/quickstart/client/hugegraph-client.md
index f932bedd..91ac7865 100644
--- a/content/en/docs/quickstart/client/hugegraph-client.md
+++ b/content/en/docs/quickstart/client/hugegraph-client.md
@@ -44,7 +44,7 @@ Using IDEA or Eclipse to create the project:
         <groupId>org.apache.hugegraph</groupId>
         <artifactId>hugegraph-client</artifactId>
         <!-- Update to the latest release version -->
-        <version>1.5.0</version>
+        <version>1.7.0</version>
     </dependency>    
 </dependencies>
 ```
@@ -75,7 +75,10 @@ public class SingleExample {
     public static void main(String[] args) throws IOException {
         // If connect failed will throw an exception.
         HugeClient hugeClient = HugeClient.builder("http://localhost:8080",
+                                                   "DEFAULT",
                                                    "hugegraph")
+                                          .configUser("username", "password")
+                                          // This is an example. In a production environment, secure credentials should be used.
                                           .build();
 
         SchemaManager schema = hugeClient.schema();
@@ -218,9 +221,11 @@ import org.apache.hugegraph.structure.graph.Vertex;
 public class BatchExample {
 
     public static void main(String[] args) {
-        // If connect failed will throw a exception.
         HugeClient hugeClient = HugeClient.builder("http://localhost:8080",
+                                                   "DEFAULT",
                                                    "hugegraph")
+                                          .configUser("username", "password")
+                                          // This is an example. In a production environment, secure credentials should be used.
                                           .build();
 
         SchemaManager schema = hugeClient.schema();
@@ -344,11 +349,10 @@ public class BatchExample {
 }
 ```
 
-### 4.4 Run The Example
+#### 4.4 Run The Example
 
 Before running Example, you need to start the Server. For the startup process, see [HugeGraph-Server Quick Start](/docs/quickstart/hugegraph/hugegraph-server).
 
-### 4.5 More Information About Client-API
+#### 4.5 More Information About Client-API
 
 See [Introduce basic API of HugeGraph-Client](/docs/clients/hugegraph-client).
-
diff --git a/content/en/docs/quickstart/toolchain/hugegraph-loader.md b/content/en/docs/quickstart/toolchain/hugegraph-loader.md
index a8ec5ec4..6d14d05a 100644
--- a/content/en/docs/quickstart/toolchain/hugegraph-loader.md
+++ b/content/en/docs/quickstart/toolchain/hugegraph-loader.md
@@ -592,7 +592,7 @@ A struct-v2.json will be generated in the same directory as struct.json.
 
 ##### 3.3.2 Input Source
 
-Input sources are currently divided into four categories: FILE, HDFS, JDBC and KAFKA, which are distinguished by the `type` node. We call them local file input sources, HDFS input sources, JDBC input sources, and KAFKA input sources, which are described below.
+Input sources are currently divided into five categories: FILE, HDFS, JDBC, KAFKA and GRAPH, which are distinguished by the `type` node. We call them local file input sources, HDFS input sources, JDBC input sources, KAFKA input sources and GRAPH input sources, which are described below.
 
 ###### 3.3.2.1 Local file input source
 
@@ -696,6 +696,22 @@ schema: required
 - skipped_line: the line you want to skip, composite structure, currently can only configure the regular expression of the line to be skipped, described by the child node regex, the default is not to skip any line, optional;
 - early_stop: the record pulled from Kafka broker at a certain time is empty, stop the task, default is false, only for debugging, optional;
 
+###### 3.3.2.5 GRAPH input source
+
+- type: Data source type; must be filled in as `graph` or `GRAPH` (required);
+- graphspace: Source graphSpace name; default is `DEFAULT`;
+- graph: Source graph name (required);
+- username: HugeGraph username;
+- password: HugeGraph password;
+- selected_vertices: Filtering rules for vertices to be synchronized;
+- ignored_vertices: Filtering rules for vertices to be ignored;
+- selected_edges: Filtering rules for edges to be synchronized;
+- ignored_edges: Filtering rules for edges to be ignored;
+- pd-peers: HugeGraph-PD node addresses;
+- meta-endpoints: Meta service endpoints of the source cluster;
+- cluster: Source cluster name;
+- batch_size: Batch size for reading data from the source graph; default is 500;
+
 ##### 3.3.3 Vertex and Edge Mapping
 
 The nodes of vertex and edge mapping (a key in the JSON file) have a lot of the same parts. The same parts are introduced first, and then the unique nodes of `vertex map` and `edge map` are introduced respectively.
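To make the new GRAPH input source from 3.3.2.5 concrete, here is a hypothetical mapping-file fragment. It assumes the GRAPH source sits in the same `input` node used by the other source types; the graph names, address, credentials, and filter value are illustrative placeholders, not defaults from this commit:

```json
{
  "vertices": [
    {
      "label": "person",
      "input": {
        "type": "GRAPH",
        "graphspace": "DEFAULT",
        "graph": "source_graph",
        "username": "admin",
        "password": "******",
        "selected_vertices": "person",
        "pd-peers": "127.0.0.1:8686",
        "cluster": "hg",
        "batch_size": 500
      }
    }
  ]
}
```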
@@ -780,35 +796,44 @@ The import process is controlled by commands submitted by the user, and the user
 
 | Parameter                 | Default value | Required or not | Description                                                                                                                              |
 |---------------------------|---------------|-----------------|------------------------------------------------------------------------------------------------------------------------------------------|
-| `-f` or `--file`          |               | Y               | path to configure script                                                                                                                 |
-| `-g` or `--graph`         |               | Y               | graph space name                                                                                                                         |
-| `-s` or `--schema`        |               | Y               | schema file path                                                                                                                         |
-| `-h` or `--host`          | localhost     |                 | address of HugeGraphServer                                                                                                               |
-| `-p` or `--port`          | 8080          |                 | port number of HugeGraphServer                                                                                                           |
+| `-f` or `--file`          |               | Y               | Path to configure script                                                                                                                 |
+| `-g` or `--graph`         |               | Y               | Graph name                                                                                                                               |
+| `-gs` or `--graphspace`   | DEFAULT       |                 | Graph space name                                                                                                                         |
+| `-s` or `--schema`        |               | Y               | Schema file path                                                                                                                         |
+| `-h` or `--host` or `-i`  | localhost     |                 | Address of HugeGraphServer                                                                                                               |
+| `-p` or `--port`          | 8080          |                 | Port number of HugeGraphServer                                                                                                           |
 | `--username`              | null          |                 | When HugeGraphServer enables permission authentication, the username of the current graph                                                |
+| `--password`              | null          |                 | When HugeGraphServer enables permission authentication, the password of the current graph                                                |
+| `--create-graph`          | false         |                 | Whether to automatically create the graph if it does not exist                                                                           |
 | `--token`                 | null          |                 | When HugeGraphServer has enabled authorization authentication, the token of the current graph                                            |
 | `--protocol`              | http          |                 | Protocol for sending requests to the server, optional http or https                                                                      |
+| `--pd-peers`              |               |                 | PD service node addresses                                                                                                                |
+| `--pd-token`              |               |                 | Token for accessing PD service                                                                                                           |
+| `--meta-endpoints`        |               |                 | Meta information storage service addresses                                                                                               |
+| `--direct`                | false         |                 | Whether to directly connect to HugeGraph-Store                                                                                           |
+| `--route-type`            | NODE_PORT     |                 | Route selection method (optional values: NODE_PORT / DDS / BOTH)                                                                         |
+| `--cluster`               | hg            |                 | Cluster name                                                                                                                             |
 | `--trust-store-file`      |               |                 | When the request protocol is https, the client's certificate file path                                                                   |
 | `--trust-store-password`  |               |                 | When the request protocol is https, the client certificate password                                                                      |
 | `--clear-all-data`        | false         |                 | Whether to clear the original data on the server before importing data                                                                   |
 | `--clear-timeout`         | 240           |                 | Timeout for clearing the original data on the server before importing data                                                               |
-| `--incremental-mode`      | false         |                 | Whether to use the breakpoint resume mode, only the input source is FILE and HDFS support this mode, enabling this mode can start the import from the place where the last import stopped |
-| `--failure-mode`          | false         |                 | When the failure mode is true, the data that failed before will be imported. Generally speaking, the failed data file needs to be manually corrected and edited, and then imported again |
+| `--incremental-mode`      | false         |                 | Whether to use the breakpoint resume mode; only input sources FILE and HDFS support this mode. Enabling this mode allows starting the import from where the last import stopped |
+| `--failure-mode`          | false         |                 | When failure mode is true, previously failed data will be imported. Generally, the failed data file needs to be manually corrected and edited before re-importing |
 | `--batch-insert-threads`  | CPUs          |                 | Batch insert thread pool size (CPUs is the number of **logical cores** available to the current OS)                                      |
 | `--single-insert-threads` | 8             |                 | Size of single insert thread pool                                                                                                        |
-| `--max-conn`              | 4 * CPUs      |                 | The maximum number of HTTP connections between HugeClient and HugeGraphServer, it is recommended to adjust this when **adjusting threads** |
-| `--max-conn-per-route`    | 2 * CPUs      |                 | The maximum number of HTTP connections for each route between HugeClient and HugeGraphServer, it is recommended to adjust this item at the same time when **adjusting the thread** |
+| `--max-conn`              | 4 * CPUs      |                 | The maximum number of HTTP connections between HugeClient and HugeGraphServer; it is recommended to adjust this when **adjusting threads** |
+| `--max-conn-per-route`    | 2 * CPUs      |                 | The maximum number of HTTP connections for each route between HugeClient and HugeGraphServer; it is recommended to adjust this item when **adjusting threads** |
 | `--batch-size`            | 500           |                 | The number of data items in each batch when importing data                                                                               |
-| `--max-parse-errors`      | 1             |                 | The maximum number of lines of data parsing errors allowed, and the program exits when this value is reached                             |
-| `--max-insert-errors`     | 500           |                 | The maximum number of rows of data insertion errors allowed, and the program exits when this value is reached                            |
-| `--timeout`               | 60            |                 | Timeout (seconds) for inserting results to return                                                                                        |
+| `--max-parse-errors`      | 1             |                 | The maximum number of data parsing errors allowed (per line); the program exits when this value is reached                               |
+| `--max-insert-errors`     | 500           |                 | The maximum number of data insertion errors allowed (per row); the program exits when this value is reached                              |
+| `--timeout`               | 60            |                 | Timeout (seconds) for insert result return                                                                                               |
 | `--shutdown-timeout`      | 10            |                 | Waiting time for multithreading to stop (seconds)                                                                                        |
 | `--retry-times`           | 0             |                 | Number of retries when a specific exception occurs                                                                                       |
-| `--retry-interval`        | 10            |                 | interval before retry (seconds)                                                                                                          |
-| `--check-vertex`          | false         |                 | Whether to check whether the vertex connected by the edge exists when inserting the edge                                                 |
-| `--print-progress`        | true          |                 | Whether to print the number of imported items in the console in real time                                                                |
-| `--dry-run`               | false         |                 | Turn on this mode, only parsing but not importing, usually used for testing                                                              |
-| `--help`                  | false         |                 | print help information                                                                                                                   |
+| `--retry-interval`        | 10            |                 | Interval before retry (seconds)                                                                                                          |
+| `--check-vertex`          | false         |                 | Whether to check if the vertices connected by the edge exist when inserting the edge                                                     |
+| `--print-progress`        | true          |                 | Whether to print the number of imported items in real time on the console                                                                |
+| `--dry-run`               | false         |                 | Enable this mode to only parse data without importing; usually used for testing                                                          |
+| `--help`                  | false         |                 | Print help information                                                                                                                   |
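As a usage sketch of the new options in the table above, a loader run against a specific graph space might look like the command below. The host, file paths, and credentials are placeholders, and the `example/file/` paths are assumed from the loader's bundled examples rather than taken from this commit:

```bash
sh bin/hugegraph-loader.sh -g hugegraph -gs DEFAULT \
   -f example/file/struct.json -s example/file/schema.groovy \
   -h 127.0.0.1 -p 8080 \
   --username admin --password '******' \
   --batch-insert-threads 8 --check-vertex true
```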
 
 ##### 3.4.2 Breakpoint Continuation Mode
 

