This is an automated email from the ASF dual-hosted git repository.

monkeydluffy pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/apisix.git


The following commit(s) were added to refs/heads/master by this push:
     new 517bf8e92 docs: improve description of name for all logger plugins (#10701)
517bf8e92 is described below

commit 517bf8e927cfc461f810ad30b8a3d716141cfe35
Author: zhengke zhou <[email protected]>
AuthorDate: Mon Dec 25 16:53:05 2023 +0800

    docs: improve description of name for all logger plugins (#10701)
---
 docs/en/latest/plugins/clickhouse-logger.md | 2 +-
 docs/en/latest/plugins/kafka-logger.md      | 2 +-
 docs/en/latest/plugins/rocketmq-logger.md   | 2 +-
 docs/en/latest/plugins/skywalking-logger.md | 2 +-
 docs/en/latest/plugins/sls-logger.md        | 2 +-
 docs/en/latest/plugins/syslog.md            | 2 +-
 docs/en/latest/plugins/udp-logger.md        | 2 +-
 docs/zh/latest/plugins/clickhouse-logger.md | 2 +-
 docs/zh/latest/plugins/kafka-logger.md      | 2 +-
 docs/zh/latest/plugins/rocketmq-logger.md   | 2 +-
 docs/zh/latest/plugins/skywalking-logger.md | 2 +-
 docs/zh/latest/plugins/sls-logger.md        | 2 +-
 docs/zh/latest/plugins/syslog.md            | 2 +-
 docs/zh/latest/plugins/udp-logger.md        | 2 +-
 14 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/docs/en/latest/plugins/clickhouse-logger.md b/docs/en/latest/plugins/clickhouse-logger.md
index 9a50c1707..feb6dd8a7 100644
--- a/docs/en/latest/plugins/clickhouse-logger.md
+++ b/docs/en/latest/plugins/clickhouse-logger.md
@@ -42,7 +42,7 @@ The `clickhouse-logger` Plugin is used to push logs to [ClickHouse](https://clic
 | user          | string  | True     |                     |              | ClickHouse username. |
 | password      | string  | True     |                     |              | ClickHouse password. |
 | timeout       | integer | False    | 3                   | [1,...]      | Time to keep the connection alive for after sending a request. |
-| name          | string  | False    | "clickhouse logger" |              | Unique identifier for the logger. |
+| name          | string  | False    | "clickhouse logger" |              | Unique identifier for the logger. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | ssl_verify    | boolean | False    | true                | [true,false] | When set to `true`, verifies SSL. |
 | log_format    | object  | False    |                     |              | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
 | include_req_body | boolean | False  | false               | [false, true] | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
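
For context, the `name` attribute documented above is simply a label set in the plugin configuration. A minimal sketch of enabling `clickhouse-logger` with an explicit `name` on a route via the Admin API (the Admin API address, admin key, and ClickHouse settings below are placeholders; adjust them to your deployment):

```shell
# Sketch only: enable clickhouse-logger on route 1 with an explicit "name".
# Endpoint, credentials, and table are illustrative placeholders.
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: ${ADMIN_API_KEY}" -X PUT -d '
{
  "uri": "/hello",
  "plugins": {
    "clickhouse-logger": {
      "endpoint_addr": "http://127.0.0.1:8123",
      "user": "default",
      "password": "",
      "database": "default",
      "logtable": "test",
      "name": "clickhouse logger"
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  }
}'
```

The configured `name` is what later appears as the `name` label on the batch-processor metric, which is the point the updated descriptions make.
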
diff --git a/docs/en/latest/plugins/kafka-logger.md b/docs/en/latest/plugins/kafka-logger.md
index 066dbfc36..476442e50 100644
--- a/docs/en/latest/plugins/kafka-logger.md
+++ b/docs/en/latest/plugins/kafka-logger.md
@@ -50,7 +50,7 @@ It might take some time to receive the log data. It will be automatically sent a
 | required_acks          | integer | False    | 1              | [1, -1]              | Number of acknowledgements the leader needs to receive for the producer to consider the request complete. This controls the durability of the sent records. The attribute follows the same configuration as the Kafka `acks` attribute. `required_acks` cannot be 0. See [Apache Kafka documentation](https://kafka.apache.org/documentation/#producerconfigs_acks) for more. |
 | key                    | string  | False    |                |                      | Key used for allocating partitions for messages. |
 | timeout                | integer | False    | 3              | [1,...]              | Timeout for the upstream to send data. |
-| name                   | string  | False    | "kafka logger" |                      | Unique identifier for the batch processor. |
+| name                   | string  | False    | "kafka logger" |                      | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | meta_format            | enum    | False    | "default"      | ["default","origin"] | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
 | log_format             | object  | False    |                |                      | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
 | include_req_body       | boolean | False    | false          | [false, true]        | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
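
If the `prometheus` plugin is enabled, the exported metric mentioned in these descriptions can be checked directly on the APISIX metrics endpoint. A rough sketch; `127.0.0.1:9091` is the default export address and may differ in your deployment, and the exact label set can vary by APISIX version:

```shell
# Sketch: look for the batch-processor gauge and its "name" label.
curl -s http://127.0.0.1:9091/apisix/prometheus/metrics | grep apisix_batch_process_entries

# Illustrative output only (labels and values depend on your routes and version):
# apisix_batch_process_entries{name="kafka logger",route_id="1",server_addr="127.0.0.1"} 0
```
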
diff --git a/docs/en/latest/plugins/rocketmq-logger.md b/docs/en/latest/plugins/rocketmq-logger.md
index 40dd53a7c..3f50b0786 100644
--- a/docs/en/latest/plugins/rocketmq-logger.md
+++ b/docs/en/latest/plugins/rocketmq-logger.md
@@ -45,7 +45,7 @@ It might take some time to receive the log data. It will be automatically sent a
 | use_tls                | boolean | False    | false             |                      | When set to `true`, uses TLS. |
 | access_key             | string  | False    | ""                |                      | Access key for ACL. Setting to an empty string will disable the ACL. |
 | secret_key             | string  | False    | ""                |                      | Secret key for ACL. |
-| name                   | string  | False    | "rocketmq logger" |                      | Unique identifier for the batch processor. |
+| name                   | string  | False    | "rocketmq logger" |                      | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | meta_format            | enum    | False    | "default"         | ["default","origin"] | Format to collect the request information. Setting to `default` collects the information in JSON format and `origin` collects the information with the original HTTP request. See [examples](#meta_format-example) below. |
 | include_req_body       | boolean | False    | false             | [false, true]        | When set to `true` includes the request body in the log. If the request body is too big to be kept in the memory, it can't be logged due to Nginx's limitations. |
 | include_req_body_expr  | array   | False    |                   |                      | Filter for when the `include_req_body` attribute is set to `true`. Request body is only logged when the expression set here evaluates to `true`. See [lua-resty-expr](https://github.com/api7/lua-resty-expr) for more. |
diff --git a/docs/en/latest/plugins/skywalking-logger.md b/docs/en/latest/plugins/skywalking-logger.md
index 4c839792d..df4c786fd 100644
--- a/docs/en/latest/plugins/skywalking-logger.md
+++ b/docs/en/latest/plugins/skywalking-logger.md
@@ -42,7 +42,7 @@ If there is an existing tracing context, it sets up the trace-log correlation au
 | service_instance_name | string  | False    | "APISIX Instance Name" |               | Service instance name for the SkyWalking reporter. Set it to `$hostname` to directly get the local hostname. |
 | log_format            | object  | False    |                        |               | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
 | timeout               | integer | False    | 3                      | [1,...]       | Time to keep the connection alive for after sending a request. |
-| name                  | string  | False    | "skywalking logger"    |               | Unique identifier to identify the logger. |
+| name                  | string  | False    | "skywalking logger"    |               | Unique identifier to identify the logger. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | include_req_body      | boolean | False    | false                  | [false, true] | When set to `true` includes the request body in the log. |

 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
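
As a concrete illustration of the batch-processor settings referenced in that paragraph, the logger plugins accept the shared batch-processor fields alongside their own attributes. A sketch, assuming a local Admin API and a placeholder SkyWalking OAP endpoint; the values shown are illustrative overrides of the 1000-entry / 5-second defaults:

```shell
# Sketch: skywalking-logger with a custom name and batch-processor settings.
curl http://127.0.0.1:9180/apisix/admin/routes/1 \
  -H "X-API-KEY: ${ADMIN_API_KEY}" -X PUT -d '
{
  "uri": "/hello",
  "plugins": {
    "skywalking-logger": {
      "endpoint_addr": "http://127.0.0.1:12800",
      "name": "skywalking logger",
      "batch_max_size": 500,
      "inactive_timeout": 2
    }
  },
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "127.0.0.1:1980": 1
    }
  }
}'
```
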
diff --git a/docs/en/latest/plugins/sls-logger.md b/docs/en/latest/plugins/sls-logger.md
index 7cf0742bf..26808b8cb 100644
--- a/docs/en/latest/plugins/sls-logger.md
+++ b/docs/en/latest/plugins/sls-logger.md
@@ -46,7 +46,7 @@ It might take some time to receive the log data. It will be automatically sent a
 | access_key_id     | True     | AccessKey ID in Alibaba Cloud. See [Authorization](https://www.alibabacloud.com/help/en/log-service/latest/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service) for more details. |
 | access_key_secret | True     | AccessKey Secret in Alibaba Cloud. See [Authorization](https://www.alibabacloud.com/help/en/log-service/latest/create-a-ram-user-and-authorize-the-ram-user-to-access-log-service) for more details. |
 | include_req_body  | True     | When set to `true`, includes the request body in the log. |
-| name              | False    | Unique identifier for the batch processor. |
+| name              | False    | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 
 NOTE: `encrypt_fields = {"access_key_secret"}` is also defined in the schema, which means that the field will be stored encrypted in etcd. See [encrypted storage fields](../plugin-develop.md#encrypted-storage-fields).
 
diff --git a/docs/en/latest/plugins/syslog.md b/docs/en/latest/plugins/syslog.md
index 222cc7f29..d8f107f36 100644
--- a/docs/en/latest/plugins/syslog.md
+++ b/docs/en/latest/plugins/syslog.md
@@ -38,7 +38,7 @@ Logs can be set as JSON objects.
 |------------------|---------|----------|--------------|---------------|----------------------------------------------------------------------------------------------------------|
 | host             | string  | True     |              |               | IP address or the hostname of the Syslog server. |
 | port             | integer | True     |              |               | Target port of the Syslog server. |
-| name             | string  | False    | "sys logger" |               | Identifier for the server. |
+| name             | string  | False    | "sys logger" |               | Identifier for the server. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | timeout          | integer | False    | 3000         | [1, ...]      | Timeout in ms for the upstream to send data. |
 | tls              | boolean | False    | false        |               | When set to `true` performs TLS verification. |
 | flush_limit      | integer | False    | 4096         | [1, ...]      | Maximum size of the buffer (KB) and the current message before it is flushed and written to the server. |
diff --git a/docs/en/latest/plugins/udp-logger.md b/docs/en/latest/plugins/udp-logger.md
index 062db9ee3..57d52b594 100644
--- a/docs/en/latest/plugins/udp-logger.md
+++ b/docs/en/latest/plugins/udp-logger.md
@@ -43,7 +43,7 @@ This plugin also allows to push logs as a batch to your external UDP server. It
 | port             | integer | True     |              | [0,...]      | Target upstream port. |
 | timeout          | integer | False    | 3            | [1,...]      | Timeout for the upstream to send data. |
 | log_format       | object  | False    |              |              | Log format declared as key value pairs in JSON format. Values only support strings. [APISIX](../apisix-variable.md) or [Nginx](http://nginx.org/en/docs/varindex.html) variables can be used by prefixing the string with `$`. |
-| name             | string  | False    | "udp logger" |              | Unique identifier for the batch processor. |
+| name             | string  | False    | "udp logger" |              | Unique identifier for the batch processor. If you use Prometheus to monitor APISIX metrics, the name is exported in `apisix_batch_process_entries`. |
 | include_req_body | boolean | False    | false        |              | When set to `true` includes the request body in the log. |
 
 This Plugin supports using batch processors to aggregate and process entries (logs/data) in a batch. This avoids the need for frequently submitting the data. The batch processor submits data every `5` seconds or when the data in the queue reaches `1000`. See [Batch Processor](../batch-processor.md#configuration) for more information or setting your custom configuration.
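
Once Prometheus scrapes the APISIX exporter, the metric these descriptions mention can be aggregated per logger `name` to watch for queue build-up. A sketch against the Prometheus HTTP API, assuming a Prometheus server at 127.0.0.1:9090 that already scrapes APISIX:

```shell
# Sketch: pending batch entries grouped by the logger "name" label.
curl -sG 'http://127.0.0.1:9090/api/v1/query' \
  --data-urlencode 'query=sum by (name) (apisix_batch_process_entries)'
```
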
diff --git a/docs/zh/latest/plugins/clickhouse-logger.md b/docs/zh/latest/plugins/clickhouse-logger.md
index 9eafa199a..09d4c512f 100644
--- a/docs/zh/latest/plugins/clickhouse-logger.md
+++ b/docs/zh/latest/plugins/clickhouse-logger.md
@@ -42,7 +42,7 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 clickhouse-lo
 | user             | string  | 是     |                     |              | ClickHouse 的用户。 |
 | password         | string  | 是     |                     |              | ClickHouse 的密码。 |
 | timeout          | integer | 否     | 3                   | [1,...]      | 发送请求后保持连接活动的时间。 |
-| name             | string  | 否     | "clickhouse logger" |              | 标识 logger 的唯一标识符。 |
+| name             | string  | 否     | "clickhouse logger" |              | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | ssl_verify       | boolean | 否     | true                | [true,false] | 当设置为 `true` 时,验证证书。 |
 | log_format       | object  | 否     |                     |              | 以 JSON 格式的键值对来声明日志格式。对于值部分,仅支持字符串。如果是以 `$` 开头,则表明是要获取 [APISIX 变量](../apisix-variable.md) 或 [NGINX 内置变量](http://nginx.org/en/docs/varindex.html)。 |
 | include_req_body | boolean | 否     | false               | [false, true] | 当设置为 `true` 时,包含请求体。**注意**:如果请求体无法完全存放在内存中,由于 NGINX 的限制,APISIX 无法将它记录下来。|
diff --git a/docs/zh/latest/plugins/kafka-logger.md b/docs/zh/latest/plugins/kafka-logger.md
index 1882393b8..de8d3096c 100644
--- a/docs/zh/latest/plugins/kafka-logger.md
+++ b/docs/zh/latest/plugins/kafka-logger.md
@@ -48,7 +48,7 @@ description: API 网关 Apache APISIX 的 kafka-logger 插件用于将日志作
 | required_acks          | integer | 否     | 1              | [1, -1]              | 生产者在确认一个请求发送完成之前需要收到的反馈信息的数量。该参数是为了保证发送请求的可靠性。该属性的配置与 Kafka `acks` 属性相同,具体配置请参考 [Apache Kafka 文档](https://kafka.apache.org/documentation/#producerconfigs_acks)。required_acks 还不支持为 0。 |
 | key                    | string  | 否     |                |                      | 用于消息分区而分配的密钥。 |
 | timeout                | integer | 否     | 3              | [1,...]              | 发送数据的超时时间。 |
-| name                   | string  | 否     | "kafka logger" |                      | batch processor 的唯一标识。 |
+| name                   | string  | 否     | "kafka logger" |                      | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | meta_format            | enum    | 否     | "default"      | ["default","origin"] | `default`:获取请求信息以默认的 JSON 编码方式。`origin`:获取请求信息以 HTTP 原始请求方式。更多信息,请参考 [meta_format](#meta_format-示例)。|
 | log_format             | object  | 否     |                |                      | 以 JSON 格式的键值对来声明日志格式。对于值部分,仅支持字符串。如果是以 `$` 开头,则表明是要获取 [APISIX 变量](../apisix-variable.md) 或 [NGINX 内置变量](http://nginx.org/en/docs/varindex.html)。 |
 | include_req_body       | boolean | 否     | false          | [false, true]        | 当设置为 `true` 时,包含请求体。**注意**:如果请求体无法完全存放在内存中,由于 NGINX 的限制,APISIX 无法将它记录下来。|
diff --git a/docs/zh/latest/plugins/rocketmq-logger.md b/docs/zh/latest/plugins/rocketmq-logger.md
index a63dfca4b..6bef7a6b4 100644
--- a/docs/zh/latest/plugins/rocketmq-logger.md
+++ b/docs/zh/latest/plugins/rocketmq-logger.md
@@ -44,7 +44,7 @@ description: API 网关 Apache APISIX 的 rocketmq-logger 插件用于将日志
 | use_tls                | boolean | 否     | false             |                      | 当设置为 `true` 时,开启 TLS 加密。 |
 | access_key             | string  | 否     | ""                |                      | ACL 认证的 Access key,空字符串表示不开启 ACL。 |
 | secret_key             | string  | 否     | ""                |                      | ACL 认证的 Secret key。 |
-| name                   | string  | 否     | "rocketmq logger" |                      | Batch Processor 的唯一标识。 |
+| name                   | string  | 否     | "rocketmq logger" |                      | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | meta_format            | enum    | 否     | "default"         | ["default","origin"] | `default`:获取请求信息以默认的 JSON 编码方式。`origin`:获取请求信息以 HTTP 原始请求方式。更多信息,请参考 [meta_format](#meta_format-示例)。|
 | include_req_body       | boolean | 否     | false             | [false, true]        | 当设置为 `true` 时,包含请求体。**注意**:如果请求体无法完全存放在内存中,由于 NGINX 的限制,APISIX 无法将它记录下来。|
 | include_req_body_expr  | array   | 否     |                   |                      | 当 `include_req_body` 属性设置为 `true` 时进行过滤请求体,并且只有当此处设置的表达式计算结果为 `true` 时,才会记录请求体。更多信息,请参考 [lua-resty-expr](https://github.com/api7/lua-resty-expr)。 |
diff --git a/docs/zh/latest/plugins/skywalking-logger.md b/docs/zh/latest/plugins/skywalking-logger.md
index 81e9d291c..3eb837e9f 100644
--- a/docs/zh/latest/plugins/skywalking-logger.md
+++ b/docs/zh/latest/plugins/skywalking-logger.md
@@ -44,7 +44,7 @@ description: 本文将介绍 API 网关 Apache APISIX 如何通过 skywalking-lo
 | service_instance_name  | string  | 否     | "APISIX Instance Name" |              | SkyWalking 服务的实例名称。当设置为 `$hostname` 会直接获取本地主机名。 |
 | log_format             | object  | 否     |                        |              | 以 JSON 格式的键值对来声明日志格式。对于值部分,仅支持字符串。如果是以 `$` 开头,则表明是要获取 [APISIX 变量](../apisix-variable.md) 或 [NGINX 内置变量](http://nginx.org/en/docs/varindex.html)。 |
 | timeout                | integer | 否     | 3                      | [1,...]      | 发送请求后保持连接活动的时间。 |
-| name                   | string  | 否     | "skywalking logger"    |              | 标识 logger 的唯一标识符。 |
+| name                   | string  | 否     | "skywalking logger"    |              | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | include_req_body       | boolean | 否     | false                  | [false, true] | 当设置为 `true` 时,将请求正文包含在日志中。 |
 
 该插件支持使用批处理器来聚合并批量处理条目(日志/数据)。这样可以避免插件频繁地提交数据,默认设置情况下批处理器会每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
diff --git a/docs/zh/latest/plugins/sls-logger.md b/docs/zh/latest/plugins/sls-logger.md
index 54d4d42c8..d5b57a21b 100644
--- a/docs/zh/latest/plugins/sls-logger.md
+++ b/docs/zh/latest/plugins/sls-logger.md
@@ -43,7 +43,7 @@ title: sls-logger
 | access_key_id | 必须的 | AccessKey ID。建议使用阿里云子账号 AK,详情请参见 [授权](https://help.aliyun.com/document_detail/47664.html?spm=a2c4g.11186623.2.15.49301b47lfvxXP#task-xsk-ttc-ry)。|
 | access_key_secret | 必须的 | AccessKey Secret。建议使用阿里云子账号 AK,详情请参见 [授权](https://help.aliyun.com/document_detail/47664.html?spm=a2c4g.11186623.2.15.49301b47lfvxXP#task-xsk-ttc-ry)。|
 | include_req_body | 可选的 | 是否包含请求体。|
-|name| 可选的 | 批处理名字。|
+|name| 可选的 | 批处理名字。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。|
 
 注意:schema 中还定义了 `encrypt_fields = {"access_key_secret"}`,这意味着该字段将会被加密存储在 etcd 中。具体参考 [加密存储字段](../plugin-develop.md#加密存储字段)。
 
diff --git a/docs/zh/latest/plugins/syslog.md b/docs/zh/latest/plugins/syslog.md
index 89af4dfea..d32d8cddb 100644
--- a/docs/zh/latest/plugins/syslog.md
+++ b/docs/zh/latest/plugins/syslog.md
@@ -39,7 +39,7 @@ description: API 网关 Apache APISIX syslog 插件可用于将日志推送到 S
 | ---------------- | ------- | ------ | ------------ | ------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
 | host             | string  | 是     |              |               | IP 地址或主机名。 |
 | port             | integer | 是     |              |               | 目标上游端口。 |
-| name             | string  | 否     | "sys logger" |               | syslog 服务器的标识符。 |
+| name             | string  | 否     | "sys logger" |               | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | timeout          | integer | 否     | 3000         | [1, ...]      | 上游发送数据超时(以毫秒为单位)。 |
 | tls              | boolean | 否     | false        |               | 当设置为 `true` 时执行 SSL 验证。 |
 | flush_limit      | integer | 否     | 4096         | [1, ...]      | 如果缓冲的消息的大小加上当前消息的大小达到(> =)此限制(以字节为单位),则缓冲的日志消息将被写入日志服务器,默认为 4096(4KB)。 |
diff --git a/docs/zh/latest/plugins/udp-logger.md b/docs/zh/latest/plugins/udp-logger.md
index bc5880041..0966aaadf 100644
--- a/docs/zh/latest/plugins/udp-logger.md
+++ b/docs/zh/latest/plugins/udp-logger.md
@@ -41,7 +41,7 @@ description: 本文介绍了 API 网关 Apache APISIX 如何使用 udp-logger 
 | port             | integer | 是     |              | [0,...] | 目标端口。 |
 | timeout          | integer | 否     | 1000         | [1,...] | 发送数据超时间。 |
 | log_format       | object  | 否     |              |         | 以 JSON 格式的键值对来声明日志格式。对于值部分,仅支持字符串。如果是以 `$` 开头,则表明是要获取 [APISIX 变量](../apisix-variable.md) 或 [NGINX 内置变量](http://nginx.org/en/docs/varindex.html)。 |
-| name             | string  | 否     | "udp logger" |         | 用于识别批处理器。 |
+| name             | string  | 否     | "udp logger" |         | 标识 logger 的唯一标识符。如果您使用 Prometheus 监视 APISIX 指标,名称将以 `apisix_batch_process_entries` 导出。 |
 | include_req_body | boolean | 否     |              |         | 当设置为 `true` 时,日志中将包含请求体。 |
 
 该插件支持使用批处理器来聚合并批量处理条目(日志和数据)。这样可以避免插件频繁地提交数据,默认情况下批处理器每 `5` 秒钟或队列中的数据达到 `1000` 条时提交数据,如需了解批处理器相关参数设置,请参考 [Batch-Processor](../batch-processor.md#配置)。
