This is an automated email from the ASF dual-hosted git repository.

spacewander pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/apisix.git


The following commit(s) were added to refs/heads/master by this push:
     new 6ddca1d  docs: logger doc refactor (#6503)
6ddca1d is described below

commit 6ddca1dad813a68045e313771bb9e80cb10cc4c5
Author: John Chever <[email protected]>
AuthorDate: Mon Mar 7 12:07:47 2022 +0800

    docs: logger doc refactor (#6503)
---
 docs/en/latest/plugins/clickhouse-logger.md    | 5 ++---
 docs/en/latest/plugins/datadog.md              | 6 ++----
 docs/en/latest/plugins/google-cloud-logging.md | 7 ++-----
 docs/en/latest/plugins/loggly.md               | 7 ++-----
 docs/en/latest/plugins/syslog.md               | 4 ++--
 docs/en/latest/plugins/udp-logger.md           | 5 ++---
 docs/zh/latest/plugins/clickhouse-logger.md    | 5 ++---
 docs/zh/latest/plugins/google-cloud-logging.md | 7 ++-----
 docs/zh/latest/plugins/syslog.md               | 4 ++--
 docs/zh/latest/plugins/udp-logger.md           | 5 ++---
 10 files changed, 20 insertions(+), 35 deletions(-)

diff --git a/docs/en/latest/plugins/clickhouse-logger.md b/docs/en/latest/plugins/clickhouse-logger.md
index c422b6b..9998538 100644
--- a/docs/en/latest/plugins/clickhouse-logger.md
+++ b/docs/en/latest/plugins/clickhouse-logger.md
@@ -36,11 +36,10 @@ title: clickhouse-logger
 | password        | string  | required   |                     |              | ClickHouse password.                                        |
 | timeout         | integer | optional   | 3                   | [1,...]      | Time to keep the connection alive after sending a request.  |
 | name            | string  | optional   | "clickhouse logger" |              | A unique identifier to identify the logger.                 |
-| batch_max_size  | integer | optional   | 100                 | [1,...]      | Set the maximum number of logs sent in each batch. When the number of logs reaches the set maximum, all logs are automatically pushed to ClickHouse. |
-| max_retry_count | integer | optional   | 0                   | [0,...]      | Maximum number of retries before removing from the processing pipeline.           |
-| retry_delay     | integer | optional   | 1                   | [0,...]      | Number of seconds the process execution should be delayed if the execution fails. |
 | ssl_verify      | boolean | optional   | true                | [true,false] | Verify SSL.                                                 |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
+
 ## How To Enable
 
 The following is an example of how to enable the `clickhouse-logger` for a specific route.
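As a rough illustration of the batch-processor overrides described above, the following Python sketch enables `clickhouse-logger` on a route while setting the fields removed from the table (`batch_max_size`, `max_retry_count`, `retry_delay`) inline. The Admin API address and key, the upstream node, and the ClickHouse connection fields (`endpoint_addr`, `database`, `logtable`, `user`) are assumptions for a local test setup, not values taken from this diff.

```python
# Hypothetical example: enable clickhouse-logger with explicit batch-processor
# overrides. Admin API address/key and ClickHouse connection details are
# placeholders for a local test setup.
import requests

ADMIN_API = "http://127.0.0.1:9080/apisix/admin"
API_KEY = "edd1c9f034335f136f87ad84b625c8f1"  # default Admin API key; change in production

route = {
    "uri": "/hello",
    "plugins": {
        "clickhouse-logger": {
            # connection fields (placeholders, adjust to your environment)
            "endpoint_addr": "http://127.0.0.1:8123",
            "database": "default",
            "logtable": "test",
            "user": "default",
            "password": "",
            # batch-processor overrides (see batch-processor.md for all fields)
            "batch_max_size": 100,
            "max_retry_count": 2,
            "retry_delay": 1,
        }
    },
    "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
}

resp = requests.put(
    f"{ADMIN_API}/routes/1",
    json=route,
    headers={"X-API-KEY": API_KEY},
    timeout=5,
)
print(resp.status_code, resp.text)
```

With a configuration like this, the plugin buffers log entries and flushes them once 100 entries accumulate, or when the batch-processor timers fire.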
diff --git a/docs/en/latest/plugins/datadog.md b/docs/en/latest/plugins/datadog.md
index 80b7f8e..1de18e2 100644
--- a/docs/en/latest/plugins/datadog.md
+++ b/docs/en/latest/plugins/datadog.md
@@ -38,10 +38,8 @@ For more info on Batch-Processor in Apache APISIX please refer.
 | Name             | Type    | Requirement | Default | Valid      | Description                                                                                    |
 | -----------      | ------  | ----------- | ------- | -----      | ------------------------------------------------------------                                  |
 | prefer_name      | boolean | optional    | true    | true/false | If set to `false`, the route/service id is used instead of the name (default) in metric tags.  |
-| batch_max_size   | integer | optional    | 1000    | [1,...]    | Max buffer size of each batch                                                                  |
-| inactive_timeout | integer | optional    | 5       | [1,...]    | Maximum age in seconds when the buffer will be flushed if inactive                             |
-| buffer_duration  | integer | optional    | 60      | [1,...]    | Maximum age in seconds of the oldest entry in a batch before the batch must be processed       |
-| max_retry_count  | integer | optional    | 0       | [0,...]    | Maximum number of retries if one entry fails to reach the dogstatsd server                     |
+
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
 
 ## Metadata
 
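To show how the removed table rows map onto batch-processor keys, here is a hedged Python sketch that enables `datadog` with the documented defaults spelled out and a couple of per-route overrides; the Admin API address, key, and upstream node are assumptions for a local test setup.

```python
# Hypothetical example: the batch-processor defaults quoted in the docs,
# spelled out explicitly, with per-route overrides for the datadog plugin.
import requests

# Defaults documented for the batch processor (flush after 5 s of inactivity
# or once 1000 entries are queued).
BATCH_DEFAULTS = {
    "batch_max_size": 1000,
    "inactive_timeout": 5,
    "buffer_duration": 60,
    "max_retry_count": 0,
    "retry_delay": 1,
}

datadog_conf = {
    "prefer_name": True,        # documented datadog option
    **BATCH_DEFAULTS,
    "inactive_timeout": 10,     # override: flush after 10 s of inactivity
    "max_retry_count": 1,       # override: retry a failed flush once
}

resp = requests.put(
    "http://127.0.0.1:9080/apisix/admin/routes/1",
    json={
        "uri": "/hello",
        "plugins": {"datadog": datadog_conf},
        "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
    },
    headers={"X-API-KEY": "edd1c9f034335f136f87ad84b625c8f1"},  # default key
    timeout=5,
)
print(resp.status_code, resp.text)
```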
diff --git a/docs/en/latest/plugins/google-cloud-logging.md b/docs/en/latest/plugins/google-cloud-logging.md
index 9cfc572..f851a05 100644
--- a/docs/en/latest/plugins/google-cloud-logging.md
+++ b/docs/en/latest/plugins/google-cloud-logging.md
@@ -44,11 +44,8 @@ For more info on Batch-Processor in Apache APISIX please refer:
 | ssl_verify              | optional      | true                      | Enable `SSL` verification, configured as per the [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake) options.              |
 | resource                | optional      | {"type": "global"}        | The Google monitored resource, refer to: [MonitoredResource](https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource)                 |
 | log_id                  | optional      | apisix.apache.org%2Flogs  | Google Cloud Logging ID, refer to: [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry)                                         |
-| max_retry_count         | optional      | 0                         | Maximum number of retries before removing from the processing pipeline                                                                                     |
-| retry_delay             | optional      | 1                         | Number of seconds the process execution should be delayed if the execution fails                                                                           |
-| buffer_duration         | optional      | 60                        | Maximum age in seconds of the oldest entry in a batch before the batch must be processed                                                                   |
-| inactive_timeout        | optional      | 5                         | Maximum age in seconds when the buffer will be flushed if inactive                                                                                         |
-| batch_max_size          | optional      | 1000                      | Maximum size of each batch                                                                                                                                 |
+
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
 
 ## How To Enable
 
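A minimal Python sketch of the same idea for `google-cloud-logging`: only fields visible in the table above plus two batch-processor overrides are set. The authentication block is deliberately left out and would need `auth_config` or `auth_file` filled in for a real deployment; that, the Admin API details, and the upstream node are assumptions, not taken from this diff.

```python
# Hypothetical example: google-cloud-logging with inline batch-processor
# overrides. Authentication (auth_config / auth_file) is intentionally
# omitted here and must be supplied in a real setup.
import requests

plugin_conf = {
    "log_id": "apisix.apache.org%2Flogs",   # default shown in the table
    "resource": {"type": "global"},
    "ssl_verify": True,
    # batch-processor overrides
    "batch_max_size": 500,
    "inactive_timeout": 10,
}

resp = requests.put(
    "http://127.0.0.1:9080/apisix/admin/routes/1",
    json={
        "uri": "/hello",
        "plugins": {"google-cloud-logging": plugin_conf},
        "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
    },
    headers={"X-API-KEY": "edd1c9f034335f136f87ad84b625c8f1"},
    timeout=5,
)
print(resp.status_code, resp.text)
```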
diff --git a/docs/en/latest/plugins/loggly.md b/docs/en/latest/plugins/loggly.md
index 326e066..98224fb 100644
--- a/docs/en/latest/plugins/loggly.md
+++ b/docs/en/latest/plugins/loggly.md
@@ -41,11 +41,8 @@ For more info on Batch-Processor in Apache APISIX please refer to:
 | include_req_body       | boolean | optional    | false | Whether to include the request body. `false`: the request body is not included; `true`: the request body is included. Note: if the request body is too big to be kept in memory, it can't be logged due to Nginx's limitation. |
 | include_resp_body      | boolean | optional    | false | Whether to include the response body. The response body is included if and only if it is `true`. |
 | include_resp_body_expr | array   | optional    |       | When `include_resp_body` is true, control the behavior based on the result of the [lua-resty-expr](https://github.com/api7/lua-resty-expr) expression. If present, only log the response body when the result is true. |
-| max_retry_count        | integer | optional    | 0     | Maximum number of retries before removing from the processing pipeline                   |
-| retry_delay            | integer | optional    | 1     | Number of seconds the process execution should be delayed if the execution fails         |
-| buffer_duration        | integer | optional    | 60    | Maximum age in seconds of the oldest entry in a batch before the batch must be processed |
-| inactive_timeout       | integer | optional    | 5     | Maximum age in seconds when the buffer will be flushed if inactive                       |
-| batch_max_size         | integer | optional    | 1000  | Maximum size of each batch                                                               |
+
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
 
 To generate a Customer Token, head over to `<your assigned subdomain>/loggly.com/tokens` or navigate to `Logs > Source Setup > Customer Tokens` to generate a new token.
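For illustration, a hedged Python sketch that enables `loggly` with a placeholder customer token, response-body logging, and a smaller batch than the 1000-entry default; the token value, Admin API details, and upstream node are assumptions for a local test setup.

```python
# Hypothetical example: loggly with a placeholder customer token, response
# body logging, and inline batch-processor overrides.
import requests

plugin_conf = {
    "customer_token": "REPLACE-WITH-YOUR-LOGGLY-TOKEN",  # placeholder
    "include_req_body": False,
    "include_resp_body": True,
    # batch-processor overrides
    "batch_max_size": 200,
    "retry_delay": 2,
}

resp = requests.put(
    "http://127.0.0.1:9080/apisix/admin/routes/1",
    json={
        "uri": "/hello",
        "plugins": {"loggly": plugin_conf},
        "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
    },
    headers={"X-API-KEY": "edd1c9f034335f136f87ad84b625c8f1"},
    timeout=5,
)
print(resp.status_code, resp.text)
```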
 
diff --git a/docs/en/latest/plugins/syslog.md b/docs/en/latest/plugins/syslog.md
index 61224b6..cbd5950 100644
--- a/docs/en/latest/plugins/syslog.md
+++ b/docs/en/latest/plugins/syslog.md
@@ -42,10 +42,10 @@ This will provide the ability to send Log data requests as JSON objects.
 | max_retry_times  | integer | optional    | 1     | [1, ...] | Maximum number of retries after connecting to a log server fails or sending log messages to a log server fails. |
 | retry_interval   | integer | optional    | 1     | [0, ...] | The time delay (in ms) before retrying to connect to a log server or retrying to send log messages to a log server. |
 | pool_size        | integer | optional    | 5     | [5, ...] | Keepalive pool size used by sock:keepalive.                                               |
-| batch_max_size   | integer | optional    | 1000  | [1, ...] | Maximum size of each batch                                                                |
-| buffer_duration  | integer | optional    | 60    | [1, ...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed  |
 | include_req_body | boolean | optional    | false |          | Whether to include the request body                                                       |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
+
 ## How To Enable
 
 The following is an example of how to enable the sys-logger for a specific route.
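The same pattern applies to `syslog`: a minimal Python sketch combining the transport options from the table with inline batch-processor settings. The syslog server address and port, the Admin API address and key, and the upstream node are assumptions for a local test setup.

```python
# Hypothetical example: syslog pointed at a local syslog server, with the
# transport options from the table plus inline batch-processor settings.
import requests

plugin_conf = {
    "host": "127.0.0.1",      # assumed syslog server address
    "port": 5044,             # assumed syslog server port
    "max_retry_times": 3,
    "retry_interval": 100,    # ms
    "pool_size": 5,
    # batch-processor settings (previously listed in the table above)
    "batch_max_size": 1000,
    "buffer_duration": 60,
}

resp = requests.put(
    "http://127.0.0.1:9080/apisix/admin/routes/1",
    json={
        "uri": "/hello",
        "plugins": {"syslog": plugin_conf},
        "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
    },
    headers={"X-API-KEY": "edd1c9f034335f136f87ad84b625c8f1"},
    timeout=5,
)
print(resp.status_code, resp.text)
```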
diff --git a/docs/en/latest/plugins/udp-logger.md b/docs/en/latest/plugins/udp-logger.md
index 47c91ad..9639766 100644
--- a/docs/en/latest/plugins/udp-logger.md
+++ b/docs/en/latest/plugins/udp-logger.md
@@ -40,11 +40,10 @@ For more info on Batch-Processor in Apache APISIX please refer.
 | port             | integer | required    |              | [0,...] | Target upstream port.                                                                     |
 | timeout          | integer | optional    | 3            | [1,...] | Timeout for the upstream to send data.                                                    |
 | name             | string  | optional    | "udp logger" |         | A unique identifier to identify the batch processor.                                      |
-| batch_max_size   | integer | optional    | 1000         | [1,...] | Maximum size of each batch                                                                |
-| inactive_timeout | integer | optional    | 5            | [1,...] | Maximum age in seconds when the buffer will be flushed if inactive                        |
-| buffer_duration  | integer | optional    | 60           | [1,...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed  |
 | include_req_body | boolean | optional    | false        |         | Whether to include the request body                                                       |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#configuration) configuration section.
+
 ## How To Enable
 
 The following is an example of how to enable the udp-logger for a specific route.
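And a last hedged Python sketch for `udp-logger`, shortening the flush-on-inactivity window from the 5-second default; the UDP collector address and port, the Admin API address and key, and the upstream node are assumptions for a local test setup.

```python
# Hypothetical example: udp-logger sending to a local UDP collector, with a
# shorter flush-on-inactivity window than the default.
import requests

plugin_conf = {
    "host": "127.0.0.1",    # assumed UDP log collector address
    "port": 5140,           # assumed UDP log collector port
    "timeout": 3,
    "name": "udp logger",
    "include_req_body": False,
    # batch-processor override
    "inactive_timeout": 2,
}

resp = requests.put(
    "http://127.0.0.1:9080/apisix/admin/routes/1",
    json={
        "uri": "/hello",
        "plugins": {"udp-logger": plugin_conf},
        "upstream": {"type": "roundrobin", "nodes": {"127.0.0.1:1980": 1}},
    },
    headers={"X-API-KEY": "edd1c9f034335f136f87ad84b625c8f1"},
    timeout=5,
)
print(resp.status_code, resp.text)
```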
diff --git a/docs/zh/latest/plugins/clickhouse-logger.md b/docs/zh/latest/plugins/clickhouse-logger.md
index 5f57149..f688c30 100644
--- a/docs/zh/latest/plugins/clickhouse-logger.md
+++ b/docs/zh/latest/plugins/clickhouse-logger.md
@@ -36,11 +36,10 @@ title: clickhouse-logger
 | password         | string  | required |                     |              | ClickHouse password.                                        |
 | timeout          | integer | optional | 3                   | [1,...]      | Time to keep the connection alive after sending a request.  |
 | name             | string  | optional | "clickhouse logger" |              | A unique identifier to identify the logger.                 |
-| batch_max_size   | integer | optional | 100                 | [1,...]      | Maximum number of logs sent in each batch. When the number of logs reaches the configured maximum, all logs are automatically pushed to `clickhouse`. |
-| max_retry_count  | integer | optional | 0                   | [0,...]      | Maximum number of retries before removing from the processing pipeline.    |
-| retry_delay      | integer | optional | 1                   | [0,...]      | Number of seconds to delay execution if execution fails.    |
 | ssl_verify       | boolean | optional | true                | [true,false] | Verify the certificate.                                     |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#配置) configuration section.
+
 ## How To Enable
 
 The following is an example of how to enable the `clickhouse-logger` plugin for a specific route.
diff --git a/docs/zh/latest/plugins/google-cloud-logging.md b/docs/zh/latest/plugins/google-cloud-logging.md
index d778eae..9c37d37 100644
--- a/docs/zh/latest/plugins/google-cloud-logging.md
+++ b/docs/zh/latest/plugins/google-cloud-logging.md
@@ -44,11 +44,8 @@ title: google-cloud-logging
 | ssl_verify              | optional | true                      | Enable `SSL` verification, configured as per the [OpenResty docs](https://github.com/openresty/lua-nginx-module#tcpsocksslhandshake) options.   |
 | resource                | optional | {"type": "global"}        | The Google monitored resource, refer to: [MonitoredResource](https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource)     |
 | log_id                  | optional | apisix.apache.org%2Flogs  | Google Cloud Logging ID, refer to: [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry)                             |
-| max_retry_count         | optional | 0                         | Maximum number of retries before removing from the processing pipeline                                                                         |
-| retry_delay             | optional | 1                         | Number of seconds to delay execution if execution fails                                                                                        |
-| buffer_duration         | optional | 60                        | Maximum age in seconds of the oldest entry in a batch before the batch must be processed                                                       |
-| inactive_timeout        | optional | 5                         | Maximum age in seconds when the buffer will be flushed if inactive                                                                             |
-| batch_max_size          | optional | 1000                      | Maximum number of entries each batch queue can hold                                                                                            |
+
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#配置) configuration section.
 
 ## How To Enable
 
diff --git a/docs/zh/latest/plugins/syslog.md b/docs/zh/latest/plugins/syslog.md
index dbb2458..7565972 100644
--- a/docs/zh/latest/plugins/syslog.md
+++ b/docs/zh/latest/plugins/syslog.md
@@ -42,10 +42,10 @@ title: syslog
 | max_retry_times  | integer | optional | 1     | [1, ...] | Maximum number of retries after failing to connect to the log server or failing to send log messages to the log server. |
 | retry_interval   | integer | optional | 1     | [0, ...] | Time delay (in ms) before retrying to connect to the log server or retrying to send log messages to the log server.     |
 | pool_size        | integer | optional | 5     | [5, ...] | Keepalive pool size used by sock:keepalive.                                               |
-| batch_max_size   | integer | optional | 1000  | [1, ...] | Maximum size of each batch                                                                |
-| buffer_duration  | integer | optional | 60    | [1, ...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed  |
 | include_req_body | boolean | optional | false |          | Whether to include the request body                                                       |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#配置) configuration section.
+
 ## How To Enable
 
 1. The following example shows how to enable the `sys-logger` plugin for a specific route.
diff --git a/docs/zh/latest/plugins/udp-logger.md b/docs/zh/latest/plugins/udp-logger.md
index 56a218c..f286e62 100644
--- a/docs/zh/latest/plugins/udp-logger.md
+++ b/docs/zh/latest/plugins/udp-logger.md
@@ -40,11 +40,10 @@ title: udp-logger
 | port             | integer | required |              | [0,...] | Target port.                                                                              |
 | timeout          | integer | optional | 1000         | [1,...] | Timeout for sending data.                                                                 |
 | name             | string  | optional | "udp logger" |         | A unique identifier for the batch processor.                                              |
-| batch_max_size   | integer | optional | 1000         | [1,...] | Maximum size of each batch                                                                |
-| inactive_timeout | integer | optional | 5            | [1,...] | Maximum age in seconds when the buffer will be flushed if inactive                        |
-| buffer_duration  | integer | optional | 60           | [1,...] | Maximum age in seconds of the oldest entry in a batch before the batch must be processed  |
 | include_req_body | boolean | optional |              |         | Whether to include the request body                                                       |
 
+The plugin supports the use of batch processors to aggregate and process entries (logs/data) in batches. This avoids the need for frequently submitting the data: by default the batch processor submits data every `5` seconds or when the data in the queue reaches `1000` entries. For more information or to customize the batch processor parameters, see the [Batch-Processor](../batch-processor.md#配置) configuration section.
+
 ## How To Enable
 
 1. The following example shows how to enable the `udp-logger` plugin for a specific route.
