This is an automated email from the ASF dual-hosted git repository.

xxyu pushed a commit to branch document
in repository https://gitbox.apache.org/repos/asf/kylin.git

commit 02182d1abf9f72b28649b8752dfbbee0095acf72
Author: xuekaiqi <kaiqi....@qq.com>
AuthorDate: Tue Nov 10 15:24:17 2020 +0800

    a little update
---
 website/_docs/tutorial/setup_systemcube.cn.md | 100 +++++++++++++-------------
 website/_docs/tutorial/setup_systemcube.md    |  98 +++++++++++++------------
 2 files changed, 103 insertions(+), 95 deletions(-)

diff --git a/website/_docs/tutorial/setup_systemcube.cn.md 
b/website/_docs/tutorial/setup_systemcube.cn.md
index d05e942..86a3cdb 100644
--- a/website/_docs/tutorial/setup_systemcube.cn.md
+++ b/website/_docs/tutorial/setup_systemcube.cn.md
@@ -20,7 +20,7 @@ permalink: /cn/docs/tutorial/setup_systemcube.html
 
 ## <span id="如何建立系统 Cube">如何建立系统 Cube</span>
 
-本节我们介绍手动启用系统 Cube 的方法,如果您希望通过 shell 脚本自动创建系统 Cube,请参考[自动创建系统 Cube](#什么是系统 
Cube)。
+本节我们介绍手动启用系统 Cube 的方法,如果您希望通过 shell 脚本自动创建系统 Cube,请参考[自动创建系统 Cube](#自动创建系统 
Cube)。
 
 ### 1. 准备
 
@@ -173,7 +173,7 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>REALIZATION</td>
-    <td>the cube which the query hits. In Kylin,there are two OLAP 
realizations: Cube,or Hybrid of Cubes</td>
+    <td>the cube which the query hits. In Kylin, there are two OLAP 
realizations: Cube, or Hybrid of Cubes</td>
   </tr>
   <tr>
     <td>REALIZATION_TYPE</td>
@@ -181,11 +181,11 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>QUERY_TYPE</td>
-    <td>users can query on different data 
sources,CACHE,OLAP,LOOKUP_TABLE,HIVE</td>
+    <td>users can query on different data sources: CACHE, OLAP, LOOKUP_TABLE, 
HIVE</td>
   </tr>
   <tr>
     <td>EXCEPTION</td>
-    <td>when doing query,exceptions may happen. It's for classifying different 
exception types</td>
+    <td>when doing a query, exceptions may happen. It's for classifying different exception types</td>
   </tr>
 </table>
 
@@ -198,19 +198,19 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
     <td></td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM,PERCENTILE_APPROX of QUERY_TIME_COST</td>
+    <td>MIN, MAX, SUM, PERCENTILE_APPROX of QUERY_TIME_COST</td>
     <td>the time cost for the whole query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of CALCITE_SIZE_RETURN</td>
+    <td>MAX, SUM of CALCITE_SIZE_RETURN</td>
     <td>the row count of the result Calcite returns</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_SIZE_RETURN</td>
+    <td>MAX, SUM of STORAGE_SIZE_RETURN</td>
     <td>the row count of the input to Calcite</td>
   </tr>
   <tr>
-    <td>MAX,SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
+    <td>MAX, SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
     <td>the row count of Calcite aggregates and filters</td>
   </tr>
   <tr>
@@ -220,6 +220,7 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
 </table>
 
 
+
 ### METRICS_QUERY_RPC
 这个 Cube 用于在最低级别收集查询 metrics。对于一个查询,相关的 aggregation 和 filter 能够下推到每一个 rpc 
目标服务器。Rpc 目标服务器的健壮性是更好查询性能的基础。细节如下:
 
@@ -237,7 +238,7 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>REALIZATION</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube which the query hits</td>
   </tr>
   <tr>
     <td>RPC_SERVER</td>
@@ -245,7 +246,7 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>EXCEPTION</td>
-    <td>the exception of a rpc call. If no exception,"NULL" is used</td>
+    <td>the exception of a rpc call. If no exception, "NULL" is used</td>
   </tr>
 </table>
 
@@ -258,32 +259,33 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM,PERCENTILE_APPROX of CALL_TIME</td>
+    <td>MAX, SUM, PERCENTILE_APPROX of CALL_TIME</td>
     <td>the time cost of a rpc call</td>
   </tr>
   <tr>
-    <td>MAX,SUM of COUNT_SKIP</td>
-    <td>based on fuzzy filters or else,a few rows will be skiped. This 
indicates the skipped row count</td>
+    <td>MAX, SUM of COUNT_SKIP</td>
+    <td>based on fuzzy filters or other conditions, some rows will be skipped. This indicates the skipped row count</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_SCAN</td>
+    <td>MAX, SUM of SIZE_SCAN</td>
     <td>the row count actually scanned</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_RETURN</td>
+    <td>MAX, SUM of SIZE_RETURN</td>
     <td>the row count actually returned</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_AGGREGATE</td>
+    <td>MAX, SUM of SIZE_AGGREGATE</td>
     <td>the row count actually aggregated</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_AGGREGATE_FILTER</td>
-    <td>the row count actually aggregated and filtered,= SIZE_SCAN - 
SIZE_RETURN</td>
+    <td>MAX, SUM of SIZE_AGGREGATE_FILTER</td>
+    <td>the row count actually aggregated and filtered, = SIZE_SCAN - 
SIZE_RETURN</td>
   </tr>
 </table>
 
 
+
 ### METRICS_QUERY_CUBE
 这个 Cube 用于在 Cube 级别收集查询 metrics。最重要的是 cuboids 相关的,其为 Cube planner 提供服务。细节如下:
 
@@ -316,7 +318,6 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
     <td>whether a query on this Cube is successful or not</td>
   </tr>
 </table>
-
 <table>
   <tr>
     <th colspan="2">Measure</th>
@@ -330,40 +331,41 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_COUNT</td>
+    <td>MAX, SUM of STORAGE_CALL_COUNT</td>
     <td>the number of rpc calls for a query hit on this Cube</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_TIME_SUM</td>
+    <td>MAX, SUM of STORAGE_CALL_TIME_SUM</td>
     <td>sum of time cost for the rpc calls of a query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_TIME_MAX</td>
+    <td>MAX, SUM of STORAGE_CALL_TIME_MAX</td>
     <td>max of time cost among the rpc calls of a query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_SKIP</td>
+    <td>MAX, SUM of STORAGE_COUNT_SKIP</td>
     <td>the sum of row count skipped for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_SCAN</td>
+    <td>MAX, SUM of STORAGE_COUNT_SCAN</td>
     <td>the sum of row count scanned for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_RETURN</td>
+    <td>MAX, SUM of STORAGE_COUNT_RETURN</td>
     <td>the sum of row count returned for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_AGGREGATE</td>
+    <td>MAX, SUM of STORAGE_COUNT_AGGREGATE</td>
     <td>the sum of row count aggregated for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_AGGREGATE_FILTER</td>
-    <td>the sum of row count aggregated and filtered for the related rpc 
calls,= STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
+    <td>MAX, SUM of STORAGE_COUNT_AGGREGATE_FILTER</td>
+    <td>the sum of row count aggregated and filtered for the related rpc 
calls, = STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
   </tr>
 </table>
 
 
+
 ### METRICS_JOB
 在 Kylin 中,主要有三种类型的 job:
 - "BUILD",为了从 **HIVE** 中 building Cube segments。
@@ -378,19 +380,19 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>HOST</td>
-    <td>the host of server for query engine</td>
+    <td>the host of server for job engine</td>
   </tr>
   <tr>
     <td>KUSER</td>
-    <td>the user who executes the query</td>
+    <td>the user who runs the job</td>
   </tr>
   <tr>
     <td>PROJECT</td>
-    <td>the project where the query executes</td>
+    <td>the project where the job runs</td>
   </tr>
   <tr>
     <td>CUBE_NAME</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube to which the job is related</td>
   </tr>
   <tr>
     <td>JOB_TYPE</td>
@@ -398,7 +400,7 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>CUBING_TYPE</td>
-    <td>in kylin,there are two cubing algorithms,Layered & Fast(InMemory)</td>
+    <td>in Kylin, there are two cubing algorithms: Layered & Fast (InMemory)</td>
   </tr>
 </table>
 
@@ -411,44 +413,45 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
     <td></td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM,PERCENTILE_APPROX of DURATION</td>
+    <td>MIN, MAX, SUM, PERCENTILE_APPROX of DURATION</td>
     <td>the duration from a job start to finish</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of TABLE_SIZE</td>
+    <td>MIN, MAX, SUM of TABLE_SIZE</td>
     <td>the size of data source in bytes</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of CUBE_SIZE</td>
+    <td>MIN, MAX, SUM of CUBE_SIZE</td>
     <td>the size of created Cube segment in bytes</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of PER_BYTES_TIME_COST</td>
+    <td>MIN, MAX, SUM of PER_BYTES_TIME_COST</td>
     <td>= DURATION / TABLE_SIZE</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of WAIT_RESOURCE_TIME</td>
+    <td>MIN, MAX, SUM of WAIT_RESOURCE_TIME</td>
     <td>a job may include several MR (MapReduce) jobs. Those MR jobs may wait because of a lack of Hadoop resources.</td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_distinct_columns</td>
+    <td>MAX, SUM of step_duration_distinct_columns</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_dictionary</td>
+    <td>MAX, SUM of step_duration_dictionary</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_inmem_cubing</td>
+    <td>MAX, SUM of step_duration_inmem_cubing</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_hfile_convert</td>
+    <td>MAX, SUM of step_duration_hfile_convert</td>
     <td></td>
   </tr>
 </table>
 
 
+
 ### METRICS_JOB_EXCEPTION
 这个 Cube 是用来收集 job exception 指标。细节如下:
 
@@ -458,19 +461,19 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>HOST</td>
-    <td>the host of server for query engine</td>
+    <td>the host of server for job engine</td>
   </tr>
   <tr>
     <td>KUSER</td>
-    <td>the user who executes the query</td>
+    <td>the user who runs a job</td>
   </tr>
   <tr>
     <td>PROJECT</td>
-    <td>the project where the query executes</td>
+    <td>the project where the job runs</td>
   </tr>
   <tr>
     <td>CUBE_NAME</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube to which the job is related</td>
   </tr>
   <tr>
     <td>JOB_TYPE</td>
@@ -478,15 +481,16 @@ Hive 中有 5 张表记录了 Kylin 系统的相关指标数据,每一个系
   </tr>
   <tr>
     <td>CUBING_TYPE</td>
-    <td>in kylin,there are two cubing algorithms,Layered & Fast(InMemory)</td>
+    <td>in Kylin, there are two cubing algorithms: Layered & Fast (InMemory)</td>
   </tr>
   <tr>
     <td>EXCEPTION</td>
-    <td>when running a job,exceptions may happen. It's for classifying 
different exception types</td>
+    <td>when running a job, exceptions may happen. It's for classifying 
different exception types</td>
   </tr>
 </table>
 
 
+
 <table>
   <tr>
     <th>Measure</th>
diff --git a/website/_docs/tutorial/setup_systemcube.md 
b/website/_docs/tutorial/setup_systemcube.md
index 196aca4..70cecad 100644
--- a/website/_docs/tutorial/setup_systemcube.md
+++ b/website/_docs/tutorial/setup_systemcube.md
@@ -20,7 +20,7 @@ For better supporting self-monitoring, a set of system Cubes 
are created under t
 
 ## <span id="How to Set Up System Cube">How to Set Up System Cube</span>
 
-In this section, we will introduce the method of manually enabling the system 
cube. If you want to automatically enable the system cube through shell 
scripts, please refer to [Automatically Create System Cube](#What is System 
Cube).
+In this section, we will introduce the method of manually enabling the system 
cube. If you want to automatically enable the system cube through shell 
scripts, please refer to [Automatically Create System Cube](#Automatically 
Create System Cube).
 
 ### 1. Prepare
 
@@ -176,7 +176,7 @@ This Cube is for collecting query metrics at the highest 
level. The details are
   </tr>
   <tr>
     <td>REALIZATION</td>
-    <td>the cube which the query hits. In Kylin,there are two OLAP 
realizations: Cube,or Hybrid of Cubes</td>
+    <td>the cube which the query hits. In Kylin, there are two OLAP 
realizations: Cube, or Hybrid of Cubes</td>
   </tr>
   <tr>
     <td>REALIZATION_TYPE</td>
@@ -184,14 +184,15 @@ This Cube is for collecting query metrics at the highest 
level. The details are
   </tr>
   <tr>
     <td>QUERY_TYPE</td>
-    <td>users can query on different data 
sources,CACHE,OLAP,LOOKUP_TABLE,HIVE</td>
+    <td>users can query on different data sources: CACHE, OLAP, LOOKUP_TABLE, 
HIVE</td>
   </tr>
   <tr>
     <td>EXCEPTION</td>
-    <td>when doing query,exceptions may happen. It's for classifying different 
exception types</td>
+    <td>when doing a query, exceptions may happen. It's for classifying different exception types</td>
   </tr>
 </table>
 
+
 <table>
   <tr>
     <th colspan="2">Measure</th>
@@ -201,19 +202,19 @@ This Cube is for collecting query metrics at the highest 
level. The details are
     <td></td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM,PERCENTILE_APPROX of QUERY_TIME_COST</td>
+    <td>MIN, MAX, SUM, PERCENTILE_APPROX of QUERY_TIME_COST</td>
     <td>the time cost for the whole query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of CALCITE_SIZE_RETURN</td>
+    <td>MAX, SUM of CALCITE_SIZE_RETURN</td>
     <td>the row count of the result Calcite returns</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_SIZE_RETURN</td>
+    <td>MAX, SUM of STORAGE_SIZE_RETURN</td>
     <td>the row count of the input to Calcite</td>
   </tr>
   <tr>
-    <td>MAX,SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
+    <td>MAX, SUM of CALCITE_SIZE_AGGREGATE_FILTER</td>
     <td>the row count of Calcite aggregates and filters</td>
   </tr>
   <tr>
@@ -240,7 +241,7 @@ This Cube is for collecting query metrics at the lowest 
level. For a query, the
   </tr>
   <tr>
     <td>REALIZATION</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube which the query hits</td>
   </tr>
   <tr>
     <td>RPC_SERVER</td>
@@ -261,32 +262,33 @@ This Cube is for collecting query metrics at the lowest 
level. For a query, the
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM,PERCENTILE_APPROX of CALL_TIME</td>
+    <td>MAX, SUM, PERCENTILE_APPROX of CALL_TIME</td>
     <td>the time cost of a rpc call</td>
   </tr>
   <tr>
-    <td>MAX,SUM of COUNT_SKIP</td>
-    <td>based on fuzzy filters or else,a few rows will be skiped. This 
indicates the skipped row count</td>
+    <td>MAX, SUM of COUNT_SKIP</td>
+    <td>based on fuzzy filters or other conditions, some rows will be skipped. This indicates the skipped row count</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_SCAN</td>
+    <td>MAX, SUM of SIZE_SCAN</td>
     <td>the row count actually scanned</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_RETURN</td>
+    <td>MAX, SUM of SIZE_RETURN</td>
     <td>the row count actually returned</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_AGGREGATE</td>
+    <td>MAX, SUM of SIZE_AGGREGATE</td>
     <td>the row count actually aggregated</td>
   </tr>
   <tr>
-    <td>MAX,SUM of SIZE_AGGREGATE_FILTER</td>
-    <td>the row count actually aggregated and filtered,= SIZE_SCAN - 
SIZE_RETURN</td>
+    <td>MAX, SUM of SIZE_AGGREGATE_FILTER</td>
+    <td>the row count actually aggregated and filtered, = SIZE_SCAN - 
SIZE_RETURN</td>
   </tr>
 </table>
 
 
+
 ### METRICS_QUERY_CUBE
 This Cube is for collecting query metrics at the Cube level. The most 
important are cuboids related, which will serve for Cube planner. The details 
are as follows:
 
@@ -319,7 +321,6 @@ This Cube is for collecting query metrics at the Cube 
level. The most important
     <td>whether a query on this Cube is successful or not</td>
   </tr>
 </table>
-
 <table>
   <tr>
     <th colspan="2">Measure</th>
@@ -333,40 +334,41 @@ This Cube is for collecting query metrics at the Cube 
level. The most important
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_COUNT</td>
+    <td>MAX, SUM of STORAGE_CALL_COUNT</td>
     <td>the number of rpc calls for a query hit on this Cube</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_TIME_SUM</td>
+    <td>MAX, SUM of STORAGE_CALL_TIME_SUM</td>
     <td>sum of time cost for the rpc calls of a query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_CALL_TIME_MAX</td>
+    <td>MAX, SUM of STORAGE_CALL_TIME_MAX</td>
     <td>max of time cost among the rpc calls of a query</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_SKIP</td>
+    <td>MAX, SUM of STORAGE_COUNT_SKIP</td>
     <td>the sum of row count skipped for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_SCAN</td>
+    <td>MAX, SUM of STORAGE_COUNT_SCAN</td>
     <td>the sum of row count scanned for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_RETURN</td>
+    <td>MAX, SUM of STORAGE_COUNT_RETURN</td>
     <td>the sum of row count returned for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_AGGREGATE</td>
+    <td>MAX, SUM of STORAGE_COUNT_AGGREGATE</td>
     <td>the sum of row count aggregated for the related rpc calls</td>
   </tr>
   <tr>
-    <td>MAX,SUM of STORAGE_COUNT_AGGREGATE_FILTER</td>
-    <td>the sum of row count aggregated and filtered for the related rpc 
calls,= STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
+    <td>MAX, SUM of STORAGE_COUNT_AGGREGATE_FILTER</td>
+    <td>the sum of row count aggregated and filtered for the related rpc 
calls, = STORAGE_SIZE_SCAN - STORAGE_SIZE_RETURN</td>
   </tr>
 </table>
 
 
+
 ### METRICS_JOB
 In Kylin, there are mainly three types of job:
 - "BUILD", for building Cube segments from **HIVE**.
@@ -381,19 +383,19 @@ This Cube is for collecting job metrics. The details are 
as follows:
   </tr>
   <tr>
     <td>HOST</td>
-    <td>the host of server for query engine</td>
+    <td>the host of server for job engine</td>
   </tr>
   <tr>
     <td>KUSER</td>
-    <td>the user who executes the query</td>
+    <td>the user who runs the job</td>
   </tr>
   <tr>
     <td>PROJECT</td>
-    <td>the project where the query executes</td>
+    <td>the project where the job runs</td>
   </tr>
   <tr>
     <td>CUBE_NAME</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube to which the job is related</td>
   </tr>
   <tr>
     <td>JOB_TYPE</td>
@@ -401,7 +403,7 @@ This Cube is for collecting job metrics. The details are as 
follows:
   </tr>
   <tr>
     <td>CUBING_TYPE</td>
-    <td>in kylin,there are two cubing algorithms,Layered & Fast(InMemory)</td>
+    <td>in Kylin, there are two cubing algorithms: Layered & Fast (InMemory)</td>
   </tr>
 </table>
 
@@ -414,44 +416,45 @@ This Cube is for collecting job metrics. The details are 
as follows:
     <td></td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM,PERCENTILE_APPROX of DURATION</td>
+    <td>MIN, MAX, SUM, PERCENTILE_APPROX of DURATION</td>
     <td>the duration from a job start to finish</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of TABLE_SIZE</td>
+    <td>MIN, MAX, SUM of TABLE_SIZE</td>
     <td>the size of data source in bytes</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of CUBE_SIZE</td>
+    <td>MIN, MAX, SUM of CUBE_SIZE</td>
     <td>the size of created Cube segment in bytes</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of PER_BYTES_TIME_COST</td>
+    <td>MIN, MAX, SUM of PER_BYTES_TIME_COST</td>
     <td>= DURATION / TABLE_SIZE</td>
   </tr>
   <tr>
-    <td>MIN,MAX,SUM of WAIT_RESOURCE_TIME</td>
+    <td>MIN, MAX, SUM of WAIT_RESOURCE_TIME</td>
     <td>a job may include several MR (MapReduce) jobs. Those MR jobs may wait because of a lack of Hadoop resources.</td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_distinct_columns</td>
+    <td>MAX, SUM of step_duration_distinct_columns</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_dictionary</td>
+    <td>MAX, SUM of step_duration_dictionary</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_inmem_cubing</td>
+    <td>MAX, SUM of step_duration_inmem_cubing</td>
     <td></td>
   </tr>
   <tr>
-    <td>MAX,SUM of step_duration_hfile_convert</td>
+    <td>MAX, SUM of step_duration_hfile_convert</td>
     <td></td>
   </tr>
 </table>
 
 
+
 ### METRICS_JOB_EXCEPTION
 This Cube is for collecting job exception metrics. The details are as follows:
 
@@ -461,19 +464,19 @@ This Cube is for collecting job exception metrics. The 
details are as follows:
   </tr>
   <tr>
     <td>HOST</td>
-    <td>the host of server for query engine</td>
+    <td>the host of server for job engine</td>
   </tr>
   <tr>
     <td>KUSER</td>
-    <td>the user who executes the query</td>
+    <td>the user who runs a job</td>
   </tr>
   <tr>
     <td>PROJECT</td>
-    <td>the project where the query executes</td>
+    <td>the project where the job runs</td>
   </tr>
   <tr>
     <td>CUBE_NAME</td>
-    <td>the cube which the query hits.</td>
+    <td>the cube to which the job is related</td>
   </tr>
   <tr>
     <td>JOB_TYPE</td>
@@ -481,15 +484,16 @@ This Cube is for collecting job exception metrics. The 
details are as follows:
   </tr>
   <tr>
     <td>CUBING_TYPE</td>
-    <td>in kylin,there are two cubing algorithms,Layered & Fast(InMemory)</td>
+    <td>in Kylin, there are two cubing algorithms: Layered & Fast (InMemory)</td>
   </tr>
   <tr>
     <td>EXCEPTION</td>
-    <td>when running a job,exceptions may happen. It's for classifying 
different exception types</td>
+    <td>when running a job, exceptions may happen. It's for classifying 
different exception types</td>
   </tr>
 </table>
 
 
+
 <table>
   <tr>
     <th>Measure</th>

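The measure tables in this patch state two derived formulas: SIZE_AGGREGATE_FILTER = SIZE_SCAN - SIZE_RETURN, and PER_BYTES_TIME_COST = DURATION / TABLE_SIZE. As a quick aid for readers of the tables, here is a minimal sketch of those relationships in Python; the function and field names mirror the doc's column names and are illustrative only, not Kylin APIs:

```python
# Illustrative only: the two derived measures described in the tables above.
# Field names follow the doc's column names; these are not Kylin APIs.

def size_aggregate_filter(size_scan: int, size_return: int) -> int:
    # SIZE_AGGREGATE_FILTER = SIZE_SCAN - SIZE_RETURN
    # (rows aggregated and filtered away inside the rpc target server)
    return size_scan - size_return

def per_bytes_time_cost(duration: float, table_size: float) -> float:
    # PER_BYTES_TIME_COST = DURATION / TABLE_SIZE
    # (job duration normalized by the data source size in bytes)
    return duration / table_size

if __name__ == "__main__":
    print(size_aggregate_filter(1000, 150))
    print(per_bytes_time_cost(600.0, 1200.0))
```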