[CARBONDATA-2815][Doc] Add documentation for spilling memory and datamap rebuild

Add documentation for:
1. spilling unsafe memory for data loading
2. datamap rebuild for index datamap

This closes #2604


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6d6a5b2e
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6d6a5b2e
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/6d6a5b2e

Branch: refs/heads/external-format
Commit: 6d6a5b2eb7eb30a39438019ddfed48dacd14a06f
Parents: 12725b7
Author: xuchuanyin <xuchuan...@hust.edu.cn>
Authored: Thu Aug 2 22:39:49 2018 +0800
Committer: chenliang613 <chenliang...@huawei.com>
Committed: Sat Aug 4 08:53:53 2018 +0800

----------------------------------------------------------------------
 docs/configuration-parameters.md   |  3 ++-
 docs/datamap/datamap-management.md | 16 ++++++++++++----
 2 files changed, 14 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/6d6a5b2e/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 77cf230..eee85e2 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -69,7 +69,8 @@ This section provides the details of all the configurations required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must to be configured by the user if bad record logger is enabled or bad record action redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the performance of select queries as it fetch columnar batch of size 4*1024 rows instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of blocklets which are called blocklet groups. This parameter specifies the size of the blocklet group. Higher value results in better sequential IO access.The minimum value is 16MB, any value lesser than 16MB will reset to the default value (64MB). |  |
-| carbon.task.distribution | block | **block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small partitions to a size of (128 MB is the default value of "spark.sql.files.maxPartitionBytes",it is configurable) during querying. The small partitions are combined to a map task to reduce the number of read task. This enhances the performance. | | 
+| carbon.task.distribution | block | **block**: Setting this value will launch one task per block. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **custom**: Setting this value will group the blocks and distribute it uniformly to the available resources in the cluster. This enhances the query performance but not suggested in case of concurrent queries and queries having big shuffling scenarios. **blocklet**: Setting this value will launch one task per blocklet. This setting is suggested in case of concurrent queries and queries having big shuffling scenarios. **merge_small_files**: Setting this value will merge all the small partitions to a size of (128 MB is the default value of "spark.sql.files.maxPartitionBytes",it is configurable) during querying. The small partitions are combined to a map task to reduce the number of read task. This enhances the performance. | |
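
For readers wiring this up programmatically, a minimal sketch of setting the distribution mode through the CarbonProperties singleton from carbondata-core; the chosen value is illustrative, not a recommendation:

```scala
import org.apache.carbondata.core.util.CarbonProperties

// Pick one of: block, custom, blocklet, merge_small_files.
// merge_small_files combines small partitions at query time as described above.
val props = CarbonProperties.getInstance()
props.addProperty("carbon.task.distribution", "merge_small_files")
```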
+| carbon.load.sortmemory.spill.percentage | 0 | If unsafe memory is used during data loading, this configuration controls the spilling of in-memory pages to disk. Internally, during sorting CarbonData sorts data in pages and adds them to unsafe memory; if the memory is insufficient, CarbonData spills the pages to disk and generates sort temp files. This configuration controls how many in-memory pages are spilled to disk, based on size: the spillable size is calculated by multiplying this value (as a percentage) with 'carbon.sort.storage.inmemory.size.inmb'. For example, the default value 0 means that no pages in unsafe memory are spilled and all newly sorted data is spilled to disk; value 50 means that if the unsafe memory is insufficient, about half of the pages in unsafe memory are spilled to disk; value 100 means that almost all pages in unsafe memory are spilled. **Note**: This configuration only works for 'LOCAL_SORT' and 'BATCH_SORT', and the actual spilling behavior may differ slightly between data loads. | Integer values between 0 and 100 |
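
To make the spill calculation concrete, a minimal spark-shell style sketch combining the two sort-memory properties via the CarbonProperties API from carbondata-core; the values below are illustrative assumptions, not defaults from this patch:

```scala
import org.apache.carbondata.core.util.CarbonProperties

val props = CarbonProperties.getInstance()

// Unsafe memory reserved for sorted in-memory pages during data loading (MB).
props.addProperty("carbon.sort.storage.inmemory.size.inmb", "512")

// Percentage of those pages eligible for spilling when unsafe memory runs
// short; only effective for 'LOCAL_SORT' and 'BATCH_SORT' loads.
props.addProperty("carbon.load.sortmemory.spill.percentage", "50")

// Spillable size per the description above: 512 MB * 50% = 256 MB.
val spillableMb = 512 * 50 / 100
println(s"Up to ~$spillableMb MB of in-memory pages may be spilled to sort temp files")
```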
 
 * **Compaction Configuration**
   

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6d6a5b2e/docs/datamap/datamap-management.md
----------------------------------------------------------------------
diff --git a/docs/datamap/datamap-management.md b/docs/datamap/datamap-management.md
index 01bb69f..1695a23 100644
--- a/docs/datamap/datamap-management.md
+++ b/docs/datamap/datamap-management.md
@@ -22,13 +22,13 @@ Currently, there are 5 DataMap implementation in CarbonData.
 | timeseries       | time dimension rollup table.             | event_time, xx_granularity, please refer to [Timeseries DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/timeseries-datamap-guide.md) | Automatic        |
 | mv               | multi-table pre-aggregate table,         | No DMPROPERTY is required                | Manual           |
 | lucene           | lucene indexing for text column          | index_columns to specifying the index columns | Manual/Automatic |
-| bloom            | bloom filter for high cardinality column, geospatial column | index_columns to specifying the index columns | Manual/Automatic |
+| bloomfilter      | bloom filter for high cardinality column, geospatial column | index_columns to specifying the index columns | Manual/Automatic |
 
 ## DataMap Management
 
 There are two kinds of management semantic for DataMap.
 
-1. Autmatic Refresh: Create datamap without `WITH DEFERED REBUILD` in the statement
+1. Automatic Refresh: Create datamap without `WITH DEFERRED REBUILD` in the statement, which is the default.
 2. Manual Refresh: Create datamap with `WITH DEFERRED REBUILD` in the statement
 
 ### Automatic Refresh
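
To illustrate the two management semantics listed above, a hedged sketch (not part of this patch): the table name `sales`, the datamap names, and the indexed column are hypothetical, and a SparkSession with CarbonData extensions is assumed.

```scala
import org.apache.spark.sql.SparkSession

val spark: SparkSession = SparkSession.builder()
  .appName("datamap-management-sketch")
  .getOrCreate() // assumed to be created with CarbonData support enabled

// Automatic refresh (the default): no deferred-rebuild clause, so the
// system keeps the datamap in sync with the main table.
spark.sql(
  """CREATE DATAMAP dm_user_bloom ON TABLE sales
    |USING 'bloomfilter'
    |DMPROPERTIES ('index_columns'='user_id')""".stripMargin)

// Manual refresh: created with status *disabled* until explicitly rebuilt.
spark.sql(
  """CREATE DATAMAP dm_user_bloom_deferred ON TABLE sales
    |USING 'bloomfilter'
    |WITH DEFERRED REBUILD
    |DMPROPERTIES ('index_columns'='user_id')""".stripMargin)
```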
@@ -51,15 +51,23 @@ If user do want to perform above operations on the main table, user can first dr
 
 If user drop the main table, the datamap will be dropped immediately too.
 
+We recommend using this management semantic for index datamaps.
+
 ### Manual Refresh
 
 When user creates a datamap specifying maunal refresh semantic, the datamap is created with status *disabled* and query will NOT use this datamap until user can issue REBUILD DATAMAP command to build the datamap. For every REBUILD DATAMAP command, system will trigger a full rebuild of the datamap. After rebuild is done, system will change datamap status to *enabled*, so that it can be used in query rewrite.
 
-For every new data loading, data update, delete, the related datamap will be made *disabled*.
+For every new data loading, data update, delete, the related datamap will be made *disabled*,
+which means that subsequent queries will not benefit from the datamap until it becomes *enabled* again.
 
 If the main table is dropped by user, the related datamap will be dropped immediately.
 
-*Note: If you are creating a datamap on external table, you need to do manual managment of the datamap.*
+**Note**:
++ If you are creating a datamap on an external table, you need to do manual management of the datamap.
++ For an index datamap such as the BloomFilter datamap, there is no need to do manual refresh.
+ By default it uses automatic refresh,
+ which means its data will be refreshed immediately after the datamap is created or the main table is loaded.
+ Manual refresh on this datamap will have no impact.
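
Continuing the earlier sketch for the manual-refresh flow described above (same hypothetical names; `spark` is the CarbonData-enabled session from the previous snippet):

```scala
// Trigger a full rebuild; afterwards the system flips the datamap status
// to *enabled* and queries can be rewritten to use it.
spark.sql("REBUILD DATAMAP dm_user_bloom_deferred ON TABLE sales")

// A new load disables the datamap again, so the rebuild must be re-issued
// before subsequent queries benefit from it.
spark.sql("LOAD DATA INPATH '/tmp/sales.csv' INTO TABLE sales")
spark.sql("REBUILD DATAMAP dm_user_bloom_deferred ON TABLE sales")
```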
 
 
 
