http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/content/streaming-guide.html
----------------------------------------------------------------------
diff --git a/content/streaming-guide.html b/content/streaming-guide.html
index 812f1aa..e9788a9 100644
--- a/content/streaming-guide.html
+++ b/content/streaming-guide.html
@@ -190,16 +190,16 @@
 5,col5
 </code></pre>
 <p>Start spark-shell in a new terminal, type :paste, then copy and run the following code.</p>
-<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">import</span> <span class="pl-smi">java.io.</span><span 
class="pl-smi">File</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.</span>{<span 
class="pl-smi">CarbonEnv</span>, <span class="pl-smi">SparkSession</span>}
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.CarbonSession.</span><span 
class="pl-smi">_</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.streaming.</span>{<span 
class="pl-smi">ProcessingTime</span>, <span 
class="pl-smi">StreamingQuery</span>}
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.carbondata.core.util.path.</span><span 
class="pl-smi">CarbonStorePath</span>
+<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">import</span> <span class="pl-en">java</span>.<span 
class="pl-en">io</span>.<span class="pl-en">File</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.{<span class="pl-en">CarbonEnv</span>, <span 
class="pl-en">SparkSession</span>}
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">CarbonSession</span>.<span 
class="pl-en">_</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">streaming</span>.{<span 
class="pl-en">ProcessingTime</span>, <span class="pl-en">StreamingQuery</span>}
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">carbondata</span>.<span 
class="pl-en">core</span>.<span class="pl-en">util</span>.<span 
class="pl-en">path</span>.<span class="pl-en">CarbonStorePath</span>
  
- <span class="pl-k">val</span> <span class="pl-en">warehouse</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./warehouse<span 
class="pl-pds">"</span></span>).getCanonicalPath
- <span class="pl-k">val</span> <span class="pl-en">metastore</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./metastore<span 
class="pl-pds">"</span></span>).getCanonicalPath
+ <span class="pl-k">val</span> <span class="pl-smi">warehouse</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./warehouse<span 
class="pl-pds">"</span></span>).getCanonicalPath
+ <span class="pl-k">val</span> <span class="pl-smi">metastore</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./metastore<span 
class="pl-pds">"</span></span>).getCanonicalPath
  
- <span class="pl-k">val</span> <span class="pl-en">spark</span> <span 
class="pl-k">=</span> <span class="pl-en">SparkSession</span>
+ <span class="pl-k">val</span> <span class="pl-smi">spark</span> <span 
class="pl-k">=</span> <span class="pl-en">SparkSession</span>
    .builder()
    .master(<span class="pl-s"><span class="pl-pds">"</span>local<span 
class="pl-pds">"</span></span>)
    .appName(<span class="pl-s"><span class="pl-pds">"</span>StreamExample<span 
class="pl-pds">"</span></span>)
@@ -220,12 +220,12 @@
 <span class="pl-s">      | STORED BY 'carbondata'</span>
 <span class="pl-s">      | TBLPROPERTIES('streaming'='true')<span 
class="pl-pds">"""</span></span>.stripMargin)
 
- <span class="pl-k">val</span> <span class="pl-en">carbonTable</span> <span 
class="pl-k">=</span> <span class="pl-en">CarbonEnv</span>.getCarbonTable(<span 
class="pl-en">Some</span>(<span class="pl-s"><span 
class="pl-pds">"</span>default<span class="pl-pds">"</span></span>), <span 
class="pl-s"><span class="pl-pds">"</span>carbon_table<span 
class="pl-pds">"</span></span>)(spark)
- <span class="pl-k">val</span> <span class="pl-en">tablePath</span> <span 
class="pl-k">=</span> <span 
class="pl-en">CarbonStorePath</span>.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+ <span class="pl-k">val</span> <span class="pl-smi">carbonTable</span> <span 
class="pl-k">=</span> <span class="pl-en">CarbonEnv</span>.getCarbonTable(<span 
class="pl-en">Some</span>(<span class="pl-s"><span 
class="pl-pds">"</span>default<span class="pl-pds">"</span></span>), <span 
class="pl-s"><span class="pl-pds">"</span>carbon_table<span 
class="pl-pds">"</span></span>)(spark)
+ <span class="pl-k">val</span> <span class="pl-smi">tablePath</span> <span 
class="pl-k">=</span> <span 
class="pl-en">CarbonStorePath</span>.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
  
  <span class="pl-c"><span class="pl-c">//</span> batch load</span>
- <span class="pl-k">var</span> <span class="pl-en">qry</span><span 
class="pl-k">:</span> <span class="pl-en">StreamingQuery</span> <span 
class="pl-k">=</span> <span class="pl-c1">null</span>
- <span class="pl-k">val</span> <span class="pl-en">readSocketDF</span> <span 
class="pl-k">=</span> spark.readStream
+ <span class="pl-k">var</span> <span class="pl-smi">qry</span><span 
class="pl-k">:</span> <span class="pl-en">StreamingQuery</span> <span 
class="pl-k">=</span> <span class="pl-c1">null</span>
+ <span class="pl-k">val</span> <span class="pl-smi">readSocketDF</span> <span 
class="pl-k">=</span> spark.readStream
    .format(<span class="pl-s"><span class="pl-pds">"</span>socket<span 
class="pl-pds">"</span></span>)
    .option(<span class="pl-s"><span class="pl-pds">"</span>host<span 
class="pl-pds">"</span></span>, <span class="pl-s"><span 
class="pl-pds">"</span>localhost<span class="pl-pds">"</span></span>)
    .option(<span class="pl-s"><span class="pl-pds">"</span>port<span 
class="pl-pds">"</span></span>, <span class="pl-c1">9099</span>)
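To feed this socket source, a text stream is typically produced with netcat in another terminal before the query starts; a minimal sketch, assuming the same localhost:9099 used in the code above:

<pre><code>nc -lk 9099
</code></pre>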
@@ -327,7 +327,7 @@ streaming table using following DDL.</p>
 </table>
 <h2>
 <a id="change-segment-status" class="anchor" href="#change-segment-status" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Change segment status</h2>
-<p>Use below command to change the status of "streaming" segment to "streaming 
finish" segment.</p>
+<p>Use the following command to change the status of a "streaming" segment to a "streaming finish" segment. If the streaming application is running, this command will block.</p>
 <div class="highlight highlight-source-sql"><pre><span 
class="pl-k">ALTER</span> <span class="pl-k">TABLE</span> streaming_table 
FINISH STREAMING</pre></div>
 <h2>
 <a id="handoff-streaming-finish-segment-to-columnar-segment" class="anchor" 
href="#handoff-streaming-finish-segment-to-columnar-segment" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Handoff "streaming finish" segment to columnar 
segment</h2>
@@ -379,8 +379,8 @@ streaming table using following DDL.</p>
  <span class="pl-k">case</span> <span class="pl-k">class</span> <span 
class="pl-en">StreamData</span>(<span class="pl-v">id</span>: <span 
class="pl-k">Int</span>, <span class="pl-v">name</span>: <span 
class="pl-k">String</span>, <span class="pl-v">city</span>: <span 
class="pl-k">String</span>, <span class="pl-v">salary</span>: <span 
class="pl-k">Float</span>, <span class="pl-v">file</span>: <span 
class="pl-en">FileElement</span>)
  ...
 
- <span class="pl-k">var</span> <span class="pl-en">qry</span><span 
class="pl-k">:</span> <span class="pl-en">StreamingQuery</span> <span 
class="pl-k">=</span> <span class="pl-c1">null</span>
- <span class="pl-k">val</span> <span class="pl-en">readSocketDF</span> <span 
class="pl-k">=</span> spark.readStream
+ <span class="pl-k">var</span> <span class="pl-smi">qry</span><span 
class="pl-k">:</span> <span class="pl-en">StreamingQuery</span> <span 
class="pl-k">=</span> <span class="pl-c1">null</span>
+ <span class="pl-k">val</span> <span class="pl-smi">readSocketDF</span> <span 
class="pl-k">=</span> spark.readStream
    .format(<span class="pl-s"><span class="pl-pds">"</span>socket<span 
class="pl-pds">"</span></span>)
    .option(<span class="pl-s"><span class="pl-pds">"</span>host<span 
class="pl-pds">"</span></span>, <span class="pl-s"><span 
class="pl-pds">"</span>localhost<span class="pl-pds">"</span></span>)
    .option(<span class="pl-s"><span class="pl-pds">"</span>port<span 
class="pl-pds">"</span></span>, <span class="pl-c1">9099</span>)
@@ -388,8 +388,8 @@ streaming table using following DDL.</p>
    .as[<span class="pl-k">String</span>]
    .map(_.split(<span class="pl-s"><span class="pl-pds">"</span>,<span 
class="pl-pds">"</span></span>))
    .map { fields <span class="pl-k">=&gt;</span> {
-     <span class="pl-k">val</span> <span class="pl-en">tmp</span> <span 
class="pl-k">=</span> fields(<span class="pl-c1">4</span>).split(<span 
class="pl-s"><span class="pl-pds">"</span><span class="pl-cce">\\</span>$<span 
class="pl-pds">"</span></span>)
-     <span class="pl-k">val</span> <span class="pl-en">file</span> <span 
class="pl-k">=</span> <span class="pl-en">FileElement</span>(tmp(<span 
class="pl-c1">0</span>).split(<span class="pl-s"><span 
class="pl-pds">"</span>:<span class="pl-pds">"</span></span>), tmp(<span 
class="pl-c1">1</span>).toInt)
+     <span class="pl-k">val</span> <span class="pl-smi">tmp</span> <span 
class="pl-k">=</span> fields(<span class="pl-c1">4</span>).split(<span 
class="pl-s"><span class="pl-pds">"</span><span class="pl-cce">\\</span>$<span 
class="pl-pds">"</span></span>)
+     <span class="pl-k">val</span> <span class="pl-smi">file</span> <span 
class="pl-k">=</span> <span class="pl-en">FileElement</span>(tmp(<span 
class="pl-c1">0</span>).split(<span class="pl-s"><span 
class="pl-pds">"</span>:<span class="pl-pds">"</span></span>), tmp(<span 
class="pl-c1">1</span>).toInt)
      <span class="pl-en">StreamData</span>(fields(<span 
class="pl-c1">0</span>).toInt, fields(<span class="pl-c1">1</span>), 
fields(<span class="pl-c1">2</span>), fields(<span 
class="pl-c1">3</span>).toFloat, file)
    } }
 
@@ -408,11 +408,11 @@ streaming table using following DDL.</p>
 <h3>
 <a id="how-to-implement-a-customized-stream-parser" class="anchor" 
href="#how-to-implement-a-customized-stream-parser" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>How to implement a 
customized stream parser</h3>
 <p>If the user needs to implement a customized stream parser to convert a specific InternalRow to Object[], it is necessary to implement the <code>initialize</code> method and the <code>parserRow</code> method of the interface <code>CarbonStreamParser</code>, for example:</p>
-<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">package</span> <span 
class="pl-en">org.XXX.XXX.streaming.parser</span>
+<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">package</span> <span class="pl-en">org</span>.<span 
class="pl-en">XXX</span>.<span class="pl-en">XXX</span>.<span 
class="pl-en">streaming</span>.<span class="pl-en">parser</span>
  
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.hadoop.conf.</span><span 
class="pl-smi">Configuration</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.catalyst.</span><span 
class="pl-smi">InternalRow</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.types.</span><span 
class="pl-smi">StructType</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">hadoop</span>.<span 
class="pl-en">conf</span>.<span class="pl-en">Configuration</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">catalyst</span>.<span 
class="pl-en">InternalRow</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">types</span>.<span 
class="pl-en">StructType</span>
  
  <span class="pl-k">class</span> <span class="pl-en">XXXStreamParserImp</span> 
<span class="pl-k">extends</span> <span class="pl-e">CarbonStreamParser</span> {
  

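For reference, a minimal sketch of what a complete parser might look like; the comma-splitting logic and the <code>close</code> method shown here are illustrative assumptions, not the project's reference implementation:

<pre><code> package org.XXX.XXX.streaming.parser

 import org.apache.hadoop.conf.Configuration
 import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.types.StructType

 class XXXStreamParserImp extends CarbonStreamParser {

   override def initialize(configuration: Configuration, structType: StructType): Unit = {
     // read any parser-specific settings from the job configuration here
   }

   override def parserRow(value: InternalRow): Array[Object] = {
     // assumption: the incoming row carries one comma-separated string field
     value.getString(0).split(",").map(_.asInstanceOf[Object])
   }

   override def close(): Unit = {}
 }
</code></pre>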
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/resources/application.conf
----------------------------------------------------------------------
diff --git a/src/main/resources/application.conf 
b/src/main/resources/application.conf
index 5fe9837..ef8c3dc 100644
--- a/src/main/resources/application.conf
+++ b/src/main/resources/application.conf
@@ -7,7 +7,9 @@ fileList=["configuration-parameters",
   "streaming-guide",
   "supported-data-types-in-carbondata",
   "troubleshooting",
-  "useful-tips-on-carbondata"
+  "useful-tips-on-carbondata",
+  "sdk-writer-guide",
+  "datamap-developer-guide"
   ]
 dataMapFileList=[
   "preaggregate-datamap-guide",

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/configuration-parameters.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/configuration-parameters.html 
b/src/main/webapp/configuration-parameters.html
index 4f4f1b0..8db28db 100644
--- a/src/main/webapp/configuration-parameters.html
+++ b/src/main/webapp/configuration-parameters.html
@@ -232,6 +232,11 @@
 <td>true</td>
 <td>If this parameter value is set to true, the show tables command will list all the tables including datamaps (e.g. the Preaggregate table); otherwise datamaps will be excluded from the table list.</td>
 </tr>
+<tr>
+<td>carbon.segment.lock.files.preserve.hours</td>
+<td>48</td>
+<td>This property value indicates the number of hours the segment lock files will be preserved after data load. These lock files will be deleted with the clean command after the configured number of hours.</td>
+</tr>
 </tbody>
 </table>
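Like the other properties in this table, this value would be set in carbon.properties; a hypothetical entry (the 72-hour value is illustrative, the default shown above is 48):

<pre><code>carbon.segment.lock.files.preserve.hours=72
</code></pre>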
 <h2>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/data-management-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/data-management-on-carbondata.html 
b/src/main/webapp/data-management-on-carbondata.html
index 57c28a4..49f2fe8 100644
--- a/src/main/webapp/data-management-on-carbondata.html
+++ b/src/main/webapp/data-management-on-carbondata.html
@@ -196,6 +196,7 @@ STORED BY 'carbondata'
 [TBLPROPERTIES (property_name=property_value, ...)]
 [LOCATION 'path']
 </code></pre>
+<p><strong>NOTE:</strong> CarbonData also supports "STORED AS carbondata". 
Find example code at <a 
href="https://github.com/apache/carbondata/blob/master/examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonSessionExample.scala";
 target=_blank>CarbonSessionExample</a> in the CarbonData repo.</p>
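A minimal illustration of this alternative syntax (the table and column names below are made up for the example):

<pre><code>CREATE TABLE IF NOT EXISTS sample_table(
    id INT,
    name STRING)
STORED AS carbondata
</code></pre>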
 <h3>
 <a id="usage-guidelines" class="anchor" href="#usage-guidelines" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Usage Guidelines</h3>
 <p>Following are the guidelines for TBLPROPERTIES; CarbonData's additional table options can be set via carbon.properties.</p>
@@ -244,7 +245,7 @@ And if you care about loading resources isolation strictly, 
because the system u
 <p>This command is for setting the block size of this table; the default value is 1024 MB, and a range of 1 MB to 2048 MB is supported.</p>
 <pre><code>TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
 </code></pre>
-<p>NOTE: 512 or 512M both are accepted.</p>
+<p><strong>NOTE:</strong> 512 or 512M both are accepted.</p>
 </li>
 <li>
 <p><strong>Table Compaction Configuration</strong></p>
@@ -286,11 +287,10 @@ Following are 5 configurations:</p>
  TBLPROPERTIES ('SORT_COLUMNS'='productName,storeCity',
                 'SORT_SCOPE'='NO_SORT')
 </code></pre>
+<p><strong>NOTE:</strong> CarbonData also supports "using carbondata". Find 
example code at <a 
href="https://github.com/apache/carbondata/blob/master/examples/spark2/src/main/scala/org/apache/carbondata/examples/SparkSessionExample.scala";
 target=_blank>SparkSessionExample</a> in the CarbonData repo.</p>
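Similarly, a hypothetical one-liner for the "using carbondata" variant (table and column names are assumptions):

<pre><code>CREATE TABLE sample_table(id INT, name STRING) USING carbondata
</code></pre>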
 <h2>
 <a id="create-table-as-select" class="anchor" href="#create-table-as-select" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>CREATE TABLE AS SELECT</h2>
 <p>This function allows the user to create a Carbon table from any Parquet/Hive/Carbon table. This is beneficial when the user wants to create a Carbon table from another Parquet/Hive table and use the Carbon query engine for cases where Carbon is faster than other file formats. This feature can also be used for backing up data.</p>
-<h3>
-<a id="syntax" class="anchor" href="#syntax" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>Syntax</h3>
 <pre><code>CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
 STORED BY 'carbondata' 
 [TBLPROPERTIES (key1=val1, key2=val2, ...)] 
@@ -321,6 +321,42 @@ carbon.sql("SELECT * FROM target_table").show
 
 </code></pre>
 <h2>
+<a id="create-external-table" class="anchor" href="#create-external-table" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>CREATE EXTERNAL TABLE</h2>
+<p>This function allows the user to create an external table by specifying its location.</p>
+<pre><code>CREATE EXTERNAL TABLE [IF NOT EXISTS] [db_name.]table_name 
+STORED BY 'carbondata' LOCATION '$FilesPath'
+</code></pre>
+<h3>
+<a id="create-external-table-on-managed-table-data-location" class="anchor" 
href="#create-external-table-on-managed-table-data-location" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Create external table on managed table data 
location.</h3>
+<p>The managed table data location provided will contain both the Fact and Metadata folders.
+This data can be generated by creating a normal carbon table and then using its path as $FilesPath in the above syntax.</p>
+<p><strong>Example:</strong></p>
+<pre><code>sql("CREATE TABLE origin(key INT, value STRING) STORED BY 
'carbondata'")
+sql("INSERT INTO origin select 100,'spark'")
+sql("INSERT INTO origin select 200,'hive'")
+// creates a table in $storeLocation/origin
+
+sql(s"""
+|CREATE EXTERNAL TABLE source
+|STORED BY 'carbondata'
+|LOCATION '$storeLocation/origin'
+""".stripMargin)
+checkAnswer(sql("SELECT count(*) from source"), sql("SELECT count(*) from 
origin"))
+</code></pre>
+<h3>
+<a id="create-external-table-on-non-transactional-table-data-location" 
class="anchor" 
href="#create-external-table-on-non-transactional-table-data-location" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Create external table on Non-Transactional table data 
location.</h3>
+<p>A Non-Transactional table data location will have only carbondata and carbonindex files; there will not be a metadata folder (table status and schema).
+The SDK module currently supports writing data in this format.</p>
+<p><strong>Example:</strong></p>
+<pre><code>sql(
+s"""CREATE EXTERNAL TABLE sdkOutputTable STORED BY 'carbondata' LOCATION
+|'$writerPath' """.stripMargin)
+</code></pre>
+<p>Here the writer path will contain carbondata and index files.
+This can be the SDK output. Refer to the <a href="https://github.com/apache/carbondata/blob/master/docs/sdk-writer-guide.html"; target=_blank>SDK Writer Guide</a>.</p>
+<p><strong>Note:</strong>
+Dropping the external table does not delete the files present in the location.</p>
+<h2>
 <a id="create-database" class="anchor" href="#create-database" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>CREATE DATABASE</h2>
 <p>This function creates a new database. By default the database is created in the Carbon store location, but you can also specify a custom location.</p>
 <pre><code>CREATE DATABASE [IF NOT EXISTS] database_name [LOCATION path];
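-- Illustrative usage (the HDFS path below is a placeholder, not from the original doc):
CREATE DATABASE carbon LOCATION "hdfs://hacluster/user/carbon.store";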
@@ -394,7 +430,8 @@ Change of decimal data type from lower precision to higher 
precision will only b
 <ul>
 <li>Invalid scenario - Change of decimal precision from (10,2) to (10,5) is 
invalid as in this case only scale is increased but total number of digits 
remains the same.</li>
 <li>Valid scenario - Change of decimal precision from (10,2) to (12,3) is 
valid as the total number of digits are increased by 2 but scale is increased 
only by 1 which will not lead to any data loss.</li>
-<li>NOTE: The allowed range is 38,38 (precision, scale) and is a valid upper 
case scenario which is not resulting in data loss.</li>
+<li>
+<strong>NOTE:</strong> The allowed upper limit is 38,38 (precision, scale); a change up to this limit is a valid case and does not result in data loss.</li>
 </ul>
 <p>Example 1: Changing the data type of column a1 from INT to BIGINT.</p>
 <pre><code>ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT
@@ -420,7 +457,7 @@ Change of decimal data type from lower precision to higher 
precision will only b
 <p>Example:</p>
 <pre><code>REFRESH TABLE dbcarbon.productSalesTable
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>The new database name and the old database name should be the same.</li>
 <li>Before executing this command the old table schema and data should be 
copied into the new database location.</li>
@@ -484,7 +521,7 @@ false: CSV file is without file header.
 true: CSV file is with file header.</p>
 <pre><code>OPTIONS('HEADER'='false') 
 </code></pre>
-<p>NOTE: If the HEADER option exist and is set to 'true', then the FILEHEADER 
option is not required.</p>
+<p><strong>NOTE:</strong> If the HEADER option exists and is set to 'true', then the FILEHEADER option is not required.</p>
 </li>
 <li>
 <p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA 
command if headers are missing in the source files.</p>
@@ -525,24 +562,27 @@ true: CSV file is with file header.</p>
 <p><strong>COLUMNDICT:</strong> Dictionary file path for specified column.</p>
 
<pre><code>OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,column2:dictionaryFilePath2')
 </code></pre>
-<p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.</p>
+<p><strong>NOTE:</strong> ALL_DICTIONARY_PATH and COLUMNDICT can't be used 
together.</p>
 </li>
 <li>
 <p><strong>DATEFORMAT/TIMESTAMPFORMAT:</strong> Date and Timestamp format for 
specified column.</p>
 <pre><code>OPTIONS('DATEFORMAT' = 'yyyy-MM-dd','TIMESTAMPFORMAT'='yyyy-MM-dd 
HH:mm:ss')
 </code></pre>
-<p>NOTE: Date formats are specified by date pattern strings. The date pattern 
letters in CarbonData are same as in JAVA. Refer to <a 
href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html"; 
target=_blank rel="nofollow">SimpleDateFormat</a>.</p>
+<p><strong>NOTE:</strong> Date formats are specified by date pattern strings. The date pattern letters in CarbonData are the same as in Java. Refer to <a href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html"; target=_blank rel="nofollow">SimpleDateFormat</a>.</p>
 </li>
 <li>
 <p><strong>SORT COLUMN BOUNDS:</strong> Range bounds for sort columns.</p>
-<pre><code>OPTIONS('SORT_COLUMN_BOUNDS'='v11,v21,v31;v12,v22,v32;v13,v23,v33')
+<p>Suppose the table is created with 'SORT_COLUMNS'='name,id', the value range for name is aaa~zzz, and the value range for id is 0~1000. Then during data loading, we can specify the following option to enhance data loading performance.</p>
+<pre><code>OPTIONS('SORT_COLUMN_BOUNDS'='f,250;l,500;r,750')
 </code></pre>
-<p>NOTE:</p>
+<p>Each bound is separated by ';' and each field value within a bound is separated by ','. In the example above, we provide 3 bounds to distribute records into 4 partitions; the values 'f', 'l', 'r' distribute the records evenly. Internally, CarbonData compares each record's sort-column values with the bounds to decide which partition the record is forwarded to.</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 
'local_sort'.</li>
-<li>Each bound is separated by ';' and each field value in bound is separated 
by ','.</li>
-<li>Carbondata will use these bounds as ranges to process data 
concurrently.</li>
+<li>Carbondata will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since the partitions themselves are ordered, all records will be sorted.</li>
 <li>Since the actual order and the literal order of a dictionary column are not necessarily the same, we do not recommend using this feature if the first sort column is 'dictionary_include'.</li>
+<li>This option works best if your CPU usage during loading is low. If your system is already CPU-bound, it is better not to use this option. Also, it is up to the user to specify the bounds; even if the user does not know the exact bounds that would distribute the data evenly, loading performance will still be better than, or at least the same as, before.</li>
+<li>Users can find more information about this option in the description of 
PR1953.</li>
 </ul>
 </li>
 <li>
@@ -552,7 +592,7 @@ true: CSV file is with file header.</p>
 <p>This option specifies whether to use single pass for loading data or not. 
By default this option is set to FALSE.</p>
 <pre><code> OPTIONS('SINGLE_PASS'='TRUE')
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>If this option is set to TRUE then data loading will take less time.</li>
 <li>If this option is set to some invalid value other than TRUE or FALSE then 
it uses the default value.</li>
@@ -580,7 +620,7 @@ 
projectjoindate,projectenddate,attendance,utilization,salary',
 </code></pre>
 </li>
 </ul>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>The BAD_RECORDS_ACTION property can have four types of actions for bad records: FORCE, REDIRECT, IGNORE and FAIL.</li>
 <li>FAIL is its default value. If the FAIL option is used, then data loading fails if any bad records are found.</li>
@@ -608,10 +648,10 @@ It comes with the functionality to aggregate the records 
of a table by performin
 [ WHERE { &lt;filter_condition&gt; } ]
 </code></pre>
 <p>Overwrite insert data:</p>
-<pre><code>INSERT OVERWRITE &lt;CARBONDATA TABLE&gt; SELECT * FROM 
sourceTableName 
+<pre><code>INSERT OVERWRITE TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM 
sourceTableName 
 [ WHERE { &lt;filter_condition&gt; } ]
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>The source table and the CarbonData table must have the same table 
schema.</li>
 <li>The data type of source and destination table columns should be the same.</li>
@@ -623,7 +663,7 @@ It comes with the functionality to aggregate the records of 
a table by performin
 </code></pre>
 <pre><code>INSERT INTO table1 SELECT item1, item2, item3 FROM table2 where 
item2='xyz'
 </code></pre>
-<pre><code>INSERT OVERWRITE table1 SELECT * FROM TABLE2
+<pre><code>INSERT OVERWRITE TABLE table1 SELECT * FROM TABLE2
 </code></pre>
 <h2>
 <a id="update-and-delete" class="anchor" href="#update-and-delete" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>UPDATE AND DELETE</h2>
@@ -639,7 +679,7 @@ SET (column_name1, column_name2, ... column_name n) = 
(column1_expression , colu
 SET (column_name1, column_name2) =(select sourceColumn1, sourceColumn2 from 
sourceTable [ WHERE { &lt;filter_condition&gt; } ] )
 [ WHERE { &lt;filter_condition&gt; } ]
 </code></pre>
-<p>NOTE:The update command fails if multiple input rows in source table are 
matched with single row in destination table.</p>
+<p><strong>NOTE:</strong> The update command fails if multiple input rows in the source table match a single row in the destination table.</p>
 <p>Examples:</p>
 <pre><code>UPDATE t3 SET (t3_salary) = (t3_salary + 9) WHERE t3_name = 'aaa1'
 </code></pre>
@@ -668,8 +708,8 @@ SET (column_name1, column_name2) =(select sourceColumn1, 
sourceColumn2 from sour
 <h2>
 <a id="compaction" class="anchor" href="#compaction" aria-hidden="true"><span 
aria-hidden="true" class="octicon octicon-link"></span></a>COMPACTION</h2>
 <p>Compaction improves the query performance significantly.</p>
-<p>There are two types of compaction, Minor and Major compaction.</p>
-<pre><code>ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR'
+<p>There are several types of compaction.</p>
+<pre><code>ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR/CUSTOM'
 </code></pre>
 <ul>
 <li><strong>Minor Compaction</strong></li>
@@ -693,6 +733,14 @@ Configure the property carbon.major.compaction.size with 
appropriate value in MB
 <pre><code>ALTER TABLE table_name COMPACT 'MAJOR'
 </code></pre>
 <ul>
+<li><strong>Custom Compaction</strong></li>
+</ul>
+<p>In custom compaction, the user can directly specify the segment ids to be merged into one large segment.
+All specified segment ids must exist and be valid, otherwise compaction will fail.
+Custom compaction is usually done during off-peak hours.</p>
+<pre><code>ALTER TABLE table_name COMPACT 'CUSTOM' WHERE SEGMENT.ID IN (2,3,4)
+</code></pre>
+<ul>
 <li><strong>CLEAN SEGMENTS AFTER Compaction</strong></li>
 </ul>
 <p>Clean the segments which are compacted:</p>
@@ -788,7 +836,7 @@ STORED BY 'carbondata'
 [TBLPROPERTIES ('PARTITION_TYPE'='HASH',
                 'NUM_PARTITIONS'='N' ...)]
 </code></pre>
-<p>NOTE: N is the number of hash partitions</p>
+<p><strong>NOTE:</strong> N is the number of hash partitions</p>
 <p>Example:</p>
 <pre><code>CREATE TABLE IF NOT EXISTS hash_partition_table(
     col_A STRING,
@@ -809,7 +857,7 @@ STORED BY 'carbondata'
 [TBLPROPERTIES ('PARTITION_TYPE'='RANGE',
                 'RANGE_INFO'='2014-01-01, 2015-01-01, 2016-01-01, ...')]
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>The 'RANGE_INFO' must be defined in ascending order in the table 
properties.</li>
 <li>The default format for partition column of Date/Timestamp type is 
yyyy-MM-dd. Alternate formats for Date/Timestamp could be defined in 
CarbonProperties.</li>
@@ -836,7 +884,7 @@ STORED BY 'carbondata'
 [TBLPROPERTIES ('PARTITION_TYPE'='LIST',
                 'LIST_INFO'='A, B, C, ...')]
 </code></pre>
-<p>NOTE: List partition supports list info in one level group.</p>
+<p><strong>NOTE:</strong> List partition supports list info in one level 
group.</p>
 <p>Example:</p>
 <pre><code>CREATE TABLE IF NOT EXISTS list_partition_table(
     col_B INT,
@@ -870,7 +918,7 @@ STORED BY 'carbondata'
 <p>Drop both partition definition and data</p>
 <pre><code>ALTER TABLE [db_name].table_name DROP PARTITION(partition_id) WITH 
DATA
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>Hash partition table is not supported for ADD, SPLIT and DROP 
commands.</li>
 <li>Partition Id: in CarbonData, unlike Hive, folders are not used to divide partitions; instead, the partition id is used to replace the task id. This takes advantage of that characteristic while also reducing some metadata.</li>
@@ -897,7 +945,7 @@ STORED BY 'carbondata'
 TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
 'BUCKETCOLUMNS'='columnname')
 </code></pre>
-<p>NOTE:</p>
+<p><strong>NOTE:</strong></p>
 <ul>
 <li>Bucketing cannot be performed for columns of Complex Data Types.</li>
 <li>Columns in the BUCKETCOLUMN parameter must be dimensions. The BUCKETCOLUMN 
parameter cannot be a measure or a combination of measures and dimensions.</li>
@@ -920,11 +968,15 @@ TBLPROPERTIES ('BUCKETNUMBER'='4', 
'BUCKETCOLUMNS'='productName')
 <h3>
 <a id="show-segment" class="anchor" href="#show-segment" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>SHOW SEGMENT</h3>
 <p>This command is used to list the segments of a CarbonData table.</p>
-<pre><code>SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT 
number_of_segments
+<pre><code>SHOW [HISTORY] SEGMENTS FOR TABLE [db_name.]table_name LIMIT 
number_of_segments
 </code></pre>
-<p>Example:</p>
+<p>Example:
+Show visible segments</p>
 <pre><code>SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
 </code></pre>
+<p>Show all segments, including invisible segments</p>
+<pre><code>SHOW HISTORY SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
+</code></pre>
 <h3>
 <a id="delete-segment-by-id" class="anchor" href="#delete-segment-by-id" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>DELETE SEGMENT BY ID</h3>
 <p>This command is used to delete a segment by using the segment ID. Each segment has a unique segment ID associated with it.
@@ -957,7 +1009,7 @@ The segment created before the particular date will be 
removed from the specific
 <p>Set the segment IDs for table</p>
 <pre><code>SET carbon.input.segments.&lt;database_name&gt;.&lt;table_name&gt; 
= &lt;list of segment IDs&gt;
 </code></pre>
-<p>NOTE:
+<p><strong>NOTE:</strong>
 carbon.input.segments: Specifies the segment IDs to be queried. This property 
allows you to query specified segments of the specified table. The CarbonScan 
will read data from specified segments only.</p>
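A hypothetical usage of this property (the database, table, and segment IDs are placeholders):

<pre><code>SET carbon.input.segments.default.carbon_table = 1,3,9
</code></pre>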
 <p>If the user wants to query with segment reading in multi-threading mode, then CarbonSession.threadSet can be used instead of the SET query.</p>
 <pre><code>CarbonSession.threadSet 
("carbon.input.segments.&lt;database_name&gt;.&lt;table_name&gt;","&lt;list of 
segment IDs&gt;");

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/datamap-developer-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/datamap-developer-guide.html 
b/src/main/webapp/datamap-developer-guide.html
new file mode 100644
index 0000000..18ec74d
--- /dev/null
+++ b/src/main/webapp/datamap-developer-guide.html
@@ -0,0 +1,211 @@
+<!DOCTYPE html>
+<html lang="en">
+<head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <link href='images/favicon.ico' rel='shortcut icon' type='image/x-icon'>
+    <!-- The above 3 meta tags *must* come first in the head; any other head 
content must come *after* these tags -->
+    <title>CarbonData</title>
+    <style>
+
+    </style>
+    <!-- Bootstrap -->
+
+    <link rel="stylesheet" href="css/bootstrap.min.css">
+    <link href="css/style.css" rel="stylesheet">
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media 
queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+    <script 
src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js";></script>
+    <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js";></script>
+    <![endif]-->
+    <script src="js/jquery.min.js"></script>
+    <script src="js/bootstrap.min.js"></script>
+
+
+</head>
+<body>
+<header>
+    <nav class="navbar navbar-default navbar-custom cd-navbar-wrapper">
+        <div class="container">
+            <div class="navbar-header">
+                <button aria-controls="navbar" aria-expanded="false" 
data-target="#navbar" data-toggle="collapse"
+                        class="navbar-toggle collapsed" type="button">
+                    <span class="sr-only">Toggle navigation</span>
+                    <span class="icon-bar"></span>
+                    <span class="icon-bar"></span>
+                    <span class="icon-bar"></span>
+                </button>
+                <a href="index.html" class="logo">
+                    <img src="images/CarbonDataLogo.png" alt="CarbonData logo" title="CarbonData logo"/>
+                </a>
+            </div>
+            <div class="navbar-collapse collapse cd_navcontnt" id="navbar">
+                <ul class="nav navbar-nav navbar-right navlist-custom">
+                    <li><a href="index.html" class="hidden-xs"><i class="fa 
fa-home" aria-hidden="true"></i> </a>
+                    </li>
+                    <li><a href="index.html" class="hidden-lg hidden-md 
hidden-sm">Home</a></li>
+                    <li class="dropdown">
+                        <a href="#" class="dropdown-toggle " 
data-toggle="dropdown" role="button" aria-haspopup="true"
+                           aria-expanded="false"> Download <span 
class="caret"></span></a>
+                        <ul class="dropdown-menu">
+                            <li>
+                                <a 
href="https://dist.apache.org/repos/dist/release/carbondata/1.3.1/";
+                                   target="_blank">Apache CarbonData 
1.3.1</a></li>
+                            <li>
+                                <a 
href="https://dist.apache.org/repos/dist/release/carbondata/1.3.0/";
+                                   target="_blank">Apache CarbonData 
1.3.0</a></li>
+                            <li>
+                                <a 
href="https://dist.apache.org/repos/dist/release/carbondata/1.2.0/";
+                                   target="_blank">Apache CarbonData 
1.2.0</a></li>
+                            <li>
+                                <a 
href="https://dist.apache.org/repos/dist/release/carbondata/1.1.1/";
+                                   target="_blank">Apache CarbonData 
1.1.1</a></li>
+                            <li>
+                                <a 
href="https://dist.apache.org/repos/dist/release/carbondata/1.1.0/";
+                                   target="_blank">Apache CarbonData 
1.1.0</a></li>
+                            <li>
+                                <a 
href="http://archive.apache.org/dist/incubator/carbondata/1.0.0-incubating/";
+                                   target="_blank">Apache CarbonData 
1.0.0</a></li>
+                            <li>
+                                <a 
href="http://archive.apache.org/dist/incubator/carbondata/0.2.0-incubating/";
+                                   target="_blank">Apache CarbonData 
0.2.0</a></li>
+                            <li>
+                                <a 
href="http://archive.apache.org/dist/incubator/carbondata/0.1.1-incubating/";
+                                   target="_blank">Apache CarbonData 
0.1.1</a></li>
+                            <li>
+                                <a 
href="http://archive.apache.org/dist/incubator/carbondata/0.1.0-incubating/";
+                                   target="_blank">Apache CarbonData 
0.1.0</a></li>
+                            <li>
+                                <a 
href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases";
+                                   target="_blank">Release Archive</a></li>
+                        </ul>
+                    </li>
+                    <li><a href="mainpage.html" 
class="active">Documentation</a></li>
+                    <li class="dropdown">
+                        <a href="#" class="dropdown-toggle" 
data-toggle="dropdown" role="button" aria-haspopup="true"
+                           aria-expanded="false">Community <span 
class="caret"></span></a>
+                        <ul class="dropdown-menu">
+                            <li>
+                                <a 
href="https://github.com/apache/carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.md";
+                                   target="_blank">Contributing to 
CarbonData</a></li>
+                            <li>
+                                <a 
href="https://github.com/apache/carbondata/blob/master/docs/release-guide.md";
+                                   target="_blank">Release Guide</a></li>
+                            <li>
+                                <a 
href="https://cwiki.apache.org/confluence/display/CARBONDATA/PMC+and+Committers+member+list";
+                                   target="_blank">Project PMC and 
Committers</a></li>
+                            <li>
+                                <a 
href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609";
+                                   target="_blank">CarbonData Meetups</a></li>
+                            <li><a href="security.html">Apache CarbonData 
Security</a></li>
+                            <li><a 
href="https://issues.apache.org/jira/browse/CARBONDATA"; target="_blank">Apache
+                                Jira</a></li>
+                            <li><a href="videogallery.html">CarbonData Videos 
</a></li>
+                        </ul>
+                    </li>
+                    <li class="dropdown">
+                        <a href="http://www.apache.org/"; class="apache_link 
hidden-xs dropdown-toggle"
+                           data-toggle="dropdown" role="button" 
aria-haspopup="true" aria-expanded="false">Apache</a>
+                        <ul class="dropdown-menu">
+                            <li><a href="http://www.apache.org/"; 
target="_blank">Apache Homepage</a></li>
+                            <li><a href="http://www.apache.org/licenses/"; 
target="_blank">License</a></li>
+                            <li><a 
href="http://www.apache.org/foundation/sponsorship.html";
+                                   target="_blank">Sponsorship</a></li>
+                            <li><a 
href="http://www.apache.org/foundation/thanks.html"; 
target="_blank">Thanks</a></li>
+                        </ul>
+                    </li>
+
+                    <li class="dropdown">
+                        <a href="http://www.apache.org/"; class="hidden-lg 
hidden-md hidden-sm dropdown-toggle"
+                           data-toggle="dropdown" role="button" 
aria-haspopup="true" aria-expanded="false">Apache</a>
+                        <ul class="dropdown-menu">
+                            <li><a href="http://www.apache.org/"; 
target="_blank">Apache Homepage</a></li>
+                            <li><a href="http://www.apache.org/licenses/"; 
target="_blank">License</a></li>
+                            <li><a 
href="http://www.apache.org/foundation/sponsorship.html";
+                                   target="_blank">Sponsorship</a></li>
+                            <li><a 
href="http://www.apache.org/foundation/thanks.html"; 
target="_blank">Thanks</a></li>
+                        </ul>
+                    </li>
+
+                    <li>
+                        <a href="#" id="search-icon"><i class="fa fa-search" 
aria-hidden="true"></i></a>
+
+                    </li>
+
+                </ul>
+            </div><!--/.nav-collapse -->
+            <div id="search-box">
+                <form method="get" action="http://www.google.com/search"; 
target="_blank">
+                    <div class="search-block">
+                        <table border="0" cellpadding="0" width="100%">
+                            <tr>
+                                <td style="width:80%">
+                                    <input type="text" name="q" size=" 5" 
maxlength="255" value=""
+                                           class="search-input"  
placeholder="Search...."    required/>
+                                </td>
+                                <td style="width:20%">
+                                    <input type="submit" value="Search"/></td>
+                            </tr>
+                            <tr>
+                                <td align="left" style="font-size:75%" 
colspan="2">
+                                    <input type="checkbox" name="sitesearch" 
value="carbondata.apache.org" checked/>
+                                    <span style=" position: relative; top: 
-3px;"> Only search for CarbonData</span>
+                                </td>
+                            </tr>
+                        </table>
+                    </div>
+                </form>
+            </div>
+        </div>
+    </nav>
+</header> <!-- end Header part -->
+
+<div class="fixed-padding"></div> <!--  top padding with fixde header  -->
+
+<section><!-- Dashboard nav -->
+    <div class="container-fluid q">
+        <div class="col-sm-12  col-md-12 maindashboard">
+            <div class="row">
+                <section>
+                    <div style="padding:10px 15px;">
+                        <div id="viewpage" name="viewpage">
+                            <div class="row">
+                                <div class="col-sm-12  col-md-12">
+                                    <div><h1>
+<a id="datamap-developer-guide" class="anchor" href="#datamap-developer-guide" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>DataMap Developer Guide</h1>
+<h3>
+<a id="introduction" class="anchor" href="#introduction" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Introduction</h3>
+<p>DataMap is a data structure that can be used to accelerate certain queries of a table. Different DataMaps can be implemented by developers.
+Currently, there are two types of DataMap supported:</p>
+<ol>
+<li>IndexDataMap: a DataMap that leverages an index to accelerate filter queries</li>
+<li>MVDataMap: a DataMap that leverages a materialized view to accelerate OLAP-style queries, such as SPJG queries (select, predicate, join, group-by)</li>
+</ol>
+<h3>
+<a id="datamap-provider" class="anchor" href="#datamap-provider" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>DataMap provider</h3>
+<p>When the user issues <code>CREATE DATAMAP dm ON TABLE main USING 'provider'</code>, the corresponding DataMapProvider implementation will be created and initialized.
+Currently, the provider string can be:</p>
+<ol>
+<li>preaggregate: a type of MVDataMap that pre-aggregates a single table</li>
+<li>timeseries: a type of MVDataMap that pre-aggregates based on a time dimension of the table</li>
+<li>the class name of an IndexDataMapFactory implementation: developers can implement a new type of IndexDataMap by extending IndexDataMapFactory</li>
+</ol>
+<p>When the user issues <code>DROP DATAMAP dm ON TABLE main</code>, the corresponding DataMapProvider interface will be called.</p>
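A hypothetical example tying these together, creating and then dropping a preaggregate DataMap (the table and column names are assumptions):

<pre><code>CREATE DATAMAP agg_sales ON TABLE sales USING 'preaggregate' AS
  SELECT country, sum(quantity) FROM sales GROUP BY country

DROP DATAMAP agg_sales ON TABLE sales
</code></pre>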
+</div>
+</div>
+</div>
+</div>
+<div class="doc-footer">
+    <a href="#top" class="scroll-top">Top</a>
+</div>
+</div>
+</section>
+</div>
+</div>
+</div>
+</section><!-- End systemblock part -->
+<script src="js/custom.js"></script>
+</body>
+</html>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/faq.html b/src/main/webapp/faq.html
index e6e0a2b..517fea2 100644
--- a/src/main/webapp/faq.html
+++ b/src/main/webapp/faq.html
@@ -186,6 +186,8 @@
 <li><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract 
Method Error?</a></li>
 <li><a 
href="#how-carbon-will-behave-when-execute-insert-operation-in-abnormal-scenarios">How
 Carbon will behave when execute insert operation in abnormal 
scenarios?</a></li>
 <li><a 
href="#why-aggregate-query-is-not-fetching-data-from-aggregate-table">Why 
aggregate query is not fetching data from aggregate table?</a></li>
+<li><a href="#why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side">Why all executors are showing success in Spark UI even after Dataload command failed at Driver side?</a></li>
+<li><a href="#why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output">Why different time zone result for select query output when query SDK writer output?</a></li>
 </ul>
 <h2>
 <a id="what-are-bad-records" class="anchor" href="#what-are-bad-records" 
aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>What are Bad Records?</h2>
@@ -311,6 +313,19 @@ When aggregate function having 'join' with equal 
filter.</li>
 create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, 
sum(gdp) from gdp21 group by cntry;
 select cntry,sum(gdp) from gdp21,pop1 where cntry=ctry group by cntry;
 </code></pre>
+<h2>
+<a 
id="why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side"
 class="anchor" 
href="#why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side"
 aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Why all executors are showing success in Spark UI even 
after Dataload command failed at Driver side?</h2>
+<p>A Spark executor normally shows a task as failed only after the maximum number of retry attempts. However, when loading data containing bad records with BAD_RECORDS_ACTION (carbon.bad.records.action) set to 'FAIL', the load is attempted only once, and a failure signal is sent to the driver instead of an exception being thrown to trigger a retry, as there is no point in retrying once a bad record is found and BAD_RECORDS_ACTION is set to fail. Hence the Spark executor displays this single attempt as successful even though the command has actually failed to execute. Task attempts or executor logs can be checked to find the failure reason.</p>
+<h2>
+<a 
id="why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output"
 class="anchor" 
href="#why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output"
 aria-hidden="true"><span aria-hidden="true" class="octicon 
octicon-link"></span></a>Why different time zone result for select query output 
when query SDK writer output?</h2>
+<p>The SDK writer is an independent entity, so it can generate carbondata files on a non-cluster machine with a different time zone. But when those files are read in the cluster, the cluster time zone is always used; hence the values of timestamp and date datatype fields are not the original values.
+To control the time zone of the data while writing, set the cluster's time zone in the SDK writer by calling the below API.</p>
+<pre><code>TimeZone.setDefault(timezoneValue)
+</code></pre>
+<p><strong>Example:</strong></p>
+<pre><code>cluster timezone is Asia/Shanghai
+TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
+</code></pre>
 </div>
 </div>
 </div>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/mainpage.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/mainpage.html b/src/main/webapp/mainpage.html
index e597cb4..7a5501b 100644
--- a/src/main/webapp/mainpage.html
+++ b/src/main/webapp/mainpage.html
@@ -197,6 +197,8 @@
                                             <li><a 
href="installation-guide.html">Installation Guide</a></li>
                                             <li><a 
href="configuration-parameters.html">Configuring CarbonData</a></li>
                                             <li><a 
href="streaming-guide.html">Streaming Guide</a></li>
+                                            <li><a 
href="sdk-writer-guide.html">SDK Writer Guide</a></li>
+                                            <li><a 
href="datamap-developer-guide.html">DataMap Developer Guide</a></li>
                                             <li><a 
href="preaggregate-datamap-guide.html">CarbonData Pre-aggregate DataMap</a></li>
                                             <li><a 
href="timeseries-datamap-guide.html">CarbonData Timeseries DataMap</a></li>
                                             <li><a 
href="faq.html">FAQs</a></li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/pdf/CarbonData
 Documentation.pdf
----------------------------------------------------------------------
diff --git a/src/main/webapp/pdf/CarbonData Documentation.pdf 
b/src/main/webapp/pdf/CarbonData Documentation.pdf
index f60eea0..544de04 100644
Binary files a/src/main/webapp/pdf/CarbonData Documentation.pdf and 
b/src/main/webapp/pdf/CarbonData Documentation.pdf differ

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/e88f2b48/src/main/webapp/preaggregate-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/preaggregate-datamap-guide.html 
b/src/main/webapp/preaggregate-datamap-guide.html
index ab7beca..35a0f62 100644
--- a/src/main/webapp/preaggregate-datamap-guide.html
+++ b/src/main/webapp/preaggregate-datamap-guide.html
@@ -190,16 +190,16 @@
 <p>Package carbon jar, and copy 
assembly/target/scala-2.11/carbondata_2.11-x.x.x-SNAPSHOT-shade-hadoop2.7.2.jar 
to $SPARK_HOME/jars</p>
 <div class="highlight highlight-source-shell"><pre>mvn clean package 
-DskipTests -Pspark-2.2</pre></div>
 <p>Start spark-shell in a new terminal, type :paste, then copy and run the following code.</p>
-<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">import</span> <span class="pl-smi">java.io.</span><span 
class="pl-smi">File</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.</span>{<span 
class="pl-smi">CarbonEnv</span>, <span class="pl-smi">SparkSession</span>}
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.CarbonSession.</span><span 
class="pl-smi">_</span>
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.streaming.</span>{<span 
class="pl-smi">ProcessingTime</span>, <span 
class="pl-smi">StreamingQuery</span>}
- <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.carbondata.core.util.path.</span><span 
class="pl-smi">CarbonStorePath</span>
+<div class="highlight highlight-source-scala"><pre> <span 
class="pl-k">import</span> <span class="pl-en">java</span>.<span 
class="pl-en">io</span>.<span class="pl-en">File</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.{<span class="pl-en">CarbonEnv</span>, <span 
class="pl-en">SparkSession</span>}
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">CarbonSession</span>.<span 
class="pl-en">_</span>
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">streaming</span>.{<span 
class="pl-en">ProcessingTime</span>, <span class="pl-en">StreamingQuery</span>}
+ <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">carbondata</span>.<span 
class="pl-en">core</span>.<span class="pl-en">util</span>.<span 
class="pl-en">path</span>.<span class="pl-en">CarbonStorePath</span>
  
- <span class="pl-k">val</span> <span class="pl-en">warehouse</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./warehouse<span 
class="pl-pds">"</span></span>).getCanonicalPath
- <span class="pl-k">val</span> <span class="pl-en">metastore</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./metastore<span 
class="pl-pds">"</span></span>).getCanonicalPath
+ <span class="pl-k">val</span> <span class="pl-smi">warehouse</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./warehouse<span 
class="pl-pds">"</span></span>).getCanonicalPath
+ <span class="pl-k">val</span> <span class="pl-smi">metastore</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">File</span>(<span class="pl-s"><span 
class="pl-pds">"</span>./metastore<span 
class="pl-pds">"</span></span>).getCanonicalPath
  
- <span class="pl-k">val</span> <span class="pl-en">spark</span> <span 
class="pl-k">=</span> <span class="pl-en">SparkSession</span>
+ <span class="pl-k">val</span> <span class="pl-smi">spark</span> <span 
class="pl-k">=</span> <span class="pl-en">SparkSession</span>
    .builder()
    .master(<span class="pl-s"><span class="pl-pds">"</span>local<span 
class="pl-pds">"</span></span>)
    .appName(<span class="pl-s"><span 
class="pl-pds">"</span>preAggregateExample<span class="pl-pds">"</span></span>)
@@ -236,16 +236,16 @@
 <span class="pl-s">      | GROUP BY country</span>
 <span class="pl-s">    <span class="pl-pds">"""</span></span>.stripMargin)
       
-  <span class="pl-k">import</span> <span 
class="pl-smi">spark.implicits.</span><span class="pl-smi">_</span>
-  <span class="pl-k">import</span> <span 
class="pl-smi">org.apache.spark.sql.</span><span class="pl-smi">SaveMode</span>
-  <span class="pl-k">import</span> <span 
class="pl-smi">scala.util.</span><span class="pl-smi">Random</span>
+  <span class="pl-k">import</span> <span class="pl-en">spark</span>.<span 
class="pl-en">implicits</span>.<span class="pl-en">_</span>
+  <span class="pl-k">import</span> <span class="pl-en">org</span>.<span 
class="pl-en">apache</span>.<span class="pl-en">spark</span>.<span 
class="pl-en">sql</span>.<span class="pl-en">SaveMode</span>
+  <span class="pl-k">import</span> <span class="pl-en">scala</span>.<span 
class="pl-en">util</span>.<span class="pl-en">Random</span>
  
   <span class="pl-c"><span class="pl-c">//</span> Load data to the main table, 
it will also</span>
   <span class="pl-c"><span class="pl-c">//</span> trigger immediate load to 
pre-aggregate table.</span>
   <span class="pl-c"><span class="pl-c">//</span> These two loading operations are carried out in a</span>
   <span class="pl-c"><span class="pl-c">//</span> transactional manner, meaning that the whole</span>
   <span class="pl-c"><span class="pl-c">//</span> operation will fail if one of the loads fails</span>
-  <span class="pl-k">val</span> <span class="pl-en">r</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">Random</span>()
+  <span class="pl-k">val</span> <span class="pl-smi">r</span> <span 
class="pl-k">=</span> <span class="pl-k">new</span> <span 
class="pl-en">Random</span>()
   spark.sparkContext.parallelize(<span class="pl-c1">1</span> to <span 
class="pl-c1">10</span>)
    .map(x <span class="pl-k">=&gt;</span> (<span class="pl-s"><span 
class="pl-pds">"</span>ID.<span class="pl-pds">"</span></span> <span 
class="pl-k">+</span> r.nextInt(<span class="pl-c1">100000</span>), <span 
class="pl-s"><span class="pl-pds">"</span>country<span 
class="pl-pds">"</span></span> <span class="pl-k">+</span> x <span 
class="pl-k">%</span> <span class="pl-c1">8</span>, x <span 
class="pl-k">%</span> <span class="pl-c1">50</span>, x <span 
class="pl-k">%</span> <span class="pl-c1">60</span>))
    .toDF(<span class="pl-s"><span class="pl-pds">"</span>user_id<span 
class="pl-pds">"</span></span>, <span class="pl-s"><span 
class="pl-pds">"</span>country<span class="pl-pds">"</span></span>, <span 
class="pl-s"><span class="pl-pds">"</span>quantity<span 
class="pl-pds">"</span></span>, <span class="pl-s"><span 
class="pl-pds">"</span>price<span class="pl-pds">"</span></span>)
@@ -276,7 +276,10 @@ AS
 <p>The string following USING is called the DataMap Provider; in this version CarbonData supports two
kinds of DataMap:</p>
 <ol>
-<li>preaggregate, for pre-aggregate table. No DMPROPERTY is required for this 
DataMap</li>
+<li>preaggregate, for pre-aggregate tables. A pre-aggregate table supports two values for DMPROPERTIES (see the sketch after this list):
+a. 'path' is used to specify the store location of the datamap ('path'='/location/').
+b. 'partitioning', when set to false, allows the user to disable partitioning of the datamap.
+The default value of this property is true.</li>
 <li>timeseries, for timeseries roll-up table. Please refer to <a 
href="https://github.com/apache/carbondata/blob/master/docs/datamap/timeseries-datamap-guide.html";
 target=_blank>Timeseries DataMap</a>
 </li>
 </ol>
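As referenced above, a hedged sketch of a preaggregate DataMap declared with both DMPROPERTIES (the path, table, and column names are placeholders):

<pre><code>CREATE DATAMAP agg_sales ON TABLE sales USING 'preaggregate'
DMPROPERTIES ('path'='/tmp/agg_sales', 'partitioning'='false') AS
  SELECT country, sum(quantity) FROM sales GROUP BY country
</code></pre>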
