http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/dml-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/dml-operation-on-carbondata.html 
b/content/docs/latest/dml-operation-on-carbondata.html
deleted file mode 100644
index a6a6d75..0000000
--- a/content/docs/latest/dml-operation-on-carbondata.html
+++ /dev/null
@@ -1,446 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-<h1>DML Operations on CarbonData</h1><p>This tutorial guides you through the
-    data manipulation language support provided by CarbonData.</p><h2>Overview</h2><p>The
-    following DML operations are supported in CarbonData:</p>
-<ul>
-    <li><a href="#load-data">LOAD DATA</a></li>
-    <li><a href="#insert-data">INSERT DATA INTO A CARBONDATA TABLE</a></li>
-    <li><a href="#show-segments">SHOW SEGMENTS</a></li>
-    <li><a href="#delete-id">DELETE SEGMENT BY ID</a></li>
-    <li><a href="#delete-date">DELETE SEGMENT BY DATE</a></li>
-    <li><a href="#update-carbondata">UPDATE CARBONDATA TABLE</a></li>
-    <li><a href="#delete-table">DELETE RECORDS FROM CARBONDATA TABLE</a></li>
-</ul><h2 id="load-data">LOAD DATA</h2><p>This command loads the user data in 
raw format to the
-    CarbonData specific data format store, this allows CarbonData to provide 
good performance while
-    querying the data. Please visit Data Management for more
-    details on LOAD.</p><h3>Syntax</h3><p><code>
-    LOAD DATA [LOCAL] INPATH &#39;folder_path&#39;
-    INTO TABLE [db_name.]table_name
-    OPTIONS(property_name=property_value, ...)
-</code></p><p>OPTIONS is not mandatory for the data loading process. Inside
-    OPTIONS, the user can provide any of the options such as DELIMITER, QUOTECHAR,
-    ESCAPECHAR, and MULTILINE as per the requirement.</p>
-<p>NOTE: The path must be a canonical path.</p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-        <th>Optional</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>folder_path</td>
-        <td>Path of raw csv data folder or file.</td>
-        <td>NO</td>
-    </tr>
-    <tr>
-        <td>db_name</td>
-        <td>Database name, if it is not specified then it uses the current 
database.</td>
-        <td>YES</td>
-    </tr>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the table in provided database.</td>
-        <td>NO</td>
-    </tr>
-    <tr>
-        <td>OPTIONS</td>
-        <td>Extra options provided to Load</td>
-        <td>YES</td>
-    </tr>
-    </tbody>
-</table><h3>Usage Guidelines</h3><p>You can use the following options to load 
data:</p>
-<ul>
-    <li><p><strong>DELIMITER:</strong> Delimiters can be provided in the load 
command.</p>
-        <p><code>
-            OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;)
-        </code></p></li>
-    <li><p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the 
load command.</p>
-        <p><code>
-            OPTIONS(&#39;QUOTECHAR&#39;=&#39;&quot;&#39;)
-        </code></p></li>
-    <li><p><strong>COMMENTCHAR:</strong> Comment characters can be provided in
-        the load command if the user wants to comment out lines.</p>
-        <p><code>
-            OPTIONS(&#39;COMMENTCHAR&#39;=&#39;#&#39;)
-        </code></p></li>
-    <li><p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD 
DATA command if headers
-        are missing in the source files.</p>
-        <p><code>
-            OPTIONS(&#39;FILEHEADER&#39;=&#39;column1,column2&#39;)
-        </code></p></li>
-    <li><p><strong>MULTILINE:</strong> CSV with new line character in 
quotes.</p>
-        <p><code>
-            OPTIONS(&#39;MULTILINE&#39;=&#39;true&#39;)
-        </code></p></li>
-    <li><p><strong>ESCAPECHAR:</strong> An escape character can be provided if
-        the user wants strict validation of the escape character in CSV files.</p>
-        <p><code>
-            OPTIONS(&#39;ESCAPECHAR&#39;=&#39;\&#39;)
-        </code></p></li>
-    <li><p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Splits the complex type
-        data column in a row
-        (e.g., a$b$c --&gt; Array = {a,b,c}).</p>
-        <p><code>
-            OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;)
-        </code></p></li>
-    <li><p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Splits the complex type
-        nested data column in a row. Applies the level_1 delimiter and then the
-        level_2 delimiter based on the complex data type (e.g.,
-        a:b$c:d --&gt; Array&lt;Array&gt; = {{a,b},{c,d}}).</p>
-        <p><code>
-            OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;)
-        </code></p></li>
-    <li><p><strong>ALL_DICTIONARY_PATH:</strong> All dictionary files path.</p>
-        <p><code>
-            
OPTIONS(&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
-        </code></p></li>
-    <li><p><strong>COLUMNDICT:</strong> Dictionary file path for specified 
column.</p>
-        <p><code>
-            OPTIONS(&#39;COLUMNDICT&#39;=&#39;column1:dictionaryFilePath1,
-            column2:dictionaryFilePath2&#39;)
-        </code></p>
-        <p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT cannot be used
-            together.</p></li>
-    <li><p><strong>DATEFORMAT:</strong> Date format for specified column.</p>
-        <p><code>
-            OPTIONS(&#39;DATEFORMAT&#39;=&#39;column1:dateFormat1, 
column2:dateFormat2&#39;)
-        </code></p>
-        <p>NOTE: Date formats are specified by date pattern strings. The date
-            pattern letters in CarbonData are the same as in Java. Refer to <a
-            href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</a>.
-        </p></li>
-    <li><p><strong>USE_KETTLE:</strong> This option specifies whether to use
-        Kettle for loading the data. By default, Kettle is not used.</p>
-        <p><code>
-            OPTIONS(&#39;USE_KETTLE&#39;=&#39;FALSE&#39;)
-        </code></p>
-        <p>Note: It is recommended to set this option to false.</p></li>
-    <li><p><strong>SINGLE_PASS:</strong> Single Pass Loading enables a single
-        job to finish data loading with on-the-fly dictionary generation. It
-        enhances performance in scenarios where data loads subsequent to the
-        initial load involve only a few incremental updates to the
-        dictionary.</p>
-
-        <p>This option specifies whether to use single pass for loading data 
or not. By default this
-            option is set to FALSE.</p>
-        <p><code>
-            OPTIONS(&#39;SINGLE_PASS&#39;=&#39;TRUE&#39;)
-        </code></p>
-        <p>Note:</p>
-        <ul>
-            <li><p>If this option is set to TRUE, data loading takes less
-                time.</p></li>
-            <li><p>If this option is set to an invalid value other than TRUE or
-                FALSE, the default value is used.</p></li>
-        </ul>
-        <p>A minimal example using this option follows this list.</p>
-    </li>
-</ul>
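-<p>For instance, a minimal load using the single pass option might look as
-    follows (a sketch; the table name and file path are hypothetical):</p>
-<p><pre><code>
-    -- table name and path below are placeholders
-    LOAD DATA INPATH &#39;hdfs://hacluster/rawdata/data.csv&#39; INTO TABLE carbontable
-    OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;SINGLE_PASS&#39;=&#39;TRUE&#39;)
-</code></pre></p>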
-
-<h3>Example:</h3><p><pre><code>
-    LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table 
carbontable
-    options(&#39;DELIMITER&#39;=&#39;,&#39;, 
&#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;COMMENTCHAR&#39;=&#39;#&#39;,
-    
&#39;FILEHEADER&#39;=&#39;empno,empname,designation,doj,workgroupcategory,workgroupcategoryname,deptno,deptname,projectcode,
-    projectjoindate,projectenddate,attendance,utilization,salary&#39;,
-    
&#39;MULTILINE&#39;=&#39;true&#39;,&#39;ESCAPECHAR&#39;=&#39;\&#39;,&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;,
-    &#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;,
-    &#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
-</code></pre></p><h2 id="insert-data">INSERT DATA INTO A CARBONDATA 
TABLE</h2><p>This command inserts data
-    into a CarbonData table. It is defined as a combination of two queries 
Insert and Select query
-    respectively. It inserts records from a source table into a target 
CarbonData table. The source
-    table can be a Hive table, Parquet table or a CarbonData table itself. It 
comes with the
-    functionality to aggregate the records of a table by performing Select 
query on source table and
-    load its corresponding resultant records into a CarbonData 
table.</p><p><strong>NOTE</strong> :
-    The client node where the INSERT command is executing, must be part of the 
cluster.</p><h3>
-    Syntax</h3><p><code>
-    INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName
-    [ WHERE { &lt;filter_condition&gt; } ];
-</code></p><p>You can also omit the <code>table</code> keyword and write your 
query as:</p><p><code>
-    INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName
-    [ WHERE { &lt;filter_condition&gt; } ];
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>CARBON TABLE</td>
-        <td>The name of the Carbon table in which you want to perform the 
insert operation.</td>
-    </tr>
-    <tr>
-        <td>sourceTableName</td>
-        <td>The table from which the records are read and inserted into 
destination CarbonData
-            table.
-        </td>
-    </tr>
-    </tbody>
-</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a
-    successful insert operation:</p>
-<ul>
-    <li>The source table and the CarbonData table must have the same table
-        schema.</li>
-    <li>The target table must already be created.</li>
-    <li>Overwrite is not supported for CarbonData tables.</li>
-    <li>The data types of the source and destination table columns must be the
-        same; otherwise, the data from the source table is treated as bad records
-        and the INSERT command fails.
-    </li>
-    <li>The INSERT INTO command does not support partial success: if bad records
-        are found, it fails.
-    </li>
-    <li>Data cannot be loaded or updated in the source table while an insert from
-        the source table to the target table is in progress.
-    </li>
-</ul><p>To enable data load or update during an insert operation, set the
-    following property to true.</p><p><code>
-    carbon.insert.persist.enable=true
-</code></p><p>By default, this configuration is false.</p><p><strong>NOTE</strong>:
-    Enabling this property reduces performance.</p><h3>Examples</h3><p><code>
-    INSERT INTO table1 SELECT item1, sum(item2 + 1000) as result FROM
-    table2 group by item1;
-</code></p><p><code>
-    INSERT INTO table1 SELECT item1, item2, item3 FROM table2
-    where item2=&#39;xyz&#39;;
-</code></p><p><code>
-    INSERT INTO table1 SELECT * FROM table2
-    where exists (select * from table3
-    where table2.item1 = table3.item1);
-</code></p><p><strong>The Status Success/Failure shall be captured in the 
driver log.</strong></p>
-<h2 id="show-segments">SHOW SEGMENTS</h2><p>This command is used to get the 
segments of CarbonData
-    table.</p><p><code>
-    SHOW SEGMENTS FOR TABLE [db_name.]table_name
-    LIMIT number_of_segments;
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-        <th>Optional</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>db_name</td>
-        <td>Database name, if it is not specified then it uses the current 
database.</td>
-        <td>YES</td>
-    </tr>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the table in provided database.</td>
-        <td>NO</td>
-    </tr>
-    <tr>
-        <td>number_of_segments</td>
-        <td>Limit the output to this number.</td>
-        <td>YES</td>
-    </tr>
-    </tbody>
-</table><h3>Example:</h3><p><code>
-    SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
-</code></p><h2 id="delete-id">DELETE SEGMENT BY ID</h2><p>This command is used 
to delete segment by
-    using the segment ID. Each segment has a unique segment ID associated with 
it. Using this
-    segment ID, you can remove the segment.</p><p>The following command will 
get the segmentID.</p>
-<p><code>
-    SHOW SEGMENTS FOR Table dbname.tablename LIMIT number_of_segments
-</code></p><p>After you retrieve the segment ID of the segment that you want 
to delete, execute the
-    following command to delete the selected segment.</p><p><code>
-    DELETE SEGMENT segment_sequence_id1, segments_sequence_id2, ....
-    FROM TABLE tableName
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-        <th>Optional</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>segment_id</td>
-        <td>Segment Id of the load.</td>
-        <td>NO</td>
-    </tr>
-    <tr>
-        <td>db_name</td>
-        <td>Database name, if it is not specified then it uses the current 
database.</td>
-        <td>YES</td>
-    </tr>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the table in provided database.</td>
-        <td>NO</td>
-    </tr>
-    </tbody>
-</table><h3>Example:</h3><p><code>
-    DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
-    DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
-</code> NOTE: Here 0.1 is the sequence ID of a compacted segment. </p><h2
-    id="delete-date">DELETE SEGMENT BY
-    DATE</h2><p>This command deletes CarbonData segment(s) from the store based
-    on the date provided by the user in the DML command. Segments created before
-    the specified date are removed from the store.</p><p><code>
-    DELETE FROM TABLE [db_name.]table_name
-    WHERE [DATE_FIELD] BEFORE [DATE_VALUE]
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-        <th>Optional</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>DATE_VALUE</td>
-        <td>Valid segment load start time value. All segments created before
-            this date will be deleted.
-        </td>
-        <td>NO</td>
-    </tr>
-    <tr>
-        <td>db_name</td>
-        <td>Database name, if it is not specified then it uses the current 
database.</td>
-        <td>YES</td>
-    </tr>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the table in provided database.</td>
-        <td>NO</td>
-    </tr>
-    </tbody>
-</table><h3>Example:</h3><p><code>
-    DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable
-    WHERE STARTTIME BEFORE &#39;2017-06-01 12:05:06&#39;;
-</code></p><h2 id="update-carbondata">Update CarbonData Table</h2><p>This 
command will allow to
-    update the carbon table based on the column expression and optional filter 
conditions.</p><h3>
-    Syntax</h3><p><code>
-    UPDATE &lt;table_name&gt;
-    SET (column_name1, column_name2, ... column_name n) =
-    (column1_expression , column2_expression . .. column n_expression )
-    [ WHERE { &lt;filter_condition&gt; } ];
-</code></p><p>Alternatively, the following command can also be used to update
-    the CarbonData table:</p><p><code>
-    UPDATE &lt;table_name&gt;
-    SET (column_name1, column_name2, ...) =
-    (select sourceColumn1, sourceColumn2 from sourceTable
-    [ WHERE { &lt;filter_condition&gt; } ] )
-    [ WHERE { &lt;filter_condition&gt; } ];
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the Carbon table in which you want to perform the 
update operation.</td>
-    </tr>
-    <tr>
-        <td>column_name</td>
-        <td>The destination columns to be updated.</td>
-    </tr>
-    <tr>
-        <td>sourceColumn</td>
-        <td>The source table column values to be updated in destination 
table.</td>
-    </tr>
-    <tr>
-        <td>sourceTable</td>
-        <td>The table from which the records are updated into destination 
Carbon table.</td>
-    </tr>
-    </tbody>
-</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a
-    successful update:</p>
-<ul>
-    <li>The update command fails if multiple input rows in the source table
-        match a single row in the destination table.
-    </li>
-    <li>If the source table generates empty records, the update operation
-        completes successfully without updating the table.
-    </li>
-    <li>If a source table row does not correspond to any existing row in the
-        destination table, the update operation completes successfully without
-        updating the table.
-    </li>
-    <li>If the source table and the target table in a sub-query are the same,
-        the update operation fails.
-    </li>
-    <li>If the sub-query used in the UPDATE statement contains an aggregate
-        function or a GROUP BY clause, the UPDATE operation fails.
-    </li>
-</ul><h3>Examples</h3><p>Update is not supported for queries that contain an
-    aggregate function or a GROUP BY clause.</p>
-<p><code>
-    UPDATE t_carbn01 a
-    SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
-    sum(b.profit) from t_carbn01b b
-    WHERE item_type_cd = 2 group by item_type_code);
-</code></p><p>Here the update operation fails because the sub-query contains the
-    aggregate function sum(b.profit) and a group by clause.</p><p><code>
-    UPDATE carbonTable1 d
-    SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
-    FROM sourceTable1 s WHERE d.column1 = s.c11)
-    WHERE d.column1 = &#39;china&#39; AND EXISTS( SELECT * from table3 o where
-    o.c2 &gt; 1);
-</code></p><p><code>
-    UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
-    WHERE d.column1 = s.c11)
-    WHERE exists( select * from iud.other o where o.c2 &gt; 1);
-</code></p><p><code>
-    UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , &quot;y&quot; ));
-</code></p><p><code>
-    UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
-    WHERE d.column1 = &#39;india&#39;;
-</code></p><p><code>
-    UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
-    WHERE d.column1 = &#39;india&#39;
-    and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
-</code></p><p><strong>The Status Success/Failure shall be captured in the 
driver log and the
-    client.</strong></p><h2 id="delete-table">Delete Records from CarbonData 
Table</h2><p>This
-    command allows us to delete records from CarbonData 
table.</p><h3>Syntax</h3><p><code>
-    DELETE FROM table_name [WHERE expression];
-</code></p><h3>Parameter Description</h3>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>table_name</td>
-        <td>The name of the Carbon table in which you want to perform the 
delete.</td>
-    </tr>
-    </tbody>
-</table><h3>Examples</h3><p><code>
-    DELETE FROM columncarbonTable1 d WHERE d.column1 = &#39;china&#39;;
-</code></p><p><code>
-    DELETE FROM dest WHERE column1 IN (&#39;china&#39;, &#39;USA&#39;);
-</code></p><p><code>
-    DELETE FROM columncarbonTable1
-    WHERE column1 IN (SELECT column11 FROM sourceTable2);
-</code></p><p><code>
-    DELETE FROM columncarbonTable1
-    WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
-    column1 = &#39;USA&#39;);
-</code></p><p><code>
-    DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4;
-</code></p><p><strong>The Status Success/Failure shall be captured in the 
driver log and the
-    client.</strong></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/faq.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/faq.html b/content/docs/latest/faq.html
deleted file mode 100644
index 6faf8f4..0000000
--- a/content/docs/latest/faq.html
+++ /dev/null
@@ -1,66 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-<h1><a id="FAQs_0"></a>FAQs</h1>
-<ul>
-    <li><a href="#what-are-bad-records">What are Bad Records?</a></li>
-    <li><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad 
Records Stored in CarbonData?</a></li>
-    <li><a href="#how-to-enable-bad-record-logging">How to enable Bad Record 
Logging?</a></li>
-    <li><a href="#how-to-ignore-the-bad-records">How to ignore the Bad 
Records?</a></li>
-    <li><a 
href="#how-to-specify-store-location-while-creating-carbon-session">How to 
specify store location while creating carbon session?</a></li>
-    <li><a href="#what-is-carbon-lock-type">What is Carbon Lock Type?</a></li>
-    <li><a href="#how-to-resolve-abstract-method-error">How to resolve 
Abstract Method Error?</a></li>
-</ul>
-<h2 id="what-are-bad-records"><a id="What_are_Bad_Records_10"></a>What are Bad 
Records?</h2>
-<p>Records that fail to load into CarbonData due to data type incompatibility,
-    or that are empty or have an incompatible format, are classified as Bad
-    Records.</p>
-<h2 id="where-are-bad-records-stored-in-carbondata"><a 
id="Where_are_Bad_Records_Stored_in_CarbonData_13"></a>Where are Bad Records 
Stored in CarbonData?</h2>
-<p>The bad records are stored at the location set by carbon.badRecords.location
-    in the carbon.properties file.<br>
-    By default, <strong>carbon.badRecords.location</strong> specifies the
-    location <code>/opt/Carbon/Spark/badrecords</code>.</p>
-<h2 id="how-to-enable-bad-record-logging"><a 
id="How_to_enable_Bad_Record_Logging_17"></a>How to enable Bad Record 
Logging?</h2>
-<p>While loading data, we can specify how to handle Bad Records. To analyse the
-    cause of the Bad Records, the parameter <code>BAD_RECORDS_LOGGER_ENABLE</code>
-    must be set to <code>TRUE</code>. There are multiple approaches to handle Bad
-    Records, which can be specified by the parameter
-    <code>BAD_RECORDS_ACTION</code>; a complete load command combining both
-    options is sketched after the snippets below.</p>
-<ul>
-    <li>To pad the incorrect values of the CSV rows with NULL values and load
-        the data into CarbonData, set the following in the query:</li>
-</ul>
-<pre><code>'BAD_RECORDS_ACTION'='FORCE'
-</code></pre>
-<ul>
-    <li>To write the Bad Records to the location set in the parameter
-        <strong>carbon.badRecords.location</strong>, without padding incorrect
-        values with NULL, set the following in the query:</li>
-</ul>
-<pre><code>'BAD_RECORDS_ACTION'='REDIRECT'
-</code></pre>
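-<p>For instance, a load command that enables bad record logging and redirects
-    the bad records might look as follows (a sketch; the table name and file
-    path are hypothetical):</p>
-<pre><code>-- table name and path below are placeholders
-LOAD DATA INPATH 'hdfs://hacluster/rawdata/data.csv' INTO TABLE carbontable
-OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='TRUE', 'BAD_RECORDS_ACTION'='REDIRECT')
-</code></pre>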
-<h2 id="how-to-ignore-the-bad-records"><a 
id="How_to_ignore_the_Bad_Records_30"></a>How to ignore the Bad Records?</h2>
-<p>To prevent the Bad Records from being stored in the raw CSV, we need to set
-    the following in the query:</p>
-<pre><code>'BAD_RECORDS_ACTION'='IGNORE'
-</code></pre>
-<h2 id="how-to-specify-store-location-while-creating-carbon-session"><a 
id="How_to_specify_store_location_while_creating_carbon_session_36"></a>How to 
specify store location while creating carbon session?</h2>
-<p>The store location specified while creating the carbon session is used by
-    CarbonData to store metadata such as the schema, dictionary files, dictionary
-    metadata, and sort indexes.</p>
-<p>Try creating the <code>carbonsession</code> with the <code>storepath</code>
-    specified in the following manner:</p>
-<pre><code>val carbon = 
SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&lt;store_path&gt;)
-</code></pre>
-<p>Example:</p>
-<pre><code>val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;hdfs://localhost:9000/carbon/store&quot;)
-</code></pre>
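-<p>Once the session is created, CarbonData SQL can be run through it, for
-    example (a sketch; the table name is hypothetical):</p>
-<pre><code>// hypothetical table name
-carbon.sql(&quot;CREATE TABLE test_table(id INT, name STRING) STORED BY 'carbondata'&quot;)
-carbon.sql(&quot;SHOW SEGMENTS FOR TABLE test_table&quot;).show()
-</code></pre>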
-<h2 id="what-is-carbon-lock-type"><a id="What_is_Carbon_Lock_Type_48"></a>What 
is Carbon Lock Type?</h2>
-<p>The Apache CarbonData acquires lock on the files to prevent concurrent 
operation from modifying the same files. The lock can be of the following types 
depending on the storage location, for HDFS we specify it to be of type 
HDFSLOCK. By default it is set to type LOCALLOCK.<br>
-    The property carbon.lock.type configuration specifies the type of lock to 
be acquired during concurrent operations on table. This property can be set 
with the following values :</p>
-<ul>
-    <li><strong>LOCALLOCK</strong>: This lock is created as a file on the local
-        file system. It is useful when only one Spark driver (Thrift server)
-        runs on a machine and no other CarbonData Spark application is launched
-        concurrently.</li>
-    <li><strong>HDFSLOCK</strong>: This lock is created as a file on HDFS. It is
-        useful when multiple CarbonData Spark applications are launched, no
-        ZooKeeper is running on the cluster, and HDFS supports file-based
-        locking.</li>
-</ul>
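-<p>For example, to use HDFS-based locking, the following entry could be added
-    to carbon.properties (a minimal sketch based on the values described
-    above):</p>
-<pre><code># acquire table locks as files on HDFS
-carbon.lock.type=HDFSLOCK
-</code></pre>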
-<h2 id="how-to-resolve-abstract-method-error"><a 
id="How_to_resolve_Abstract_Method_Error_54"></a>How to resolve Abstract Method 
Error?</h2>
-<p>In order to build CarbonData project it is necessary to specify the spark 
profile. The spark profile sets the Spark Version. You need to specify the 
<code>spark version</code> while using Maven to build project.</p>
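-<p>For example, a Maven build pinned to a specific Spark version might be
-    invoked as follows (a sketch; the profile name and version shown are
-    assumptions that depend on your Spark installation):</p>
-<pre><code># profile/version are examples; match your Spark installation
-mvn clean package -DskipTests -Pspark-2.1 -Dspark.version=2.1.0
-</code></pre>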

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/file-structure-of-carbondata.html 
b/content/docs/latest/file-structure-of-carbondata.html
deleted file mode 100644
index 3ca022b..0000000
--- a/content/docs/latest/file-structure-of-carbondata.html
+++ /dev/null
@@ -1,7 +0,0 @@
-<h1>CarbonData File Structure</h1><p>CarbonData files contain groups of data
-    called blocklets, along with all required information such as schema,
-    offsets, and indices in a file footer, co-located in HDFS.</p><p>The file
-    footer can be read once to build the indices in memory, which can be
-    utilized to optimize the scans and processing for all subsequent
-    queries.</p><p>Each blocklet in the file is further divided into chunks of
-    data called data chunks. Each data chunk is organized either in columnar
-    format or in row format, and stores the data of either a single column or a
-    set of columns. All blocklets in a file contain the same number and type of
-    data chunks.</p><p><img
-    src="../../docs/latest/images/carbon_data_file_structure_new.png?raw=true"
-    alt="CarbonData File Structure" /></p><p>Each data chunk contains multiple
-    groups of data called pages. There are three types of pages.</p>
-<ul>
-  <li>Data Page: Contains the encoded data of a column/group of columns.</li>
-  <li>Row ID Page (optional): Contains the row ID mappings used when the data 
page is stored as an inverted index.</li>
-  <li>RLE Page (optional): Contains additional metadata used when the data 
page is RLE coded.</li>
-</ul><p><img 
src="../../docs/latest/images/carbon_data_format_new.png?raw=true" 
alt="CarbonData File Format" /></p>
-

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/CarbonData_icon.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/CarbonData_icon.png 
b/content/docs/latest/images/CarbonData_icon.png
deleted file mode 100644
index 3ea7f54..0000000
Binary files a/content/docs/latest/images/CarbonData_icon.png and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/CarbonData_logo.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/CarbonData_logo.png 
b/content/docs/latest/images/CarbonData_logo.png
deleted file mode 100644
index bc09b23..0000000
Binary files a/content/docs/latest/images/CarbonData_logo.png and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/carbon_data_file_structure_new.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/carbon_data_file_structure_new.png 
b/content/docs/latest/images/carbon_data_file_structure_new.png
deleted file mode 100644
index 3f9241b..0000000
Binary files a/content/docs/latest/images/carbon_data_file_structure_new.png 
and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/carbon_data_format_new.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/carbon_data_format_new.png 
b/content/docs/latest/images/carbon_data_format_new.png
deleted file mode 100644
index 9d0b194..0000000
Binary files a/content/docs/latest/images/carbon_data_format_new.png and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_beeline.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_beeline.png 
b/content/docs/latest/images/query_failure_beeline.png
deleted file mode 100644
index e4ec22b..0000000
Binary files a/content/docs/latest/images/query_failure_beeline.png and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_issue.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_issue.png 
b/content/docs/latest/images/query_failure_issue.png
deleted file mode 100644
index 87270d2..0000000
Binary files a/content/docs/latest/images/query_failure_issue.png and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_job_details.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_job_details.png 
b/content/docs/latest/images/query_failure_job_details.png
deleted file mode 100644
index 26e607d..0000000
Binary files a/content/docs/latest/images/query_failure_job_details.png and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_logs.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_logs.png 
b/content/docs/latest/images/query_failure_logs.png
deleted file mode 100644
index 8fbdfa6..0000000
Binary files a/content/docs/latest/images/query_failure_logs.png and /dev/null 
differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_procedure.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_procedure.png 
b/content/docs/latest/images/query_failure_procedure.png
deleted file mode 100644
index 9d2c81f..0000000
Binary files a/content/docs/latest/images/query_failure_procedure.png and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/images/query_failure_spark_ui.png
----------------------------------------------------------------------
diff --git a/content/docs/latest/images/query_failure_spark_ui.png 
b/content/docs/latest/images/query_failure_spark_ui.png
deleted file mode 100644
index 1802760..0000000
Binary files a/content/docs/latest/images/query_failure_spark_ui.png and 
/dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/installation-guide.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/installation-guide.html 
b/content/docs/latest/installation-guide.html
deleted file mode 100644
index 168f895..0000000
--- a/content/docs/latest/installation-guide.html
+++ /dev/null
@@ -1,330 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
---><h1>Installation Guide</h1><p>This tutorial guides you through the
-    installation and configuration
-    of CarbonData in the following two modes:</p>
-<ul>
-    <li><a href="#installing-spark-cluster">Installing and
-        Configuring CarbonData on Standalone Spark Cluster</a></li>
-    <li><a href="#installing-yarn-cluster">Installing and
-        Configuring CarbonData on "Spark on YARN" Cluster</a></li>
-</ul><p>followed by:</p>
-<ul>
-    <li><a href="#query-execution">Query Execution using CarbonData
-        Thrift Server</a></li>
-</ul><h2 id="installing-spark-cluster">Installing and Configuring CarbonData 
on Standalone Spark Cluster</h2><h3>
-    Prerequisites</h3>
-<ul>
-    <li><p>Hadoop HDFS and Yarn should be installed and running.</p></li>
-    <li><p>Spark should be installed and running on all the cluster 
nodes.</p></li>
-    <li><p>CarbonData user should have permission to access HDFS.</p></li>
-</ul><h3>Procedure</h3>
-<ul>
-    <li><p><a
-            href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md"
-            target="_blank">Build
-        the CarbonData</a> project and get the assembly jar from
-        "./assembly/target/scala-2.10/carbondata_xxx.jar" and put it in the
-        <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code>
-        folder.</p>
-        <p>NOTE: Create the carbonlib folder if it does not exist inside the
-        <code>&quot;&lt;SPARK_HOME&gt;&quot;</code>
-            path.</p></li>
-    <li><p>Add the carbonlib folder path in the Spark classpath. (Edit 
<code>&quot;&lt;SPARK_HOME&gt;/conf/spark-env.sh&quot;</code>
-        file and modify the value of SPARK_CLASSPATH by appending 
<br/><code>&quot;&lt;SPARK_HOME&gt;/carbonlib/*&quot;</code>
-        to the existing value)</p></li>
-    <li><p>Copy carbon.properties.template from the "./conf/" folder of the
-        CarbonData repository to
-        <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>.</p></li>
-    <li><p>Copy the "carbonplugins" folder to 
<code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code>
-        folder from "./processing/" folder of CarbonData repository.</p>
-        <p>NOTE: carbonplugins will contain .kettle folder.</p></li>
-    <li><p>In Spark node, configure the properties mentioned in the following 
table in <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code>
-        file.</p></li>
-</ul>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Property</th>
-        <th>Description</th>
-        <th>Value</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>carbon.kettle.home</td>
-        <td>Path that will be used by CarbonData internally to create graph 
for loading the data
-        </td>
-        <td>$SPARK_HOME /carbonlib/carbonplugins</td>
-    </tr>
-    <tr>
-        <td>spark.driver.extraJavaOptions</td>
-        <td>A string of extra JVM options to pass to the driver. For instance, 
GC settings or other
-            logging.
-        </td>
-        
<td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties</td>
-    </tr>
-    <tr>
-        <td>spark.executor.extraJavaOptions</td>
-        <td>A string of extra JVM options to pass to executors. For instance, 
GC settings or other
-            logging. NOTE: You can enter multiple values separated by space.
-        </td>
-        
<td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties</td>
-    </tr>
-    </tbody>
-</table>
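-<p>Put together, the corresponding entries in
-    <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> would
-    look like the following (a sketch based on the table above; replace
-    $SPARK_HOME with the actual path):</p>
-<pre><code># spark-defaults.conf; replace $SPARK_HOME with the actual path
-carbon.kettle.home $SPARK_HOME/carbonlib/carbonplugins
-spark.driver.extraJavaOptions -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
-spark.executor.extraJavaOptions -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
-</code></pre>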
-<ul>
-    <li>Add the following properties in
-        <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>:
-    </li>
-</ul>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Property</th>
-        <th>Required</th>
-        <th>Description</th>
-        <th>Example</th>
-        <th>Remark</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>carbon.storelocation</td>
-        <td>NO</td>
-        <td>Location where CarbonData will create the store and write the data
-            in its own format.
-        </td>
-        <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore</td>
-        <td>It is recommended to set an HDFS directory.</td>
-    </tr>
-    <tr>
-        <td>carbon.kettle.home</td>
-        <td>YES</td>
-        <td>Path that will be used by CarbonData internally to create graph 
for loading the data.
-        </td>
-        <td>$SPARK_HOME/carbonlib/carbonplugins</td>
-        <td></td>
-    </tr>
-    </tbody>
-</table>
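-<p>For example, the resulting entries in carbon.properties might be (a sketch;
-    substitute your own HDFS host and port):</p>
-<pre><code># carbon.properties; HOSTNAME and PORT are placeholders
-carbon.storelocation=hdfs://HOSTNAME:PORT/Opt/CarbonStore
-carbon.kettle.home=$SPARK_HOME/carbonlib/carbonplugins
-</code></pre>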
-<ul>
-    <li>Verify the installation. For example:</li>
-</ul><p><code>
-    ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
-    --executor-memory 2G
-</code></p><p>NOTE: Make sure you have permissions on the CarbonData JARs and
-    files through which the
-    driver and executors will start.</p><p>To get started with CarbonData: <a
-        href="mainpage.html?page=quickStart">Quick Start</a>, <a
-    href="mainpage.html?page=ddl">DDL
-    Operations on CarbonData</a></p>
-   <h2 id="installing-yarn-cluster">Installing and Configuring CarbonData on 
"Spark on YARN"
-    Cluster</h2><p>This section provides the procedure to install CarbonData 
on "Spark on YARN"
-    cluster.</p><h3>Prerequisites</h3>
-<ul>
-    <li>Hadoop HDFS and Yarn should be installed and running.</li>
-    <li>Spark should be installed and running in all the clients.</li>
-    <li>CarbonData user should have permission to access HDFS.</li>
-</ul><h3>Procedure</h3><p>The following steps are only for driver nodes.
-    (Driver nodes are the ones that start the Spark context.)</p>
-<ul>
-    <li><p><a
-            href="https://github.com/apache/incubator-carbondata/blob/master/build/README.md"
-            target="_blank">Build
-        the CarbonData</a> project and get the assembly jar from
-        "./assembly/target/scala-2.10/carbondata_xxx.jar" and put it in the
-        <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code>
-        folder.</p>
-        <p>NOTE: Create the carbonlib folder if it does not exist inside the
-        <code>&quot;&lt;SPARK_HOME&gt;&quot;</code>
-            path.</p></li>
-    <li><p>Copy "carbonplugins" folder to 
<code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code>
-        folder from "./processing/" folder of CarbonData repository. 
carbonplugins will contain
-        .kettle folder.</p></li>
-    <li><p>Copy the "carbon.properties.template" to 
<code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>
-        folder from conf folder of CarbonData repository.</p></li>
-    <li>Modify the parameters in "spark-defaults.conf" located in
-        <code>&quot;&lt;SPARK_HOME&gt;/conf&quot;</code>.
-    </li>
-</ul>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Property</th>
-        <th>Description</th>
-        <th>Value</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>spark.master</td>
-        <td>Set this value to run Spark on YARN.</td>
-        <td>Set "yarn-client" to run Spark in YARN client mode.</td>
-    </tr>
-    <tr>
-        <td>spark.yarn.dist.files</td>
-        <td>Comma-separated list of files to be placed in the working 
directory of each executor.
-        </td>
-        
<td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code></td>
-    </tr>
-    <tr>
-        <td>spark.yarn.dist.archives</td>
-        <td>Comma-separated list of archives to be extracted into the working 
directory of each
-            executor.
-        </td>
-        
<td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code></td>
-    </tr>
-    <tr>
-        <td>spark.executor.extraJavaOptions</td>
-        <td>A string of extra JVM options to pass to executors. NOTE: You can
-            enter multiple values separated by spaces.
-        </td>
-        
<td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code>
-        </td>
-    </tr>
-    <tr>
-        <td>spark.executor.extraClassPath</td>
-        <td>Extra classpath entries to prepend to the classpath of executors.
-            NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, comment it out
-            and append the values to this parameter.
-        </td>
-        <td><code>
-            &quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code>
-        </td>
-    </tr>
-    <tr>
-        <td>spark.driver.extraClassPath</td>
-        <td>Extra classpath entries to prepend to the classpath of the driver.
-            NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, comment it out
-            and append the value to this parameter.
-        </td>
-        <td><code>
-            &quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code>
-        </td>
-    </tr>
-    <tr>
-        <td>spark.driver.extraJavaOptions</td>
-        <td>A string of extra JVM options to pass to the driver. For instance, 
GC settings or other
-            logging.
-        </td>
-        
<td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code>
-        </td>
-    </tr>
-    <tr>
-        <td>carbon.kettle.home</td>
-        <td>Path that will be used by CarbonData internally to create graph 
for loading the data.
-        </td>
-        
<td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonplugins</code></td>
-    </tr>
-    </tbody>
-</table>
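-<p>Put together, the corresponding entries in spark-defaults.conf would look
-    like the following (a sketch based on the table above; replace
-    &lt;YOUR_SPARK_HOME_PATH&gt; and the jar name with your own values):</p>
-<pre><code># spark-defaults.conf (YARN); substitute real paths and the real jar name
-spark.master yarn-client
-spark.yarn.dist.files &lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
-spark.yarn.dist.archives &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
-spark.executor.extraJavaOptions -Dcarbon.properties.filepath=&lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
-spark.executor.extraClassPath &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
-spark.driver.extraClassPath &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
-spark.driver.extraJavaOptions -Dcarbon.properties.filepath=&lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
-</code></pre>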
-<ul>
-    <li>Add the following properties in
-        <code>&lt;SPARK_HOME&gt;/conf/carbon.properties</code>:
-    </li>
-</ul>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Property</th>
-        <th>Required</th>
-        <th>Description</th>
-        <th>Example</th>
-        <th>Default Value</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>carbon.storelocation</td>
-        <td>NO</td>
-        <td>Location where CarbonData will create the store and write the data 
in its own format.
-        </td>
-        <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore</td>
-        <td>It is recommended to set an HDFS directory.</td>
-    </tr>
-    <tr>
-        <td>carbon.kettle.home</td>
-        <td>YES</td>
-        <td>Path that will be used by CarbonData internally to create graph 
for loading the data.
-        </td>
-        <td>$SPARK_HOME/carbonlib/carbonplugins</td>
-        <td></td>
-    </tr>
-    </tbody>
-</table>
-<ul>
-    <li>Verify the installation.</li>
-</ul><p><code>
-    ./bin/spark-shell --master yarn-client --driver-memory 1g
-    --executor-cores 2 --executor-memory 2G
-</code> NOTE: Make sure you have permissions on the CarbonData JARs and files
-    through which the driver and
-    executors will start.</p><p>Getting started with CarbonData: <a
-        href="mainpage.html?page=quickStart">Quick Start</a>, <a
-    href="mainpage.html?page=ddl">DDL Operations on CarbonData</a></p>
-
-<h2 id="query-execution">
-    Query Execution Using CarbonData Thrift Server</h2><h3>Starting CarbonData 
Thrift Server</h3><p>
-    a. cd <code>&lt;SPARK_HOME&gt;</code></p><p>b. Run the following command 
to start the CarbonData
-    thrift server.</p><p><code>
-    ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
-    --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-    $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
-</code></p>
-<table class="table table-striped table-bordered">
-    <thead>
-    <tr>
-        <th>Parameter</th>
-        <th>Description</th>
-        <th>Example</th>
-    </tr>
-    </thead>
-    <tbody>
-    <tr>
-        <td>CARBON_ASSEMBLY_JAR</td>
-        <td>CarbonData assembly jar name present in the 
<code>&quot;&lt;SPARK_HOME&gt;&quot;/carbonlib/</code>
-            folder.
-        </td>
-        
<td>carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar</td>
-    </tr>
-    <tr>
-        <td>carbon_store_path</td>
-        <td>This is a parameter to the CarbonThriftServer class. It is an HDFS
-            path where CarbonData
-            files will be kept. It is strongly recommended to set it to the same
-            value as the carbon.storelocation parameter
-            of carbon.properties.
-        </td>
-        <td><code>hdfs://&lt;host_name&gt;:54310/user/hive/warehouse/carbon.store</code></td>
-    </tr>
-    </tbody>
-</table><h3>Examples</h3>
-<ul>
-    <li>Start with default memory and executors</li>
-</ul><p><pre><code>
-    ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
-    --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-    $SPARK_HOME/carbonlib
-    /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar
-    hdfs://hacluster/user/hive/warehouse/carbon.store
-</code></pre></p>
-<ul>
-    <li>Start with Fixed executors and resources</li>
-</ul><p><pre><code>
-    ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
-    --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-    --num-executors 3 --driver-memory 20g --executor-memory 250g
-    --executor-cores 32
-    /srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib
-    /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar
-    hdfs://hacluster/user/hive/warehouse/carbon.store
-</code></pre></p><h3>Connecting to CarbonData Thrift Server Using Beeline</h3>
-<p>
-<pre><code>
-    cd &lt;SPARK_HOME&gt;
-    ./bin/beeline -u jdbc:hive2://thriftserver_host:port
-
-    Example:
-    ./bin/beeline -u jdbc:hive2://10.10.10.10:10000
-</code></pre>
-</p>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/mainpage.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/mainpage.html 
b/content/docs/latest/mainpage.html
deleted file mode 100644
index fb1daee..0000000
--- a/content/docs/latest/mainpage.html
+++ /dev/null
@@ -1,167 +0,0 @@
-<!DOCTYPE html>
-<html lang="en">
-  <head>
-    <meta charset="utf-8">
-    <meta http-equiv="X-UA-Compatible" content="IE=edge">
-    <meta name="viewport" content="width=device-width, initial-scale=1">
-    <link href='../../images/favicon.ico' rel='shortcut icon' 
type='image/x-icon'>
-    <!-- The above 3 meta tags *must* come first in the head; any other head 
content must come *after* these tags -->
-    <title>CarbonData</title>
-<style>
-
-</style>
-    <!-- Bootstrap -->
-
-    <link rel="stylesheet" href="../../css/bootstrap.min.css">
-    <link href="../../css/style.css" rel="stylesheet">
-    <link href="../../css/print.css" rel="stylesheet" >
-    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media 
queries -->
-    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
-    <!--[if lt IE 9]>
-      <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
-      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
-    <![endif]-->
-    <script src="../../js/jquery.min.js"></script>
-    <script src="../../js/bootstrap.min.js"></script>
-
-
-
-  </head>
-  <body>
-    <header>
-     <nav class="navbar navbar-default navbar-custom cd-navbar-wrapper" >
-      <div class="container">
-        <div class="navbar-header">
-          <button aria-controls="navbar" aria-expanded="false" 
data-target="#navbar" data-toggle="collapse" class="navbar-toggle collapsed" 
type="button">
-            <span class="sr-only">Toggle navigation</span>
-            <span class="icon-bar"></span>
-            <span class="icon-bar"></span>
-            <span class="icon-bar"></span>
-          </button>
-          <a href="../../index.html" class="logo">
-             <img src="../../images/CarbonDataLogo.png" alt="CarbonData logo" 
title="CarbocnData logo"  />
-          </a>
-        </div>
-        <div class="navbar-collapse collapse cd_navcontnt" id="navbar">
-         <ul class="nav navbar-nav navbar-right navlist-custom">
-              <li><a href="../../index.html" class="hidden-xs"><i class="fa 
fa-home" aria-hidden="true"></i> </a></li>
-              <li><a href="../../index.html" class="hidden-lg hidden-md 
hidden-sm">Home</a></li>
-              <li class="dropdown">
-                  <a href="#" class="dropdown-toggle " data-toggle="dropdown" 
role="button" aria-haspopup="true" aria-expanded="false"> Download <span 
class="caret"></span></a>
-                  <ul class="dropdown-menu">
-                      <li>
-                          <a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/1.0.0-incubating"
-                             target="_blank">Apache CarbonData 1.0.0</a></li>
-                      <li>
-                          <a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.2.0-incubating"
-                             target="_blank">Apache CarbonData 0.2.0</a></li>
-                      <li>
-                          <a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.1.1-incubating"
-                             target="_blank">Apache CarbonData 0.1.1</a></li>
-                      <li>
-                          <a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.1.0-incubating"
-                             target="_blank">Apache CarbonData 0.1.0</a></li>
-                      <li>
-                          <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"
-                             target="_blank">Release Archive</a></li>
-                  </ul>
-                </li>
-
-              <li><a href="mainpage.html?page=userguide" 
class="">Documentation</a></li>
-              <li class="dropdown">
-                  <a href="#" class="dropdown-toggle" data-toggle="dropdown" 
role="button" aria-haspopup="true" aria-expanded="false">Community <span 
class="caret"></span></a>
-                  <ul class="dropdown-menu">
-                      <li><a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.md"
-                             target="_blank">Contributing to CarbonData</a></li>
-                      <li><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Committers" target="_blank">Project Committers</a></li>
-                      <li><a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609" target="_blank">CarbonData Meetups</a></li>
-                      <li><a href="../../security.html">Apache CarbonData 
Security</a></li>
-                  </ul>
-                </li>
-                <li class="dropdown">
-                  <a href="http://www.apache.org/"; class="apache_link 
hidden-xs dropdown-toggle" data-toggle="dropdown" role="button" 
aria-haspopup="true" aria-expanded="false">Apache</a>
-                   <ul class="dropdown-menu">
-                      <li><a href="http://www.apache.org/";  
target="_blank">Apache Homepage</a></li>
-                      <li><a href="http://www.apache.org/licenses/";  
target="_blank">License</a></li>
-                      <li><a 
href="http://www.apache.org/foundation/sponsorship.html";  
target="_blank">Sponsorship</a></li>
-                      <li><a 
href="http://www.apache.org/foundation/thanks.html";  
target="_blank">Thanks</a></li>
-                    </ul>
-                </li>
-
-                <li class="dropdown">
-                  <a href="http://www.apache.org/"; class="hidden-lg hidden-md 
hidden-sm dropdown-toggle" data-toggle="dropdown" role="button" 
aria-haspopup="true" aria-expanded="false">Apache</a>
-                   <ul class="dropdown-menu">
-                      <li><a href="http://www.apache.org/";  
target="_blank">Apache Homepage</a></li>
-                      <li><a href="http://www.apache.org/licenses/";  
target="_blank">License</a></li>
-                      <li><a 
href="http://www.apache.org/foundation/sponsorship.html";  
target="_blank">Sponsorship</a></li>
-                      <li><a 
href="http://www.apache.org/foundation/thanks.html";  
target="_blank">Thanks</a></li>
-                    </ul>
-                </li>
-
-             <li>
-                 <a href="#" id="search-icon" ><i class="fa fa-search" 
aria-hidden="true"></i></a>
-
-             </li>
-
-           </ul>
-        </div><!--/.nav-collapse -->
-          <div id="search-box" >
-              <form method="get" action="http://www.google.com/search";>
-                  <div class="search-block">
-                      <table border="0" cellpadding="0" width="100%">
-                          <tr>
-                              <td style="width:80%">
-                                  <input type="text" name="q" size=" 5" 
maxlength="255" value="" class="search-input" />
-                              </td>
-                              <td style="width:20%">
-                                  <input type="submit" value="Search" 
/></td></tr>
-                          <tr><td align="left"  style="font-size:75%" 
colspan="2">
-                              <input type="checkbox"  name="sitesearch" 
value="carbondata.apache.org" checked /> Only search for CarbonData   </td></tr>
-                      </table>
-                  </div>
-              </form>
-          </div>
-      </div>
-    </nav>
-     </header> <!-- end Header part -->
-
-   <div class="fixed-padding"></div> <!--  top padding with fixde header  -->
-
-   <section><!-- Dashboard nav -->
-    <div class="container-fluid q">
-        <div class="col-sm-12  col-md-12 maindashboard">
-              <div class="row">
-                <section>
-                  <div style="padding:10px 15px;">
-                    <div class="doc-header">
-                        <div class="doc-toc">
-                            <a href="mainpage.html?page=userguide" class="icon 
toc-icon"></a>
-                        </div>
-                       <img src="images/CarbonData_icon.png" alt="" 
class="logo-print" >
-                       <span>Version: 1.0.0 | Published: 30-01-2017</span>
-                       <i class="fa fa-print print-icon" aria-hidden="true" 
onclick="divPrint(); " title="Print"></i>
-                    </div>
-                    <div id="viewpage" name="viewpage">   </div>
-                    <div class="doc-footer">
-                         <a href="#top" class="scroll-top">Top</a>
-                    </div>
-                  </div>
-                </section>
-              </div>
-        </div>
-      </div>
-    </section><!-- End systemblock part -->
-
-  <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
-
-    <script src="../../js/custom.js"></script>
-    <script src="../../js/mdNavigation.js" type="text/javascript"></script>
-
-    <script type="text/javascript">
-     <!-- $("#leftmenu").load("table-of-content.html");-->
-    </script>
-
-
-
-  </body>
-  </html>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/overview-of-carbondata.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/overview-of-carbondata.html 
b/content/docs/latest/overview-of-carbondata.html
deleted file mode 100644
index 5f4aff3..0000000
--- a/content/docs/latest/overview-of-carbondata.html
+++ /dev/null
@@ -1,51 +0,0 @@
-<h1>Overview</h1><p>This tutorial provides a detailed overview of:</p>
-<ul>
-  <li><a href="#introduction">Introduction</a></li>
-  <li><a href="#features">Features</a></li>
-</ul>
-
-<div id="introduction"></div>
-<h2>Introduction</h2><p>CarbonData is a fully indexed, columnar, Hadoop-native 
data store for processing heavy analytical workloads and detailed queries on big 
data. CarbonData enables faster interactive queries by using advanced columnar 
storage, indexing, compression, and encoding techniques to improve computing 
efficiency, which helps speed up queries by an order of magnitude over petabytes 
of data.</p><p>In customer benchmarks, CarbonData has proven to manage petabytes 
of data running on extraordinarily low-cost hardware, answering queries around 
10 times faster than the current open source solutions (column-oriented SQL-on-Hadoop 
data stores).</p><p>Some of the salient features of CarbonData are:</p>
-<ul>
-  <li>Low latency for various data access patterns such as sequential, 
random, and OLAP.</li>
-  <li>Fast query on fast data.</li>
-  <li>Space efficiency.</li>
-  <li>General format available across the Hadoop ecosystem.</li>
-</ul>
-
-<div id="features"></div>
-<h2>Features</h2><p>The CarbonData file format is a columnar store in HDFS. It has 
many of the features of a modern columnar format, such as splittable files, 
compression schemes, and complex data types, and it adds the following unique 
features:</p>
-<ul>
-  <li><p>Unique Data Organization: Though CarbonData stores data in a columnar 
format, it differs from traditional columnar formats in that the columns in each 
row group (Data Block) are sorted independently of the other columns. Although this 
arrangement requires CarbonData to store a row-number mapping against each 
column value, it enables binary search for faster filtering, and since the values 
are sorted, the same or similar values come together, which yields better 
compression and offsets the storage overhead of the row-number mapping. (A toy 
illustration follows this list.)</p></li>
-  <li><p>Advanced Push Down Optimizations: CarbonData pushes as much query 
processing as possible close to the data to minimize the amount of data being 
read, processed, converted, and transmitted/shuffled. Using projections and 
filters, it reads only the required columns from the store and only the rows 
that match the filter conditions provided in the query.</p></li>
-  <li><p>Multi Level Indexing: CarbonData uses multiple indices at various 
levels to enable faster search and speed up query processing.</p></li>
-  <li><p>Dictionary Encoding: Most databases and big data SQL stores employ 
columnar encoding to achieve data compression by storing small integers 
(surrogate values) instead of full string values. However, almost all existing 
databases and data stores divide the data into row groups containing anywhere 
from a few thousand to a million rows and employ dictionary encoding only within 
each row group, so the same column value can have different surrogate values in 
different row groups. While reading the data, the conversion from surrogate value 
to actual value must therefore be done immediately after the data is read from 
disk. CarbonData instead employs a global surrogate key: a common dictionary is 
maintained for the full store on one machine/node, so CarbonData can perform all 
query processing work, such as grouping/aggregation and sorting, on lightweight 
surrogate values. The conversion from surrogate to actual values needs to be done 
only on the final result. This improves performance in two ways: the conversion 
is done only for the final result rows, which are far fewer than the rows read 
from the store, and all query processing and computation is done on lightweight 
surrogate values, which require less memory and CPU time than actual values. (A 
sketch of this idea follows this list.)</p></li>
-  <li><p>Deep Spark Integration: CarbonData has built-in Spark integration for 
Spark 1.6.2 and 2.1, with interfaces for Spark SQL, the DataFrame API, and query 
optimization. It supports bulk data ingestion and allows saving Spark DataFrames 
as CarbonData files.</p></li>
-  <li><p>Update and Delete Support: CarbonData supports batch updates, such as 
daily update scenarios for OLAP, using a Base+Delta file based design.</p></li>
-  <li><p>Bucketing: A technique used for uniform distribution of data across 
files in CarbonData; it enhances the performance of join queries. While loading 
the data, records are placed into buckets based on a hashing algorithm, so during 
the execution of join queries the records can be fetched from buckets without the 
need for shuffling. This feature distributes/organizes the table or partition 
data into multiple files, placing similar records in the same file. (A sketch of 
the bucket assignment follows this list.)</p></li>
-  <li><p>Global Multi-Dimensional Key (MDK) based B+Tree index for all 
non-measure columns: Aids in quickly locating the row groups (Data Blocks) that 
contain the data matching the search/filter criteria.</p></li>
-  <li><p>Min-Max index for all columns: Aids in quickly locating the row 
groups (Data Blocks) that contain the data matching the search/filter criteria. 
(A pruning sketch follows this list.)</p></li>
-  <li><p>Data Block level inverted index for all columns: Aids in quickly 
locating, within a row group (Data Block), the rows that contain the data 
matching the search/filter criteria.</p></li>
-  <li><p>Store data along with index: Significantly accelerates query 
performance and reduces I/O scans and CPU resources when there are filters in 
the query. The CarbonData index consists of multiple levels of indices, and a 
processing framework can leverage this index to reduce the number of tasks it 
needs to schedule and process. It can also perform skip scans in finer-grained 
units (called blocklets) during task-side scanning instead of scanning the whole 
file.</p></li>
-  <li><p>Operable encoded data: CarbonData supports efficient compression and 
global encoding schemes and can query compressed/encoded data directly. The data 
can be converted just before returning the results to the user, which is known as 
"late materialization".</p></li>
-  <li><p>Column group: Allows multiple columns to form a column group that is 
stored in row format, which reduces the row reconstruction cost at query 
time.</p></li>
-  <li><p>Support for various use cases with one single data format: Examples 
are interactive OLAP-style queries, sequential access (big scans), and random 
access (narrow scans).</p></li>
-</ul>
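-<p>A toy illustration of the unique data organization (illustrative Scala with 
hypothetical values, not CarbonData internals): each column keeps its values 
sorted alongside a mapping back to the original row numbers, so a filter can 
binary-search the values and recover matching rows through the mapping.</p>
-<pre><code>import scala.collection.Searching._
-
-// One column of a Data Block: values sorted independently of other columns,
-// with a mapping from sorted position back to the original row number.
-val sortedValues = Array("shenzhen", "shenzhen", "wuhan")
-val rowMapping   = Array(0, 1, 2)
-
-// Binary search the sorted values for the filter city = "wuhan" ...
-val start = sortedValues.search("wuhan").insertionPoint
-// ... then recover the matching original rows via the mapping
-// (toy code: assumes the search lands on the first match).
-val rows = (start until sortedValues.length)
-  .takeWhile(i =&gt; sortedValues(i) == "wuhan")
-  .map(i =&gt; rowMapping(i))
-</code></pre>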
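-<p>To make the global dictionary idea concrete, here is a minimal, hypothetical 
sketch in Scala (not CarbonData's internal code): grouping runs entirely on 
integer surrogates, and the dictionary is consulted only to decode the far 
smaller final result.</p>
-<pre><code>// Hypothetical global dictionary: one mapping for the whole store.
-val dictionary = Map(0 -&gt; "shenzhen", 1 -&gt; "wuhan")
-
-// A column stored as lightweight surrogate values instead of strings.
-val cityColumn = Array(0, 0, 1, 0, 1)
-
-// Aggregation (count per city) is done on Ints only.
-val counts = cityColumn.groupBy(identity).mapValues(_.length)
-
-// Late materialization: decode surrogates only for the final result rows.
-counts.foreach { case (surrogate, n) =&gt; println(s"${dictionary(surrogate)}: $n") }
-</code></pre>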
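-<p>The bucket assignment during load can be pictured as simple hash 
partitioning; the function below is an illustrative stand-in, not the exact 
hashing CarbonData applies, and the bucket count is hypothetical.</p>
-<pre><code>// Records with equal join keys always land in the same bucket,
-// so joins on that key can proceed without shuffling.
-val numBuckets = 4
-def bucketOf(joinKey: String): Int =
-  ((joinKey.hashCode % numBuckets) + numBuckets) % numBuckets
-
-println(bucketOf("shenzhen"))  // deterministic bucket for this key
-</code></pre>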
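-<p>The Min-Max index can be pictured with a small sketch (illustrative Scala, 
hypothetical block metadata): each Data Block records the minimum and maximum 
value per column, and a block is read only if the filter value can fall inside 
that range.</p>
-<pre><code>// Hypothetical per-block metadata for one column.
-case class BlockMeta(id: Int, min: Int, max: Int)
-val blocks = Seq(BlockMeta(0, 1, 40), BlockMeta(1, 41, 90), BlockMeta(2, 91, 120))
-
-// Filter: age = 35. Prune every block whose [min, max] cannot contain it.
-val target = 35
-val toScan = blocks.filter(b =&gt; target &gt;= b.min &amp;&amp; target &lt;= b.max)
-println(toScan.map(_.id))  // only block 0 needs to be scanned
-</code></pre>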

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/quick-start-guide.html 
b/content/docs/latest/quick-start-guide.html
deleted file mode 100644
index a2688c2..0000000
--- a/content/docs/latest/quick-start-guide.html
+++ /dev/null
@@ -1,96 +0,0 @@
-<h1>Quick Start</h1><p>This tutorial provides a quick introduction to using 
CarbonData.</p><h2>
-    Prerequisites</h2>
-<ul>
-    <li><a 
href="https://github.com/apache/incubator-carbondata/blob/master/build"; 
target="_blank">Installation and
-        building CarbonData</a>.
-    </li>
-    <li>Create a sample.csv file using the following commands. The CSV file is 
required for loading
-        data into CarbonData.
-    </li>
-</ul><p><code>
-    cd carbondata
-    cat &gt; sample.csv &lt;&lt; EOF
-    id,name,city,age
-    1,david,shenzhen,31
-    2,eason,shenzhen,27
-    3,jarry,wuhan,35
-    EOF
-</code></p><h2>Interactive Analysis with Spark Shell Version 2.1</h2><p>Apache 
Spark Shell provides
-    a simple way to learn the API, as well as a powerful tool to analyze data 
interactively. Please
-    visit <a href="http://spark.apache.org/docs/latest/"; 
target="_blank">Apache Spark Documentation</a> for more
-    details on Spark shell.</p><h4>Basics</h4><p>Start Spark shell by running 
the following command
-    in the Spark directory:</p><p><code>
-    ./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
-</code></p><p>In this shell, SparkSession is readily available as 'spark' and 
Spark context is
-    readily available as 'sc'.</p><p>In order to create a CarbonSession, we 
will have to configure it
-    explicitly in the following manner:</p>
-<ul>
-    <li>Import the following:</li>
-</ul><p><code>
-    import org.apache.spark.sql.SparkSession
-    import org.apache.spark.sql.CarbonSession._
-</code></p>
-<ul>
-    <li>Create a CarbonSession:</li>
-</ul><pre><code>val carbon = SparkSession
-            .builder()
-            .config(sc.getConf)
-            .getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;)
-</code></pre>
-<p>NOTE: By default the metastore location points to 
"../carbon.metastore"; the user can provide their own metastore location to 
CarbonSession like this:</p>
-<pre><code>SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;&lt;hdfs store path&gt;&quot;, &quot;&lt;local metastore path&gt;&quot;)
-</code></pre>
-    scala&gt;carbon.sql(&quot;CREATE TABLE IF NOT EXISTS test_table(id string, 
name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
-</code></p><h5>Loading Data to a Table</h5><p><code>
-    scala&gt;carbon.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39; 
INTO TABLE test_table&quot;)
-</code> NOTE: Please provide the real file path of sample.csv in the above 
script.</p><h5>Query Data
-    from a Table</h5><p><code>scala&gt;carbon.sql("SELECT * FROM 
test_table").show()</code></p><p><code>scala&gt;carbon.sql("SELECT
-    city, avg(age), sum(age) FROM test_table GROUP BY city").show()</code></p>
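-<p>As noted in the Overview, CarbonData also allows saving Spark DataFrames as 
CarbonData files. The following is a sketch under stated assumptions: the writer 
option name "tableName" and the csv reader options are assumed for this release 
and may differ in other versions.</p>
-<pre><code>import org.apache.spark.sql.SaveMode
-
-// Read the sample.csv created above into a DataFrame (path assumed).
-val df = carbon.read.option("header", "true").csv("sample.csv")
-
-// Save the DataFrame as a CarbonData table (option name assumed).
-df.write
-  .format("carbondata")
-  .option("tableName", "test_table_df")
-  .mode(SaveMode.Overwrite)
-  .save()
-</code></pre>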
-<h2>Interactive Analysis with Spark Shell Version 1.6</h2>
-<h4>Basics</h4><p>Start Spark shell by running the following
-    command in the Spark directory:</p><p><code>
-    ./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
-</code></p><p>NOTE: In this shell, SparkContext is readily available as sc.</p>
-<ul>
-    <li>In order to execute queries, we need to import CarbonContext:</li>
-</ul><p><code>
-    import org.apache.spark.sql.CarbonContext
-</code></p>
-<ul>
-    <li>Create an instance of CarbonContext in the following manner:</li>
-</ul><p><code>
-    val cc = new CarbonContext(sc)
-</code></p><p>NOTE: By default the store location points to "../carbon.store"; 
the user can provide their own
-    store location to CarbonContext, as in new CarbonContext(sc, 
storeLocation).</p><h4>Executing
-    Queries</h4>
-<h5>Creating a Table</h5><p><code>
-    scala&gt;cc.sql(&quot;CREATE TABLE IF NOT EXISTS test_table (id string, 
name string, city
-    string, age Int) STORED BY &#39;carbondata&#39;&quot;)
-</code>
-</p><p>To see the table created:</p><p><code>
-    scala&gt;cc.sql(&quot;SHOW TABLES&quot;).show()
-</code></p><h5>Loading Data to a Table</h5><p><code>
-    scala&gt;cc.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39; INTO 
TABLE test_table&quot;)
-</code><br/>
-<p>NOTE: Please provide the real file path of sample.csv in the above 
script.</p><h5>Query
-    Data from a Table</h5><p><code>
-    scala&gt;cc.sql(&quot;SELECT * FROM test_table&quot;).show()
-    scala&gt;cc.sql(&quot;SELECT city, avg(age), sum(age) FROM test_table 
GROUP BY city&quot;).show()
-</code></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/supported-data-types-in-carbondata.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/supported-data-types-in-carbondata.html 
b/content/docs/latest/supported-data-types-in-carbondata.html
deleted file mode 100644
index da7efb4..0000000
--- a/content/docs/latest/supported-data-types-in-carbondata.html
+++ /dev/null
@@ -1,25 +0,0 @@
-<h1>Data Types</h1><h4>CarbonData supports the following data types:</h4>
-<p>Numeric Types</p>
-<ul>
-  <li>SMALLINT</li>
-  <li>INT/INTEGER</li>
-  <li>BIGINT</li>
-  <li>DOUBLE</li>
-  <li>DECIMAL</li>
-</ul>
-<p>Date/Time Types</p>
-<ul>
-  <li>DATE</li>
-  <li>TIMESTAMP</li>
-</ul>
-<p>String Types</p>
-<ul>
-  <li>STRING</li>
-</ul>
-<p>Complex Types</p>
-<ul>
-  <li>arrays: ARRAY<code>&lt;data_type&gt;</code></li>
-  <li>structs: STRUCT<code>&lt;col_name : data_type COMMENT col_comment, 
...&gt;</code></li>
-</ul>
-
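-<p>For illustration, here is a hypothetical table definition exercising several 
of these types, in the Spark shell style used in the Quick Start (the table and 
column names are made up, and exact DDL acceptance may vary by release):</p>
-<pre><code>scala&gt;carbon.sql(&quot;CREATE TABLE IF NOT EXISTS typed_table (id INT, revenue BIGINT, price DECIMAL, created DATE, name STRING, tags ARRAY&lt;STRING&gt;, addr STRUCT&lt;city:STRING, pin:INT&gt;) STORED BY &#39;carbondata&#39;&quot;)
-</code></pre>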

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/4f8753c1/content/docs/latest/troubleshooting.html
----------------------------------------------------------------------
diff --git a/content/docs/latest/troubleshooting.html 
b/content/docs/latest/troubleshooting.html
deleted file mode 100644
index 4c728c6..0000000
--- a/content/docs/latest/troubleshooting.html
+++ /dev/null
@@ -1,217 +0,0 @@
-<h1>Troubleshooting</h1>
-<p>This tutorial provides troubleshooting guidance for end users and developers
-    who are building, deploying, and using CarbonData.</p>
-<h2 id="failed-to-load-thrift-libraries">Failed to load thrift libraries</h2>
-<p><strong>Symptom</strong></p>
-<p>Thrift throws the following exception:</p>
-<pre><code>thrift: error while loading shared libraries:
-libthriftc.so.0: cannot open shared object file: No such file or directory
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The complete path to the directory containing the libraries is not 
configured correctly.</p>
-<p><strong>Procedure</strong></p>
-<p>Follow the steps below to ensure the libraries are loaded properly:</p>
-<ol>
-    <li>
-        <p>On Ubuntu, you have to add a custom .conf file to 
/etc/ld.so.conf.d.<br>
-            For example:</p>
-        <pre><code>sudo gedit /etc/ld.so.conf.d/randomLibs.conf
-</code></pre>
-        <p>Inside this file, configure the complete path to 
the directory that
-            contains all the libraries you wish to add to the system, say 
-            /home/ubuntu/localLibs.</p>
-    </li>
-    <li>
-        <p>To verify the library location, check for the existence of 
libthrift.so.
-        </p>
-    </li>
-    <li>
-        <p>Save the file, then run the following command to update the system 
with these libraries.</p>
-        <pre><code>sudo ldconfig
-</code></pre>
-    </li>
-</ol>
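-<p>For reference, such a .conf file contains bare directory paths, one per line; 
the path below is the example directory from the steps above:</p>
-<pre><code>/home/ubuntu/localLibs
-</code></pre>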
-Note: Remember to add only the path to the directory, not the full path to 
the file; all the libraries inside that directory will be indexed automatically.
-<h2 id="failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</h2>
-<p><strong>Symptom</strong></p>
-<p>The shell reports the following error:</p>
-<pre><code>org.apache.spark.sql.CarbonContext$$anon$$apache$spark$sql$catalyst$analysis
-$OverrideCatalog$_setter_$org$apache$spark$sql$catalyst$analysis
-$OverrideCatalog$$overrides_$e
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The Spark Version and the selected Spark Profile do not match.</p>
-<p><strong>Procedure</strong></p>
-<ol>
-    <li>
-        <p>Ensure that your Spark version and the selected Spark profile 
match.</p>
-    </li>
-    <li>
-        <p>Use the following command :</p>
-    </li>
-</ol>
-<pre><code>mvn -Pspark-2.1 -Dspark.version={yourSparkVersion} clean package
-</code></pre>
-Note: Refrain from using &quot;mvn clean package&quot; without specifying 
the profile.
-<h2 id="failed-to-execute-load-query-on-cluster">Failed to execute load query 
on cluster.
-</h2>
-<p><strong>Symptom</strong></p>
-<p>Load query failed with the following exception:</p>
-<pre><code>Dictionary file is locked for updation.
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The carbon.properties file is not identical in all the nodes of the 
cluster.</p>
-<p><strong>Procedure</strong></p>
-<p>Follow the steps to ensure the carbon.properties file is consistent across 
all the nodes:</p>
-<ol>
-    <li>
-        <p>Copy the carbon.properties file from the master node to all the 
other nodes in the
-            cluster.<br>
-            For example, you can use scp to copy this file to all the 
nodes.</p>
-    </li>
-    <li>
-        <p>For the changes to take effect, restart the Spark cluster.</p>
-    </li>
-</ol>
-<h2 id="failed-to-execute-insert-query-on-cluster">Failed to execute insert 
query on
-    cluster.</h2>
-<p><strong>Symptom</strong></p>
-<p>Load query failed with the following exception:</p>
-<pre><code>Dictionary file is locked for updation.
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The carbon.properties file is not identical in all the nodes of the 
cluster.</p>
-<p><strong>Procedure</strong></p>
-<p>Follow the steps to ensure the carbon.properties file is consistent across 
all the nodes:</p>
-<ol>
-    <li>
-        <p>Copy the carbon.properties file from the master node to all the 
other nodes in the
-            cluster.<br>
-            For example, you can use scp to copy this file to all the 
nodes.</p>
-    </li>
-    <li>
-        <p>For the changes to take effect, restart the Spark cluster.</p>
-    </li>
-</ol>
-<h2 id="failed-to-connect-to-hiveuser-with-thrift">Failed to connect to 
hiveuser with
-    thrift</h2>
-<p><strong>Symptom</strong></p>
-<p>The following exception occurs:</p>
-<pre><code>Cannot connect to hiveuser.
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The external process does not have the required access permission.</p>
-<p><strong>Procedure</strong></p>
-<p>Ensure that the hiveuser account in MySQL allows access from external 
processes.</p>
-<h2 id="failure-to-read-the-metastore-db-during-table-creation">Failure to 
read the
-    metastore db during table creation.</h2>
-<p><strong>Symptom</strong></p>
-<p>The following exception occurs when trying to connect:</p>
-<pre><code>Cannot read the metastore db
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The metastore db is dysfunctional.</p>
-<p><strong>Procedure</strong></p>
-<p>Remove the metastore db from the carbon.metastore location in the Spark 
directory.</p>
-<h2 id="failed-to-load-data-on-the-cluster">Failed to load data on the 
cluster</h2>
-<p><strong>Symptom</strong></p>
-<p>Data loading fails with the following exception:</p>
-<pre><code>Data Load failure exception
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The following issues can cause the failure:</p>
-<ol>
-    <li>
-        <p>The core-site.xml, hive-site.xml, yarn-site.xml, and 
carbon.properties files are not consistent
-            across all nodes of the cluster.</p>
-    </li>
-    <li>
-        <p>The path to the hdfs ddl is not configured correctly in 
carbon.properties.</p>
-    </li>
-</ol>
-<p><strong>Procedure</strong></p>
-<p>Follow the steps to ensure the following configuration files are consistent 
across all the
-    nodes:</p>
-<ol>
-    <li>
-        <p>Copy the core-site.xml, hive-site.xml, yarn-site.xml, and 
carbon.properties files from the master
-            node to all the other nodes in the cluster.<br>
-            For example, you can use scp to copy these files to all the 
nodes.</p>
-        <p>Note: Set the path to the hdfs ddl in carbon.properties on the master 
node.</p>
-    </li>
-    <li>
-        <p>For the changes to take effect, restart the Spark cluster.</p>
-    </li>
-</ol>
-<h2 id="failed-to-insert-data-on-the-cluster">Failed to insert data on the 
cluster</h2>
-<p><strong>Symptom</strong></p>
-<p>Insertion fails with the following exception:</p>
-<pre><code>Data Load failure exception
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>The following issues can cause the failure:</p>
-<ol>
-    <li>
-        <p>The core-site.xml, hive-site.xml, yarn-site.xml, and 
carbon.properties files are not consistent
-            across all nodes of the cluster.</p>
-    </li>
-    <li>
-        <p>The path to the hdfs ddl is not configured correctly in 
carbon.properties.</p>
-    </li>
-</ol>
-<p><strong>Procedure</strong></p>
-<p>Follow the steps to ensure the following configuration files are consistent 
across all the
-    nodes:</p>
-<ol>
-    <li>
-        <p>Copy the core-site.xml, hive-site.xml, yarn-site.xml, and 
carbon.properties files from the master
-            node to all the other nodes in the cluster.<br>
-            For example, you can use scp to copy these files to all the 
nodes.</p>
-        <p>Note: Set the path to the hdfs ddl in carbon.properties on the master 
node.</p>
-    </li>
-    <li>
-        <p>For the changes to take effect, restart the Spark cluster.</p>
-    </li>
-</ol>
-<h2 id="failed-to-execute-concurrent-operations">Failed to execute Concurrent 
Operations.
-</h2>
-<p><strong>Symptom</strong></p>
-<p>Execution of concurrent operations (Load, Insert, Update) on a table by 
multiple workers fails with
-    the following exception:</p>
-<pre><code>Table is locked for updation.
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>Concurrency not supported.</p>
-<p><strong>Procedure</strong></p>
-<p>A worker must wait for the running query to complete and the table to 
release its lock before
-    another query execution can succeed. (A retry sketch follows.)</p>
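-<p>A minimal retry sketch for the Spark shell, assuming the CarbonSession from 
the Quick Start is bound to 'carbon'; the check against the lock error text is an 
assumption for illustration, not a CarbonData API:</p>
-<pre><code>// Hypothetical helper: retry a statement while the table lock is held.
-def runWithRetry(stmt: String, attempts: Int = 5): Unit = {
-  var remaining = attempts
-  var done = false
-  while (!done &amp;&amp; remaining &gt; 0) {
-    try {
-      carbon.sql(stmt)            // assumes the 'carbon' session from Quick Start
-      done = true
-    } catch {
-      case e: Exception if e.getMessage != null &amp;&amp;
-          e.getMessage.contains("locked for updation") =&gt;
-        remaining -= 1
-        Thread.sleep(5000)        // wait for the lock holder to finish
-    }
-  }
-  if (!done) sys.error(s"Gave up after $attempts attempts: $stmt")
-}
-</code></pre>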
-<h2 id="failed-to-create-a-table-with-a-single-numeric-column">Failed to 
create a table
-    with a single numeric column.</h2>
-<p><strong>Symptom</strong></p>
-<p>Execution fails with the following exception:</p>
-<pre><code>Table creation fails.
-</code></pre>
-<p><strong>Possible Cause</strong></p>
-<p>Behavior not supported.</p>
-<p><strong>Procedure</strong></p>
-<p>At least one column that can be treated as a dimension is mandatory for table 
creation.</p>
-
