Docs/format md files for pdf (#1)

* Modified MDs for PdfGeneration


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/0c6f5f34
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/0c6f5f34
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/0c6f5f34

Branch: refs/heads/branch-1.1
Commit: 0c6f5f34c3724d40aa7aac08ee63a7193167782b
Parents: a8b6726
Author: Jatin Demla <jatin.deml...@gmail.com>
Authored: Wed May 24 00:46:22 2017 +0530
Committer: ravipesala <ravi.pes...@gmail.com>
Committed: Thu Jun 15 12:57:23 2017 +0530

----------------------------------------------------------------------
 docs/configuration-parameters.md     |  8 ++--
 docs/data-management.md              |  9 ----
 docs/ddl-operation-on-carbondata.md  | 35 ++++++++------
 docs/dml-operation-on-carbondata.md  |  2 +-
 docs/faq.md                          | 20 ++++++--
 docs/file-structure-of-carbondata.md |  7 +--
 docs/installation-guide.md           | 78 ++++++++++++++++---------------
 docs/quick-start-guide.md            | 39 ++++++++++++----
 docs/troubleshooting.md              |  9 ++--
 docs/useful-tips-on-carbondata.md    |  2 +-
 10 files changed, 121 insertions(+), 88 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index e4f8f33..c63f73d 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -114,7 +114,7 @@ This section provides the details of all the configurations required for CarbonData
 
 | Parameter | Default Value | Description |
 |-----------------------------------|---------------|-------------|
-| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some number of segments from being compacted then he can set this property. Example: carbon.numberof.preserve.segments=2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default. |
+| carbon.numberof.preserve.segments | 0 | If the user wants to preserve some number of segments from being compacted then he can set this property. Example: carbon.numberof.preserve.segments = 2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default. |
 | carbon.allowed.compaction.days | 0 | Compaction will merge the segments which are loaded within the specific number of days configured. Example: If the configuration is 2, then the segments which are loaded in the time frame of 2 days only will get merged. Segments which are loaded 2 days apart will not be merged. This is disabled by default. |
 | carbon.enable.auto.load.merge | false | To enable compaction during data loading. |
 
@@ -130,9 +130,9 @@ This section provides the details of all the configurations required for CarbonData
   
 | Parameter | Default Value | Description |
 |---------------------------------------|---------------------|-------------|
-| high.cardinality.identify.enable | true | If the parameter is true, the high cardinality columns of the dictionary code are automatically recognized and these columns will not be used as global dictionary encoding. If the parameter is false, all dictionary encoding columns are used as dictionary encoding. The high cardinality column must meet the following requirements: value of cardinality > configured value of high.cardinalityEqually, the value of cardinality is higher than the threshold.value of cardinality/ row number x 100 > configured value of high.cardinality.row.count.percentageEqually, the ratio of the cardinality value to data row number is higher than the configured percentage. |
-| high.cardinality.threshold | 1000000 | It is a threshold to identify high cardinality of the columns.If the value of columns' cardinality > the configured value, then the columns are excluded from dictionary encoding. |
-| high.cardinality.row.count.percentage | 80 | Percentage to identify whether column cardinality is more than configured percent of total row count.Configuration value formula:Value of cardinality/ row number x 100 > configured value of high.cardinality.row.count.percentageThe value of the parameter must be larger than 0. |
+| high.cardinality.identify.enable | true | If the parameter is true, the high cardinality columns of the dictionary code are automatically recognized and these columns will not be used as global dictionary encoding. If the parameter is false, all dictionary encoding columns are used as dictionary encoding. A high cardinality column must meet the following requirements: value of cardinality > configured value of high.cardinality.threshold (equally, the value of cardinality is higher than the threshold); value of cardinality / row number x 100 > configured value of high.cardinality.row.count.percentage (equally, the ratio of the cardinality value to data row number is higher than the configured percentage). |
+| high.cardinality.threshold | 1000000 | It is a threshold to identify high cardinality of the columns. If the value of a column's cardinality > the configured value, then the column is excluded from dictionary encoding. |
+| high.cardinality.row.count.percentage | 80 | Percentage to identify whether column cardinality is more than the configured percent of total row count. Configuration value formula: value of cardinality / row number x 100 > configured value of high.cardinality.row.count.percentage. The value of the parameter must be larger than 0. |
 | carbon.cutOffTimestamp | 1970-01-01 05:30:00 | Sets the start date for calculating the timestamp. Java counts the number of milliseconds from start of "1970-01-01 00:00:00". This property is used to customize the start position. For example "2000-01-01 00:00:00". The date must be in the form "carbon.timestamp.format". NOTE: CarbonData supports storing data up to 68 years from the cut-off time defined. For example, if the cut-off time is 1970-01-01 05:30:00, then the data can be stored up to 2038-01-01 05:30:00. |
 | carbon.timegranularity | SECOND | The property used to set the data granularity level DAY, HOUR, MINUTE, or SECOND. |
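
A hedged sketch of how these tuning options might sit together in `carbon.properties` (the values below are illustrative examples, not recommendations):

```
# Compaction tuning (illustrative values)
carbon.numberof.preserve.segments=2
carbon.allowed.compaction.days=2
carbon.enable.auto.load.merge=true
# High-cardinality identification (illustrative values)
high.cardinality.identify.enable=true
high.cardinality.threshold=1000000
high.cardinality.row.count.percentage=80
# Timestamp handling (illustrative values)
carbon.cutOffTimestamp=2000-01-01 00:00:00
carbon.timegranularity=SECOND
```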
   

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/data-management.md
----------------------------------------------------------------------
diff --git a/docs/data-management.md b/docs/data-management.md
index 42411de..81866a1 100644
--- a/docs/data-management.md
+++ b/docs/data-management.md
@@ -155,12 +155,3 @@ CLEAN FILES FOR TABLE table1
     To update we need to specify the column expression with an optional filter condition(s).

     For update commands refer to [DML operations on CarbonData](dml-operation-on-carbondata.md).
-
-
-    
-
-
-
-
- 
- 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/ddl-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/ddl-operation-on-carbondata.md b/docs/ddl-operation-on-carbondata.md
index 6222714..66c9d30 100644
--- a/docs/ddl-operation-on-carbondata.md
+++ b/docs/ddl-operation-on-carbondata.md
@@ -20,7 +20,7 @@
 # DDL Operations on CarbonData
 This tutorial guides you through the data definition language support provided by CarbonData.
 
-## Overview 
+## Overview
 The following DDL operations are supported in CarbonData :
 
 * [CREATE TABLE](#create-table)
@@ -37,6 +37,7 @@ The following DDL operations are supported in CarbonData :
 
 ## CREATE TABLE
   This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
+
 ```
    CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
                     [(col_name data_type , ...)]
@@ -49,9 +50,9 @@ The following DDL operations are supported in CarbonData :
 
 | Parameter | Description | Optional |
 |---------------|------------------------------------------------|----------|
-| db_name | Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. | Yes |
-| field_list | Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. | No |
-| table_name | The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. | No |
+| db_name | Name of the database. Database name should consist of alphanumeric characters and underscore(\_) special character. | Yes |
+| field_list | Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(\_) special character. | No |
+| table_name | The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(\_) special character. | No |
 | STORED BY | "org.apache.carbondata.format", identifies and creates a CarbonData table. | No |
 | TBLPROPERTIES | List of CarbonData table properties. |  |
 
@@ -62,6 +63,7 @@ The following DDL operations are supported in CarbonData :
    - **Dictionary Encoding Configuration**
 
       Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.
+
 ```
        TBLPROPERTIES ('DICTIONARY_EXCLUDE'='column1, column2')
        TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
@@ -72,15 +74,17 @@ The following DDL operations are supported in CarbonData :
    - **Row/Column Format Configuration**
 
       Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.
+
 ```
-TBLPROPERTIES ('COLUMN_GROUPS'='(column1, column2),
-(Column3,Column4,Column5)')
+       TBLPROPERTIES ('COLUMN_GROUPS'='(column1, column2),
+       (Column3,Column4,Column5)')
 ```
 
    - **Table Block Size Configuration**
 
     The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.
     If you do not specify this value in the DDL command, default value is used.
+
 ```
        TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
 ```
@@ -91,6 +95,7 @@ TBLPROPERTIES ('COLUMN_GROUPS'='(column1, column2),
 
       Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns which are in rear position.
       By default inverted index is enabled. The user can disable the inverted index creation for some columns.
+
 ```
        TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
 ```
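
Taken together, a hedged sketch of a CREATE TABLE statement exercising the properties above (schema, table, and column names are illustrative):

```
   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
     productNumber Int,
     productName String,
     storeCity String,
     saleQuantity Int)
   STORED BY 'org.apache.carbondata.format'
   TBLPROPERTIES ('COLUMN_GROUPS'='(productNumber,saleQuantity)',
                  'DICTIONARY_EXCLUDE'='productName',
                  'DICTIONARY_INCLUDE'='productNumber',
                  'NO_INVERTED_INDEX'='productName',
                  'TABLE_BLOCKSIZE'='512')
```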
@@ -188,7 +193,7 @@ This command is used to add a new column to the existing table.
 
|--------------------|-----------------------------------------------------------------------------------------------------------|
 | db_Name            | Name of the database. If this parameter is left unspecified, the current database is selected. |
 | table_name         | Name of the existing table. |
-| col_name data_type | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (_). |
+| col_name data_type | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (\_). |
 
 NOTE: Do not name the column after name, tupleId, PositionId, and PositionReference when creating Carbon tables because they are used internally by UPDATE, DELETE, and secondary index.
 
@@ -207,15 +212,18 @@ NOTE: Do not name the column after name, tupleId, PositionId, and PositionReference
 ```
 
 ```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DICTIONARY_EXCLUDE'='b1');
+    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
+    TBLPROPERTIES('DICTIONARY_EXCLUDE'='b1');
 ```
 
 ```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DICTIONARY_INCLUDE'='a1');
+    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
+    TBLPROPERTIES('DICTIONARY_INCLUDE'='a1');
 ```
 
 ```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DEFAULT.VALUE.a1'='10');
+    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
+    TBLPROPERTIES('DEFAULT.VALUE.a1'='10');
 ```
 
 
@@ -232,7 +240,7 @@ This command is used to delete an existing column or multiple columns in a table.
 
|------------|----------------------------------------------------------------------------------------------------------|
 | db_Name    | Name of the database. If this parameter is left unspecified, the current database is selected. |
 | table_name | Name of the existing table. |
-| col_name   | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (_) |
+| col_name   | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (\_) |
 
 #### Usage Guidelines
 
@@ -270,7 +278,8 @@ If the table contains 4 columns namely a1, b1, c1, and d1.
 This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher.
 
 ```
-    ALTER TABLE [db_name.]table_name CHANGE col_name col_name changed_column_type;
+    ALTER TABLE [db_name.]table_name
+    CHANGE col_name col_name changed_column_type;
 ```
 
 #### Parameter Description
@@ -278,7 +287,7 @@ This command is used to change the data type from INT to BIGINT or decimal precision
 
|---------------------|-----------------------------------------------------------------------------------------------------------|
 | db_Name             | Name of the database. If this parameter is left unspecified, the current database is selected. |
 | table_name          | Name of the existing table. |
-| col_name            | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (_). |
+| col_name            | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (\_). |
 | changed_column_type | The change in the data type. |
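
For instance, a sketch of the two supported changes under the syntax above, widening INT to BIGINT and raising decimal precision (database, table, and column names are illustrative):

```
    ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT;
    ALTER TABLE test_db.carbon CHANGE a2 a2 DECIMAL(18,2);
```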
 
 #### Usage Guidelines

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/dml-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/dml-operation-on-carbondata.md b/docs/dml-operation-on-carbondata.md
index 579b9cb..f9d9f45 100644
--- a/docs/dml-operation-on-carbondata.md
+++ b/docs/dml-operation-on-carbondata.md
@@ -107,7 +107,7 @@ You can use the following options to load data:
 - **COMPLEX_DELIMITER_LEVEL_2:** Split the complex type nested data column in a row. Applies level_1 delimiter & applies level_2 based on complex data type (eg., a:b$c:d --> Array<Array<String>> = {{a,b},{c,d}}).
 
     ```
-    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':') 
+    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':')
     ```
 
 - **ALL_DICTIONARY_PATH:** All dictionary files path.
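
A hedged sketch of a load command combining both delimiter levels (the file path and table name are illustrative, and COMPLEX_DELIMITER_LEVEL_1 is assumed here as the companion option for the outer delimiter):

```
    LOAD DATA LOCAL INPATH '/opt/rawdata/data.csv' INTO TABLE complex_table
    OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$',
            'COMPLEX_DELIMITER_LEVEL_2'=':')
```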

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/faq.md
----------------------------------------------------------------------
diff --git a/docs/faq.md b/docs/faq.md
index cae4f97..88db7d5 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -58,12 +58,16 @@ To ignore the Bad Records from getting stored in the raw csv, we need to set the
 The store location specified while creating carbon session is used by the CarbonData to store the meta data like the schema, dictionary files, dictionary meta data and sort indexes.
 
 Try creating ``carbonsession`` with ``storepath`` specified in the following manner :
+
 ```
-val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(<store_path>)
+val carbon = SparkSession.builder().config(sc.getConf)
+             .getOrCreateCarbonSession(<store_path>)
 ```
 Example:
+
 ```
-val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store ")
+val carbon = SparkSession.builder().config(sc.getConf)
+             .getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store")
 ```
 
 ## What is Carbon Lock Type?
@@ -77,7 +81,8 @@ In order to build CarbonData project it is necessary to specify the spark profil
 
 ## How Carbon will behave when executing insert operation in abnormal scenarios?
 Carbon supports insert operation, you can refer to the syntax mentioned in [DML Operations on CarbonData](http://carbondata.apache.org/dml-operation-on-carbondata).
-First, create a soucre table in spark-sql and load data into this created table. 
+First, create a source table in spark-sql and load data into this created table.
+
 ```
 CREATE TABLE source_table(
 id String,
@@ -85,6 +90,7 @@ name String,
 city String)
 ROW FORMAT DELIMITED FIELDS TERMINATED BY ",";
 ```
+
 ```
 SELECT * FROM source_table;
 id  name    city
@@ -92,9 +98,11 @@ id  name    city
 2   erlu    hangzhou
 3   davi    shenzhen
 ```
+
 **Scenario 1** :
 
 Suppose the column order in the carbon table is different from the source table. Using the script "SELECT * FROM carbon table" to query will return the column order similar to the source table, rather than the carbon table's column order as expected.
+
 ```
 CREATE TABLE IF NOT EXISTS carbon_table(
 id String,
@@ -102,9 +110,11 @@ city String,
 name String)
 STORED BY 'carbondata';
 ```
+
 ```
 INSERT INTO TABLE carbon_table SELECT * FROM source_table;
 ```
+
 ```
 SELECT * FROM carbon_table;
 id  city    name
@@ -112,9 +122,11 @@ id  city    name
 2   erlu    hangzhou
 3   davi    shenzhen
 ```
+
 As the result shows, the second column in the carbon table is city, but what is inside is name, such as jack. This behaviour is the same as inserting data into a hive table.
 
 If you want to insert data into the corresponding columns in the carbon table, you have to specify the same column order in the insert statement.
+
 ```
 INSERT INTO TABLE carbon_table SELECT id, city, name FROM source_table;
 ```
@@ -122,9 +134,11 @@ INSERT INTO TABLE carbon_table SELECT id, city, name FROM source_table;
 **Scenario 2** :
 
 Insert operation will fail when the number of columns in the carbon table is different from the columns specified in the select statement. The following insert operation will fail.
+
 ```
 INSERT INTO TABLE carbon_table SELECT id, city FROM source_table;
 ```
+
 **Scenario 3** :
 
 When the column type in the carbon table is different from the column specified in the select statement, the insert operation will still succeed, but you may get NULL in the result, because NULL is substituted when the type conversion fails.
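
A minimal sketch of Scenario 3, assuming a hypothetical table whose second column is Int while the selected source column is a non-numeric String:

```
CREATE TABLE IF NOT EXISTS carbon_table2(
id String,
age Int)
STORED BY 'carbondata';

-- name is a String; values that cannot be converted to Int are loaded as NULL
INSERT INTO TABLE carbon_table2 SELECT id, name FROM source_table;
```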

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/file-structure-of-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/file-structure-of-carbondata.md b/docs/file-structure-of-carbondata.md
index e6be48d..7ac234c 100644
--- a/docs/file-structure-of-carbondata.md
+++ b/docs/file-structure-of-carbondata.md
@@ -24,7 +24,7 @@ CarbonData files contain groups of data called blocklets, along with all require
 The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.
 
 ### Understanding CarbonData File Structure
-* Block : It would be as same as HDFS block, CarbonData creates one file for each data block, user can specify TABLE_BLOCKSIZE during creation table. Each file contains File Header, Blocklets and File Footer. 
+* Block : It is the same as an HDFS block. CarbonData creates one file for each data block; the user can specify TABLE_BLOCKSIZE during table creation. Each file contains File Header, Blocklets and File Footer.
 
 ![CarbonData File Structure](../docs/images/carbon_data_file_structure_new.png?raw=true)
 
@@ -32,7 +32,7 @@ The file footer can be read once to build the indices in memory, which can be ut
 * File Footer : it contains Number of rows, segmentinfo, all blocklets’ info and index; you can find the detail from the below diagram.
 * Blocklet : Rows are grouped to form a blocklet; the size of the blocklet is configurable and the default size is 64MB. A Blocklet contains Column Page groups for each column.
 * Column Page Group : Data of one column, further divided into pages; it is guaranteed to be contiguous in the file.
-* Page : It has the data of one column and the number of row is fixed to 32000 size. 
+* Page : It has the data of one column and the number of rows is fixed to 32000.
 
 ![CarbonData File Format](../docs/images/carbon_data_format_new.png?raw=true)
 
@@ -40,6 +40,3 @@ The file footer can be read once to build the indices in memory, which can be ut
 * Data Page: Contains the encoded data of a column or group of columns.
 * Row ID Page (optional): Contains the row ID mappings used when the data page 
is stored as an inverted index.
 * RLE Page (optional): Contains additional metadata used when the data page is 
RLE coded.
-
-
-

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/installation-guide.md
----------------------------------------------------------------------
diff --git a/docs/installation-guide.md b/docs/installation-guide.md
index f4ca656..d9f27dd 100644
--- a/docs/installation-guide.md
+++ b/docs/installation-guide.md
@@ -54,24 +54,24 @@ followed by :
     
 6. In Spark node[master], configure the properties mentioned in the following table in `$SPARK_HOME/conf/spark-defaults.conf` file.
 
-   | Property | Value | Description |
-   |---------------------------------|-------------------------------------------------------------------|--------------|
-   | spark.driver.extraJavaOptions | `-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |
-   | spark.executor.extraJavaOptions | `-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. **NOTE**: You can enter multiple values separated by space. |
+| Property | Value | Description |
+|---------------------------------|-------------------------------------------------------------------|--------------|
+| spark.driver.extraJavaOptions | `-Dcarbon.properties.filepath = $SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |
+| spark.executor.extraJavaOptions | `-Dcarbon.properties.filepath = $SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. **NOTE**: You can enter multiple values separated by space. |
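
A sketch of the resulting `spark-defaults.conf` entries (paths illustrative). Note that a `-D` definition must reach the JVM as a single token, so when actually configuring there should be no spaces around `=`; the spaces in the table above are presumably only for PDF line wrapping:

```
# $SPARK_HOME/conf/spark-defaults.conf (illustrative paths)
spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=/opt/spark/conf/carbon.properties
spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=/opt/spark/conf/carbon.properties
```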
 
 7. Add the following properties in `$SPARK_HOME/conf/carbon.properties` file:
 
-   | Property             | Required | Description                              | Example                              | Remark  |
-   |----------------------|----------|------------------------------------------|--------------------------------------|---------|
-   | carbon.storelocation | NO       | Location where data CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore      | Propose to set HDFS directory |
+| Property             | Required | Description                              | Example                              | Remark  |
+|----------------------|----------|------------------------------------------|--------------------------------------|---------|
+| carbon.storelocation | NO       | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore      | Propose to set HDFS directory |
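
And a minimal sketch of the corresponding `carbon.properties` entry, reusing the example value from the table:

```
# $SPARK_HOME/conf/carbon.properties
carbon.storelocation=hdfs://HOSTNAME:PORT/Opt/CarbonStore
```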
 
 
 8. Verify the installation. For example:
 
-   ```
-   ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
-   --executor-memory 2G
-   ```
+```
+./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
+--executor-memory 2G
+```
 
 **NOTE**: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
 
@@ -98,37 +98,37 @@ To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Opera
 
 3. Create `tar.gz` file of carbonlib folder and move it inside the carbonlib folder.
 
-    ```
-       cd $SPARK_HOME
-       tar -zcvf carbondata.tar.gz carbonlib/
-       mv carbondata.tar.gz carbonlib/
-    ```
+```
+cd $SPARK_HOME
+tar -zcvf carbondata.tar.gz carbonlib/
+mv carbondata.tar.gz carbonlib/
+```
 
 4. Configure the properties mentioned in the following table in `$SPARK_HOME/conf/spark-defaults.conf` file.
 
-   | Property | Description | Value |
-   |---------------------------------|------------------------------------------|-------|
-   | spark.master | Set this value to run the Spark in yarn cluster mode. | Set yarn-client to run the Spark in yarn cluster mode. |
-   | spark.yarn.dist.files | Comma-separated list of files to be placed in the working directory of each executor. |`$SPARK_HOME/conf/carbon.properties` |
-   | spark.yarn.dist.archives | Comma-separated list of archives to be extracted into the working directory of each executor. |`$SPARK_HOME/carbonlib/carbondata.tar.gz` |
-   | spark.executor.extraJavaOptions | A string of extra JVM options to pass to executors. For instance  **NOTE**: You can enter multiple values separated by space. |`-Dcarbon.properties.filepath=carbon.properties` |
-   | spark.executor.extraClassPath | Extra classpath entries to prepend to the classpath of executors. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath |`carbondata.tar.gz/carbonlib/*` |
-   | spark.driver.extraClassPath | Extra classpath entries to prepend to the classpath of the driver. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. |`$SPARK_HOME/carbonlib/*` |
-   | spark.driver.extraJavaOptions | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |`-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` |
+| Property | Description | Value |
+|---------------------------------|------------------------------------------|-------|
+| spark.master | Set this value to run the Spark in yarn cluster mode. | Set yarn-client to run the Spark in yarn cluster mode. |
+| spark.yarn.dist.files | Comma-separated list of files to be placed in the working directory of each executor. |`$SPARK_HOME/conf/carbon.properties` |
+| spark.yarn.dist.archives | Comma-separated list of archives to be extracted into the working directory of each executor. |`$SPARK_HOME/carbonlib/carbondata.tar.gz` |
+| spark.executor.extraJavaOptions | A string of extra JVM options to pass to executors. For instance  **NOTE**: You can enter multiple values separated by space. |`-Dcarbon.properties.filepath = carbon.properties` |
+| spark.executor.extraClassPath | Extra classpath entries to prepend to the classpath of executors. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath |`carbondata.tar.gz/carbonlib/*` |
+| spark.driver.extraClassPath | Extra classpath entries to prepend to the classpath of the driver. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. |`$SPARK_HOME/carbonlib/*` |
+| spark.driver.extraJavaOptions | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |`-Dcarbon.properties.filepath = $SPARK_HOME/conf/carbon.properties` |
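
A consolidated sketch of these YARN-mode entries (paths illustrative; as above, no spaces around `=` inside the `-D` options when actually configuring):

```
# $SPARK_HOME/conf/spark-defaults.conf (illustrative paths)
spark.master                     yarn-client
spark.yarn.dist.files            /opt/spark/conf/carbon.properties
spark.yarn.dist.archives         /opt/spark/carbonlib/carbondata.tar.gz
spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=carbon.properties
spark.executor.extraClassPath    carbondata.tar.gz/carbonlib/*
spark.driver.extraClassPath      /opt/spark/carbonlib/*
spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=/opt/spark/conf/carbon.properties
```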
 
 
 5. Add the following properties in `$SPARK_HOME/conf/carbon.properties`:
 
-   | Property | Required | Description | Example | Default Value |
-   |----------------------|----------|------------------------------------------|--------------------------------------|---------------|
-   | carbon.storelocation | NO | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore | Propose to set HDFS directory|
+| Property | Required | Description | Example | Default Value |
+|----------------------|----------|------------------------------------------|--------------------------------------|---------------|
+| carbon.storelocation | NO | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore | Propose to set HDFS directory|
 
 6. Verify the installation.
 
-   ```
-     ./bin/spark-shell --master yarn-client --driver-memory 1g
-     --executor-cores 2 --executor-memory 2G
-   ```
+```
+ ./bin/spark-shell --master yarn-client --driver-memory 1g
+ --executor-cores 2 --executor-memory 2G
+```
  **NOTE**: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.

  Getting started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
@@ -141,11 +141,12 @@ To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Opera
 
    b. Run the following command to start the CarbonData thrift server.
 
-   ```
-   ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
-   --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-   $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR <carbon_store_path>
-   ```
+```
+./bin/spark-submit
+--conf spark.sql.hive.thriftServer.singleSession=true
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR <carbon_store_path>
+```
 
 | Parameter | Description | Example |
 |---------------------|--------------------|---------|
@@ -157,7 +158,8 @@ To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Opera
    * Start with default memory and executors.
 
 ```
-./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+./bin/spark-submit
+--conf spark.sql.hive.thriftServer.singleSession=true
 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
 $SPARK_HOME/carbonlib
 /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
index c7ad73b..1c490ac 100644
--- a/docs/quick-start-guide.md
+++ b/docs/quick-start-guide.md
@@ -61,22 +61,31 @@ import org.apache.spark.sql.CarbonSession._
 * Create a CarbonSession :
 
 ```
-val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<hdfs store path>")
+val carbon = SparkSession.builder().config(sc.getConf)
+             .getOrCreateCarbonSession("<hdfs store path>")
 ```
-**NOTE**: By default metastore location is pointed to `../carbon.metastore`, user can provide own metastore location to CarbonSession like `SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<hdfs store path>", "<local metastore path>")`
+**NOTE**: By default metastore location is pointed to `../carbon.metastore`, user can provide own metastore location to CarbonSession like `SparkSession.builder().config(sc.getConf)
+.getOrCreateCarbonSession("<hdfs store path>", "<local metastore path>")`
 
 #### Executing Queries
 
 ###### Creating a Table
 
 ```
-scala>carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY 'carbondata'")
+scala>carbon.sql("CREATE TABLE
+                        IF NOT EXISTS test_table(
+                                  id string,
+                                  name string,
+                                  city string,
+                                  age Int)
+                       STORED BY 'carbondata'")
 ```
 
 ###### Loading Data to a Table
 
 ```
-scala>carbon.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table")
+scala>carbon.sql("LOAD DATA INPATH 'sample.csv file path'
+                  INTO TABLE test_table")
 ```
 **NOTE**: Please provide the real file path of `sample.csv` for the above script.
 
@@ -85,7 +94,9 @@ scala>carbon.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table"
 ```
 scala>carbon.sql("SELECT * FROM test_table").show()
 
-scala>carbon.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY city").show()
+scala>carbon.sql("SELECT city, avg(age), sum(age)
+                  FROM test_table
+                  GROUP BY city").show()
 ```
 
 ## Interactive Analysis with Spark Shell Version 1.6
@@ -97,7 +108,8 @@ Start Spark shell by running the following command in the Spark directory:
 ```
 ./bin/spark-shell --jars <carbondata assembly jar path>
 ```
-**NOTE**: Assembly jar will be available after [building CarbonData](https://github.com/apache/carbondata/blob/master/build/README.md) and can be copied from `./assembly/target/scala-2.1x/carbondata_xxx.jar`
+**NOTE**: Assembly jar will be available after [building CarbonData](https://github.com/apache/carbondata/
+blob/master/build/README.md) and can be copied from `./assembly/target/scala-2.1x/carbondata_xxx.jar`
 
 **NOTE**: In this shell, SparkContext is readily available as `sc`.
 
@@ -119,7 +131,13 @@ val cc = new CarbonContext(sc, "<hdfs store path>")
 ###### Creating a Table
 
 ```
-scala>cc.sql("CREATE TABLE IF NOT EXISTS test_table (id string, name string, city string, age Int) STORED BY 'carbondata'")
+scala>cc.sql("CREATE TABLE
+              IF NOT EXISTS test_table (
+                         id string,
+                         name string,
+                         city string,
+                         age Int)
+              STORED BY 'carbondata'")
 ```
 To see the table created :
 
@@ -130,7 +148,8 @@ scala>cc.sql("SHOW TABLES").show()
 ###### Loading Data to a Table
 
 ```
-scala>cc.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table")
+scala>cc.sql("LOAD DATA INPATH 'sample.csv file path'
+              INTO TABLE test_table")
 ```
 **NOTE**: Please provide the real file path of `sample.csv` for the above script.
 
@@ -138,5 +157,7 @@ scala>cc.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table")
 
 ```
 scala>cc.sql("SELECT * FROM test_table").show()
-scala>cc.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY city").show()
+scala>cc.sql("SELECT city, avg(age), sum(age)
+              FROM test_table
+              GROUP BY city").show()
 ```

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/troubleshooting.md
----------------------------------------------------------------------
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 27ec8e3..5464997 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -62,11 +62,10 @@ who are building, deploying, and using CarbonData.
 
   2. Use the following command :
 
-    ```
-     "mvn -Pspark-2.1 -Dspark.version {yourSparkVersion} clean package"
-    ```
-
-    Note :  Refrain from using "mvn clean package" without specifying the profile.
+```
+"mvn -Pspark-2.1 -Dspark.version {yourSparkVersion} clean package"
+```
+Note :  Refrain from using "mvn clean package" without specifying the profile.
 
 ## Failed to execute load query on cluster.
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/0c6f5f34/docs/useful-tips-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/useful-tips-on-carbondata.md b/docs/useful-tips-on-carbondata.md
index bfddf29..40a3947 100644
--- a/docs/useful-tips-on-carbondata.md
+++ b/docs/useful-tips-on-carbondata.md
@@ -175,7 +175,7 @@ excessive memory usage.
 | Parameter | Default Value | Description/Tuning |
 |-----------|-------------|--------|
 |carbon.number.of.cores.while.loading|Default: 2. This value should be >= 2|Specifies the number of cores used for data processing during data loading in CarbonData. |
-|carbon.sort.size|Data loading|Default: 100000. The value should be >= 100.|Threshhold to write local file in sort step when loading data|
+|carbon.sort.size|Default: 100000. The value should be >= 100.|Threshold to write local file in sort step when loading data|
 |carbon.sort.file.write.buffer.size|Default:  50000.|DataOutputStream buffer. |
 |carbon.number.of.cores.block.sort|Default: 7 | If you have huge memory and CPUs, increase it as needed|
 |carbon.merge.sort.reader.thread|Default: 3 |Specifies the number of cores used for temp file merging during data loading in CarbonData.|
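
A hedged sketch of these load-tuning knobs in `carbon.properties`, using the defaults noted above only as a starting point:

```
carbon.number.of.cores.while.loading=2
carbon.sort.size=100000
carbon.sort.file.write.buffer.size=50000
carbon.number.of.cores.block.sort=7
carbon.merge.sort.reader.thread=3
```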
