Repository: incubator-carbondata
Updated Branches:
  refs/heads/master 10703cf60 -> 3374aeb74


update carbondata documents


Project: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/commit/acb9d223
Tree: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/tree/acb9d223
Diff: http://git-wip-us.apache.org/repos/asf/incubator-carbondata/diff/acb9d223

Branch: refs/heads/master
Commit: acb9d22385d4f554625f23885a6e64b7c9ceeeb1
Parents: 10703cf
Author: chenliang613 <chenliang...@huawei.com>
Authored: Thu Jan 19 18:32:30 2017 +0800
Committer: ravipesala <ravi.pes...@gmail.com>
Committed: Thu Jan 19 16:26:27 2017 +0530

----------------------------------------------------------------------
 README.md                       |  37 +++-----
 docs/overview-of-carbondata.md  | 178 -----------------------------------
 docs/quick-start-guide.md       |  71 ++++----------
 docs/table-of-content.md        |  47 ---------
 docs/use-cases-of-carbondata.md |  77 ---------------
 docs/user-guide-toc.md          |  47 ---------
 docs/using-carbondata.md        |  35 -------
 7 files changed, 32 insertions(+), 460 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index ce71666..afd3061 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
     under the License.
 -->
 
-<img src="/docs/images/format/CarbonData_logo.png" width="200" height="40">
+<img src="/docs/images/CarbonData_logo.png" width="200" height="40">
 
 Apache CarbonData(incubating) is an indexed columnar data format for fast analytics on big data platforms, e.g. Apache Hadoop, Apache Spark, etc.
 
@@ -36,36 +36,25 @@ CarbonData file format is a columnar store in HDFS, it has many features that a
 * Operable encoded data: Through supporting efficient compression and global encoding schemes, CarbonData can query on compressed/encoded data; the data is converted just before returning the results to the users, which is "late materialized".
 * Support for various use cases with one single data format: interactive OLAP-style query, Sequential Access (big scan), Random Access (narrow scan).
 
-## Building CarbonData,using development tools and cluster deployment guide
-Please refer [Building CarbonData and Configuring IDE](https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration)
+## Building CarbonData
+CarbonData is built using Apache Maven. To get started, see [building CarbonData](https://github.com/apache/incubator-carbondata/blob/master/build).
 
-Please refer [Cluster Deployment Guide](https://cwiki.apache.org/confluence/display/CARBONDATA/Cluster+deployment+guide)
-
-## Getting Started
-Read the [quick start](https://cwiki.apache.org/confluence/display/CARBONDATA/Quick+Start)
-
-## Usage of CarbonData
- [DDL Operations on CarbonData](https://cwiki.apache.org/confluence/display/CARBONDATA/DDL+operations+on+CarbonData)
-
- [DML Operations on CarbonData](https://cwiki.apache.org/confluence/display/CARBONDATA/DML+operations+on+CarbonData)
-
- [CarbonData data management](https://cwiki.apache.org/confluence/display/CARBONDATA/Data+Management)
-
-## CarbonData File Structure and interfaces
-Please refer [CarbonData File Format](https://cwiki.apache.org/confluence/display/CARBONDATA/CarbonData+File+Structure+and+Format)
-
-## CarbonData FAQ
-[Configurations For Optimizing CarbonData Performance](https://cwiki.apache.org/confluence/display/CARBONDATA/Configurations+For+Optimizing+CarbonData+Performance)
-
-[Suggestion to create CarbonData table]
-(https://cwiki.apache.org/confluence/display/CARBONDATA/Suggestion+to+create+CarbonData+table)
+## Online Documentation
+* [Quick Start](https://github.com/apache/incubator-carbondata/blob/master/docs/quick-start-guide.md)
+* [Data Management](https://github.com/apache/incubator-carbondata/blob/master/docs/data-management.md)
+* [DDL Operations on CarbonData](https://github.com/apache/incubator-carbondata/blob/master/docs/ddl-operation-on-carbondata.md)
+* [DML Operations on CarbonData](https://github.com/apache/incubator-carbondata/blob/master/docs/dml-operation-on-carbondata.md)
+* [Cluster Installation and Deployment](https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md)
+* [FAQ](https://github.com/apache/incubator-carbondata/blob/master/docs/faq.md)
+* [Troubleshooting](https://github.com/apache/incubator-carbondata/blob/master/docs/troubleshooting.md)
+* [Useful Tips](https://github.com/apache/incubator-carbondata/blob/master/docs/useful-tips-on-carbondata.md)
 
 ## Other Technical Material
 [Apache CarbonData meetup material](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609)
 
 ## Fork and Contribute
 This is an active open source project for everyone, and we are always open to people who want to use this system or contribute to it.
-This guide document introduce [how to contribute to CarbonData](https://cwiki.apache.org/confluence/display/CARBONDATA/Contributing+to+CarbonData).
+This guide explains [how to contribute to CarbonData](https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.md).
 
 ## Contact us
 To get involved in CarbonData:

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/overview-of-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/overview-of-carbondata.md b/docs/overview-of-carbondata.md
deleted file mode 100644
index 5e983cb..0000000
--- a/docs/overview-of-carbondata.md
+++ /dev/null
@@ -1,178 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-# Overview
-This tutorial provides a detailed overview about :
-
-* [Introduction](#introduction)
-* [CarbonData File Structure](#carbondata-file-structure)
-* [Features](#features)
-* [Data Types](#data-types)
-* [Interfaces](#interfaces)
-
-##  Introduction
-
-CarbonData is a fully indexed columnar and Hadoop native data-store for processing heavy analytical workloads and detailed queries on big data. CarbonData allows faster interactive query using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps in speeding up queries by an order of magnitude over PetaBytes of data.
-
-In customer benchmarks, CarbonData has proven to manage Petabyte of data running on extraordinarily low-cost hardware and answers queries around 10 times faster than the current open source solutions (column-oriented SQL on Hadoop data-stores).
-
-Some of the salient features of CarbonData are :
-
-* Low-Latency for various types of data access patterns like Sequential, Random and OLAP.
-* Fast query on fast data.
-* Space efficiency.
-* General format available on Hadoop-ecosystem.
-
-##  CarbonData File Structure
-
-CarbonData files contain groups of data called blocklets, along with all required information like schema, offsets and indices etc, in a file footer, co-located in HDFS.
-
-The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.
-
-Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.
-
-![CarbonData File Structure](../../../src/site/markdown/images/carbon_data_file_structure_new.png?raw=true)
-
-Each data chunk contains multiple groups of data called as pages. There are three types of pages.
-
-* Data Page: Contains the encoded data of a column/group of columns.
-* Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.
-* RLE Page (optional): Contains additional metadata used when the data page is RLE coded.
-
-![CarbonData File Format](../../../src/site/markdown/images/carbon_data_format_new.png?raw=true)
-
-##  Features
-
-CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittable, compression schema, complex data type etc and CarbonData has following unique features:
-
-* Unique Data Organization: Though CarbonData stores data in Columnar format, it differs from traditional Columnar formats as the columns in each row-group(Data Block) is sorted independent of the other columns. Though this arrangement requires CarbonData to store the row-number mapping against each column value, it makes it possible to use binary search for faster filtering and since the values are sorted, same/similar values come together which yields better compression and offsets the storage overhead required by the row number mapping.
-
-* Advanced Push Down Optimizations: CarbonData pushes as much of query processing as possible close to the data to minimize the amount of data being read, processed, converted and transmitted/shuffled. Using projections and filters it reads only the required columns from the store and also reads only the rows that match the filter conditions provided in the query.
-
-* Multi Level Indexing: CarbonData uses multiple indices at various levels to enable faster search and speed up query processing.
-
-* Global Multi Dimensional Keys(MDK) based B+Tree Index for all non-measure columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.
-
-* Min-Max Index for all columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.
-
-* Data Block level Inverted Index for all columns: Aids in quickly locating the rows that contain the data matching search/filter criteria within a row group(Data Blocks).
-
-* Dictionary Encoding: Most databases and big data SQL data stores employ columnar encoding to achieve data compression by storing small integer numbers (surrogate value) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from few thousand to a million rows and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups. So, while reading the data, conversion from surrogate value to actual value needs to be done immediately after the data is read from the disk. But CarbonData employs global surrogate key which means that a common dictionary is maintained for the full store on one machine/node. So CarbonData can perform all the query processing work such as grouping/aggregation, sorting etc on light weight surrogate values. The conversion from surrogate to actual values needs to be done only on the final result. This procedure improves performance on two aspects. Conversion from surrogate values to actual values is done only for the final result rows which are much less than the actual rows read from the store. All query processing and computation such as grouping/aggregation, sorting, and so on is done on lightweight surrogate values which requires less memory and CPU time compared to actual values.
-
-* Deep Spark Integration: It has built-in spark integration for Spark 1.5, 1.6 and interfaces for Spark SQL, DataFrame API and query optimization. It supports bulk data ingestion and allows saving of spark dataframes as CarbonData files.
-
-* Update Delete Support: It supports batch updates like daily update scenarios for OLAP and Base+Delta file based design.
-
-* Store data along with index: Significantly accelerates query performance and reduces the I/O scans and CPU resources, when there are filters in the query. CarbonData index consists of multiple levels of indices. A processing framework can leverage this index to reduce the tasks it needs to schedule and process. It can also do skip scan in finer grain units (called blocklet) in task side scanning instead of scanning the whole file.
-
-* Operable encoded data: It supports efficient compression and global encoding schemes and can query on compressed/encoded data. The data can be converted just before returning the results to the users, which is "late materialized".
-
-* Column group: Allows multiple columns to form a column group that would be stored as row format. This reduces the row reconstruction cost at query time.
-
-* Support for various use cases with one single Data format: Examples are interactive OLAP-style query, Sequential Access (big scan) and Random Access (narrow scan).
-
-##  Data Types
-
-#### CarbonData supports the following data types:
-
-  * Numeric Types
-  * SMALLINT
-  * INT/INTEGER
-  * BIGINT
-  * DOUBLE
-  * DECIMAL
-
-  * Date/Time Types
-  * TIMESTAMP
-
-  * String Types
-  * STRING
-
-  * Complex Types
-    * arrays: ARRAY``<data_type>``
-    * structs: STRUCT``<col_name : data_type COMMENT col_comment, ...>``
-
-##  Interfaces
-
-####  API
-CarbonData can be used in following scenarios:
-
-* For MapReduce application user
-
-   This User API is provided by carbon-hadoop. In this scenario, user can process CarbonData files in his MapReduce application by choosing CarbonInput/OutputFormat, and is responsible for using it correctly. Currently only CarbonInputFormat is provided and OutputFormat will be provided soon.
-
-* For Spark user
-
-   This User API is provided by Spark itself. There are two levels of APIs
-
-   * **CarbonData File**
-
-      Similar to parquet, json, or other data source in Spark, CarbonData can be used with data source API. For example (please refer to DataFrameAPIExample for more detail):
-      
-```
-      // User can create a DataFrame from any data source 
-      // or transformation.
-      val df = ...
-
-      // Write data
-      // User can write a DataFrame to a CarbonData file
-      df.write
-      .format("carbondata")
-      .option("tableName", "carbontable")
-      .mode(SaveMode.Overwrite)
-      .save()
-
-
-      // read CarbonData by data source API
-      df = carbonContext.read
-      .format("carbondata")
-      .option("tableName", "carbontable")
-      .load("/path")
-
-      // User can then use DataFrame for analysis
-      df.count
-      SVMWithSGD.train(df, numIterations)
-
-      // User can also register the DataFrame with a table name, 
-      // and use SQL for analysis
-      df.registerTempTable("t1")  // register temporary table 
-                                  // in SparkSQL catalog
-      df.registerHiveTable("t2")  // Or, use a implicit funtion 
-                                  // to register to Hive metastore
-      sqlContext.sql("select count(*) from t1").show
-```
-
-   * **Managed CarbonData Table**
-
-      CarbonData has in built support for high level concept like Table, Database, and supports full data lifecycle management, instead of dealing with just files user can use CarbonData specific DDL to manipulate data in Table and Database level. Please refer [DDL](https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL) and [DML](https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML).
-      
-```
-      // Use SQL to manage table and query data
-      create database db1;
-      use database db1;
-      show databases;
-      create table tbl1 using org.apache.carbondata.spark;
-      load data into table tlb1 path 'some_files';
-      select count(*) from tbl1;
-```
-
-*   For developer who want to integrate CarbonData into processing engines like spark, hive or flink, use API provided by carbon-hadoop and carbon-processing:
-       - **Query** : Integrate carbon-hadoop with engine specific API, like spark data source API.
-
-       - **Data life cycle management** : CarbonData provides utility functions in carbon-processing to manage data life cycle, like data loading, compact, retention, schema evolution. Developer can implement DDLs of their choice and leverage these utility function to do data life cycle management.
-
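
The global dictionary encoding described above lends itself to a short illustration. The sketch below is purely illustrative plain Scala, not CarbonData internals: values are replaced by surrogate integers, the aggregation runs on the integers, and only the final result rows are decoded, which is the "late materialization" the text describes.

```
// Illustrative only: aggregate on surrogate integer keys, then decode
// just the final result rows ("late materialization").
object GlobalDictionarySketch {
  def main(args: Array[String]): Unit = {
    val cityColumn = Seq("shenzhen", "shenzhen", "wuhan")    // raw column values
    val dictionary = cityColumn.distinct.zipWithIndex.toMap  // value -> surrogate key
    val encoded    = cityColumn.map(dictionary)              // store/scan surrogates
    val counts     = encoded.groupBy(identity).map { case (k, v) => (k, v.size) }
    val reverse    = dictionary.map(_.swap)                  // surrogate key -> value
    counts.foreach { case (k, n) => println(s"${reverse(k)} -> $n") } // decode last
  }
}
```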

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/docs/quick-start-guide.md b/docs/quick-start-guide.md
index 8800ca6..ceeaac0 100644
--- a/docs/quick-start-guide.md
+++ b/docs/quick-start-guide.md
@@ -20,45 +20,20 @@
 # Quick Start
 This tutorial provides a quick introduction to using CarbonData.
 
-## Getting started with Apache CarbonData
-
-* [Installation](#installation)
-* [Prerequisites](#prerequisites)
-* [Interactive Analysis with Spark Shell Version 2.1](#interactive-analysis-with-spark-shell)
-  - Basics
-  - Executing Queries
-      * Creating a Table
-      * Loading Data to a Table
-      * Query Data from a Table
-* Interactive Analysis with Spark Shell Version 1.6
-   - Basics
-   - Executing Queries
-     * Creating a Table
-     * Loading Data to a Table
-     * Query Data from a Table
-* [Building CarbonData](#building-carbondata)
-
-
-##  Installation
-* Download a released package of [Spark 1.6.2 or 2.1.0](http://spark.apache.org/downloads.html).
-* Download and install [Apache Thrift 0.9.3](http://thrift-tutorial.readthedocs.io/en/latest/installation.html), make sure Thrift is added to system path.
-* Download [Apache CarbonData code](https://github.com/apache/incubator-carbondata) and build it. Please visit [Building CarbonData And IDE Configuration](https://github.com/apache/incubator-carbondata/blob/master/build/README.md) for more information.
-
 ##  Prerequisites
-
+* [Installation and building CarbonData](https://github.com/apache/incubator-carbondata/blob/master/build).
 * Create a sample.csv file using the following commands. The CSV file is required for loading data into CarbonData.
 
 ```
-$ cd carbondata
-$ cat > sample.csv << EOF
-  id,name,city,age
-  1,david,shenzhen,31
-  2,eason,shenzhen,27
-  3,jarry,wuhan,35
-  EOF
+cd carbondata
+cat > sample.csv << EOF
+id,name,city,age
+1,david,shenzhen,31
+2,eason,shenzhen,27
+3,jarry,wuhan,35
+EOF
 ```
 
-
 ## Interactive Analysis with Spark Shell
 
 ## Version 2.1
@@ -70,7 +45,7 @@ Apache Spark Shell provides a simple way to learn the API, as well as a powerful
 Start Spark shell by running the following command in the Spark directory:
 
 ```
-./bin/spark-shell --jars <carbondata jar path>
+./bin/spark-shell --jars <carbondata assembly jar path>
 ```
 
 In this shell, SparkSession is readily available as 'spark' and Spark context is readily available as 'sc'.
@@ -103,29 +78,27 @@ scala>carbon.sql("create table if not exists test_table
 ##### Loading Data to a Table
 
 ```
-scala>carbon.sql(s"load data inpath '${new 
java.io.File("../carbondata/sample.csv").getCanonicalPath}' into table 
test_table")
+scala>carbon.sql("load data inpath 'sample.csv file's path' into table 
test_table")
 ```
+NOTE:Please provide the real file path of sample.csv for the above script.
 
 ##### Query Data from a Table
 
 ```
-scala>spark.sql("select * from test_table").show
+scala>spark.sql("select * from test_table").show()
 
-scala>spark.sql("select city, avg(age),
-sum(age) from test_table group by city").show
+scala>spark.sql("select city, avg(age), sum(age) from test_table group by 
city").show()
 ```
 
-
 ## Interactive Analysis with Spark Shell
 ## Version 1.6
 
-
 #### Basics
 
 Start Spark shell by running the following command in the Spark directory:
 
 ```
-./bin/spark-shell --jars <carbondata jar path>
+./bin/spark-shell --jars <carbondata assembly jar path>
 ```
 
 NOTE: In this shell, SparkContext is readily available as sc.
@@ -154,25 +127,19 @@ scala>cc.sql("create table if not exists test_table (id string, name string, cit
 To see the table created :
 
 ```
-scala>cc.sql("show tables").show
+scala>cc.sql("show tables").show()
 ```
 
 ##### Loading Data to a Table
 
 ```
-scala>cc.sql(s"load data inpath '${new 
java.io.File("../carbondata/sample.csv").getCanonicalPath}' into table 
test_table")
+scala>cc.sql("load data inpath 'sample.csv file's path' into table test_table")
 ```
+NOTE:Please provide the real file path of sample.csv for the above script.
 
 ##### Query Data from a Table
 
 ```
-scala>cc.sql("select * from test_table").show
-scala>cc.sql("select city, avg(age), sum(age) from test_table group by 
city").show
+scala>cc.sql("select * from test_table").show()
+scala>cc.sql("select city, avg(age), sum(age) from test_table group by 
city").show()
 ```
-
-## Building CarbonData
-
-To get started, get CarbonData from the [downloads](http://carbondata.incubator.apache.org/) section on the [http://carbondata.incubator.apache.org.](http://carbondata.incubator.apache.org.)
-CarbonData uses Hadoop’s client libraries for HDFS and YARN and Spark's libraries. Downloads are pre-packaged for a handful of popular Spark versions.
-
-If you’d like to build CarbonData from source, visit [Building CarbonData And IDE Configuration](https://github.com/apache/incubator-carbondata/blob/master/build/README.md).
\ No newline at end of file
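
Putting the pieces of this guide together, a hypothetical end-to-end 2.1 session could look like the sketch below. The table schema matches the guide; the jar path, the CSV path, and the `stored by 'carbondata'` clause are placeholder assumptions to verify against the DDL documentation before use.

```
// Hypothetical session; start the shell first, e.g.
//   ./bin/spark-shell --jars /path/to/carbondata-assembly.jar   (path is an example)
// `carbon` is the Carbon-enabled session the guide creates.
scala> carbon.sql("create table if not exists test_table(id string, name string, city string, age int) stored by 'carbondata'")
scala> carbon.sql("load data inpath '/home/user/carbondata/sample.csv' into table test_table")
scala> carbon.sql("select * from test_table").show()
scala> carbon.sql("select city, avg(age), sum(age) from test_table group by city").show()
```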

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/table-of-content.md
----------------------------------------------------------------------
diff --git a/docs/table-of-content.md b/docs/table-of-content.md
deleted file mode 100644
index cbec50b..0000000
--- a/docs/table-of-content.md
+++ /dev/null
@@ -1,47 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-# Table of Contents
-
-* [Quick Start](quick-start-guide.md)
-    * [Getting started with Apache CarbonData]()
-* [User Guide](user-guide-toc.md)
-    * [Overview](overview-of-carbondata.md)
-       * Introduction
-       * CarbonData File Structure
-       * Features
-       * Data Types
-       * Interfaces
-    * [Installation Guide](installation-guide.md)
-       * Installing and Configuring CarbonData on Standalone Spark Cluster
-       * Installing and Configuring CarbonData on “Spark on YARN” Cluster
-    * [Configuring CarbonData](configuration-parameters.md)
-       * System Configuration
-       * Performance Configuration
-       * Miscellaneous Configuration
-       * Spark Configuration
-    * [Using CarbonData](using-carbondata.md)
-       * [Data Management](data-management.md)
-       * [DDL Operations on CarbonData](ddl-operation-on-carbondata.md )
-       * [DML Operations on CarbonData](dml-operation-on-carbondata.md )
-* [Useful Tips](useful-tips-on-carbondata.md)
-    * Suggestion to create CarbonData Table
-    * Configurations for Optimizing CarbonData Performance
-* [CarbonData Use Cases](use-cases-of-carbondata.md)
-* [Troubleshooting](troubleshooting.md)
-* [FAQs](faq.md)
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/use-cases-of-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/use-cases-of-carbondata.md b/docs/use-cases-of-carbondata.md
deleted file mode 100644
index 112b6bc..0000000
--- a/docs/use-cases-of-carbondata.md
+++ /dev/null
@@ -1,77 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-
-# CarbonData Use Cases
-This tutorial discusses about the problems that CarbonData addresses. It shall take you through the identified top use cases of CarbonData.
-
-## Introduction
-For big data interactive analysis scenarios, many customers expect sub-second response to query TB-PB level data on general hardware clusters with just a few nodes.
-
-In the current big data ecosystem, there are few columnar storage formats such as ORC and Parquet that are designed for SQL on Big Data. Apache Hive’s ORC format is a columnar storage format with basic indexing capability. However, ORC cannot meet the sub-second query response expectation on TB level data, as it performs only stride level dictionary encoding and all analytical operations such as filtering and aggregation is done on the actual data. Apache Parquet is a columnar storage format that can improve performance in comparison to ORC due to its more efficient storage organization. Though Parquet can provide query response on TB level data in a few seconds, it is still far from the sub-second expectation of interactive analysis users. Cloudera Kudu can effectively solve some query performance issues, but kudu is not hadoop native, can’t seamlessly integrate historic HDFS data into new kudu system.
-
-However, CarbonData uses specially engineered optimizations targeted to improve performance of analytical queries which can include filters, aggregation and distinct counts,
-the required data to be stored in an indexed, well organized, read-optimized format, CarbonData’s query performance can achieve sub-second response.
-
-## Motivation: Single Format to provide Low Latency Response for all Use Cases
-The main motivation behind CarbonData is to provide a single storage format for all the usecases of querying big data on Hadoop. Thus CarbonData is able to cover all use-cases
-into a single storage format.
-
-  ![Motivation](../../../src/site/markdown/images/carbon_data_motivation.png?raw=true)
-
-## Use Cases
-### Sequential Access
-  - Supports queries that select only a few columns with a group by clause but do not contain any filters.
-  This results in full scan over the complete store for the selected columns.
-  
-  ![Sequential_Scan](../../../src/site/markdown/images/carbon_data_full_scan.png?raw=true)
-  
-  **Scenario**
-  
-  - ETL jobs
-  - Log Analysis
-    
-### Random Access
-  - Supports Point Query. These are queries used from operational applications and usually select all or most of the columns and involves a large number of
-  filters which reduce the result to a small size. Such queries generally do not involve any aggregation or group by clause.
-    - Row-key query(like HBase)
-    - Narrow Scan
-    - Requires second/sub-second level low latency
-    
-   ![random_access](../../../src/site/markdown/images/carbon_data_random_scan.png?raw=true)
-    
-  **Scenario**
-
-   - Operational Query
-   - User Profiling
-    
-### Olap Style Query
-  - Supports Interactive data analysis for any dimensions. These are queries which are typically fired from Interactive Analysis tools.
-  Such queries often select a few columns and involves filters and group by on a column or a grouping expression.
-  It also supports queries that :
-    - Involves aggregation/join
-    - Roll-up,Drill-down,Slicing and Dicing
-    - Low-latency ad-hoc query
-    
-   ![Olap_style_query](../../../src/site/markdown/images/carbon_data_olap_scan.png?raw=true)
-    
-   **Scenario**
-    
-  - Dash-board reporting
-  - Fraud & Ad-hoc Analysis
-    

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/user-guide-toc.md
----------------------------------------------------------------------
diff --git a/docs/user-guide-toc.md b/docs/user-guide-toc.md
deleted file mode 100644
index c05fb09..0000000
--- a/docs/user-guide-toc.md
+++ /dev/null
@@ -1,47 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-# User Guide
-Welcome to Apache CarbonData. Apache CarbonData(incubating) is a new big data file format for faster interactive query using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps in speeding up queries by an order of magnitude over PetaBytes of data.
-This user guide provides a detailed description about the CarbonData and its features.
-
-Let's get started !
-
-* [Overview](overview-of-carbondata.md)
-    * Introduction
-    * CarbonData File Structure
-    * Features
-    * Data Types
-    * Interfaces
-* [Installation Guide](installation-guide.md)
-    * Installing and Configuring CarbonData on Standalone Spark Cluster
-    * Installing and Configuring CarbonData on "Spark on YARN" Cluster
-* [Configuring CarbonData](configuration-parameters.md)
-    * System Configuration
-    * Performance Configuration
-    * Miscellaneous Configuration
-    * Spark Configuration
-* [Using CarbonData](using-carbondata.md)
-    * [Data Management](data-management.md)
-    * [DDL Operations on CarbonData](ddl-operation-on-carbondata.md )
-    * [DML Operations on CarbonData](dml-operation-on-carbondata.md )
-
-
-
-
-

http://git-wip-us.apache.org/repos/asf/incubator-carbondata/blob/acb9d223/docs/using-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/using-carbondata.md b/docs/using-carbondata.md
deleted file mode 100644
index 83a3655..0000000
--- a/docs/using-carbondata.md
+++ /dev/null
@@ -1,35 +0,0 @@
-# Using CarbonData
-This tutorial discusses the disciplines related to management of data in Apache CarbonData.
-Following below each section is a brief introduction to respective disciplines related to data
-management.
-
-## Data Management
-This section shall be dealing with the disciplines related to managing data in the application,
-focusing on conceptual details related to operations like load data, delete data, update data
-and Compacting Data.
-
-For complete details refer to [Data Management](data-management.md)
-
-## Data Definition Language Support
-This section deals with the aspects related to creation and modification of the structure of database.
-It shall discuss in detail about
-
-*  Table creation
-*  Table deletion
-*  Table description
-*  Compaction
-
-For complete details refer to [DDL Operations on CarbonData](ddl-operation-on-carbondata.md )
-
-## Data Manipulation Language Support
-This section deals with the aspects related to data manipulation in database. It shall discuss in detail about selecting, loading and deleting in a database.
-This manipulation comprises of
-
-*  Loading data into database tables
-*  Retrieving existing data
-*  Deleting data from existing tables
-*  Deleting segments from existing tables
-*  Updating data in existing tables
-
-For complete details refer to [DML Operations on CarbonData](dml-operation-on-carbondata.md)
-
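
As a rough illustration of the DDL and DML surface this file describes, the following sketch runs a create/load/query/drop round trip through the SQL interface. It uses the `cc` CarbonContext from the quick-start guide; the table name, column list, and file path are hypothetical, and the exact statement grammar should be checked against the linked DDL/DML guides.

```
// Illustrative DDL/DML round trip; verify exact syntax against the guides.
scala> cc.sql("create table if not exists t1(id string, amount int) stored by 'carbondata'") // DDL: create
scala> cc.sql("describe t1").show()                                // DDL: describe table
scala> cc.sql("load data inpath '/path/to/data.csv' into table t1") // DML: load data
scala> cc.sql("select count(*) from t1").show()                    // DML: retrieve data
scala> cc.sql("drop table if exists t1")                           // DDL: delete table
```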
