[CARBONDATA-2478] Added datamap-developer-guide.md file to Readme.md
This closes #2305

Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d0567af2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d0567af2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d0567af2

Branch: refs/heads/branch-1.4
Commit: d0567af28cf7582bbf3367b8ecb019962e5304ce
Parents: 70ef024
Author: vandana <vandana.yadav...@gmail.com>
Authored: Mon May 14 15:46:19 2018 +0530
Committer: ravipesala <ravi.pes...@gmail.com>
Committed: Thu Aug 9 23:42:44 2018 +0530

----------------------------------------------------------------------
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d0567af2/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index d8f7226..d76b080 100644
--- a/README.md
+++ b/README.md
@@ -37,9 +37,9 @@ Spark2.2: </a>
 ## Features
 CarbonData file format is a columnar store in HDFS, it has many features that a modern columnar format has, such as splittable, compression schema ,complex data type etc, and CarbonData has following unique features:
-* Stores data along with index: it can significantly accelerate query performance and reduces the I/O scans and CPU resources, where there are filters in the query. CarbonData index consists of multiple level of indices, a processing framework can leverage this index to reduce the task it needs to schedule and process, and it can also do skip scan in more finer grain unit (called blocklet) in task side scanning instead of scanning the whole file.
-* Operable encoded data :Through supporting efficient compression and global encoding schemes, can query on compressed/encoded data, the data can be converted just before returning the results to the users, which is "late materialized".
-* Supports for various use cases with one single Data format : like interactive OLAP-style query, Sequential Access (big scan), Random Access (narrow scan).
+* Stores data along with index: it can significantly accelerate query performance and reduces the I/O scans and CPU resources, where there are filters in the query. CarbonData index consists of multiple level of indices, a processing framework can leverage this index to reduce the task it needs to schedule and process, and it can also do skip scan in more finer grain unit (called blocklet) in task side scanning instead of scanning the whole file.
+* Operable encoded data :Through supporting efficient compression and global encoding schemes, can query on compressed/encoded data, the data can be converted just before returning the results to the users, which is "late materialized".
+* Supports for various use cases with one single Data format : like interactive OLAP-style query, Sequential Access (big scan), Random Access (narrow scan).
 ## Building CarbonData
 CarbonData is built using Apache Maven, to [build CarbonData](https://github.com/apache/carbondata/blob/master/build)
@@ -53,6 +53,7 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
 * [Configuring Carbondata](https://github.com/apache/carbondata/blob/master/docs/configuration-parameters.md)
 * [Streaming Ingestion](https://github.com/apache/carbondata/blob/master/docs/streaming-guide.md)
 * [SDK Guide](https://github.com/apache/carbondata/blob/master/docs/sdk-guide.md)
+* [DataMap Developer Guide](https://github.com/apache/carbondata/blob/master/docs/datamap-developer-guide.md)
 * [CarbonData Pre-aggregate DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/preaggregate-datamap-guide.md)
 * [CarbonData Timeseries DataMap](https://github.com/apache/carbondata/blob/master/docs/datamap/timeseries-datamap-guide.md)
 * [FAQ](https://github.com/apache/carbondata/blob/master/docs/faq.md)