akashrn5 commented on a change in pull request #3275: [WIP]Added documentation for mv
URL: https://github.com/apache/carbondata/pull/3275#discussion_r292306290
##########
File path: docs/datamap/mv-datamap-guide.md
##########

@@ -0,0 +1,265 @@

<!--
    Licensed to the Apache Software Foundation (ASF) under one or more
    contributor license agreements.  See the NOTICE file distributed with
    this work for additional information regarding copyright ownership.
    The ASF licenses this file to you under the Apache License, Version 2.0
    (the "License"); you may not use this file except in compliance with
    the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

# CarbonData MV DataMap

* [Quick Example](#quick-example)
* [MV DataMap](#mv-datamap-introduction)
* [Loading Data](#loading-data)
* [Querying Data](#querying-data)
* [Compaction](#compacting-mv-tables)
* [Data Management](#data-management-with-mv-tables)

## Quick example

Download and unzip spark-2.2.0-bin-hadoop2.7.tgz, and export SPARK_HOME to point to it.

Package the CarbonData jar, and copy assembly/target/scala-2.11/carbondata_2.11-x.x.x-SNAPSHOT-shade-hadoop2.7.2.jar to $SPARK_HOME/jars.
```shell
mvn clean package -DskipTests -Pspark-2.2 -Pmv
```

Start spark-shell in a new terminal, type `:paste`, then copy and run the following code.
```scala
  import java.io.File
  import org.apache.spark.sql.SparkSession
  import org.apache.spark.sql.CarbonSession._

  val warehouse = new File("./warehouse").getCanonicalPath
  val metastore = new File("./metastore").getCanonicalPath

  val spark = SparkSession
    .builder()
    .master("local")
    .appName("MVDatamapExample")
    .config("spark.sql.warehouse.dir", warehouse)
    .getOrCreateCarbonSession(warehouse, metastore)

  spark.sparkContext.setLogLevel("ERROR")

  // Drop the table if it exists from a previous run
  spark.sql(s"DROP TABLE IF EXISTS sales")

  // Create the main table
  spark.sql(
    s"""
       | CREATE TABLE sales (
       | user_id string,
       | country string,
       | quantity int,
       | price bigint)
       | STORED AS carbondata
     """.stripMargin)

  // Create an MV datamap on the main table.
  // If the main table already has data, the following command
  // triggers an immediate load into the MV table.
  spark.sql(
    s"""
       | CREATE DATAMAP agg_sales
       | ON TABLE sales
       | USING "mv"
       | AS
       | SELECT country, sum(quantity), avg(price)
       | FROM sales
       | GROUP BY country
     """.stripMargin)

  import spark.implicits._
  import org.apache.spark.sql.SaveMode
  import scala.util.Random

  // Load data into the main table; for a non-lazy datamap this also
  // triggers an immediate load into the MV table.
  val r = new Random()
  spark.sparkContext.parallelize(1 to 10)
    .map(x => ("ID." + r.nextInt(100000), "country" + x % 8, x % 50, x % 60))
    .toDF("user_id", "country", "quantity", "price")
    .write
    .format("carbondata")
    .option("tableName", "sales")
    .option("compress", "true")
    .mode(SaveMode.Append)
    .save()

  spark.sql(
    s"""
       | SELECT country, sum(quantity), avg(price)
       | FROM sales GROUP BY country
     """.stripMargin).show

  spark.stop()
```
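To check whether a query is actually rewritten to use the MV, you can inspect the plan with `EXPLAIN` before calling `spark.stop()`. This is a minimal sketch, not part of the original example; the exact plan text and the name of the MV's backing table depend on the CarbonData version, so treat it as illustrative.
```scala
  // Hypothetical check: if the MV rewrite applies, the plan is expected to
  // reference the MV's backing table instead of scanning `sales` directly
  // (the exact table name shown is version-dependent).
  spark.sql(
    s"""
       | EXPLAIN
       | SELECT country, sum(quantity), avg(price)
       | FROM sales GROUP BY country
     """.stripMargin).show(false)
```
If the plan still scans `sales` directly, it usually means the query shape does not match the MV's defining query.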
## MV DataMap Introduction

  Pre-aggregate datamap supports only aggregation on a single table. MV datamap was implemented to
  support projection, projection with filter, aggregation and join capabilities also. MV tables are

Review comment:
   `join capabilities also` to `join capabilities`
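As an illustration of the projection-with-filter capability described in the introduction above, an MV datamap without aggregation could be declared as in the sketch below. The datamap name and the filter predicate are hypothetical; the statement follows the same `CREATE DATAMAP ... USING "mv"` syntax as the quick example, and whether such a datamap is useful depends on the queries you run.
```scala
  // Hypothetical example: an MV datamap that stores only a filtered projection
  // of the main table, so matching filter queries can be answered from the MV
  // instead of scanning all of `sales`.
  spark.sql(
    s"""
       | CREATE DATAMAP expensive_sales
       | ON TABLE sales
       | USING "mv"
       | AS
       | SELECT user_id, country, price
       | FROM sales
       | WHERE price > 50
     """.stripMargin)
```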