[carbondata] branch master updated: [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

2019-05-27 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 2a28dba  [CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv
2a28dba is described below

commit 2a28dba04236ce976984d9cbc398eb8fa517d6f5
Author: Indhumathi27 
AuthorDate: Wed Apr 24 01:04:21 2019 +0530

[CARBONDATA-3357] Support TableProperties from single parent table and restrict alter/delete/partition on mv

Inherit table properties from the main table to the MV datamap table if the
datamap has a single parent table; otherwise use the default table properties.
Restrict Alter/Delete/Partition operations on MV.

This closes #3184
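The inheritance rule described above can be sketched in a few lines. This is a minimal illustration only; `inheritTableProperties` and its parameters are hypothetical names, not CarbonData's actual API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MvPropertyInheritanceSketch {
  // Copy the parent's table properties only when the MV has exactly one
  // parent table; a multi-table (e.g. join) MV falls back to the defaults.
  static Map<String, String> inheritTableProperties(
      List<Map<String, String>> parentProperties, Map<String, String> defaults) {
    if (parentProperties.size() == 1) {
      return new HashMap<>(parentProperties.get(0)); // single parent: inherit
    }
    return new HashMap<>(defaults); // multiple parents: default properties
  }
}
```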
---
 .../core/datamap/DataMapStoreManager.java  |  27 +-
 .../carbondata/core/datamap/DataMapUtil.java   |   1 +
 .../core/metadata/schema/table/CarbonTable.java|  17 --
 .../core/metadata/schema/table/DataMapSchema.java  |  14 +
 .../carbondata/mv/datamap/MVDataMapProvider.scala  |  19 +-
 .../apache/carbondata/mv/datamap/MVHelper.scala| 110 ++--
 .../org/apache/carbondata/mv/datamap/MVUtil.scala  | 287 +
 .../mv/rewrite/MVCountAndCaseTestCase.scala|   2 -
 .../carbondata/mv/rewrite/MVCreateTestCase.scala   |  29 +--
 .../mv/rewrite/MVIncrementalLoadingTestcase.scala  |   1 -
 .../mv/rewrite/MVMultiJoinTestCase.scala   |   8 +-
 .../carbondata/mv/rewrite/MVTpchTestCase.scala |  10 +-
 .../mv/rewrite/TestAllOperationsOnMV.scala | 255 ++
 .../mv/rewrite/matching/TestSQLBatch.scala |   4 +-
 .../preaggregate/TestPreAggregateLoad.scala|   2 +-
 .../TestTimeSeriesUnsupportedSuite.scala   |   8 +-
 .../scala/org/apache/spark/sql/CarbonEnv.scala |   9 +-
 .../command/datamap/CarbonDropDataMapCommand.scala |   9 +
 .../management/CarbonCleanFilesCommand.scala   |   3 +-
 .../execution/command/mv/DataMapListeners.scala| 146 ++-
 .../CarbonAlterTableDropHivePartitionCommand.scala |   7 +-
 .../preaaggregate/PreAggregateListeners.scala  |   6 +-
 .../preaaggregate/PreAggregateTableHelper.scala| 102 +---
 .../schema/CarbonAlterTableRenameCommand.scala |   7 +-
 .../spark/sql/execution/strategy/DDLStrategy.scala |   4 +-
 .../spark/sql/hive/CarbonAnalysisRules.scala   |  10 +-
 .../scala/org/apache/spark/util/DataMapUtil.scala  | 160 
 27 files changed, 1054 insertions(+), 203 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapStoreManager.java b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapStoreManager.java
index 81b1fb2..89402c2 100644
--- a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapStoreManager.java
+++ b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapStoreManager.java
@@ -281,19 +281,22 @@ public final class DataMapStoreManager {
     dataMapCatalogs = new ConcurrentHashMap<>();
     List<DataMapSchema> dataMapSchemas = getAllDataMapSchemas();
     for (DataMapSchema schema : dataMapSchemas) {
-      DataMapCatalog dataMapCatalog = dataMapCatalogs.get(schema.getProviderName());
-      if (dataMapCatalog == null) {
-        dataMapCatalog = dataMapProvider.createDataMapCatalog();
-        if (null == dataMapCatalog) {
-          throw new RuntimeException("Internal Error.");
+      if (schema.getProviderName()
+          .equalsIgnoreCase(dataMapProvider.getDataMapSchema().getProviderName())) {
+        DataMapCatalog dataMapCatalog = dataMapCatalogs.get(schema.getProviderName());
+        if (dataMapCatalog == null) {
+          dataMapCatalog = dataMapProvider.createDataMapCatalog();
+          if (null == dataMapCatalog) {
+            throw new RuntimeException("Internal Error.");
+          }
+          dataMapCatalogs.put(schema.getProviderName(), dataMapCatalog);
+        }
+        try {
+          dataMapCatalog.registerSchema(schema);
+        } catch (Exception e) {
+          // Ignore the schema
+          LOGGER.error("Error while registering schema", e);
         }
-        dataMapCatalogs.put(schema.getProviderName(), dataMapCatalog);
-      }
-      try {
-        dataMapCatalog.registerSchema(schema);
-      } catch (Exception e) {
-        // Ignore the schema
-        LOGGER.error("Error while registering schema", e);
       }
     }
   }
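The guard introduced in this hunk boils down to a case-insensitive provider-name match before a schema is registered into a catalog. A standalone sketch of that check follows; `shouldRegister` is an illustrative name, not part of the CarbonData API:

```java
public class CatalogRegistrationSketch {
  // Register a schema into a provider's catalog only when the schema's
  // provider name matches that provider (case-insensitively), so schemas
  // belonging to other datamap providers are skipped rather than
  // registered into the wrong catalog.
  static boolean shouldRegister(String schemaProvider, String catalogProvider) {
    return schemaProvider != null && schemaProvider.equalsIgnoreCase(catalogProvider);
  }
}
```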
diff --git a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
index 0a604fb..e20f19a 100644
--- a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
@@ -270,4 +270,5 @@ public class DataMapUtil {
 }
 return 

[carbondata] branch master updated: [CARBONDATA-3384] Fix NullPointerException for update/delete using index server

2019-05-27 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new bd16325  [CARBONDATA-3384] Fix NullPointerException for update/delete using index server
bd16325 is described below

commit bd1632564acb248db7080b9fd5f76b8e8da79101
Author: kunal642 
AuthorDate: Wed May 15 11:35:18 2019 +0530

[CARBONDATA-3384] Fix NullPointerException for update/delete using index server

Problem:
After an update, the segment cache is cleared from the executor. In any
subsequent query, only one index file is considered when creating the
BlockUniqueIdentifier, so the query throws a NullPointerException when
accessing the segmentProperties.

Solution:
Consider all index files of the segment when creating the identifiers.

This closes #3218
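The essence of the fix is to create the identifier set once, add an entry for every committed index file, and cache the completed set, instead of re-creating the set inside the loop so that only the last file survived. A simplified stand-in, using plain strings in place of CarbonData's identifier types:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class SegmentCacheSketch {
  final Map<String, Set<String>> segmentMap = new HashMap<>();

  // Build and cache the identifiers for a segment from ALL of its index
  // files (the buggy version allocated the set inside each iteration,
  // so only the last index file's identifier was kept).
  Set<String> identifiersFor(String segmentNo, List<String> indexFiles) {
    Set<String> ids = segmentMap.get(segmentNo);
    if (ids == null) {
      ids = new HashSet<>();              // created once, before the loop
      for (String indexFile : indexFiles) {
        ids.add(indexFile);               // every index file contributes
      }
      segmentMap.put(segmentNo, ids);     // cached after the loop completes
    }
    return ids;
  }
}
```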
---
 .../indexstore/blockletindex/BlockletDataMapFactory.java |  4 ++--
 .../carbondata/hadoop/api/CarbonTableInputFormat.java|  4 +++-
 .../indexserver/InvalidateSegmentCacheRDD.scala  | 16 ++--
 3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMapFactory.java b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMapFactory.java
index e4a3ad8..446507f 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMapFactory.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMapFactory.java
@@ -344,6 +344,7 @@ public class BlockletDataMapFactory extends CoarseGrainDataMapFactory
     Set<TableBlockIndexUniqueIdentifier> tableBlockIndexUniqueIdentifiers =
         segmentMap.get(distributable.getSegment().getSegmentNo());
     if (tableBlockIndexUniqueIdentifiers == null) {
+      tableBlockIndexUniqueIdentifiers = new HashSet<>();
       Set<String> indexFiles = distributable.getSegment().getCommittedIndexFile().keySet();
       for (String indexFile : indexFiles) {
         CarbonFile carbonFile = FileFactory.getCarbonFile(indexFile);
@@ -363,10 +364,9 @@ public class BlockletDataMapFactory extends CoarseGrainDataMapFactory
         identifiersWrapper.add(
             new TableBlockIndexUniqueIdentifierWrapper(tableBlockIndexUniqueIdentifier,
                 this.getCarbonTable()));
-        tableBlockIndexUniqueIdentifiers = new HashSet<>();
         tableBlockIndexUniqueIdentifiers.add(tableBlockIndexUniqueIdentifier);
-        segmentMap.put(distributable.getSegment().getSegmentNo(), tableBlockIndexUniqueIdentifiers);
       }
+      segmentMap.put(distributable.getSegment().getSegmentNo(), tableBlockIndexUniqueIdentifiers);
     } else {
       for (TableBlockIndexUniqueIdentifier tableBlockIndexUniqueIdentifier :
           tableBlockIndexUniqueIdentifiers) {
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
index 458c95e..dd86dcb 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
@@ -564,7 +564,9 @@ public class CarbonTableInputFormat<T> extends CarbonInputFormat<T>
         allSegments.getInvalidSegments(), toBeCleanedSegments));
     for (InputSplit extendedBlocklet : extendedBlocklets) {
       CarbonInputSplit blocklet = (CarbonInputSplit) extendedBlocklet;
-      blockletToRowCountMap.put(blocklet.getSegmentId() + "," + blocklet.getFilePath(),
+      String filePath = blocklet.getFilePath();
+      String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
+      blockletToRowCountMap.put(blocklet.getSegmentId() + "," + blockName,
           (long) blocklet.getDetailInfo().getRowCount());
     }
   } else {
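The key change in the hunk above strips the directory part of the file path, so the row-count map is keyed by "segmentId,blockName" rather than by the full path. As a standalone sketch (the helper name is illustrative, not CarbonData's):

```java
public class RowCountKeySketch {
  // Key by "<segmentId>,<blockName>" instead of the full file path, so a
  // lookup matches regardless of which path prefix each side used.
  static String rowCountKey(String segmentId, String filePath) {
    String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
    return segmentId + "," + blockName;
  }
}
```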
diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/InvalidateSegmentCacheRDD.scala b/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/InvalidateSegmentCacheRDD.scala
index 1aa8cd9..bc83d2f 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/InvalidateSegmentCacheRDD.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/InvalidateSegmentCacheRDD.scala
@@ -43,12 +43,16 @@ class InvalidateSegmentCacheRDD(@transient private val ss: SparkSession, databas
   }

   override protected def internalGetPartitions: Array[Partition] = {
-    executorsList.zipWithIndex.map {
-      case (executor, idx) =>
-        // create a dummy split for each executor to accumulate the cache size.
-        val dummySplit = new CarbonInputSplit()
-        dummySplit.setLocation(Array(executor))
-        new DataMapRDDPartition(id, idx, dummySplit)
+    if

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1709

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1709

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.2 #1709

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3545

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.1 #3545

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3545

2019-05-27 Thread Apache Jenkins Server

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3545

2019-05-27 Thread Apache Jenkins Server