[carbondata] branch master updated: [CARBONDATA-3393] Merge Index Job Failure should not trigger the merge index job again. Exception should be propagated to the caller.

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 706e8d3  [CARBONDATA-3393] Merge Index Job Failure should not trigger the merge index job again. Exception should be propagated to the caller.
706e8d3 is described below

commit 706e8d34c40da97e0d123f58eac3f6da3953f4d0
Author: dhatchayani 
AuthorDate: Tue May 28 19:29:46 2019 +0530

[CARBONDATA-3393] Merge Index Job Failure should not trigger the merge index job again. Exception should be propagated to the caller.

Problem:
If the merge index job fails, the same job is triggered again.

Solution:
The merge index job exception has to be propagated to the caller; it should not trigger the same job again.

Changes:
(1) Based on the new property carbon.merge.index.failure.throw.exception (default TRUE), a merge index job failure either fails the corresponding LOAD job or is only LOGGED, instead of re-triggering the job.
(2) Implement a new method to write the SegmentFile based on the current load timestamp. This helps in case of merge index failures and in writing the merge index for an old store.

This closes #3226
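
A minimal sketch of the resulting failure handling, assuming the semantics documented for the new property; the Runnable stands in for the real merge-index job (this is not the CarbonMergeFilesRDD code):

    // Sketch: decide whether a merge-index failure fails the LOAD or is only logged.
    public final class MergeIndexFailurePolicy {

      // throwOnFailure mirrors carbon.merge.index.failure.throw.exception (default "true").
      public static void runMergeIndex(Runnable mergeIndexJob, boolean throwOnFailure) {
        try {
          mergeIndexJob.run();
        } catch (RuntimeException e) {
          if (throwOnFailure) {
            throw e; // TRUE: propagate to the caller, failing the corresponding LOAD job
          }
          // FALSE: log the failure and continue with the LOAD
          System.err.println("Merge index job failed; continuing with LOAD: " + e);
        }
      }

      public static void main(String[] args) {
        runMergeIndex(() -> { throw new RuntimeException("simulated failure"); }, false);
      }
    }

With throwOnFailure = true (the default), the exception reaches the caller and the LOAD fails; with false, the failure is only logged and the LOAD continues.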
---
 .../core/constants/CarbonCommonConstants.java  | 12 +++
 .../carbondata/core/metadata/SegmentFileStore.java | 21 +++
 .../org/apache/spark/rdd/CarbonMergeFilesRDD.scala | 41 +++---
 3 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index aa9dd05..311019c 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -346,6 +346,18 @@ public final class CarbonCommonConstants {
   public static final String CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT = "true";
 
   /**
+   * User defined property to specify whether to throw an exception when the
+   * MERGE INDEX job fails. Default value - TRUE
+   * TRUE - throws the exception and fails the corresponding LOAD job
+   * FALSE - logs the exception and continues with the LOAD
+   */
+  @CarbonProperty
+  public static final String CARBON_MERGE_INDEX_FAILURE_THROW_EXCEPTION =
+      "carbon.merge.index.failure.throw.exception";
+
+  public static final String CARBON_MERGE_INDEX_FAILURE_THROW_EXCEPTION_DEFAULT = "true";
+
+  /**
    * property to be used for specifying the max byte limit for string/varchar data type till
    * where storing min/max in data file will be considered
    */
diff --git a/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java b/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
index 69e5dc3..cbf58c7 100644
--- a/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
+++ b/core/src/main/java/org/apache/carbondata/core/metadata/SegmentFileStore.java
@@ -139,12 +139,32 @@ public class SegmentFileStore {
    */
   public static String writeSegmentFile(CarbonTable carbonTable, String segmentId, String UUID)
       throws IOException {
+    return writeSegmentFile(carbonTable, segmentId, UUID, null);
+  }
+
+  /**
+   * Write segment file to the metadata folder of the table, selecting only the current load files
+   *
+   * @param carbonTable
+   * @param segmentId
+   * @param UUID
+   * @param currentLoadTimeStamp
+   * @return
+   * @throws IOException
+   */
+  public static String writeSegmentFile(CarbonTable carbonTable, String segmentId, String UUID,
+      final String currentLoadTimeStamp) throws IOException {
     String tablePath = carbonTable.getTablePath();
     boolean supportFlatFolder = carbonTable.isSupportFlatFolder();
     String segmentPath = CarbonTablePath.getSegmentPath(tablePath, segmentId);
     CarbonFile segmentFolder = FileFactory.getCarbonFile(segmentPath);
     CarbonFile[] indexFiles = segmentFolder.listFiles(new CarbonFileFilter() {
       @Override public boolean accept(CarbonFile file) {
+        if (null != currentLoadTimeStamp) {
+          return file.getName().contains(currentLoadTimeStamp) && (
+              file.getName().endsWith(CarbonTablePath.INDEX_FILE_EXT) || file.getName()
+                  .endsWith(CarbonTablePath.MERGE_INDEX_FILE_EXT));
+        }
         return (file.getName().endsWith(CarbonTablePath.INDEX_FILE_EXT) || file.getName()
             .endsWith(CarbonTablePath.MERGE_INDEX_FILE_EXT));
       }
@@ -185,6 +205,7 @@ public class SegmentFileStore {
 return null;
   }
 
+
   /**
* Move the loaded data from source folder to destination folder.
*/
diff --git a/integration/spark-common/src/main/scala/org/apache/spark/rdd/CarbonMergeFilesRDD.scala b/integration/spark-common/src/main/scala/org/apache/spark/rdd/CarbonMergeFilesRDD.scala
inde

[carbondata] branch master updated: [DOCUMENTATION] Document change for GLOBAL_SORT_PARTITIONS

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 10cbf4e  [DOCUMENTATION] Document change for GLOBAL_SORT_PARTITIONS
10cbf4e is described below

commit 10cbf4ec018de4671284e9f6974d05b22609f3a0
Author: manishnalla1994 
AuthorDate: Mon May 27 12:09:04 2019 +0530

[DOCUMENTATION] Document change for GLOBAL_SORT_PARTITIONS

Documentation change done for Global Sort Partitions during Range Column DataLoad/Compaction.

This closes #3234
---
 docs/dml-of-carbondata.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/dml-of-carbondata.md b/docs/dml-of-carbondata.md
index 6ec0520..3e2a22d 100644
--- a/docs/dml-of-carbondata.md
+++ b/docs/dml-of-carbondata.md
@@ -281,6 +281,8 @@ CarbonData DML statements are documented here, which includes:
 
   If the SORT_SCOPE is defined as GLOBAL_SORT, then the user can specify the number of partitions to use while shuffling data for sort using GLOBAL_SORT_PARTITIONS. If it is not configured, or configured less than 1, then the number of map tasks is used as the number of reduce tasks. It is recommended that each reduce task deal with 512MB-1GB of data.
   For RANGE_COLUMN, GLOBAL_SORT_PARTITIONS is also used to specify the number of range partitions.
+  GLOBAL_SORT_PARTITIONS should be specified optimally during a RANGE_COLUMN LOAD: if a higher number is configured, the load time may be less, but it results in the creation of more files, which degrades query and compaction performance.
+  Conversely, if fewer partitions are configured, the load performance may degrade due to less parallelism, but queries and compaction become faster. Hence the user may choose an optimal number depending on the use case.
   ```
   OPTIONS('GLOBAL_SORT_PARTITIONS'='2')
   ```
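
A hedged, worked sizing example for the 512MB-1GB-per-reduce-task guidance above; the 100 GB input size is illustrative, not a CarbonData API:

    // Estimate a GLOBAL_SORT_PARTITIONS range from the input size and the
    // recommended 512MB-1GB of data per reduce task.
    public final class GlobalSortPartitionsEstimate {
      public static void main(String[] args) {
        long inputBytes = 100L << 30;     // e.g. a 100 GB load
        long maxPerTask = 1L << 30;       // 1 GB per task  => fewest partitions
        long minPerTask = 512L << 20;     // 512 MB per task => most partitions
        long minPartitions = (inputBytes + maxPerTask - 1) / maxPerTask; // 100
        long maxPartitions = (inputBytes + minPerTask - 1) / minPerTask; // 200
        System.out.println("GLOBAL_SORT_PARTITIONS between " + minPartitions
            + " and " + maxPartitions);
      }
    }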



[carbondata] branch master updated: [CARBONDATA-3396] Range Compaction Data Mismatch Fix

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new ce40c64  [CARBONDATA-3396] Range Compaction Data Mismatch Fix
ce40c64 is described below

commit ce40c64f552d02417400111e9865ff77a05d4fbd
Author: manishnalla1994 
AuthorDate: Mon May 27 11:41:10 2019 +0530

[CARBONDATA-3396] Range Compaction Data Mismatch Fix

Problem: When the data has to be compacted a second time and the ranges made the first time have data in more than one file/blocklet, then while compacting the second time, if the first blocklet does not contain any record, the other files are also skipped. Also, Global Sort and Local Sort with a Range Column were taking different times for the same data load and compaction, because during the write step only 1 core is given to Global Sort.

Solution: For the first issue, we read all the blocklets of a given range and break only when the batch size is full. For the second issue, in the Range Column case both sort scopes will now take the same number of cores and behave similarly.

Also changed the number of tasks to be launched during compaction; it is now based on the number of tasks during load.
This closes #3233
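
A self-contained sketch of the batch-filling fix described for the first issue, assuming plain java.util iterators rather than CarbonData's RawResultIterator API:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Keep pulling rows across all blocklets of a range; stop only when the batch
    // is full or the range is exhausted, so an empty first blocklet no longer
    // ends the range early.
    final class RangeBatchReader<T> {
      private final Iterator<Iterator<T>> blockletsOfRange;
      private Iterator<T> current = null;

      RangeBatchReader(Iterator<Iterator<T>> blockletsOfRange) {
        this.blockletsOfRange = blockletsOfRange;
      }

      List<T> nextBatch(int batchSize) {
        List<T> batch = new ArrayList<>(batchSize);
        while (batch.size() < batchSize) {            // break only when the batch is full
          if (current != null && current.hasNext()) {
            batch.add(current.next());
          } else if (blockletsOfRange.hasNext()) {
            current = blockletsOfRange.next();        // empty blocklet: skip, don't stop
          } else {
            break;                                    // all blocklets of the range read
          }
        }
        return batch;
      }
    }

An empty first blocklet just advances `current` to the next blocklet instead of returning an empty batch, so the remaining files of the range are no longer skipped.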
---
 .../core/constants/CarbonCommonConstants.java  |  4 
 .../AbstractDetailQueryResultIterator.java | 14 +
 .../scan/result/iterator/RawResultIterator.java| 11 +--
 .../carbondata/core/util/CarbonProperties.java | 23 --
 .../carbondata/spark/rdd/CarbonMergerRDD.scala | 18 -
 .../processing/merger/CarbonCompactionUtil.java| 11 +++
 .../store/CarbonFactDataHandlerModel.java  |  3 ++-
 7 files changed, 53 insertions(+), 31 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index e78ea17..aa9dd05 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1193,10 +1193,6 @@ public final class CarbonCommonConstants {
 
   public static final String CARBON_RANGE_COLUMN_SCALE_FACTOR_DEFAULT = "3";
 
-  public static final String CARBON_ENABLE_RANGE_COMPACTION = "carbon.enable.range.compaction";
-
-  public static final String CARBON_ENABLE_RANGE_COMPACTION_DEFAULT = "false";
-
   //////////////////////////////////////////////////////////
   // Query parameter start here
   //////////////////////////////////////////////////////////
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/AbstractDetailQueryResultIterator.java b/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/AbstractDetailQueryResultIterator.java
index f39e549..d7f2c0b 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/AbstractDetailQueryResultIterator.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/AbstractDetailQueryResultIterator.java
@@ -24,7 +24,6 @@ import java.util.concurrent.ExecutorService;
 
 import org.apache.carbondata.common.CarbonIterator;
 import org.apache.carbondata.common.logging.LogServiceFactory;
-import org.apache.carbondata.core.constants.CarbonCommonConstants;
 import org.apache.carbondata.core.datastore.DataRefNode;
 import org.apache.carbondata.core.datastore.FileReader;
 import org.apache.carbondata.core.datastore.block.AbstractIndex;
@@ -89,18 +88,7 @@ public abstract class AbstractDetailQueryResultIterator extends CarbonIterato
 
   AbstractDetailQueryResultIterator(List infos, QueryModel queryModel,
       ExecutorService execService) {
-    String batchSizeString =
-        CarbonProperties.getInstance().getProperty(CarbonCommonConstants.DETAIL_QUERY_BATCH_SIZE);
-    if (null != batchSizeString) {
-      try {
-        batchSize = Integer.parseInt(batchSizeString);
-      } catch (NumberFormatException ne) {
-        LOGGER.error("Invalid inmemory records size. Using default value");
-        batchSize = CarbonCommonConstants.DETAIL_QUERY_BATCH_SIZE_DEFAULT;
-      }
-    } else {
-      batchSize = CarbonCommonConstants.DETAIL_QUERY_BATCH_SIZE_DEFAULT;
-    }
+    batchSize = CarbonProperties.getQueryBatchSize();
     this.recorder = queryModel.getStatisticsRecorder();
     this.blockExecutionInfos = infos;
     this.fileReader = FileFactory.getFileHolder(
diff --git a/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/RawResultIterator.java b/core/src/main/java/org/apache/carbondata/core/scan/result/iterator/RawResultIterator.java
index 4d471b6..911a7dd 100644
--- a/core/src/main/java/org/apach

[carbondata] branch master updated: [CARBONDATA-3397]Remove SparkUnknown Expression to Index Server

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 15bae6e  [CARBONDATA-3397]Remove SparkUnknown Expression to Index Server
15bae6e is described below

commit 15bae6e5848bc83d4a6f65499fe7dacf88f5a67a
Author: BJangir 
AuthorDate: Mon May 27 14:55:39 2019 +0530

[CARBONDATA-3397]Remove SparkUnknown Expression to Index Server

Problem
If a query has a UDF registered on the main driver, the UDF function will not be available in the Index Server, so the query fails in the Index Server (with NoClassDefFoundError).

Solution
UDFs are SparkUnknownFilter (RowLevelFilterExecuterImpl), so remove the SparkUnknown expression, because for pruning we select all blocks anyway; see org.apache.carbondata.core.scan.filter.executer.RowLevelFilterExecuterImpl#isScanRequired.

The alternative was to supply all the UDF functions and their related lambda expressions to the IndexServer as well, but it has the below issues:
a. Spark FunctionRegistry is not writable.
b. Sending all functions from the main server to the Index Server is costly (in size), and there is no way to distinguish implicit functions from explicitly user-created functions.

So solution 1 was adopted.

This closes #3238
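
A hedged sketch of the adopted solution: before a filter tree is shipped to the Index Server, the unknown (UDF/row-level) nodes are replaced with a prune-neutral TRUE, since block pruning selects all blocks for them anyway. The Expr types below are illustrative, not the carbondata Expression hierarchy (Java 16+):

    public final class StripUnknownFilter {
      interface Expr { }
      record And(Expr left, Expr right) implements Expr { }
      record Leaf(String predicate) implements Expr { }
      record Unknown(String udfName) implements Expr { } // stands in for SparkUnknownExpression
      record True() implements Expr { }

      // Replace unknown nodes with TRUE: for pruning this only widens the selected blocks.
      static Expr stripUnknown(Expr e) {
        if (e instanceof Unknown) {
          return new True();
        }
        if (e instanceof And a) {
          return new And(stripUnknown(a.left()), stripUnknown(a.right()));
        }
        return e;
      }

      public static void main(String[] args) {
        Expr filter = new And(new Leaf("c1 = 5"), new Unknown("myUdf"));
        System.out.println(stripUnknown(filter)); // And[left=Leaf[predicate=c1 = 5], right=True[]]
      }
    }

Widening is safe for pruning because the selected blocks are a superset; the UDF is still evaluated row by row on the driver side after pruning.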
---
 .../core/datamap/DistributableDataMapFormat.java   |  8 
 .../scan/filter/FilterExpressionProcessor.java | 43 ++
 .../carbondata/indexserver/DataMapJobs.scala   | 39 
 3 files changed, 90 insertions(+)

diff --git a/core/src/main/java/org/apache/carbondata/core/datamap/DistributableDataMapFormat.java b/core/src/main/java/org/apache/carbondata/core/datamap/DistributableDataMapFormat.java
index f76cfec..57540e4 100644
--- a/core/src/main/java/org/apache/carbondata/core/datamap/DistributableDataMapFormat.java
+++ b/core/src/main/java/org/apache/carbondata/core/datamap/DistributableDataMapFormat.java
@@ -334,4 +334,12 @@ public class DistributableDataMapFormat extends FileInputFormat

[carbondata] branch master updated: [CARBONDATA-3400] Support IndexServer for Spark-Shell in secure mode (Kerberos)

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new bf096e1  [CARBONDATA-3400] Support IndexServer for Spark-Shell in secure mode (Kerberos)
bf096e1 is described below

commit bf096e128f35865c7cd46cd5a5058c8e5227d773
Author: BJangir 
AuthorDate: Mon May 27 15:26:21 2019 +0530

[CARBONDATA-3400] Support IndexServer for Spark-Shell in secure mode (Kerberos)

Problem
In spark-shell or spark-submit mode, the application user and the IndexServer user are different.
The application user is based on the kinit user or on the spark.yarn.principal user, whereas the IndexServer user is based on spark.carbon.indexserver.principal. The two can differ, because the IndexServer should have its own authentication principal and should not depend on the application principal, so that any application's query (Thriftserver, spark-shell, spark-sql, spark-submit) can be served by the IndexServer.

Solution
Authenticate the IndexServer by its own principal and keytab.
The keytab is required so that long-running applications (client and IndexServer) are not impacted by token expiry.

Note: spark-defaults.conf of Thriftserver (beeline), spark-submit and spark-sql should have both spark.carbon.indexserver.principal and spark.carbon.indexserver.keytab.

This closes #3240
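
An illustrative spark-defaults.conf fragment for the Note above; the principal and keytab values are placeholders, not values from this commit:

    spark.carbon.indexserver.principal   indexserver/_HOST@EXAMPLE.COM
    spark.carbon.indexserver.keytab      /etc/security/keytabs/indexserver.keytab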
---
 .../scala/org/apache/carbondata/indexserver/IndexServer.scala| 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala b/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala
index e738fb3..f066095 100644
--- a/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala
+++ b/integration/spark2/src/main/scala/org/apache/carbondata/indexserver/IndexServer.scala
@@ -167,9 +167,16 @@ object IndexServer extends ServerInterface {
    */
   def getClient: ServerInterface = {
     import org.apache.hadoop.ipc.RPC
+    val indexServerUser = sparkSession.sparkContext.getConf
+      .get("spark.carbon.indexserver.principal", "")
+    val indexServerKeyTab = sparkSession.sparkContext.getConf
+      .get("spark.carbon.indexserver.keytab", "")
+    val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(indexServerUser,
+      indexServerKeyTab)
+    LOGGER.info("Login successful for user " + indexServerUser);
     RPC.getProxy(classOf[ServerInterface],
       RPC.getProtocolVersion(classOf[ServerInterface]),
-      new InetSocketAddress(serverIp, serverPort), UserGroupInformation.getLoginUser,
+      new InetSocketAddress(serverIp, serverPort), ugi,
       FileFactory.getConfiguration, NetUtils.getDefaultSocketFactory(FileFactory.getConfiguration))
   }
 }



Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Examples #3549

2019-05-28 Thread Apache Jenkins Server
See 


--
[...truncated 278.13 KB...]
|29.7890625|2011-01-01 00:00:...|
|   30.20703125|2009-01-01 00:00:...|
|28.415384615384614|2002-01-01 00:00:...|
| 30.00862068965517|2004-01-01 00:00:...|
|29.231833910034602|2000-01-01 00:00:...|
|29.463709677419356|2008-01-01 00:00:...|
| 28.84351145038168|2012-01-01 00:00:...|
|28.677966101694917|2007-01-01 00:00:...|
|30.522088353413654|2013-01-01 00:00:...|
+--++
only showing top 20 rows

2019-05-28 16:50:05 AUDIT audit:72 - {"time":"May 28, 2019 9:50:05 AM 
PDT","username":"jenkins","opName":"DROP 
TABLE","opId":"18756256031292119","opStatus":"START"}
2019-05-28 16:50:06 AUDIT audit:93 - {"time":"May 28, 2019 9:50:06 AM 
PDT","username":"jenkins","opName":"DROP 
TABLE","opId":"18756256031292119","opStatus":"SUCCESS","opTime":"308 
ms","table":"default.timeseriestable","extraInfo":{}}
- TimeSeriesPreAggregateTableExample
2019-05-28 16:50:06 AUDIT audit:72 - {"time":"May 28, 2019 9:50:06 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756256450241057","opStatus":"START"}
2019-05-28 16:50:06 AUDIT audit:93 - {"time":"May 28, 2019 9:50:06 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756256450241057","opStatus":"SUCCESS","opTime":"56 
ms","table":"default.persontable","extraInfo":{"bad_record_path":"","streaming":"false","local_dictionary_enable":"true","external":"false","sort_columns":"","comment":""}}
2019-05-28 16:50:06 AUDIT audit:72 - {"time":"May 28, 2019 9:50:06 AM 
PDT","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"18756256508761173","opStatus":"START"}
2019-05-28 16:50:07 AUDIT audit:93 - {"time":"May 28, 2019 9:50:07 AM 
PDT","username":"jenkins","opName":"LOAD DATA 
OVERWRITE","opId":"18756256508761173","opStatus":"SUCCESS","opTime":"704 
ms","table":"default.personTable","extraInfo":{"SegmentId":"0","DataSize":"771.92KB","IndexSize":"720.0B"}}
2019-05-28 16:50:07 AUDIT audit:72 - {"time":"May 28, 2019 9:50:07 AM 
PDT","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"18756257219311402","opStatus":"START"}
2019-05-28 16:50:08 AUDIT audit:93 - {"time":"May 28, 2019 9:50:08 AM 
PDT","username":"jenkins","opName":"CREATE 
DATAMAP","opId":"18756257219311402","opStatus":"SUCCESS","opTime":"1714 
ms","table":"default.persontable","extraInfo":{"provider":"lucene","dmName":"dm","index_columns":"id
 , name"}}
++
|count(1)|
++
|  10|
++

++
|count(1)|
++
|  10|
++

time for query on table with lucene datamap table:0.311
time for query on table without lucene datamap table:0.176
+-+-+
|   id| name|
+-+-+
|which test1 good7|who and name1|
|which test1 good4|who and name0|
|which test1 good8|who and name7|
|which test1 good5|who and name2|
|which test1 good3|who and name0|
|which test1 good7|who and name1|
|which test1 good4|who and name0|
|which test1 good8|who and name7|
|which test1 good5|who and name2|
|which test1 good3|who and name0|
+-+-+

+-+-+
|   id| name|
+-+-+
|which test1 good7|who and name1|
|which test1 good4|who and name0|
|which test1 good8|who and name7|
|which test1 good5|who and name2|
|which test1 good3|who and name0|
|which test1 good7|who and name1|
|which test1 good4|who and name0|
|which test1 good8|who and name7|
|which test1 good5|who and name2|
|which test1 good3|who and name0|
+-+-+

2019-05-28 16:50:09 AUDIT audit:72 - {"time":"May 28, 2019 9:50:09 AM 
PDT","username":"jenkins","opName":"DROP 
TABLE","opId":"18756259840148152","opStatus":"START"}
2019-05-28 16:50:09 AUDIT audit:93 - {"time":"May 28, 2019 9:50:09 AM 
PDT","username":"jenkins","opName":"DROP 
TABLE","opId":"18756259840148152","opStatus":"SUCCESS","opTime":"116 
ms","table":"default.persontable","extraInfo":{}}
- LuceneDataMapExample
2019-05-28 16:50:09 AUDIT audit:72 - {"time":"May 28, 2019 9:50:09 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756259971425598","opStatus":"START"}
2019-05-28 16:50:10 AUDIT audit:93 - {"time":"May 28, 2019 9:50:10 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756259971425598","opStatus":"SUCCESS","opTime":"100 
ms","table":"default.origin_table","extraInfo":{"bad_record_path":"","local_dictionary_enable":"true","external":"false","sort_columns":"","comment":""}}
2019-05-28 16:50:10 AUDIT audit:72 - {"time":"May 28, 2019 9:50:10 AM 
PDT","username":"jenkins","opName":"LOAD 
DATA","opId":"18756260082592275","opStatus":"START"}
2019-05-28 16:50:10 ERROR DataLoadExecutor:55 - Data Load is partially success 
for table origin_table
2019-05-28 16:50:10 AUDIT audit:93 - {"

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3549

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3549

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3549

2019-05-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 #3549

2019-05-28 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-3387] Support Partition with MV datamap & Show DataMap

[ravipesala] [HOTFIX]Fix select * failure when MV datamap is enabled

[ravipesala] [CARBONDATA-3395] Fix Exception when concurrent readers built with same

[ravipesala] [CARBONDATA-3364] Support Read from Hive. Queries are giving empty

--
[...truncated 9.61 MB...]
4   robot4  4   4   9223372036854775803 2.0 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 
5   robot5  5   5   9223372036854775802 2.5 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 
6   robot6  6   6   9223372036854775801 3.0 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 
7   robot7  7   7   9223372036854775800 3.5 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 
8   robot8  8   8   9223372036854775799 4.0 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 
9   robot9  9   9   9223372036854775798 4.5 true
2019-03-01  2019-02-12 03:03:34.0   12.35   varchar 
Hello World From Carbon 

Data:
0   robot0  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 0   0   9223372036854775807 0.0 true
12.35   
1   robot1  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 1   1   9223372036854775806 0.5 true
12.35   
2   robot2  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 2   2   9223372036854775805 1.0 true
12.35   
3   robot3  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 3   3   9223372036854775804 1.5 true
12.35   
4   robot4  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 4   4   9223372036854775803 2.0 true
12.35   
5   robot5  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 5   5   9223372036854775802 2.5 true
12.35   
6   robot6  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 6   6   9223372036854775801 3.0 true
12.35   
7   robot7  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 7   7   9223372036854775800 3.5 true
12.35   
8   robot8  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 8   8   9223372036854775799 4.0 true
12.35   
9   robot9  2019-03-01  2019-02-12 03:03:34.0   varchar Hello World 
From Carbon 9   9   9223372036854775798 4.5 true
12.35   
- CarbonReaderExample
2019-05-28 16:50:11 AUDIT audit:72 - {"time":"May 28, 2019 9:50:11 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756261879365844","opStatus":"START"}
2019-05-28 16:50:11 AUDIT audit:93 - {"time":"May 28, 2019 9:50:11 AM 
PDT","username":"jenkins","opName":"CREATE 
TABLE","opId":"18756261879365844","opStatus":"SUCCESS","opTime":"72 
ms","table":"default.hive_carbon_example","extraInfo":{"bad_record_path":"","local_dictionary_enable":"true","external":"false","sort_columns":"","comment":""}}
2019-05-28 16:50:11 AUDIT audit:72 - {"time":"May 28, 2019 9:50:11 AM 
PDT","username":"jenkins","opName":"LOAD 
DATA","opId":"18756261967899663","opStatus":"START"}
2019-05-28 16:50:12 AUDIT audit:93 - {"time":"May 28, 2019 9:50:12 AM 
PDT","username":"jenkins","opName":"LOAD 
DATA","opId":"18756261967899663","opStatus":"SUCCESS","opTime":"188 
ms","table":"default.hive_carbon_example","extraInfo":{"SegmentId":"0","DataSize":"924.0B","IndexSize":"551.0B"}}
2019-05-28 16:50:12 AUDIT audit:72 - {"time":"May 28, 2019 9:50:12 AM 
PDT","username":"jenkins","opName":"LOAD 
DATA","opId":"18756262167005211","opStatus":"START"}
2019-05-28 16:50:12 AUDIT audit:93 - {"time":"May 28, 2019 9:50:12 AM 
PDT","username":"jenkins","opName":"LOAD 
DATA","opId":"18756262167005211","opStatus":"SUCCESS","opTime":"219 
ms","table":"default.hive_carbon_example","extraInfo":{"SegmentId":"1","DataSize":"924.0B","IndexSize":"551.0B"}}
+---+-++
| id| name|  salary|
+---+-++
|  1|  'liang'|20.0|
|  2|'anubhav'| 2.0|
|  1|  'liang'|20.0|
|  2|'anubhav'| 2.0|
+---+-++

OK
**Total Number Of Rows Fetched ** 0
- HiveExample *** FAILED ***
  java.lang.AssertionError: assertion failed
  at scala.Predef$.assert(Predef.scala:156)
  at 
org.apache.carbondata.examples.HiveExample$.readFromHive(Hiv

Jenkins build is unstable: carbondata-master-spark-2.2 #1715

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1715

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1715

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 #1716

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1716

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1716

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1714

2019-05-28 Thread Apache Jenkins Server
See 




[carbondata] branch master updated: [CARBONDATA-3364] Support Read from Hive. Queries are giving empty results from hive.

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new fcca6c5  [CARBONDATA-3364] Support Read from Hive. Queries are giving empty results from hive.
fcca6c5 is described below

commit fcca6c5b661ec02adfa17622e980a0c396bd84c2
Author: dhatchayani 
AuthorDate: Mon Apr 29 18:52:57 2019 +0530

[CARBONDATA-3364] Support Read from Hive. Queries are giving empty results from hive.

This closes #3192
---
 .../apache/carbondata/examples/HiveExample.scala   | 99 +-
 .../apache/carbondata/examplesCI/RunExamples.scala |  3 +-
 integration/hive/pom.xml   |  9 +-
 .../carbondata/hive/CarbonHiveInputSplit.java  |  8 +-
 .../apache/carbondata/hive/CarbonHiveSerDe.java|  2 +-
 .../carbondata/hive/MapredCarbonInputFormat.java   | 20 ++---
 .../carbondata/hive/MapredCarbonOutputFormat.java  | 12 ++-
 .../{ => test}/server/HiveEmbeddedServer2.java | 20 ++---
 integration/spark-common-test/pom.xml  |  6 ++
 .../TestCreateHiveTableWithCarbonDS.scala  |  4 +-
 integration/spark-common/pom.xml   |  5 ++
 .../apache/spark/util/CarbonReflectionUtils.scala  | 17 ++--
 .../spark/util/DictionaryLRUCacheTestCase.scala|  1 +
 pom.xml|  1 +
 14 files changed, 123 insertions(+), 84 deletions(-)

diff --git a/examples/spark2/src/main/scala/org/apache/carbondata/examples/HiveExample.scala b/examples/spark2/src/main/scala/org/apache/carbondata/examples/HiveExample.scala
index b50e763..c043076 100644
--- a/examples/spark2/src/main/scala/org/apache/carbondata/examples/HiveExample.scala
+++ b/examples/spark2/src/main/scala/org/apache/carbondata/examples/HiveExample.scala
@@ -19,33 +19,36 @@ package org.apache.carbondata.examples
 import java.io.File
 import java.sql.{DriverManager, ResultSet, Statement}
 
-import org.apache.spark.sql.SparkSession
+import org.apache.hadoop.fs.Path
+import org.apache.hadoop.fs.permission.{FsAction, FsPermission}
 
 import org.apache.carbondata.common.logging.LogServiceFactory
-import org.apache.carbondata.core.constants.CarbonCommonConstants
-import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.examples.util.ExampleUtils
-import org.apache.carbondata.hive.server.HiveEmbeddedServer2
+import org.apache.carbondata.hive.test.server.HiveEmbeddedServer2
 
 // scalastyle:off println
 object HiveExample {
 
   private val driverName: String = "org.apache.hive.jdbc.HiveDriver"
 
-  def main(args: Array[String]) {
-    val carbonSession = ExampleUtils.createCarbonSession("HiveExample")
-    exampleBody(carbonSession, CarbonProperties.getStorePath
-      + CarbonCommonConstants.FILE_SEPARATOR
-      + CarbonCommonConstants.DATABASE_DEFAULT_NAME)
-    carbonSession.stop()
+  val rootPath = new File(this.getClass.getResource("/").getPath
+      + "../../../..").getCanonicalPath
+  private val targetLoc = s"$rootPath/examples/spark2/target"
+  val metaStoreLoc = s"$targetLoc/metastore_db"
+  val storeLocation = s"$targetLoc/store"
+  val logger = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
 
+
+  def main(args: Array[String]) {
+    createCarbonTable(storeLocation)
+    readFromHive
     System.exit(0)
   }
 
-  def exampleBody(carbonSession: SparkSession, store: String): Unit = {
-    val logger = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
-    val rootPath = new File(this.getClass.getResource("/").getPath
-      + "../../../..").getCanonicalPath
+  def createCarbonTable(store: String): Unit = {
+
+    val carbonSession = ExampleUtils.createCarbonSession("HiveExample")
 
     carbonSession.sql("""DROP TABLE IF EXISTS HIVE_CARBON_EXAMPLE""".stripMargin)
 
@@ -56,14 +59,44 @@ object HiveExample {
      | STORED BY 'carbondata'
    """.stripMargin)
 
+    val inputPath = FileFactory
+      .getUpdatedFilePath(s"$rootPath/examples/spark2/src/main/resources/sample.csv")
+
     carbonSession.sql(
       s"""
-         | LOAD DATA LOCAL INPATH '$rootPath/examples/spark2/src/main/resources/sample.csv'
+         | LOAD DATA LOCAL INPATH '$inputPath'
+         | INTO TABLE HIVE_CARBON_EXAMPLE
+       """.stripMargin)
+
+    carbonSession.sql(
+      s"""
+         | LOAD DATA LOCAL INPATH '$inputPath'
          | INTO TABLE HIVE_CARBON_EXAMPLE
        """.stripMargin)
 
     carbonSession.sql("SELECT * FROM HIVE_CARBON_EXAMPLE").show()
 
+    carbonSession.close()
+
+    // delete the already existing lock on metastore so that new derby instance
+    // for HiveServer can run on the same metastore
+    checkAndDeleteDBLock
+
+  }
+
+  def checkAndDeleteDBLock: Unit = {
+    val dbLockPath = FileFactory.getUpdatedFi

[carbondata] branch master updated: [CARBONDATA-3395] Fix Exception when concurrent readers built with same split object

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 36ee528  [CARBONDATA-3395] Fix Exception when concurrent readers built with same split object
36ee528 is described below

commit 36ee52836c7bb7bc8e7a4cc6c294d7b77fdba2ee
Author: ajantha-bhat 
AuthorDate: Fri May 24 19:50:57 2019 +0530

[CARBONDATA-3395] Fix Exception when concurrent readers built with same split object

Problem: An exception occurs when concurrent readers are built with the same split object.

Cause: In CarbonInputSplit, BlockletDetailInfo and BlockletInfo are made lazy, so BlockletInfo is prepared during reader build. When two readers work on the same split object, the state of this object changes, leading to an array-out-of-bounds issue.

Solution:
a) Synchronize BlockletInfo creation.
b) Load BlockletDetailInfo before passing it to the reader, inside the getSplit() API itself.
c) In the failure case, get the proper identifier to clean up the datamaps.
d) build_with_splits needs to handle default projection filling if not configured.

This closes #3232
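
A generic, hedged sketch of the double-checked lazy initialization that change (a) applies in BlockletDetailInfo.getBlockletInfo(); names are illustrative, and the loader stands in for setBlockletInfoFromBinary():

    import java.util.function.Supplier;

    public final class LazyHolder<T> {
      private volatile T value;
      private final Supplier<T> loader;

      public LazyHolder(Supplier<T> loader) {
        this.loader = loader;
      }

      public T get() {
        T v = value;
        if (v == null) {              // first, unsynchronized check
          synchronized (this) {
            v = value;
            if (v == null) {          // second check under the lock, as in the fix
              v = loader.get();       // e.g. deserialize BlockletInfo from its binary form
              value = v;
            }
          }
        }
        return v;
      }
    }

The second null check under the lock is what prevents two concurrent readers from both deserializing and mutating the shared split state.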
---
 .../carbondata/core/indexstore/BlockletDetailInfo.java   |  6 +-
 .../carbondata/hadoop/api/CarbonFileInputFormat.java | 16 ++--
 .../apache/carbondata/sdk/file/CarbonReaderBuilder.java  | 14 ++
 3 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/indexstore/BlockletDetailInfo.java b/core/src/main/java/org/apache/carbondata/core/indexstore/BlockletDetailInfo.java
index a5aa899..af07f09 100644
--- a/core/src/main/java/org/apache/carbondata/core/indexstore/BlockletDetailInfo.java
+++ b/core/src/main/java/org/apache/carbondata/core/indexstore/BlockletDetailInfo.java
@@ -108,7 +108,11 @@ public class BlockletDetailInfo implements Serializable, Writable {
   public BlockletInfo getBlockletInfo() {
     if (null == blockletInfo) {
       try {
-        setBlockletInfoFromBinary();
+        synchronized (this) {
+          if (null == blockletInfo) {
+            setBlockletInfoFromBinary();
+          }
+        }
       } catch (IOException e) {
         throw new RuntimeException(e);
       }
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonFileInputFormat.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonFileInputFormat.java
index e83f898..1f34c4f 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonFileInputFormat.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonFileInputFormat.java
@@ -200,17 +200,21 @@ public class CarbonFileInputFormat extends CarbonInputFormat implements Se
           }
         });
       }
-      if (getColumnProjection(job.getConfiguration()) == null) {
-        // If the user projection is empty, use default all columns as projections.
-        // All column name will be filled inside getSplits, so can update only here.
-        String[]  projectionColumns = projectAllColumns(carbonTable);
-        setColumnProjection(job.getConfiguration(), projectionColumns);
-      }
+      setAllColumnProjectionIfNotConfigured(job, carbonTable);
       return splits;
     }
     return null;
   }
 
+  public void setAllColumnProjectionIfNotConfigured(JobContext job, CarbonTable carbonTable) {
+    if (getColumnProjection(job.getConfiguration()) == null) {
+      // If the user projection is empty, use default all columns as projections.
+      // All column name will be filled inside getSplits, so can update only here.
+      String[]  projectionColumns = projectAllColumns(carbonTable);
+      setColumnProjection(job.getConfiguration(), projectionColumns);
+    }
+  }
+
   private List getAllCarbonDataFiles(String tablePath) {
     List carbonFiles;
     try {
diff --git a/store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java b/store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
index 6ead50d..2db92ea 100644
--- a/store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
+++ b/store/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReaderBuilder.java
@@ -358,8 +358,8 @@ public class CarbonReaderBuilder {
       }
     } catch (Exception ex) {
       // Clear the datamap cache as it can get added in getSplits() method
-      DataMapStoreManager.getInstance()
-          .clearDataMaps(format.getAbsoluteTableIdentifier(hadoopConf));
+      DataMapStoreManager.getInstance().clearDataMaps(
+          format.getOrCreateCarbonTable((job.getConfiguration())).getAbsoluteTableIdentifier());
       throw ex;
     }
   }
@@ -372,6 +372,8 @@ public class CarbonReaderBuilder {
     }
     final Job job = new Job(new JobConf(hadoopConf));
     CarbonFileInputFormat format = prepareFileI

[carbondata] branch master updated: [HOTFIX]Fix select * failure when MV datamap is enabled

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new faba657  [HOTFIX]Fix select * failure when MV datamap is enabled
faba657 is described below

commit faba657becafe3b68fe73af875385c57384dbc8f
Author: akashrn5 
AuthorDate: Mon May 27 12:28:00 2019 +0530

[HOTFIX]Fix select * failure when MV datamap is enabled

Problem:
When select * is executed with limit, the ColumnPruning rule removes the project node from the plan during optimization, so the child of the Limit node is a relation, and modular plan generation fails.

Solution:
If the child of Limit is a relation, make the select node and then build the modular plan.

This closes #3235
---
 .../carbondata/mv/rewrite/MVCreateTestCase.scala   | 18 ++
 .../carbondata/mv/plans/modular/ModularPatterns.scala  | 10 ++
 .../mv/plans/util/Logical2ModularExtractions.scala |  7 +++
 3 files changed, 35 insertions(+)

diff --git a/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/MVCreateTestCase.scala b/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/MVCreateTestCase.scala
index 4f5423e..48f967f 100644
--- a/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/MVCreateTestCase.scala
+++ b/datamap/mv/core/src/test/scala/org/apache/carbondata/mv/rewrite/MVCreateTestCase.scala
@@ -953,6 +953,23 @@ class MVCreateTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table if exists all_table")
   }
 
+  test("test select * and distinct when MV is enabled") {
+    sql("drop table if exists limit_fail")
+    sql("CREATE TABLE limit_fail (empname String, designation String, doj Timestamp,workgroupcategory int, workgroupcategoryname String, deptno int, deptname String,projectcode int, projectjoindate Timestamp, projectenddate Timestamp,attendance int,utilization int,salary int)STORED BY 'org.apache.carbondata.format'")
+    sql(s"LOAD DATA local inpath '$resourcesPath/data_big.csv' INTO TABLE limit_fail  OPTIONS" +
+        "('DELIMITER'= ',', 'QUOTECHAR'= '\"')")
+    sql("create datamap limit_fail_dm1 using 'mv' as select empname,designation from limit_fail")
+    try {
+      val df = sql("select distinct(empname) from limit_fail limit 10")
+      sql("select * from limit_fail limit 10").show()
+      val analyzed = df.queryExecution.analyzed
+      assert(verifyMVDataMap(analyzed, "limit_fail_dm1"))
+    } catch {
+      case ex: Exception =>
+        assert(false)
+    }
+  }
+
   def verifyMVDataMap(logicalPlan: LogicalPlan, dataMapName: String): Boolean = {
     val tables = logicalPlan collect {
       case l: LogicalRelation => l.catalogTable.get
@@ -970,6 +987,7 @@ class MVCreateTestCase extends QueryTest with BeforeAndAfterAll {
     sql("drop table IF EXISTS fact_streaming_table1")
     sql("drop table IF EXISTS fact_streaming_table2")
     sql("drop table IF EXISTS fact_table_parquet")
+    sql("drop table if exists limit_fail")
   }
 
   override def afterAll {
diff --git a/datamap/mv/plan/src/main/scala/org/apache/carbondata/mv/plans/modular/ModularPatterns.scala b/datamap/mv/plan/src/main/scala/org/apache/carbondata/mv/plans/modular/ModularPatterns.scala
index a4116d9..30857c8 100644
--- a/datamap/mv/plan/src/main/scala/org/apache/carbondata/mv/plans/modular/ModularPatterns.scala
+++ b/datamap/mv/plan/src/main/scala/org/apache/carbondata/mv/plans/modular/ModularPatterns.scala
@@ -19,6 +19,7 @@ package org.apache.carbondata.mv.plans.modular
 
 import org.apache.spark.sql.catalyst.expressions.{Expression, NamedExpression, PredicateHelper, _}
 import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.execution.datasources.LogicalRelation
 
 import org.apache.carbondata.mv.plans.{Pattern, _}
 import org.apache.carbondata.mv.plans.modular.Flags._
@@ -118,6 +119,15 @@ abstract class ModularPatterns extends Modularizer[ModularPlan] {
           makeSelectModule(output, input, predicate, aliasmap, joinedge, flags,
             children.map(modularizeLater), Seq(Seq(limitExpr)) ++ fspec1, wspec)
 
+        // if select * is with limit, then projection is removed from plan, so send the parent plan
+        // to ExtractSelectModule to make the select node
+        case limit@Limit(limitExpr, lr: LogicalRelation) =>
+          val (output, input, predicate, aliasmap, joinedge, children, flags1,
+            fspec1, wspec) = ExtractSelectModule.unapply(limit).get
+          val flags = flags1.setFlag(LIMIT)
+          makeSelectModule(output, input, predicate, aliasmap, joinedge, flags,
+            children.map(modularizeLater), Seq(Seq(limitExpr)) ++ fspec1, wspec)
+
         case Limit(
           limitExpr,
           ExtractSelectModule(output, input, predicate, aliasmap, joinedge, children, flags1,
di

[carbondata] branch master updated: [CARBONDATA-3387] Support Partition with MV datamap & Show DataMap Status

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new 51235d4  [CARBONDATA-3387] Support Partition with MV datamap & Show DataMap Status
51235d4 is described below

commit 51235d4cf239ea0d167623fed5ae339796d56eae
Author: Indhumathi27 
AuthorDate: Mon May 13 11:08:31 2019 +0530

[CARBONDATA-3387] Support Partition with MV datamap & Show DataMap Status

This PR includes:

Support for Partition with MV Datamap [Datamap with a single parent table].

Show DataMap status and parent-table-to-datamap-table segment sync information with the SHOW DATAMAP DDL.

Optimization for incremental data load.
In the below scenario we can avoid reloading the MV:
Maintable segments: 0,1,2
MV: 0 => 0,1,2
After maintable compaction, the 0.1 segment of the maintable would be reloaded to the MV; this is avoided by changing the mapping {0,1,2} => {0.1}.

This closes #3216
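
A hedged sketch of that incremental-load optimization: rewrite the MV-to-maintable segment mapping after parent compaction instead of reloading the MV. The map structure is illustrative, not the DataMapSegmentStatusUtil API:

    import java.util.*;

    public final class MvSegmentRemap {
      public static void main(String[] args) {
        // MV segment "0" was built from main-table segments 0, 1 and 2.
        Map<String, List<String>> mvToMainSegments = new HashMap<>();
        mvToMainSegments.put("0", new ArrayList<>(List.of("0", "1", "2")));

        // The parent table compacted 0,1,2 into 0.1:
        Set<String> compacted = Set.of("0", "1", "2");
        String mergedSegment = "0.1";

        for (List<String> mapped : mvToMainSegments.values()) {
          if (mapped.containsAll(compacted)) {   // the MV already covers all inputs
            mapped.removeAll(compacted);
            mapped.add(mergedSegment);           // {0,1,2} => {0.1}, no MV reload needed
          }
        }
        System.out.println(mvToMainSegments);    // {0=[0.1]}
      }
    }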
---
 .../core/constants/CarbonCommonConstants.java  |   2 +
 .../carbondata/core/datamap/DataMapProvider.java   |  64 +-
 .../core/metadata/schema/table/DataMapSchema.java  |  13 +
 datamap/mv/core/pom.xml|   2 +-
 .../carbondata/mv/datamap/MVDataMapProvider.scala  |  12 +-
 .../apache/carbondata/mv/datamap/MVHelper.scala|  75 ++-
 .../org/apache/carbondata/mv/datamap/MVUtil.scala  |   3 +-
 .../mv/rewrite/MVIncrementalLoadingTestcase.scala  |  23 +
 .../mv/rewrite/TestAllOperationsOnMV.scala | 138 -
 .../mv/rewrite/TestPartitionWithMV.scala   | 688 +
 datamap/mv/plan/pom.xml|   2 +-
 .../mv/plans/util/BirdcageOptimizer.scala  |   4 +-
 .../testsuite/datamap/TestDataMapCommand.scala |  10 +-
 ...StandardPartitionWithPreaggregateTestCase.scala |  10 +
 .../scala/org/apache/spark/sql/CarbonEnv.scala |   5 +-
 .../datamap/CarbonCreateDataMapCommand.scala   |  36 +-
 .../command/datamap/CarbonDataMapShowCommand.scala |  54 +-
 .../command/management/CarbonLoadDataCommand.scala |  10 +-
 .../execution/command/mv/DataMapListeners.scala| 113 +++-
 .../CarbonAlterTableDropHivePartitionCommand.scala |   4 -
 .../preaaggregate/PreAggregateListeners.scala  |   2 +-
 .../command/table/CarbonDropTableCommand.scala |  14 +-
 .../spark/sql/execution/strategy/DDLStrategy.scala |   4 +
 .../processing/util/CarbonLoaderUtil.java  |  43 ++
 24 files changed, 1280 insertions(+), 51 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 9375414..e78ea17 100644
--- a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -2174,4 +2174,6 @@ public final class CarbonCommonConstants {
    */
   public static final String PARENT_TABLES = "parent_tables";
 
+  public static final String LOAD_SYNC_TIME = "load_sync_time";
+
 }
diff --git a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapProvider.java b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapProvider.java
index fe2e7dd..c4ee49b 100644
--- a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapProvider.java
+++ b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapProvider.java
@@ -264,23 +264,52 @@ public abstract class DataMapProvider {
     } else {
       for (RelationIdentifier relationIdentifier : relationIdentifiers) {
         List dataMapTableSegmentList = new ArrayList<>();
+        // Get all segments for parent relationIdentifier
+        List mainTableSegmentList =
+            DataMapUtil.getMainTableValidSegmentList(relationIdentifier);
+        boolean ifTableStatusUpdateRequired = false;
         for (LoadMetadataDetails loadMetaDetail : listOfLoadFolderDetails) {
           if (loadMetaDetail.getSegmentStatus() == SegmentStatus.SUCCESS
               || loadMetaDetail.getSegmentStatus() == SegmentStatus.INSERT_IN_PROGRESS) {
             Map> segmentMaps =
                 DataMapSegmentStatusUtil.getSegmentMap(loadMetaDetail.getExtraInfo());
-            dataMapTableSegmentList.addAll(segmentMaps.get(
-                relationIdentifier.getDatabaseName() + CarbonCommonConstants.POINT
-                    + relationIdentifier.getTableName()));
+            String mainTableMetaDataPath =
+                CarbonTablePath.getMetadataPath(relationIdentifier.getTablePath());
+            LoadMetadataDetails[] parentTableLoadMetaDataDetails =
+                SegmentStatusManager.readLoadMetadata(mainTableMetaDataPath);
+            String table = relationIdentifier.getDatabaseName() + CarbonCommonConstants.POINT
+ 

Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #1713

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1713

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Processing #1713

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1713

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3548

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3548

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : carbondata-master-spark-2.1 » Apache CarbonData :: Processing #3548

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 #3548

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3548

2019-05-28 Thread Apache Jenkins Server
See 




[carbondata] branch master updated: [CARBONDATA-3392] Make LRU mandatory for index server

2019-05-28 Thread ravipesala
This is an automated email from the ASF dual-hosted git repository.

ravipesala pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
 new df7339c  [CARBONDATA-3392] Make LRU mandatory for index server
df7339c is described below

commit df7339ce005be48dfb440e4cd02f640d6555e887
Author: kunal642 
AuthorDate: Wed May 15 16:40:28 2019 +0530

[CARBONDATA-3392] Make LRU mandatory for index server

Background:
Currently LRU is optional for the user to configure, but this raises concerns in the case of the index server, because invalid segments have to be constantly removed from the cache in update/delete/compaction scenarios.

Therefore, if the clear-segment job fails, the main job should not fail, but there has to be a mechanism to prevent that segment from staying in the cache forever.

To prevent the above-mentioned scenario, the LRU cache size for the executor is a mandatory property for the index server application.

This closes #3222
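
A hedged sketch of the fail-fast startup check this change implies; the property name carbon.max.executor.lru.cache.size is from the CarbonData configuration docs, and the System-property lookup is only for illustration (the real server reads carbon.properties):

    public final class RequireLruConfigured {
      public static void main(String[] args) {
        String size = System.getProperty("carbon.max.executor.lru.cache.size");
        if (size == null || Long.parseLong(size) <= 0) {
          // Refuse to start: without a bounded LRU, invalid segments could stay cached forever.
          throw new IllegalStateException(
              "carbon.max.executor.lru.cache.size must be configured to run the index server");
        }
        System.out.println("Executor LRU cache size: " + size + " MB");
      }
    }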
---
 .../carbondata/core/datamap/DataMapUtil.java   | 10 +-
 .../carbondata/core/util/BlockletDataMapUtil.java  |  2 +-
 .../hadoop/api/CarbonTableInputFormat.java | 39 +-
 .../carbondata/indexserver/DataMapJobs.scala   | 18 --
 .../indexserver/DistributedPruneRDD.scala  | 12 +--
 .../carbondata/indexserver/IndexServer.scala   | 19 +--
 .../spark/rdd/CarbonDataRDDFactory.scala   | 10 --
 .../sql/execution/command/cache/CacheUtil.scala| 15 +++--
 .../command/cache/CarbonShowCacheCommand.scala | 23 -
 9 files changed, 86 insertions(+), 62 deletions(-)

diff --git a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
index e20f19a..2371a10 100644
--- a/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/datamap/DataMapUtil.java
@@ -115,7 +115,15 @@ public class DataMapUtil {
     DistributableDataMapFormat dataMapFormat = new DistributableDataMapFormat(carbonTable,
         validAndInvalidSegmentsInfo.getValidSegments(), invalidSegment, true,
         dataMapToClear);
-    dataMapJob.execute(dataMapFormat);
+    try {
+      dataMapJob.execute(dataMapFormat);
+    } catch (Exception e) {
+      if (dataMapJob.getClass().getName().equalsIgnoreCase(DISTRIBUTED_JOB_NAME)) {
+        LOGGER.warn("Failed to clear distributed cache.", e);
+      } else {
+        throw e;
+      }
+    }
   }
 
   public static void executeClearDataMapJob(CarbonTable carbonTable, String jobClassName)
diff --git a/core/src/main/java/org/apache/carbondata/core/util/BlockletDataMapUtil.java b/core/src/main/java/org/apache/carbondata/core/util/BlockletDataMapUtil.java
index c90c3dc..68aad72 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/BlockletDataMapUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/BlockletDataMapUtil.java
@@ -228,7 +228,7 @@ public class BlockletDataMapUtil {
     List tableBlockIndexUniqueIdentifiers = new ArrayList<>();
     String mergeFilePath =
         identifier.getIndexFilePath() + CarbonCommonConstants.FILE_SEPARATOR + identifier
-            .getMergeIndexFileName();
+            .getIndexFileName();
     segmentIndexFileStore.readMergeFile(mergeFilePath);
     List indexFiles =
         segmentIndexFileStore.getCarbonMergeFileToIndexFilesMap().get(mergeFilePath);
diff --git a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
index dd86dcb..274c7ef 100644
--- a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
+++ b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java
@@ -557,22 +557,31 @@ public class CarbonTableInputFormat extends CarbonInputFormat {
     }
     if (isIUDTable || isUpdateFlow) {
       Map blockletToRowCountMap = new HashMap<>();
-      if (CarbonProperties.getInstance().isDistributedPruningEnabled(table.getDatabaseName(),
-          table.getTableName())) {
-        List extendedBlocklets = CarbonTableInputFormat.convertToCarbonInputSplit(
-            getDistributedSplit(table, null, partitions, filteredSegment,
-                allSegments.getInvalidSegments(), toBeCleanedSegments));
-        for (InputSplit extendedBlocklet : extendedBlocklets) {
-          CarbonInputSplit blocklet = (CarbonInputSplit) extendedBlocklet;
-          String filePath = blocklet.getFilePath();
-          String blockName = filePath.substring(filePath.lastIndexOf("/") + 1);
-          blockletToRowCountMap.put(blocklet.getSegmentId() + "," + blockName,
-              (long) blocklet.getDetailInfo().getRowCount());
+      if (Carbon

Jenkins build is still unstable: carbondata-master-spark-2.1 #3547

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3547

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3547

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build became unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Processing #3547

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3547

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 #1712

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1712

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build became unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Processing #1712

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1712

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build became unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 #1712

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #3546

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Store SDK #3546

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #3546

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.1 #3546

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1710

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 #1710

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1710

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #1711

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 #1711

2019-05-28 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Store SDK #1711

2019-05-28 Thread Apache Jenkins Server
See