[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2940
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1616/



---


[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...

2018-12-02 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2940#discussion_r238166894
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 ---
@@ -463,6 +464,39 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 executorService.shutdown()
   }
 
+  test("support set carbon.query.directQueryOnDataMap.enabled=true") {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val testData = 
s"$rootPath/integration/spark-common-test/src/test/resources/sample.csv"
+sql("drop table if exists mainTable")
+sql(
+  s"""
+ | CREATE TABLE mainTable
+ |   (id Int,
+ |   name String,
+ |   city String,
+ |   age Int)
+ | STORED BY 'org.apache.carbondata.format'
+  """.stripMargin);
+
+
+sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$testData'
+ | into table mainTable
+   """.stripMargin);
+
+sql(
+  s"""
+ | create datamap preagg_sum on table mainTable
+ | using 'preaggregate'
+ | as select id,sum(age) from mainTable group by id
+   """.stripMargin);
+
+sql("set carbon.query.directQueryOnDataMap.enabled=true");
--- End diff --

OK, optimized it. Why don't we use the default value for the test?
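
For reference, a minimal sketch of one way to keep the setting from leaking into
later tests: capture the current value and restore it afterwards. This goes through
CarbonProperties rather than a SQL SET, and the "false" fallback default is an
assumption:

    import org.apache.carbondata.core.util.CarbonProperties

    val key = "carbon.query.directQueryOnDataMap.enabled"
    // assumption: "false" is the shipped default for this switch
    val saved = CarbonProperties.getInstance().getProperty(key, "false")
    try {
      CarbonProperties.getInstance().addProperty(key, "true")
      // ... assertions that need direct query on the datamap table ...
    } finally {
      CarbonProperties.getInstance().addProperty(key, saved) // restore for later tests
    }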


---


[GitHub] carbondata issue #2970: [CARBONDATA-3142]Add timestamp with thread name whic...

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2970
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1615/



---


[GitHub] carbondata pull request #2970: [CARBONDATA-3142]Add timestamp with thread na...

2018-12-02 Thread qiuchenjian
GitHub user qiuchenjian opened a pull request:

https://github.com/apache/carbondata/pull/2970

[CARBONDATA-3142] Add timestamp to thread names created by
CarbonThreadFactory

[CARBONDATA-3142] Add timestamp to thread names created by
CarbonThreadFactory

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [NA] Any interfaces changed?
 
 - [√] Any backward compatibility impacted?
 
 - [√] Document update required?

 - [√] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/qiuchenjian/carbondata BugFix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2970.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2970


commit 33d10ecdafe0cbe4bc7de2f4a9346173f11eacb6
Author: qiuchenjian <807169000@...>
Date:   2018-12-03T07:19:37Z

[CARBONDATA-3142] Add timestamp to thread names created by
CarbonThreadFactory




---


[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...

2018-12-02 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2940#discussion_r238161863
  
--- Diff: integration/spark2/pom.xml ---
@@ -105,6 +105,11 @@
 
   
 
+
+  org.apache.httpcomponents
--- End diff --

@xubo245 This change is already present in #2925; we can discuss it on that 
PR. Please remove these changes from this PR. 

@jackylk Let us discuss whether this is actually required. Please refer to 
PR #2925 and give your inputs.


---


[GitHub] carbondata pull request #2821: [CARBONDATA-3017] Map DDL Support

2018-12-02 Thread Indhumathi27
Github user Indhumathi27 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2821#discussion_r238161031
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateDDLForComplexMapType.scala
 ---
@@ -0,0 +1,330 @@
+/*
+
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+*
+http://www.apache.org/licenses/LICENSE-2.0
+*
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+package 
org.apache.carbondata.spark.testsuite.createTable.TestCreateDDLForComplexMapType
+
+import java.io.File
+
+import org.apache.hadoop.conf.Configuration
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+class TestCreateDDLForComplexMapType extends QueryTest with 
BeforeAndAfterAll {
+  private val conf: Configuration = new Configuration(false)
+
+  val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+
+  val path = 
s"$rootPath/examples/spark2/src/main/resources/mapDDLTestData.csv"
+
+  override def beforeAll(): Unit = {
+sql("DROP TABLE IF EXISTS carbon")
+  }
+
+  test("Single Map One Level") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField map
+ | )
+ | STORED BY 'carbondata'
+ | """
+.stripMargin)
+val desc = sql(
+  s"""
+ | Describe Formatted
+ | carbon
+ | """.stripMargin).collect()
+
assert(desc(0).get(1).asInstanceOf[String].trim.equals("map"))
+  }
+
+  test("Single Map with Two Nested Level") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField map>
+ | )
+ | STORED BY
+ |'carbondata'
+ |"""
+.stripMargin)
+val desc = sql(
+  s"""
+ | Describe Formatted
+ | carbon
+ | """.stripMargin).collect()
+
assert(desc(0).get(1).asInstanceOf[String].trim.equals("map>"))
+  }
+
+  test("Map Type with array type as value") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField map>
+ | )
+ | STORED BY 'carbondata'
+ |
+ """
+.stripMargin)
+val desc = sql(
+  s"""
+ | Describe Formatted
+ | carbon
+ | """.stripMargin).collect()
+
assert(desc(0).get(1).asInstanceOf[String].trim.equals("map>"))
+  }
+
+  test("Map Type with struct type as value") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField map>
+ | )
+ | STORED BY
+ | 'carbondata'
+ | """
+.stripMargin)
+val desc = sql(
+  s"""
+ | Describe Formatted
+ | carbon
+ | """.stripMargin).collect()
+assert(desc(0).get(1).asInstanceOf[String].trim
+  .equals("map>"))
+  }
+
+  test("Map Type as child to struct type") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField struct>
+ | )
+ | STORED BY
+ |'carbondata' """
+.stripMargin)
+val desc = sql(
+  s"""
+ | Describe Formatted
+ | carbon
+ | """.stripMargin).collect()
+assert(desc(0).get(1).asInstanceOf[String].trim
+  .equals("struct>"))
+  }
+
+  test("Map Type as child to array type") {
+sql("DROP TABLE IF EXISTS carbon")
+sql(
+  s"""
+ | CREATE TABLE carbon(
+ | mapField array>
+ | )
+ | STORED BY 'carbondata'
+ | """
+ 

[GitHub] carbondata pull request #2821: [CARBONDATA-3017] Map DDL Support

2018-12-02 Thread Indhumathi27
Github user Indhumathi27 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2821#discussion_r238160123
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/createTable/TestCreateDDLForComplexMapType.scala
 ---

[GitHub] carbondata issue #2878: [CARBONDATA-3107] Optimize error/exception coding fo...

2018-12-02 Thread kevinjmh
Github user kevinjmh commented on the issue:

https://github.com/apache/carbondata/pull/2878
  
conflicts fixed.


---


[jira] [Created] (CARBONDATA-3142) The names of threads created by CarbonThreadFactory are all the same

2018-12-02 Thread Chenjian Qiu (JIRA)
Chenjian Qiu created CARBONDATA-3142:


 Summary: The names of threads created by CarbonThreadFactory are
all the same
 Key: CARBONDATA-3142
 URL: https://issues.apache.org/jira/browse/CARBONDATA-3142
 Project: CarbonData
  Issue Type: Improvement
  Components: data-load
Affects Versions: 1.5.0
Reporter: Chenjian Qiu


The names of threads created by CarbonThreadFactory are all the same, such as 
"ProducerPool_"; this makes the threads confusing to tell apart.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
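
A minimal sketch (hypothetical, not the fix in PR #2970) of the idea: bake a
creation timestamp and a per-pool counter into each thread name, so that a name
like "ProducerPool_" becomes, for example, "ProducerPool_1543820377000_0":

    import java.util.concurrent.ThreadFactory
    import java.util.concurrent.atomic.AtomicInteger

    class TimestampedThreadFactory(poolName: String) extends ThreadFactory {
      private val createdAt = System.currentTimeMillis() // distinguishes pools created at different times
      private val counter = new AtomicInteger(0)         // distinguishes threads within one pool

      override def newThread(r: Runnable): Thread =
        new Thread(r, s"${poolName}_${createdAt}_${counter.getAndIncrement()}")
    }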


[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...

2018-12-02 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2940#discussion_r238156400
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 ---
@@ -463,6 +464,39 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 executorService.shutdown()
   }
 
+  test("support set carbon.query.directQueryOnDataMap.enabled=true") {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val testData = 
s"$rootPath/integration/spark-common-test/src/test/resources/sample.csv"
+sql("drop table if exists mainTable")
+sql(
+  s"""
+ | CREATE TABLE mainTable
+ |   (id Int,
+ |   name String,
+ |   city String,
+ |   age Int)
+ | STORED BY 'org.apache.carbondata.format'
+  """.stripMargin);
+
+
+sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$testData'
+ | into table mainTable
+   """.stripMargin);
--- End diff --

ok, removed


---


[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-02 Thread Indhumathi27
Github user Indhumathi27 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r238156188
  
--- Diff: 
integration/spark2/src/test/scala/org/apache/carbondata/datamap/minmax/MinMaxDataMapFunctionSuite.scala
 ---
@@ -0,0 +1,415 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.datamap.minmax
+
+import org.apache.spark.sql.test.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+class MinMaxDataMapFunctionSuite extends QueryTest with BeforeAndAfterAll {
+  private val minmaxDataMapFactoryName = 
"org.apache.carbondata.datamap.minmax.MinMaxDataMapFactory"
+  var originalStatEnabled = CarbonProperties.getInstance().getProperty(
+CarbonCommonConstants.ENABLE_QUERY_STATISTICS,
+CarbonCommonConstants.ENABLE_QUERY_STATISTICS_DEFAULT)
+
+  override protected def beforeAll(): Unit = {
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.ENABLE_QUERY_STATISTICS, "true")
+CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT,
+  "yyyy-MM-dd")
+CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
+  "yyyy-MM-dd HH:mm:ss")
--- End diff --

Please check whether we can use the default timestamp/date format here.

org.apache.carbondata.core.constants.CarbonCommonConstants#CARBON_TIMESTAMP_DEFAULT_FORMAT
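
If the defaults are sufficient, the beforeAll above could reference the constants
instead of repeating the format strings. A sketch: CARBON_TIMESTAMP_DEFAULT_FORMAT
is named in this comment, while CARBON_DATE_DEFAULT_FORMAT is assumed to be the
matching date-format constant:

    import org.apache.carbondata.core.constants.CarbonCommonConstants
    import org.apache.carbondata.core.util.CarbonProperties

    CarbonProperties.getInstance()
      .addProperty(CarbonCommonConstants.CARBON_DATE_FORMAT,
        CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT)
    CarbonProperties.getInstance()
      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT,
        CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)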


---


[GitHub] carbondata issue #2931: [CARBONDATA-2999] support read schema from S3

2018-12-02 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2931
  
@KanakaKumar @jackylk @QiangCai @ajantha-bhat @kunal642 Rebased; please 
review it.


---


[GitHub] carbondata pull request #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exce...

2018-12-02 Thread xubo245
Github user xubo245 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2969#discussion_r238154723
  
--- Diff: 
integration/hive/src/test/java/org/apache/carbondata/hive/TestCarbonSerDe.java 
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hive;
+
+import junit.framework.TestCase;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.serde2.SerDeException;
+import org.apache.hadoop.hive.serde2.SerDeUtils;
+import org.apache.hadoop.hive.serde2.io.DoubleWritable;
+import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.io.ShortWritable;
+import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
+import org.apache.hadoop.io.*;
+import org.junit.Test;
+
+import java.util.Properties;
+
+public class TestCarbonSerDe extends TestCase {
+@Test
+public void testCarbonHiveSerDe() throws Throwable {
+try {
+// Create the SerDe
+System.out.println("test: testCarbonHiveSerDe");
+
+final CarbonHiveSerDe serDe = new CarbonHiveSerDe();
+final Configuration conf = new Configuration();
+final Properties tbl = createProperties();
+SerDeUtils.initializeSerDe(serDe, conf, tbl, null);
+
+// Data
+final Writable[] arr = new Writable[7];
+
+//primitive types
+arr[0] = new ShortWritable((short) 456);
+arr[1] = new IntWritable(789);
+arr[2] = new LongWritable(1000l);
+arr[3] = new DoubleWritable(5.3);
+arr[4] = new HiveDecimalWritable(HiveDecimal.create(1));
+arr[5] = new Text("carbonSerde binary".getBytes("UTF-8"));
--- End diff --

Please take care with the spelling and formatting, including case sensitivity, 
for example: CarbonSerDe.


---


[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-02 Thread dhatchayani
Github user dhatchayani commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r238154502
  
--- Diff: 
datamap/example/src/main/java/org/apache/carbondata/datamap/minmax/MinMaxDataMapFactory.java
 ---
@@ -0,0 +1,365 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.datamap.minmax;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+
+import org.apache.carbondata.common.annotations.InterfaceAudience;
+import 
org.apache.carbondata.common.exceptions.sql.MalformedDataMapCommandException;
+import org.apache.carbondata.common.logging.LogServiceFactory;
+import org.apache.carbondata.core.cache.Cache;
+import org.apache.carbondata.core.cache.CacheProvider;
+import org.apache.carbondata.core.cache.CacheType;
+import org.apache.carbondata.core.datamap.DataMapDistributable;
+import org.apache.carbondata.core.datamap.DataMapLevel;
+import org.apache.carbondata.core.datamap.DataMapMeta;
+import org.apache.carbondata.core.datamap.DataMapStoreManager;
+import org.apache.carbondata.core.datamap.Segment;
+import org.apache.carbondata.core.datamap.TableDataMap;
+import org.apache.carbondata.core.datamap.dev.DataMapBuilder;
+import org.apache.carbondata.core.datamap.dev.DataMapWriter;
+import org.apache.carbondata.core.datamap.dev.cgdatamap.CoarseGrainDataMap;
+import 
org.apache.carbondata.core.datamap.dev.cgdatamap.CoarseGrainDataMapFactory;
+import org.apache.carbondata.core.datastore.block.SegmentProperties;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFile;
+import org.apache.carbondata.core.datastore.filesystem.CarbonFileFilter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.features.TableOperation;
+import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
+import org.apache.carbondata.core.metadata.schema.table.DataMapSchema;
+import 
org.apache.carbondata.core.metadata.schema.table.column.CarbonColumn;
+import org.apache.carbondata.core.scan.filter.intf.ExpressionType;
+import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.path.CarbonTablePath;
+import org.apache.carbondata.events.Event;
+
+import org.apache.log4j.Logger;
+
+/**
+ * Min Max DataMap Factory
+ */
+@InterfaceAudience.Internal
+public class MinMaxDataMapFactory extends CoarseGrainDataMapFactory {
+  private static final Logger LOGGER =
+  
LogServiceFactory.getLogService(MinMaxDataMapFactory.class.getName());
+  private DataMapMeta dataMapMeta;
+  private String dataMapName;
+  // segmentId -> list of index files
+  private Map> segmentMap = new ConcurrentHashMap<>();
+  private Cache cache;
+
+  public MinMaxDataMapFactory(CarbonTable carbonTable, DataMapSchema 
dataMapSchema)
+  throws MalformedDataMapCommandException {
+super(carbonTable, dataMapSchema);
+
+// this is an example for datamap, we can choose the columns and 
operations that
+// will be supported by this datamap. Furthermore, we can add 
cache-support for this datamap.
+
+this.dataMapName = dataMapSchema.getDataMapName();
+List indexedColumns = 
carbonTable.getIndexedColumns(dataMapSchema);
+
+// operations that will be supported on the indexed columns
+List optOperations = new ArrayList<>();
+optOperations.add(ExpressionType.NOT);
+optOperations.add(ExpressionType.EQUALS);
+optOperations.add(ExpressionType.NOT_EQUALS);
+optOperations.add(ExpressionType.GREATERTHAN);
+optOperations.add(ExpressionType.GREATERTHAN_EQUALTO);
+optOperations.add(ExpressionType.LESSTHAN);

[GitHub] carbondata issue #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exception

2018-12-02 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2969
  
LGTM


---


[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-02 Thread dhatchayani
Github user dhatchayani commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r238153927
  
--- Diff: 
datamap/example/src/main/java/org/apache/carbondata/datamap/minmax/MinMaxDataMapFactory.java
 ---

[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-02 Thread dhatchayani
Github user dhatchayani commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r238153188
  
--- Diff: 
datamap/example/src/main/java/org/apache/carbondata/datamap/minmax/AbstractMinMaxDataMapWriter.java
 ---
@@ -0,0 +1,248 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.datamap.minmax;
+
+import java.io.DataOutputStream;
+import java.io.IOException;
+import java.math.BigDecimal;
+import java.util.List;
+
+import org.apache.carbondata.common.logging.LogServiceFactory;
+import org.apache.carbondata.core.constants.CarbonCommonConstants;
+import org.apache.carbondata.core.datamap.Segment;
+import org.apache.carbondata.core.datamap.dev.DataMapWriter;
+import org.apache.carbondata.core.datastore.impl.FileFactory;
+import org.apache.carbondata.core.datastore.page.ColumnPage;
+import 
org.apache.carbondata.core.datastore.page.encoding.bool.BooleanConvert;
+import 
org.apache.carbondata.core.datastore.page.statistics.ColumnPageStatsCollector;
+import 
org.apache.carbondata.core.datastore.page.statistics.KeyPageStatsCollector;
+import 
org.apache.carbondata.core.datastore.page.statistics.PrimitivePageStatsCollector;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+import org.apache.carbondata.core.metadata.datatype.DataTypes;
+import org.apache.carbondata.core.metadata.encoder.Encoding;
+import 
org.apache.carbondata.core.metadata.schema.table.column.CarbonColumn;
+import org.apache.carbondata.core.util.CarbonUtil;
+import org.apache.carbondata.core.util.DataTypeUtil;
+
+import org.apache.log4j.Logger;
+
+/**
+ * We will record the min & max value for each index column in each 
blocklet.
+ * Since the size of index is quite small, we will combine the index for 
all index columns
+ * in one file.
+ */
+public abstract class AbstractMinMaxDataMapWriter extends DataMapWriter {
+  private static final Logger LOGGER = LogServiceFactory.getLogService(
+  AbstractMinMaxDataMapWriter.class.getName());
+
+  private ColumnPageStatsCollector[] indexColumnMinMaxCollectors;
+  protected int currentBlockletId;
+  private String currentIndexFile;
+  private DataOutputStream currentIndexFileOutStream;
+
+  public AbstractMinMaxDataMapWriter(String tablePath, String dataMapName,
+  List indexColumns, Segment segment, String shardName) 
throws IOException {
+super(tablePath, dataMapName, indexColumns, segment, shardName);
+initStatsCollector();
+initDataMapFile();
+  }
+
+  private void initStatsCollector() {
+indexColumnMinMaxCollectors = new 
ColumnPageStatsCollector[indexColumns.size()];
+CarbonColumn indexCol;
+for (int i = 0; i < indexColumns.size(); i++) {
+  indexCol = indexColumns.get(i);
+  if (indexCol.isMeasure()
+  || (indexCol.isDimension()
+  && DataTypeUtil.isPrimitiveColumn(indexCol.getDataType())
+  && !indexCol.hasEncoding(Encoding.DICTIONARY)
+  && !indexCol.hasEncoding(Encoding.DIRECT_DICTIONARY))) {
+indexColumnMinMaxCollectors[i] = 
PrimitivePageStatsCollector.newInstance(
+indexColumns.get(i).getDataType());
+  } else {
+indexColumnMinMaxCollectors[i] = 
KeyPageStatsCollector.newInstance(DataTypes.BYTE_ARRAY);
+  }
+}
+  }
+
+  private void initDataMapFile() throws IOException {
+if (!FileFactory.isFileExist(dataMapPath) &&
+!FileFactory.mkdirs(dataMapPath, 
FileFactory.getFileType(dataMapPath))) {
+  throw new IOException("Failed to create directory " + dataMapPath);
+}
+
+try {
+  currentIndexFile = MinMaxIndexDataMap.getIndexFile(dataMapPath,
+  MinMaxIndexHolder.MINMAX_INDEX_PREFFIX + indexColumns.size());
+  FileFactory.createNewFile(currentIndexFile, 
FileFactory.getFileType(currentIndexFile));
+  currentIndexFileOutStream = 
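
The class comment in the quoted diff describes the design: a running min and max
per index column per blocklet, with the entries for all index columns combined
into one index file. A toy sketch of that bookkeeping (hypothetical, long values
only, far simpler than the real ColumnPageStatsCollector machinery):

    // Tracks a running min/max per index column while a blocklet's rows stream in.
    class MinMaxCollector(numColumns: Int) {
      private val mins = Array.fill(numColumns)(Long.MaxValue)
      private val maxs = Array.fill(numColumns)(Long.MinValue)

      def onRow(values: Array[Long]): Unit =
        for (i <- values.indices) {
          mins(i) = math.min(mins(i), values(i))
          maxs(i) = math.max(maxs(i), values(i))
        }

      // At blocklet end, this pair is what gets serialized into the shared index file.
      def result: (Array[Long], Array[Long]) = (mins, maxs)
    }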

[GitHub] carbondata pull request #2963: [CARBONDATA-3139] Fix bugs in MinMaxDataMap e...

2018-12-02 Thread Indhumathi27
Github user Indhumathi27 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2963#discussion_r238152299
  
--- Diff: 
datamap/example/src/main/java/org/apache/carbondata/datamap/minmax/AbstractMinMaxDataMapWriter.java
 ---
[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...

2018-12-02 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2940#discussion_r238152763
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 ---
@@ -463,6 +464,39 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 executorService.shutdown()
   }
 
+  test("support set carbon.query.directQueryOnDataMap.enabled=true") {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val testData = 
s"$rootPath/integration/spark-common-test/src/test/resources/sample.csv"
+sql("drop table if exists mainTable")
+sql(
+  s"""
+ | CREATE TABLE mainTable
+ |   (id Int,
+ |   name String,
+ |   city String,
+ |   age Int)
+ | STORED BY 'org.apache.carbondata.format'
+  """.stripMargin);
+
+
+sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$testData'
+ | into table mainTable
+   """.stripMargin);
--- End diff --

For Scala code, the semicolon `;` is not required.
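
For example, both of these compile, but the second semicolon is redundant:

    sql("drop table if exists mainTable")   // idiomatic Scala
    sql("drop table if exists mainTable");  // legal, yet the ';' adds nothing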


---


[GitHub] carbondata issue #2931: [CARBONDATA-2999] support read schema from S3

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2931
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1825/



---


[GitHub] carbondata issue #2931: [CARBONDATA-2999] support read schema from S3

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2931
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9874/



---


[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...

2018-12-02 Thread kunal642
Github user kunal642 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2940#discussion_r238142863
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 ---
@@ -463,6 +464,39 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 executorService.shutdown()
   }
 
+  test("support set carbon.query.directQueryOnDataMap.enabled=true") {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val testData = 
s"$rootPath/integration/spark-common-test/src/test/resources/sample.csv"
+sql("drop table if exists mainTable")
+sql(
+  s"""
+ | CREATE TABLE mainTable
+ |   (id Int,
+ |   name String,
+ |   city String,
+ |   age Int)
+ | STORED BY 'org.apache.carbondata.format'
+  """.stripMargin);
+
+
+sql(
+  s"""
+ | LOAD DATA LOCAL INPATH '$testData'
+ | into table mainTable
+   """.stripMargin);
+
+sql(
+  s"""
+ | create datamap preagg_sum on table mainTable
+ | using 'preaggregate'
+ | as select id,sum(age) from mainTable group by id
+   """.stripMargin);
+
+sql("set carbon.query.directQueryOnDataMap.enabled=true");
--- End diff --

Setting this property here would be of no use, as this property is never 
validated in tests. 

Check 
https://github.com/apache/carbondata/blob/master/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala#L45
 for reference
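
If the intent is to actually exercise the switch, the test could assert that a
direct query on the datamap table succeeds once it is enabled. A sketch, assuming
the usual parentTable_datamapName naming for preaggregate child tables:

    sql("set carbon.query.directQueryOnDataMap.enabled=true")
    // a direct select on the child table should now be allowed
    assert(sql("select * from mainTable_preagg_sum").collect().nonEmpty)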



---


[GitHub] carbondata issue #2931: [CARBONDATA-2999] support read schema from S3

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2931
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1614/



---


[GitHub] carbondata issue #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exception

2018-12-02 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/carbondata/pull/2969
  
LGTM


---


[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...

2018-12-02 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2940
  
@jackylk @ravipesala @KanakaKumar @kunal642 This is a bug; please review 
it.


---


[GitHub] carbondata issue #2946: [CARBONDATA-3094] Support concurrent read carbondata...

2018-12-02 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/2946
  
@KanakaKumar @ravipesala @jackylk @kunal642 @ajantha-bhat CI passed; please 
review it.


---


[GitHub] carbondata issue #2878: [CARBONDATA-3107] Optimize error/exception coding fo...

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2878
  
Build Success with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9873/



---


[GitHub] carbondata issue #2878: [CARBONDATA-3107] Optimize error/exception coding fo...

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2878
  
Build Success with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1824/



---


[GitHub] carbondata issue #2878: [CARBONDATA-3107] Optimize error/exception coding fo...

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2878
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1613/



---


[GitHub] carbondata issue #2878: [CARBONDATA-3107] Optimize error/exception coding fo...

2018-12-02 Thread xuchuanyin
Github user xuchuanyin commented on the issue:

https://github.com/apache/carbondata/pull/2878
  
LGTM.
Please fix the conflicts.


---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Failed with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9872/



---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1823/



---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1612/



---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Failed with Spark 2.2.1, Please check CI 
http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1822/



---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Failed with Spark 2.3.1, Please check CI 
http://136.243.101.176:8080/job/carbondataprbuilder2.3/9871/



---


[GitHub] carbondata issue #2966: [WIP] test and check no sort by default

2018-12-02 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2966
  
Build Failed with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1611/



---


[GitHub] carbondata pull request #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exce...

2018-12-02 Thread SteNicholas
Github user SteNicholas commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2969#discussion_r238089149
  
--- Diff: 
integration/hive/src/test/java/org/apache/carbondata/hive/TestCarbonSerde.java 
---
@@ -1,133 +1,133 @@
-///*
-// * Licensed to the Apache Software Foundation (ASF) under one or more
-// * contributor license agreements.  See the NOTICE file distributed with
-// * this work for additional information regarding copyright ownership.
-// * The ASF licenses this file to You under the Apache License, Version 
2.0
-// * (the "License"); you may not use this file except in compliance with
-// * the License.  You may obtain a copy of the License at
-// *
-// *http://www.apache.org/licenses/LICENSE-2.0
-// *
-// * Unless required by applicable law or agreed to in writing, software
-// * distributed under the License is distributed on an "AS IS" BASIS,
-// * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.
-// * See the License for the specific language governing permissions and
-// * limitations under the License.
-// */
-//package org.apache.carbondata.hive;
-//
-//import junit.framework.TestCase;
-//import org.apache.hadoop.conf.Configuration;
-//import org.apache.hadoop.hive.common.type.HiveDecimal;
-//import org.apache.hadoop.hive.serde2.SerDeException;
-//import org.apache.hadoop.hive.serde2.SerDeUtils;
-//import org.apache.hadoop.hive.serde2.io.DoubleWritable;
-//import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
-//import org.apache.hadoop.hive.serde2.io.ShortWritable;
-//import 
org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
-//import org.apache.hadoop.io.*;
-//import org.junit.Test;
-//
-//import java.util.Properties;
-//
-//public class TestCarbonSerde extends TestCase {
-//  @Test
-//  public void testCarbonHiveSerDe() throws Throwable {
-//try {
-//  // Create the SerDe
-//  System.out.println("test: testCarbonHiveSerDe");
-//
-//  final CarbonHiveSerDe serDe = new CarbonHiveSerDe();
-//  final Configuration conf = new Configuration();
-//  final Properties tbl = createProperties();
-//  SerDeUtils.initializeSerDe(serDe, conf, tbl, null);
-//
-//  // Data
-//  final Writable[] arr = new Writable[7];
-//
-//  //primitive types
-//  arr[0] = new ShortWritable((short) 456);
-//  arr[1] = new IntWritable(789);
-//  arr[2] = new LongWritable(1000l);
-//  arr[3] = new DoubleWritable((double) 5.3);
-//  arr[4] = new HiveDecimalWritable(HiveDecimal.create(1));
-//  arr[5] = new Text("carbonSerde binary".getBytes("UTF-8"));
-//
-//  final Writable[] arrayContainer = new Writable[1];
-//  final Writable[] array = new Writable[5];
-//  for (int i = 0; i < 5; ++i) {
-//array[i] = new IntWritable(i);
-//  }
-//  arrayContainer[0] = new ArrayWritable(Writable.class, array);
-//  arr[6] = new ArrayWritable(Writable.class, arrayContainer);
-//
-//  final ArrayWritable arrWritable = new 
ArrayWritable(Writable.class, arr);
-//  // Test
-//  deserializeAndSerializeLazySimple(serDe, arrWritable);
-//  System.out.println("test: testCarbonHiveSerDe - OK");
-//
-//} catch (final Throwable e) {
-//  e.printStackTrace();
-//  throw e;
-//}
-//  }
-//
-//  private void deserializeAndSerializeLazySimple(final CarbonHiveSerDe 
serDe,
-//  final ArrayWritable t) throws SerDeException {
-//
-//// Get the row structure
-//final StructObjectInspector oi = (StructObjectInspector) 
serDe.getObjectInspector();
-//
-//// Deserialize
-//final Object row = serDe.deserialize(t);
-//assertEquals("deserialization gives the wrong object class", 
row.getClass(),
-//ArrayWritable.class);
-//assertEquals("size correct after deserialization",
-//serDe.getSerDeStats().getRawDataSize(), t.get().length);
-//assertEquals("deserialization gives the wrong object", t, row);
-//
-//// Serialize
-//final ArrayWritable serializedArr = (ArrayWritable) 
serDe.serializeStartKey(row, oi);
-//assertEquals("size correct after serialization", 
serDe.getSerDeStats().getRawDataSize(),
-//serializedArr.get().length);
-//assertTrue("serialized object should be equal to starting object",
-//arrayWritableEquals(t, serializedArr));
-//  }
-//
-//  private Properties createProperties() {
-//final Properties tbl = new Properties();
-//
-//// Set the configuration parameters
-//tbl.setProperty("columns", 
"ashort,aint,along,adouble,adecimal,ast

[GitHub] carbondata pull request #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exce...

2018-12-02 Thread SteNicholas
Github user SteNicholas commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2969#discussion_r238089137
  
--- Diff: 
integration/hive/src/test/java/org/apache/carbondata/hive/TestCarbonSerde.java 
---
[GitHub] carbondata pull request #2969: [CARBONDATA-3127]Fix the TestCarbonSerde exce...

2018-12-02 Thread SteNicholas
Github user SteNicholas commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2969#discussion_r238089109
  
--- Diff: 
integration/hive/src/test/java/org/apache/carbondata/hive/TestCarbonSerde.java 
---