[GitHub] incubator-carbondata pull request #766: Refactor integration/presto by optim...

2017-04-08 Thread chenliang613
GitHub user chenliang613 opened a pull request:

https://github.com/apache/incubator-carbondata/pull/766

Refactor integration/presto by optimizing some name definition

Refactor integration/presto by optimizing some name definition.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenliang613/incubator-carbondata presto_comment

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/766.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #766


commit d3bbc70969343104b43739fd87095eb30983e1c9
Author: chenliang613 
Date:   2017-04-08T09:37:47Z

refactor integration/presto




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-carbondata pull request #766: Refactor integration/presto by optim...

2017-04-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-carbondata/pull/766


---


[GitHub] incubator-carbondata issue #766: Refactor integration/presto by optimizing s...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/766
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1514/



---


[GitHub] incubator-carbondata issue #737: [CARBONDATA-882] Add SORT_COLUMNS support i...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/737
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1515/



---


[GitHub] incubator-carbondata issue #737: [CARBONDATA-882] Add SORT_COLUMNS support i...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/737
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1516/



---


[GitHub] incubator-carbondata pull request #767: fix batch no issue for CarbonRowData...

2017-04-08 Thread QiangCai
GitHub user QiangCai opened a pull request:

https://github.com/apache/incubator-carbondata/pull/767

fix batch no issue for CarbonRowDataWriterProcessorStepImpl(12-dev)

fix batch no issue for CarbonRowDataWriterProcessorStepImpl(12-dev)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QiangCai/incubator-carbondata fixnosortissue

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/767.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #767


commit 906bd383647b6cbe9f8d7c07536af18d1f65f69d
Author: QiangCai 
Date:   2017-04-08T16:40:17Z

fix batch no issue for CarbonRowDataWriterProcessorStepImpl




---


[GitHub] incubator-carbondata issue #767: fix batchno issue for CarbonRowDataWriterPr...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/767
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1517/



---


[GitHub] incubator-carbondata pull request #751: [CARBONDATA-816] Added Example for H...

2017-04-08 Thread anubhav100
Github user anubhav100 commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/751#discussion_r110524192
  
--- Diff: integration/hive/pom.xml ---
@@ -95,7 +168,9 @@
 
 
 
-**/Test*.java
--- End diff --

changes done


---


[GitHub] incubator-carbondata pull request #751: [CARBONDATA-816] Added Example for H...

2017-04-08 Thread anubhav100
Github user anubhav100 commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/751#discussion_r110524207
  
--- Diff: 
integration/hive/src/main/scala/org/apache/carbondata/hiveexample/HiveExample.scala
 ---
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hiveexample
+
+import java.io.File
+import java.sql.{DriverManager, ResultSet, SQLException, Statement}
+
+import org.apache.spark.sql.SparkSession
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.hive.server.HiveEmbeddedServer2
+
+object HiveExample {
+
+  private val driverName: String = "org.apache.hive.jdbc.HiveDriver"
+
+  /**
+   * @param args
+   * @throws SQLException
+   */
+  @throws[SQLException]
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val warehouse = s"$rootPath/integration/hive/target/warehouse"
+val metaStore_Db = 
s"$rootPath/integration/hive/target/carbon_metaStore_db"
+val logger = 
LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+
+import org.apache.spark.sql.CarbonSession._
+
+val carbon = SparkSession
+  .builder()
+  .master("local")
+  .appName("HiveExample")
+  .config("carbon.sql.warehouse.dir", warehouse).enableHiveSupport()
+  .getOrCreateCarbonSession(
+"hdfs://localhost:54310/opt/carbonStore", metaStore_Db)
--- End diff --

@QiangCai changes done


---


[GitHub] incubator-carbondata pull request #751: [CARBONDATA-816] Added Example for H...

2017-04-08 Thread anubhav100
Github user anubhav100 commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/751#discussion_r110524213
  
--- Diff: 
integration/hive/src/main/scala/org/apache/carbondata/hiveexample/HiveExample.scala
 ---
@@ -0,0 +1,146 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hiveexample
+
+import java.io.File
+import java.sql.{DriverManager, ResultSet, SQLException, Statement}
+
+import org.apache.spark.sql.SparkSession
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.hive.server.HiveEmbeddedServer2
+
+object HiveExample {
+
+  private val driverName: String = "org.apache.hive.jdbc.HiveDriver"
+
+  /**
+   * @param args
+   * @throws SQLException
+   */
+  @throws[SQLException]
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val warehouse = s"$rootPath/integration/hive/target/warehouse"
+val metaStore_Db = 
s"$rootPath/integration/hive/target/carbon_metaStore_db"
+val logger = 
LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+
+import org.apache.spark.sql.CarbonSession._
+
+val carbon = SparkSession
+  .builder()
+  .master("local")
+  .appName("HiveExample")
+  .config("carbon.sql.warehouse.dir", warehouse).enableHiveSupport()
+  .getOrCreateCarbonSession(
+"hdfs://localhost:54310/opt/carbonStore", metaStore_Db)
+
+carbon.sql("""drop table if exists hive_carbon_example""".stripMargin)
+
+carbon
+  .sql(
+"""create table hive_carbon_example (id int,name string,salary 
double) stored by
+  |'carbondata' """
+  .stripMargin)
+
+carbon.sql(
+  s"""
+   LOAD DATA LOCAL INPATH 
'$rootPath/integration/hive/src/main/resources/data.csv' into
+   table
+ hive_carbon_example
+   """)
+carbon.sql("select * from hive_carbon_example").show()
+
+carbon.stop()
+
+try {
+  Class.forName(driverName)
+}
+catch {
+  case classNotFoundException: ClassNotFoundException =>
+classNotFoundException.printStackTrace()
+}
+
+val hiveEmbeddedServer2 = new HiveEmbeddedServer2()
+hiveEmbeddedServer2.start()
+val port = hiveEmbeddedServer2.getFreePort
+val con = 
DriverManager.getConnection(s"jdbc:hive2://localhost:$port/default", "", "")
+val stmt: Statement = con.createStatement
+
+logger.info(s"HIVE CLI IS STARTED ON PORT $port 
==")
+
+stmt
+  .execute(s"ADD JAR 
$rootPath/assembly/target/scala-2.11/carbondata_2.11-1.1" +
--- End diff --

@QiangCai  changes done


---


[GitHub] incubator-carbondata issue #751: [CARBONDATA-816] Added Example for Hive Int...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/751
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1518/



---


[GitHub] incubator-carbondata issue #751: [CARBONDATA-816] Added Example for Hive Int...

2017-04-08 Thread anubhav100
Github user anubhav100 commented on the issue:

https://github.com/apache/incubator-carbondata/pull/751
  
@chenliang613 @QiangCai changes done, can you have a look?


---


[GitHub] incubator-carbondata pull request #768: [CARBONDATA-829] resolved bug for di...

2017-04-08 Thread anubhav100
GitHub user anubhav100 opened a pull request:

https://github.com/apache/incubator-carbondata/pull/768

[CARBONDATA-829] resolved bug for dictionary_exclude not working using 
carbondatasource

- include StringType in the table creator's isDataTypeSupportedForDictionary_Excluded method
- refactor the bad naming convention
- add a test case for the bug
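The first change can be sketched in isolation. The names below follow the PR description but are illustrative assumptions, not CarbonData's actual signatures:

```scala
// Illustrative sketch (assumed names, not CarbonData's actual API):
// the validation that decides whether a column's data type may appear
// in DICTIONARY_EXCLUDE, extended so string columns are accepted.
object DictionaryExcludeSupport {
  def isDataTypeSupportedForDictionaryExclude(dataType: String): Boolean =
    Set("string", "timestamp") contains dataType.toLowerCase
}
```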

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anubhav100/incubator-carbondata CARBONDATA-829

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #768


commit 0e4664269d888cedf7a35b0dd74ceaec554ef918
Author: anubhav100 
Date:   2017-04-08T17:58:06Z

resolved bug for dictionary_exclude not working using carbondatasource




---


[GitHub] incubator-carbondata issue #768: [CARBONDATA-829] resolved bug for dictionar...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/768
  
Build Success with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1519/



---


[GitHub] incubator-carbondata issue #767: fix sort_columns issue(12-dev)

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/767
  
Build Failed  with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1520/



---


[jira] [Created] (CARBONDATA-888) Dictionary include / exclude option in dataframe writer

2017-04-08 Thread Sanoj MG (JIRA)
Sanoj MG created CARBONDATA-888:
---

 Summary: Dictionary include / exclude option in dataframe writer
 Key: CARBONDATA-888
 URL: https://issues.apache.org/jira/browse/CARBONDATA-888
 Project: CarbonData
  Issue Type: Improvement
  Components: spark-integration
Affects Versions: 1.2.0-incubating
 Environment: HDP 2.5, Spark 1.6
Reporter: Sanoj MG
Priority: Minor
 Fix For: 1.2.0-incubating


While creating a CarbonData table from a dataframe, it is currently not possible 
to specify the columns to be included in or excluded from the dictionary. An 
option is required to specify this, as below: 

df.write.format("carbondata")
  .option("tableName", "test")
  .option("compress", "true")
  .option("dictionary_include", "incol1,intcol2")
  .option("dictionary_exclude", "stringcol1,stringcol2")
  .mode(SaveMode.Overwrite)
  .save()

We have a lot of integer columns that are dimensions; dataframe.save is used to 
quickly create tables instead of writing DDLs, and it would be nice to have 
this feature for executing POCs.
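A minimal sketch of how the proposed options might flow into the generated table properties. The option keys come from the example above; the mapping to TBLPROPERTIES-style fragments is an assumption, not the actual CarbonData implementation:

```scala
// Hypothetical sketch: collect the proposed writer options into
// TBLPROPERTIES-style fragments for a generated CREATE TABLE statement.
object DictionaryOptions {
  def toTableProperties(options: Map[String, String]): Seq[String] =
    Seq("dictionary_include", "dictionary_exclude").flatMap { key =>
      options.get(key).map(cols => s"'${key.toUpperCase}'='$cols'")
    }
}
```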


 
 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (CARBONDATA-888) Dictionary include / exclude option in dataframe writer

2017-04-08 Thread Sanoj MG (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15961952#comment-15961952
 ] 

Sanoj MG commented on CARBONDATA-888:
-

Can this be assigned to me? I have already made the code changes and would like 
to create a PR.

> Dictionary include / exclude option in dataframe writer
> ---
>
> Key: CARBONDATA-888
> URL: https://issues.apache.org/jira/browse/CARBONDATA-888
> Project: CarbonData
>  Issue Type: Improvement
>  Components: spark-integration
>Affects Versions: 1.2.0-incubating
> Environment: HDP 2.5, Spark 1.6
>Reporter: Sanoj MG
>Priority: Minor
> Fix For: 1.2.0-incubating
>
>
> While creating a Carbondata table from dataframe, currently it is not 
> possible to specify columns that needs to be included in or excluded from the 
> dictionary. An option is required to specify it as below : 
> df.write.format("carbondata")
>   .option("tableName", "test")
>   .option("compress","true")
>   .option("dictionary_include","incol1,intcol2")
>   .option("dictionary_exclude","stringcol1,stringcol2")
>   .mode(SaveMode.Overwrite)
> .save()
> We have lot of integer columns that are dimensions, dataframe.save is used to 
> quickly create tables instead of writing ddls, and it would be nice to have 
> this feature to execute POCs.  
>  
>  





[GitHub] incubator-carbondata pull request #769: [CARBONDATA-888] Added include and e...

2017-04-08 Thread sanoj-mg
GitHub user sanoj-mg opened a pull request:

https://github.com/apache/incubator-carbondata/pull/769

[CARBONDATA-888] Added include and exclude dictionary columns in dataframe 
writer

Added options dictionary_include and dictionary_exclude in dataframe 
writer. 

Tested successfully on Spark 1.6 / HDP 2.5 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sanoj-mg/incubator-carbondata CARBONDATA-888-dictionary-incl-excl-dataframe

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #769


commit c1b1fb47fed5f9323cc28c6e95b663827a6bf86a
Author: Sanoj MG 
Date:   2017-04-08T21:38:09Z

Added options to include and exclude dictionary columns in dataframe writer.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] incubator-carbondata issue #769: [CARBONDATA-888] Added include and exclude ...

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/769
  
Build Failed  with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1521/



---


[GitHub] incubator-carbondata issue #769: [CARBONDATA-888] Added include and exclude ...

2017-04-08 Thread sanoj-mg
Github user sanoj-mg commented on the issue:

https://github.com/apache/incubator-carbondata/pull/769
  
@jackylk Can you please have a look at the test case? Not sure if "desc 
formatted" is the right way to get these column attributes of a carbondata table. 
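One way such a check could work, shown as a hedged sketch with the `desc formatted` rows modeled as plain (name, type, comment) tuples; the real output schema may differ:

```scala
// Sketch: treat each "desc formatted" row as (columnName, dataType, comment)
// and look for a dictionary marker in the comment field of a given column.
object DescFormattedCheck {
  def hasDictionaryAttribute(rows: Seq[(String, String, String)],
      column: String): Boolean =
    rows.exists { case (name, _, comment) =>
      name.trim.equalsIgnoreCase(column) && comment.contains("DICTIONARY")
    }
}
```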


---


[GitHub] incubator-carbondata pull request #767: fix sort_columns issue(12-dev)

2017-04-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/767#discussion_r110531179
  
--- Diff: core/src/main/java/org/apache/carbondata/core/util/ByteUtil.java 
---
@@ -540,7 +543,7 @@ public static int toInt(byte[] bytes, int offset, final 
int length) {
* @return
*/
   public static float toFloat(byte[] bytes, int offset) {
-return Float.intBitsToFloat(toInt(bytes, offset, SIZEOF_INT));
+return Float.intBitsToFloat(toInt(bytes, offset, SIZEOF_INT) );
--- End diff --

remove space


---


[GitHub] incubator-carbondata pull request #767: fix sort_columns issue(12-dev)

2017-04-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/767#discussion_r110531208
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/filter/resolver/RowLevelRangeFilterResolverImpl.java
 ---
@@ -147,12 +148,16 @@ public void getStartKey(SegmentProperties 
segmentProperties, long[] startKey,
   
filterValuesList.add(CarbonCommonConstants.MEMBER_DEFAULT_VAL.getBytes());
   continue;
 }
-filterValuesList.add(result.getString().getBytes());
+filterValuesList.add(DataTypeUtil
+
.getBytesBasedOnDataTypeForNoDictionaryColumn(result.getString(),
+result.getDataType()));
   } catch (FilterIllegalMemberException e) {
 // Any invalid member while evaluation shall be ignored, system 
will log the
 // error only once since all rows the evaluation happens so 
inorder to avoid
 // too much log inforation only once the log will be printed.
 FilterUtil.logError(e, invalidRowsPresent);
+  } catch (Throwable throwable) {
--- End diff --

why is this added?


---


[GitHub] incubator-carbondata issue #767: fix sort_columns issue(12-dev)

2017-04-08 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/incubator-carbondata/pull/767
  
Please describe the bug in JIRA or PR


---


[GitHub] incubator-carbondata pull request #765: [CARBONDATA-887]lazy rdd iterator fo...

2017-04-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/765#discussion_r110531763
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/NewCarbonDataLoadRDD.scala
 ---
@@ -504,3 +500,66 @@ class NewRddIterator(rddIter: Iterator[Row],
   }
 
 }
+
+/**
+ * LazyRddIterator invoke rdd.iterator method when invoking hasNext method.
+ * @param serializer
+ * @param serializeBytes
+ * @param partition
+ * @param carbonLoadModel
+ * @param context
+ */
+class LazyRddIterator(serializer: SerializerInstance,
+serializeBytes: Array[Byte],
+partition: Partition,
+carbonLoadModel: CarbonLoadModel,
+context: TaskContext) extends CarbonIterator[Array[AnyRef]] {
+
+  val timeStampformatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_TIMESTAMP_FORMAT, 
CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)
+  val timeStampFormat = new SimpleDateFormat(timeStampformatString)
+  val dateFormatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_DATE_FORMAT, CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT)
+  val dateFormat = new SimpleDateFormat(dateFormatString)
+  val delimiterLevel1 = carbonLoadModel.getComplexDelimiterLevel1
+  val delimiterLevel2 = carbonLoadModel.getComplexDelimiterLevel2
+  val serializationNullFormat =
--- End diff --

make all these variables private
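Applied to the fields in the diff, the suggestion amounts to the following sketch; the literal format strings are stand-ins for the CarbonProperties lookups in the original code:

```scala
import java.text.SimpleDateFormat

// Sketch of the reviewed fields with the suggested `private` modifier;
// the hard-coded format strings replace the CarbonProperties lookups.
class LazyRddIteratorFields {
  private val timeStampFormatString = "yyyy-MM-dd HH:mm:ss"
  private val timeStampFormat = new SimpleDateFormat(timeStampFormatString)
  private val dateFormatString = "yyyy-MM-dd"
  private val dateFormat = new SimpleDateFormat(dateFormatString)

  // Expose only what callers need; the formats stay implementation details.
  def formatDate(millis: Long): String =
    dateFormat.format(new java.util.Date(millis))
}
```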


---


[GitHub] incubator-carbondata pull request #765: [CARBONDATA-887]lazy rdd iterator fo...

2017-04-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/765#discussion_r110531816
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/NewCarbonDataLoadRDD.scala
 ---
@@ -504,3 +500,66 @@ class NewRddIterator(rddIter: Iterator[Row],
   }
 
 }
+
+/**
+ * LazyRddIterator invoke rdd.iterator method when invoking hasNext method.
+ * @param serializer
+ * @param serializeBytes
+ * @param partition
+ * @param carbonLoadModel
+ * @param context
+ */
+class LazyRddIterator(serializer: SerializerInstance,
+serializeBytes: Array[Byte],
+partition: Partition,
+carbonLoadModel: CarbonLoadModel,
+context: TaskContext) extends CarbonIterator[Array[AnyRef]] {
+
+  val timeStampformatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_TIMESTAMP_FORMAT, 
CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)
+  val timeStampFormat = new SimpleDateFormat(timeStampformatString)
+  val dateFormatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_DATE_FORMAT, CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT)
+  val dateFormat = new SimpleDateFormat(dateFormatString)
+  val delimiterLevel1 = carbonLoadModel.getComplexDelimiterLevel1
+  val delimiterLevel2 = carbonLoadModel.getComplexDelimiterLevel2
+  val serializationNullFormat =
+
carbonLoadModel.getSerializationNullFormat.split(CarbonCommonConstants.COMMA, 
2)(1)
+
+  var rddIter: Iterator[Row] = null
+  var uninitialized = true
+  var closed = false
+
+  def hasNext: Boolean = {
+if (uninitialized) {
+  uninitialized = false
+  rddIter = 
serializer.deserialize[RDD[Row]](ByteBuffer.wrap(serializeBytes))
+.iterator(partition, context)
+}
+if (closed) {
+  false
+} else {
+  rddIter.hasNext
+}
+  }
+
+  def next: Array[AnyRef] = {
+val row = rddIter.next()
+val columns = new Array[AnyRef](row.length)
+for (i <- 0 until columns.length) {
--- End diff --

use `row.map` to replace this for statement
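The reviewer's suggestion could look like this sketch, where a `Seq[Any]` stands in for Spark's `Row` and `String.valueOf` stands in for the `CarbonScalaUtil.getString` conversion used in the diff:

```scala
// Sketch: replace the index-based for loop with a map over the row's
// values; Seq[Any] models the Spark Row, and String.valueOf models
// the CarbonScalaUtil.getString conversion from the original code.
object RowToColumns {
  def next(row: Seq[Any]): Array[AnyRef] =
    row.map(value => String.valueOf(value).asInstanceOf[AnyRef]).toArray
}
```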


---


[GitHub] incubator-carbondata pull request #765: [CARBONDATA-887]lazy rdd iterator fo...

2017-04-08 Thread jackylk
Github user jackylk commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/765#discussion_r110531875
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/NewCarbonDataLoadRDD.scala
 ---
@@ -504,3 +500,66 @@ class NewRddIterator(rddIter: Iterator[Row],
   }
 
 }
+
+/**
+ * LazyRddIterator invoke rdd.iterator method when invoking hasNext method.
+ * @param serializer
+ * @param serializeBytes
+ * @param partition
+ * @param carbonLoadModel
+ * @param context
+ */
+class LazyRddIterator(serializer: SerializerInstance,
+serializeBytes: Array[Byte],
+partition: Partition,
+carbonLoadModel: CarbonLoadModel,
+context: TaskContext) extends CarbonIterator[Array[AnyRef]] {
+
+  val timeStampformatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_TIMESTAMP_FORMAT, 
CarbonCommonConstants.CARBON_TIMESTAMP_DEFAULT_FORMAT)
+  val timeStampFormat = new SimpleDateFormat(timeStampformatString)
+  val dateFormatString = 
CarbonProperties.getInstance().getProperty(CarbonCommonConstants
+.CARBON_DATE_FORMAT, CarbonCommonConstants.CARBON_DATE_DEFAULT_FORMAT)
+  val dateFormat = new SimpleDateFormat(dateFormatString)
+  val delimiterLevel1 = carbonLoadModel.getComplexDelimiterLevel1
+  val delimiterLevel2 = carbonLoadModel.getComplexDelimiterLevel2
+  val serializationNullFormat =
+
carbonLoadModel.getSerializationNullFormat.split(CarbonCommonConstants.COMMA, 
2)(1)
+
+  var rddIter: Iterator[Row] = null
+  var uninitialized = true
+  var closed = false
+
+  def hasNext: Boolean = {
+if (uninitialized) {
+  uninitialized = false
+  rddIter = 
serializer.deserialize[RDD[Row]](ByteBuffer.wrap(serializeBytes))
+.iterator(partition, context)
+}
+if (closed) {
+  false
+} else {
+  rddIter.hasNext
+}
+  }
+
+  def next: Array[AnyRef] = {
+val row = rddIter.next()
+val columns = new Array[AnyRef](row.length)
+for (i <- 0 until columns.length) {
+  columns(i) = CarbonScalaUtil.getString(row.get(i), 
serializationNullFormat,
+delimiterLevel1, delimiterLevel2, timeStampFormat, dateFormat)
+}
+columns
+  }
+
+  override def initialize: Unit = {
--- End diff --

add empty parentheses, for scala coding convention
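The convention in question: side-effecting methods declare, and are called with, empty parentheses, while pure accessors omit them. A minimal illustration:

```scala
// Scala convention: a method that performs side effects should declare
// empty parentheses (def initialize(): Unit), so call sites read as
// initialize() and signal the effect; pure accessors omit the parens.
class Initializable {
  private var initialized = false
  def initialize(): Unit = { initialized = true }
  def isInitialized: Boolean = initialized // pure accessor: no parentheses
}
```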


---


[GitHub] incubator-carbondata issue #764: [CARBONDATA-886]remove all redundant local ...

2017-04-08 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/incubator-carbondata/pull/764
  
LGTM




[GitHub] incubator-carbondata pull request #764: [CARBONDATA-886]remove all redundant...

2017-04-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-carbondata/pull/764




[GitHub] incubator-carbondata issue #769: [CARBONDATA-888] Added include and exclude ...

2017-04-08 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/incubator-carbondata/pull/769
  
@sanoj-mg Can you refer to #737? I am doing a similar thing for the dataframe 
writer in the spark2 module.




[GitHub] incubator-carbondata issue #769: [CARBONDATA-888] Added include and exclude ...

2017-04-08 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/incubator-carbondata/pull/769
  
@sanoj-mg please let me know your JIRA account's email id; I will give you 
contributor rights, then you can assign issues to yourself.




[jira] [Commented] (CARBONDATA-888) Dictionary include / exclude option in dataframe writer

2017-04-08 Thread Liang Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15962030#comment-15962030
 ] 

Liang Chen commented on CARBONDATA-888:
---

Sure, please let me know your JIRA account email id; I will give you the rights.

> Dictionary include / exclude option in dataframe writer
> ---
>
> Key: CARBONDATA-888
> URL: https://issues.apache.org/jira/browse/CARBONDATA-888
> Project: CarbonData
>  Issue Type: Improvement
>  Components: spark-integration
>Affects Versions: 1.2.0-incubating
> Environment: HDP 2.5, Spark 1.6
>Reporter: Sanoj MG
>Priority: Minor
> Fix For: 1.2.0-incubating
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While creating a Carbondata table from a dataframe, it is currently not 
> possible to specify the columns that need to be included in or excluded from 
> the dictionary. An option is required to specify them, as below: 
> df.write.format("carbondata")
>   .option("tableName", "test")
>   .option("compress","true")
>   .option("dictionary_include","incol1,intcol2")
>   .option("dictionary_exclude","stringcol1,stringcol2")
>   .mode(SaveMode.Overwrite)
> .save()
> We have a lot of integer columns that are dimensions; dataframe.save is used 
> to quickly create tables instead of writing DDLs, and it would be nice to have 
> this feature for running POCs.  
>  
>  
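A sketch of how the writer side might validate the proposed options (the helper name and validation rule are assumptions for illustration, not CarbonData's actual implementation):

```scala
// Hypothetical helper; not actual CarbonData code.
def dictionaryColumns(options: Map[String, String]): (Seq[String], Seq[String]) = {
  // Split a comma-separated option value into trimmed, non-empty column names.
  def cols(key: String): Seq[String] =
    options.get(key).toSeq.flatMap(_.split(",")).map(_.trim).filter(_.nonEmpty)

  val include = cols("dictionary_include")
  val exclude = cols("dictionary_exclude")
  require(include.intersect(exclude).isEmpty,
    "a column cannot appear in both dictionary_include and dictionary_exclude")
  (include, exclude)
}
```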



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (CARBONDATA-889) Optimize pom dependency with exclusion to remove unnecessary dependency jar

2017-04-08 Thread Liang Chen (JIRA)
Liang Chen created CARBONDATA-889:
-

 Summary: Optimize pom dependency with exclusion to remove 
unnecessary dependency jar 
 Key: CARBONDATA-889
 URL: https://issues.apache.org/jira/browse/CARBONDATA-889
 Project: CarbonData
  Issue Type: Improvement
Reporter: Liang Chen
 Fix For: 1.2.0-incubating


For example: 
The spark dependency below introduces around 90 dependency jars, but some 
of them are unnecessary for CarbonData.

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_${scala.binary.version}</artifactId>
</dependency>
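The fix this issue describes would add an exclusions block to such dependencies; a sketch of the shape (the excluded artifact below is purely illustrative, not an audited choice for CarbonData):

```xml
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_${scala.binary.version}</artifactId>
  <exclusions>
    <!-- Illustrative only: cut one transitive jar the project does not need. -->
    <exclusion>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro-ipc</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```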







[jira] [Assigned] (CARBONDATA-889) Optimize pom dependency with exclusion to remove unnecessary dependency jar

2017-04-08 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen reassigned CARBONDATA-889:
-

Assignee: Ravindra Pesala

> Optimize pom dependency with exclusion to remove unnecessary dependency jar 
> 
>
> Key: CARBONDATA-889
> URL: https://issues.apache.org/jira/browse/CARBONDATA-889
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Liang Chen
>Assignee: Ravindra Pesala
> Fix For: 1.2.0-incubating
>
>
> For example: 
> The spark dependency below introduces around 90 dependency jars, but 
> some of them are unnecessary for CarbonData.
> 
> <dependency>
>   <groupId>org.apache.spark</groupId>
>   <artifactId>spark-sql_${scala.binary.version}</artifactId>
> </dependency>
> 





[jira] [Assigned] (CARBONDATA-888) Dictionary include / exclude option in dataframe writer

2017-04-08 Thread Sanoj MG (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanoj MG reassigned CARBONDATA-888:
---

Assignee: Sanoj MG

> Dictionary include / exclude option in dataframe writer
> ---
>
> Key: CARBONDATA-888
> URL: https://issues.apache.org/jira/browse/CARBONDATA-888
> Project: CarbonData
>  Issue Type: Improvement
>  Components: spark-integration
>Affects Versions: 1.2.0-incubating
> Environment: HDP 2.5, Spark 1.6
>Reporter: Sanoj MG
>Assignee: Sanoj MG
>Priority: Minor
> Fix For: 1.2.0-incubating
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> While creating a Carbondata table from a dataframe, it is currently not 
> possible to specify the columns that need to be included in or excluded from 
> the dictionary. An option is required to specify them, as below: 
> df.write.format("carbondata")
>   .option("tableName", "test")
>   .option("compress","true")
>   .option("dictionary_include","incol1,intcol2")
>   .option("dictionary_exclude","stringcol1,stringcol2")
>   .mode(SaveMode.Overwrite)
> .save()
> We have a lot of integer columns that are dimensions; dataframe.save is used 
> to quickly create tables instead of writing DDLs, and it would be nice to have 
> this feature for running POCs.  
>  
>  





[GitHub] incubator-carbondata pull request #751: [CARBONDATA-816] Added Example for H...

2017-04-08 Thread chenliang613
Github user chenliang613 commented on a diff in the pull request:


https://github.com/apache/incubator-carbondata/pull/751#discussion_r110533556
  
--- Diff: 
integration/hive/src/main/scala/org/apache/carbondata/hiveexample/HiveExample.scala
 ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.carbondata.hiveexample
+
+import java.io.File
+import java.sql.{DriverManager, ResultSet, SQLException, Statement}
+
+import org.apache.spark.sql.SparkSession
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.hive.server.HiveEmbeddedServer2
+
+object HiveExample {
+
+  private val driverName: String = "org.apache.hive.jdbc.HiveDriver"
+
+  /**
+   * @param args
+   * @throws SQLException
+   */
+  @throws[SQLException]
+  def main(args: Array[String]) {
+val rootPath = new File(this.getClass.getResource("/").getPath
++ "../../../..").getCanonicalPath
+val store = s"$rootPath/integration/hive/target/store"
+val warehouse = s"$rootPath/integration/hive/target/warehouse"
+val metaStore_Db = s"$rootPath/integration/hive/target/carbon_metaStore_db"
+val logger = LogServiceFactory.getLogService(this.getClass.getCanonicalName)
+
+import org.apache.spark.sql.CarbonSession._
+
+System.setProperty("hadoop.home.dir", "/")
+
+val carbon = SparkSession
+  .builder()
+  .master("local")
+  .appName("HiveExample")
+  .config("carbon.sql.warehouse.dir", warehouse).enableHiveSupport()
+  .getOrCreateCarbonSession(
+store, metaStore_Db)
+
+val carbonJarPath = s"$rootPath/assembly/target/scala-2.11/carbondata_2.11-1.1" +
+s".0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar"
--- End diff --

For the hadoop version of the assembly jar, please don't hard-code the version 
number (2.7.2); how about using 2.*?
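One way to act on this comment without hard-coding any version string (a sketch, not the PR's actual change): resolve the assembly jar by name pattern at runtime.

```scala
import java.io.File

// Sketch: locate the assembly jar regardless of its carbon/hadoop version suffix.
// Option(...) guards against listFiles() returning null for a missing directory.
def findAssemblyJar(dir: String): Option[String] =
  Option(new File(dir).listFiles())
    .flatMap(_.find(f => f.getName.startsWith("carbondata_") && f.getName.endsWith(".jar")))
    .map(_.getCanonicalPath)
```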




[jira] [Assigned] (CARBONDATA-836) Error in load using dataframe - columns containing comma

2017-04-08 Thread Sanoj MG (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanoj MG reassigned CARBONDATA-836:
---

Assignee: Sanoj MG

> Error in load using dataframe  - columns containing comma
> -
>
> Key: CARBONDATA-836
> URL: https://issues.apache.org/jira/browse/CARBONDATA-836
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Affects Versions: 1.1.0-incubating
> Environment: HDP sandbox 2.5, Spark 1.6.2
>Reporter: Sanoj MG
>Assignee: Sanoj MG
>Priority: Minor
> Fix For: NONE
>
>
> While trying to load data into a Carbondata table using a dataframe, the 
> columns containing commas are not properly loaded. 
> Eg: 
> scala> df.show(false)
> +-------+------+-----------+------------+---------+------+
> |Country|Branch|Name       |Address     |ShortName|Status|
> +-------+------+-----------+------------+---------+------+
> |2      |1     |Main Branch|, Dubai, UAE|UHO      |256   |
> +-------+------+-----------+------------+---------+------+
> scala>  df.write.format("carbondata").option("tableName", 
> "Branch1").option("compress", "true").mode(SaveMode.Overwrite).save()
> scala> cc.sql("select * from branch1").show(false)
> +-------+------+-----------+-------+---------+------+
> |country|branch|name       |address|shortname|status|
> +-------+------+-----------+-------+---------+------+
> |2      |1     |Main Branch|       | Dubai   |null  |
> +-------+------+-----------+-------+---------+------+
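The corrupted result above is what unquoted CSV serialization produces: the commas inside the Address value become field separators. A minimal sketch of the difference (plain string handling only, not CarbonData's actual loader):

```scala
val address = ", Dubai, UAE"

// Naive join: the embedded commas turn one field into three.
val naive = Seq("2", "1", "Main Branch", address).mkString(",")
assert(naive.split(",", -1).length == 6)  // 6 fields instead of 4

// RFC-4180-style quoting keeps the value intact for a CSV parser.
def quote(s: String): String =
  if (s.contains(",") || s.contains("\"")) "\"" + s.replace("\"", "\"\"") + "\"" else s
val quoted = Seq("2", "1", "Main Branch", address).map(quote).mkString(",")
```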





[GitHub] incubator-carbondata issue #767: fix sort_columns issue(12-dev)

2017-04-08 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/incubator-carbondata/pull/767
  
Build Failed  with Spark 1.6.2, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/1522/





[jira] [Assigned] (CARBONDATA-854) Carbondata with Datastax / Cassandra

2017-04-08 Thread Sanoj MG (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanoj MG reassigned CARBONDATA-854:
---

Assignee: Sanoj MG

> Carbondata with Datastax / Cassandra
> 
>
> Key: CARBONDATA-854
> URL: https://issues.apache.org/jira/browse/CARBONDATA-854
> Project: CarbonData
>  Issue Type: Improvement
>  Components: spark-integration
>Affects Versions: 1.1.0-incubating
> Environment: Datastax DSE 5.0 ( DSE analytics )
>Reporter: Sanoj MG
>Assignee: Sanoj MG
>Priority: Minor
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I am trying to get Carbondata working in a Datastax DSE 5.0 cluster. 
> An exception is thrown while trying to create Carbondata table from spark 
> shell. Below are the steps: 
> scala> import com.datastax.spark.connector._
> scala> import org.apache.spark.sql.SaveMode
> scala> import org.apache.spark.sql.CarbonContext
> scala> import org.apache.spark.sql.types._
> scala> val cc = new CarbonContext(sc, "cfs://127.0.0.1/opt/CarbonStore")
> scala> val df = 
> cc.read.parquet("file:///home/cassandra/testdata-30day/cassandra/zone.parquet")
> scala> df.write.format("carbondata").option("tableName", 
> "zone").option("compress", 
> "true").option("TempCSV","false").mode(SaveMode.Overwrite).save()
> Below exception is thrown and it fails to create carbondata table. 
> java.io.FileNotFoundException: /opt/CarbonStore/default/zone/Metadata/schema 
> (No such file or directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:133)
> at 
> org.apache.carbondata.core.datastore.impl.FileFactory.getDataOutputStream(FileFactory.java:207)
> at 
> org.apache.carbondata.core.writer.ThriftWriter.open(ThriftWriter.java:84)
> at 
> org.apache.spark.sql.hive.CarbonMetastore.createTableFromThrift(CarbonMetastore.scala:293)
> at 
> org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:163)
> at 
> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
> at 
> org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
> at 
> org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
> at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
> at 
> org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
> at 
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
> at 
> org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at 
> org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
> at 
> org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:130)
> at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:139)
> at 
> org.apache.carbondata.spark.CarbonDataFrameWriter.saveAsCarbonFile(CarbonDataFrameWriter.scala:39)
> at 
> org.apache.spark.sql.CarbonSource.createRelation(CarbonDatasourceRelation.scala:109)
> at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:222)
> at 
> org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:148)


