[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-09-04 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
@ravipesala Please trigger the 2.3.1 CI; 2.2.1 and 2.1.0 are passing.


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-31 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
Please trigger the 2.3 build; the 2.2 and 2.1 builds have passed.


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-31 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
@ravipesala Please retrigger the 2.3 build; the test case issues are fixed.


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-31 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  

org.apache.carbondata.integration.spark.testsuite.complexType.TestComplexDataType.date with struct and array

org.apache.carbondata.spark.testsuite.badrecordloger.BadRecordActionTest.test bad record with FAIL option with location and no_sort as sort scope

These 2 test case failures are not caused by this PR.


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-31 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
retest please


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-30 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
@ravipesala Please re-trigger the 2.3 build; it seems to be a random failure.


---


[GitHub] carbondata pull request #2642: [CARBONDATA-2532][Integration] Carbon to supp...

2018-08-29 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2642#discussion_r213903286
  
--- Diff: 
integration/spark2/src/main/spark2.2/org/apache/spark/sql/CustomDeterministicExpression.scala
 ---
@@ -0,0 +1,42 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.expressions.Expression
+import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, 
ExprCode}
+import org.apache.spark.sql.types.{DataType, StringType}
+
+/**
+ * Custom expression to override the deterministic property .
+ */
+case class CustomDeterministicExpression(nonDt: Expression ) extends 
Expression with Serializable{
--- End diff --

In 2.1 and 2.2:
    override def deterministic: Boolean = true
In 2.3:
    override lazy val deterministic: Boolean = true
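
A minimal compilable sketch of what the spark2.3 copy can look like, assuming the
expression simply delegates to the wrapped child; only the lazy val override is
taken from this thread, the rest is illustrative:

    package org.apache.spark.sql

    import org.apache.spark.sql.catalyst.InternalRow
    import org.apache.spark.sql.catalyst.expressions.Expression
    import org.apache.spark.sql.catalyst.expressions.codegen.{CodegenContext, ExprCode}
    import org.apache.spark.sql.types.DataType

    // Sketch only: forces the deterministic property to true for a wrapped,
    // otherwise non-deterministic expression. Spark 2.3 declares `deterministic`
    // as a lazy val on Expression, so the override must also be a lazy val.
    case class CustomDeterministicExpression(nonDt: Expression)
      extends Expression with Serializable {

      override def nullable: Boolean = nonDt.nullable
      override def dataType: DataType = nonDt.dataType
      override def children: Seq[Expression] = Seq(nonDt)
      override lazy val deterministic: Boolean = true
      override def eval(input: InternalRow): Any = nonDt.eval(input)
      override protected def doGenCode(ctx: CodegenContext, ev: ExprCode): ExprCode =
        nonDt.genCode(ctx)
    }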


---


[GitHub] carbondata pull request #2642: [CARBONDATA-2532][Integration] Carbon to supp...

2018-08-29 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2642#discussion_r213902360
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/sql/execution/streaming/CarbonAppendableStreamSink.scala
 ---
@@ -122,7 +122,7 @@ class CarbonAppendableStreamSink(
 className = 
sparkSession.sessionState.conf.streamingFileCommitProtocolClass,
 jobId = batchId.toString,
 outputPath = fileLogPath,
-isAppend = false)
+false)
--- End diff --

In 2.3 the "isAppend" parameter is renamed to "dynamicPartitionOverwrite", so the value is now passed positionally.
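
A hedged sketch of the resulting call site (sparkSession, batchId and fileLogPath
are the names visible in the diff above): passing the last argument positionally
lets the same line compile against both the 2.2 signature (isAppend) and the 2.3
signature (dynamicPartitionOverwrite).

    import org.apache.spark.internal.io.FileCommitProtocol

    // Sketch only: sparkSession, batchId and fileLogPath are assumed to be in
    // scope, as they are inside CarbonAppendableStreamSink.
    val committer = FileCommitProtocol.instantiate(
      className = sparkSession.sessionState.conf.streamingFileCommitProtocolClass,
      jobId = batchId.toString,
      outputPath = fileLogPath,
      false) // named isAppend in Spark 2.2, dynamicPartitionOverwrite in Spark 2.3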


---


[GitHub] carbondata issue #2642: [CARBONDATA-2532][Integration] Carbon to support spa...

2018-08-17 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2642
  
4 test cases are failing in the SDV build, which is not related to this PR's code changes.
The same 4 test cases are failing in another PR as well; see 
https://github.com/apache/carbondata/pull/2643


---


[GitHub] carbondata pull request #2492: [CARBONDATA-2733]should support scalar sub qu...

2018-07-11 Thread sandeep-katta
GitHub user sandeep-katta opened a pull request:

https://github.com/apache/carbondata/pull/2492

[CARBONDATA-2733]should support scalar sub query for partition tables

1) Multiple projections in the plan are supported
2) Scalar sub queries are supported for carbon tables

## What changes were proposed in this pull request?
With this PR, carbon tables will support scalar sub queries.
If multiple projections are present in the plan, carbon should support them.

Highlights:
In CarbonLateDecodeStrategy, the queries that do not support sub-queries need to 
be filtered out.
In CarbonLateDecodeStrategy, if one of the projections of the plan fails, catch 
the exception and continue.
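
A hedged illustration of the kind of query this enables on a partitioned carbon
table (the table and column names are assumed, not taken from the PR):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("scalar-subquery-example").getOrCreate()

    // Scalar sub query used as a single comparison value in the outer query.
    spark.sql(
      "SELECT name, salary FROM emp_partitioned " +
      "WHERE salary > (SELECT avg(salary) FROM emp_partitioned)"
    ).show()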
## How was this patch tested?
UT Code


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sandeep-katta/carbondata sparkSupport

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2492.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2492


commit 8ea495718064ae5aa948dc088d939c66b5039297
Author: sandeep-katta 
Date:   2018-07-11T13:50:19Z

[CARBONDATA-2733]Carbon should support scalar sub query for partition tables

1)Multiple Projection in plan are supported
2)Scalar sub queries are supported for Carbon Tables




---


[jira] [Updated] (CARBONDATA-2733) Carbon should support scalar sub query for partition tables

2018-07-11 Thread sandeep katta (JIRA)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandeep katta updated CARBONDATA-2733:
--
Issue Type: Bug  (was: Improvement)

> Carbon should support scalar sub query for partition tables
> ---
>
> Key: CARBONDATA-2733
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2733
> Project: CarbonData
>  Issue Type: Bug
>    Reporter: sandeep katta
>Priority: Major
>
> 1) Carbon needs to support multiple projections on Spark 2.3.1
> 2) As part of SPARK-24085, Spark supports scalar sub queries, so they need to 
> be supported for carbon tables as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (CARBONDATA-2733) Carbon should support scalar sub query for partition tables

2018-07-11 Thread sandeep katta (JIRA)
sandeep katta created CARBONDATA-2733:
-

 Summary: Carbon should support scalar sub query for partition 
tables
 Key: CARBONDATA-2733
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2733
 Project: CarbonData
  Issue Type: Improvement
Reporter: sandeep katta


1) Carbon needs to support multiple projections on Spark 2.3.1

2) As part of SPARK-24085, Spark supports scalar sub queries, so they need to be 
supported for carbon tables as well



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-24 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r197680925
  
--- Diff: 
integration/spark2/src/main/spark2.3/org/apache/spark/sql/hive/CarbonAnalyzer.scala
 ---
@@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.sql.hive
+
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.catalyst.analysis.Analyzer
+import org.apache.spark.sql.catalyst.catalog.SessionCatalog
+import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.util.CarbonReflectionUtils
+
+class CarbonAnalyzer(catalog: SessionCatalog,
--- End diff --

In 2.1 the CarbonAnalyzer class is part of CarbonSessionState.scala and the code 
is different from 2.2. The 2.2 and 2.3 code is now the same, and as per the design 
it has to be copied to the 2.3 folder as well.
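
A minimal sketch of the delegating-analyzer pattern this file follows; the
constructor parameters beyond the catalog, and the Carbon rule wiring (resolved
via CarbonReflectionUtils in the real file), are assumptions here, not copied
from it:

    package org.apache.spark.sql.hive

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.analysis.Analyzer
    import org.apache.spark.sql.catalyst.catalog.SessionCatalog
    import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
    import org.apache.spark.sql.internal.SQLConf

    // Sketch only: wraps Spark's Analyzer so version-specific Carbon rules can be
    // applied around the standard resolution.
    class CarbonAnalyzer(catalog: SessionCatalog,
        conf: SQLConf,
        sparkSession: SparkSession,
        analyzer: Analyzer) extends Analyzer(catalog, conf) {

      override def execute(plan: LogicalPlan): LogicalPlan = {
        // Carbon pre/post analysis rules would run around this delegation.
        analyzer.execute(plan)
      }
    }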


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655625
  
--- Diff: store/search/src/main/scala/org/apache/spark/rpc/Master.scala ---
@@ -81,7 +81,7 @@ class Master(sparkConf: SparkConf) {
   do {
 try {
   LOG.info(s"starting registry-service on $hostAddress:$port")
-  val config = RpcEnvConfig(
+  val config = RpcUtil.getRpcEnvConfig(
--- End diff --

After analyzing #2372, these changes are not required, so they were reverted.


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655419
  
--- Diff: 
integration/spark2/src/main/spark2.3/org/apache/spark/sql/hive/CreateCarbonSourceTableAsSelectCommand.scala
 ---
@@ -0,0 +1,124 @@
+
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hive
+
+import java.net.URI
+
+import org.apache.spark.sql.{AnalysisException, Dataset, Row, SaveMode, 
SparkSession}
+import org.apache.spark.sql.catalyst.catalog.{CatalogTable, 
CatalogTableType, CatalogUtils}
+import org.apache.spark.sql.catalyst.expressions.Attribute
+import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
+import org.apache.spark.sql.execution.SparkPlan
+import 
org.apache.spark.sql.execution.command.{AlterTableRecoverPartitionsCommand, 
RunnableCommand}
+import org.apache.spark.sql.execution.datasources.{DataSource, 
HadoopFsRelation}
+import org.apache.spark.sql.sources.BaseRelation
+
+/**
+ * Create table 'using carbondata' and insert the query result into it.
+ *
+ * @param table the Catalog Table
+ * @param mode  SaveMode:Ignore,OverWrite,ErrorIfExists,Append
+ * @param query the query whose result will be insert into the new relation
+ *
+ */
+
--- End diff --

fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655288
  
--- Diff: 
integration/spark2/src/main/spark2.3/org/apache/spark/sql/hive/CarbonSessionState.scala
 ---
@@ -0,0 +1,269 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.sql.hive
+
+import org.apache.hadoop.conf.Configuration
+import org.apache.hadoop.fs.Path
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.catalyst.analysis.{Analyzer, FunctionRegistry}
+import org.apache.spark.sql.catalyst.catalog._
+import org.apache.spark.sql.catalyst.expressions.Expression
+import org.apache.spark.sql.catalyst.optimizer.Optimizer
+import org.apache.spark.sql.catalyst.parser.ParserInterface
+import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan}
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{FindDataSourceTable, 
PreWriteCheck, ResolveSQLOnFile, _}
+import org.apache.spark.sql.execution.strategy.{CarbonLateDecodeStrategy, 
DDLStrategy, StreamingTableStrategy}
+import org.apache.spark.sql.hive.client.HiveClient
+import org.apache.spark.sql.internal.{SQLConf, SessionState}
+import org.apache.spark.sql.optimizer.{CarbonIUDRule, 
CarbonLateDecodeRule, CarbonUDFTransformRule}
+import org.apache.spark.sql.parser.CarbonSparkSqlParser
+
+import org.apache.carbondata.spark.util.CarbonScalaUtil
+
+/**
+ * This class will have carbon catalog and refresh the relation from cache 
if the carbontable in
+ * carbon catalog is not same as cached carbon relation's carbon table
+ *
+ * @param externalCatalog
+ * @param globalTempViewManager
+ * @param sparkSession
+ * @param functionResourceLoader
+ * @param functionRegistry
+ * @param conf
+ * @param hadoopConf
+ */
+class CarbonHiveSessionCatalog(
+externalCatalog: HiveExternalCatalog,
+globalTempViewManager: GlobalTempViewManager,
+functionRegistry: FunctionRegistry,
+sparkSession: SparkSession,
+conf: SQLConf,
+hadoopConf: Configuration,
+parser: ParserInterface,
+functionResourceLoader: FunctionResourceLoader)
+  extends HiveSessionCatalog (
+externalCatalog,
+globalTempViewManager,
+new HiveMetastoreCatalog(sparkSession),
+functionRegistry,
+conf,
+hadoopConf,
+parser,
+functionResourceLoader
+  ) with CarbonSessionCatalog {
+
+  private lazy val carbonEnv = {
+val env = new CarbonEnv
+env.init(sparkSession)
+env
+  }
+  /**
+   * return's the carbonEnv instance
+   * @return
+   */
+  override def getCarbonEnv() : CarbonEnv = {
+carbonEnv
+  }
+
+  // Initialize all listeners to the Operation bus.
+  CarbonEnv.initListeners()
+
+  override def lookupRelation(name: TableIdentifier): LogicalPlan = {
+val rtnRelation = super.lookupRelation(name)
+val isRelationRefreshed =
+  CarbonSessionUtil.refreshRelation(rtnRelation, name)(sparkSession)
+if (isRelationRefreshed) {
+  super.lookupRelation(name)
+} else {
+  rtnRelation
+}
+  }
+
+  /**
+   * returns hive client from HiveExternalCatalog
+   *
+   * @return
+   */
+  override def getClient(): org.apache.spark.sql.hive.client.HiveClient = {
+sparkSession.asInstanceOf[CarbonSession].sharedState.externalCatalog
+  .asInstanceOf[HiveExternalCatalog].client
+  }
+
+  def alterTableRename(oldTableIdentifier: TableIdentifier,
+  newTableIdentifier: TableIdentifier,
+  newTablePath: String): Unit = {
+getClient().runSqlHive(
+  s"ALTER TABLE ${ oldTableIdentifier.database.get }.${ 
oldTableIdentifier.table } " +
+  s"RENAME TO ${ oldTableIdentifie

[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655227
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala
 ---
@@ -247,6 +252,32 @@ object CarbonReflectionUtils {
 isFormatted
   }
 
+
+  def getRowDataSourceScanExecObj(relation: LogicalRelation,
--- End diff --

fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655176
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala
 ---
@@ -247,6 +252,32 @@ object CarbonReflectionUtils {
 isFormatted
   }
 
+
--- End diff --

Fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655245
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/DDLStrategy.scala
 ---
@@ -38,9 +39,17 @@ import org.apache.carbondata.common.logging.{LogService, 
LogServiceFactory}
 import org.apache.carbondata.core.features.TableOperation
 import org.apache.carbondata.core.util.CarbonProperties
 
-/**
- * Carbon strategies for ddl commands
- */
+  /** Carbon strategies for ddl commands
--- End diff --

fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655128
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
 ---
@@ -1787,20 +1839,23 @@ case class 
CarbonPreAggregateDataLoadingRules(sparkSession: SparkSession)
   // named expression list otherwise update the list and add 
it to set
   if 
(!validExpressionsMap.contains(AggExpToColumnMappingModel(sumExp))) {
 namedExpressionList +=
-Alias(expressions.head, name + "_ 
sum")(NamedExpression.newExprId,
+CarbonCompilerUtil.createAliasRef(expressions.head,
+  name + "_ sum",
+  NamedExpression.newExprId,
   alias.qualifier,
   Some(alias.metadata),
-  alias.isGenerated)
+  Some(alias))
 validExpressionsMap += AggExpToColumnMappingModel(sumExp)
   }
   // check with same expression already count is present then 
do not add to
   // named expression list otherwise update the list and add 
it to set
   if 
(!validExpressionsMap.contains(AggExpToColumnMappingModel(countExp))) {
 namedExpressionList +=
-Alias(expressions.last, name + "_ 
count")(NamedExpression.newExprId,
-  alias.qualifier,
-  Some(alias.metadata),
-  alias.isGenerated)
+  CarbonCompilerUtil.createAliasRef(expressions.last, name 
+ "_ count",
--- End diff --

Fixed. Changed the name from CarbonCompilerUtil to CarbonToSparkAdapater.
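
A hedged sketch of what such a version adapter can look like on the spark2.3 side,
with the signature inferred from the call sites in the diff above; Spark 2.3
dropped the isGenerated flag from Alias's second parameter list, which is the
difference the helper hides from common code:

    import org.apache.spark.sql.catalyst.expressions.{Alias, ExprId, Expression, NamedExpression}
    import org.apache.spark.sql.types.Metadata

    object CarbonToSparkAdapater {
      // Sketch only: builds an Alias without the isGenerated argument that the
      // Spark 2.1/2.2 constructor expects; namedExpr is accepted so common-code
      // call sites can compile unchanged across versions.
      def createAliasRef(
          child: Expression,
          name: String,
          exprId: ExprId = NamedExpression.newExprId,
          qualifier: Option[String] = None,
          explicitMetadata: Option[Metadata] = None,
          namedExpr: Option[NamedExpression] = None): Alias = {
        Alias(child, name)(exprId, qualifier, explicitMetadata)
      }
    }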


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196655020
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/sql/commands/StoredAsCarbondataSuite.scala
 ---
@@ -87,7 +87,7 @@ class StoredAsCarbondataSuite extends QueryTest with 
BeforeAndAfterEach {
   sql("CREATE TABLE carbon_table(key INT, value STRING) STORED AS  ")
 } catch {
   case e: Exception =>
-assert(e.getMessage.contains("no viable alternative at input"))
+assert(true)
--- End diff --

Fixed. Added an OR condition with the message as per Spark 2.3.0.
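
A hedged sketch of the adjusted assertion; the exact Spark 2.3 message text is an
assumption here, not quoted from the PR:

    // The parser error text differs between Spark versions, so accept either form.
    assert(e.getMessage.contains("no viable alternative at input") ||
      e.getMessage.contains("mismatched input"))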


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196654906
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala
 ---
@@ -140,6 +142,13 @@ object CarbonReflectionUtils {
 relation,
 expectedOutputAttributes,
 catalogTable)._1.asInstanceOf[LogicalRelation]
+} else if (SPARK_VERSION.startsWith("2.3")) {
--- End diff --

Fixed. Added a utility method for Spark version comparison in SparkUtil.scala.
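
A minimal sketch of such a helper, assuming it wraps the SPARK_VERSION prefix
checks that the reflection code currently repeats (method names are illustrative,
not necessarily the ones added to SparkUtil.scala):

    import org.apache.spark.SPARK_VERSION

    object SparkUtil {
      // True when the running Spark release matches the given major.minor prefix,
      // e.g. isSparkVersionEqualTo("2.3") for any 2.3.x build.
      def isSparkVersionEqualTo(version: String): Boolean =
        SPARK_VERSION.startsWith(version)

      // True when the running Spark release is the given major.minor or newer.
      def isSparkVersionXandAbove(version: String): Boolean =
        SPARK_VERSION.substring(0, 3).toDouble >= version.toDouble
    }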


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196654926
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/strategy/CarbonLateDecodeStrategy.scala
 ---
@@ -355,18 +362,19 @@ private[sql] class CarbonLateDecodeStrategy extends 
SparkStrategy {
   }
 
   private def getDataSourceScan(relation: LogicalRelation,
-  output: Seq[Attribute],
-  partitions: Seq[PartitionSpec],
-  scanBuilder: (Seq[Attribute], Seq[Expression], Seq[Filter],
-ArrayBuffer[AttributeReference], Seq[PartitionSpec]) => 
RDD[InternalRow],
-  candidatePredicates: Seq[Expression],
-  pushedFilters: Seq[Filter],
-  metadata: Map[String, String],
-  needDecoder: ArrayBuffer[AttributeReference],
-  updateRequestedColumns: Seq[Attribute]): DataSourceScanExec = {
+output: Seq[Attribute],
--- End diff --

fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196654954
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/bigdecimal/TestBigDecimal.scala
 ---
@@ -149,8 +149,9 @@ class TestBigDecimal extends QueryTest with 
BeforeAndAfterAll {
   }
 
   test("test sum*10 aggregation on big decimal column with high 
precision") {
-checkAnswer(sql("select sum(salary)*10 from carbonBigDecimal_2"),
-  sql("select sum(salary)*10 from hiveBigDecimal"))
+val carbonSeq = sql("select sum(salary)*10 from 
carbonBigDecimal_2").collect
--- End diff --

fixed


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196654884
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/util/CarbonReflectionUtils.scala
 ---
@@ -65,7 +66,7 @@ object CarbonReflectionUtils {
 className,
 tableIdentifier,
 tableAlias)._1.asInstanceOf[UnresolvedRelation]
-} else if (SPARK_VERSION.startsWith("2.2")) {
+} else if (SPARK_VERSION.startsWith("2.2") || 
SPARK_VERSION.startsWith("2.3")) {
--- End diff --

Fixed. Added a utility method for Spark version comparison in SparkUtil.scala.


---


[GitHub] carbondata pull request #2366: [CARBONDATA-2532][Integration] Carbon to supp...

2018-06-19 Thread sandeep-katta
Github user sandeep-katta commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/2366#discussion_r196341741
  
--- Diff: 
integration/spark-common/src/main/scala/org/apache/spark/sql/execution/streaming/CarbonAppendableStreamSink.scala
 ---
@@ -127,7 +127,7 @@ class CarbonAppendableStreamSink(
 className = 
sparkSession.sessionState.conf.streamingFileCommitProtocolClass,
 jobId = batchId.toString,
 outputPath = fileLogPath,
-isAppend = false)
+false)
--- End diff --

In Spark 2.2.1 the argument name is "isAppend" and in 2.3.0 it is 
"dynamicPartitionOverwrite", so passing the value positionally is the required change.


---


[GitHub] carbondata issue #2343: [WIP][CARBONDATA-2532][Integration] Carbon to suppor...

2018-05-28 Thread sandeep-katta
Github user sandeep-katta commented on the issue:

https://github.com/apache/carbondata/pull/2343
  
Duplicate of [WIP][CARBONDATA-2532][Integration] Carbon to support spark 
2.3 version


---


[GitHub] carbondata pull request #2343: [WIP][CARBONDATA-2532][Integration] Carbon to...

2018-05-28 Thread sandeep-katta
Github user sandeep-katta closed the pull request at:

https://github.com/apache/carbondata/pull/2343


---


[GitHub] carbondata pull request #2343: [WIP][CARBONDATA-2532][Integration] Carbon to...

2018-05-25 Thread sandeep-katta
GitHub user sandeep-katta opened a pull request:

https://github.com/apache/carbondata/pull/2343

[WIP][CARBONDATA-2532][Integration] Carbon to support spark 2.3 version

**What changes were proposed in this pull request?**

1. In general, carbon needs to adapt to the API changes made by Spark in 2.3.0
2. Added one more folder under integration/spark2 for 2.3 support
3. A new profile is added for Spark 2.3

**How was this patch tested?**
Existing test cases.
Some issues are pending and will be corrected in the meantime.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sandeep-katta/carbondata sparksupport

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2343.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2343


commit 09df21ad61cc518fa12ad648ff00d4e68dcc47d0
Author: sandeep-katta 
Date:   2018-05-25T10:17:48Z

CARBON-SPARK-UPGRADE_2.3

Carbon support for 2.3.1




---