[GitHub] incubator-carbondata issue #715: [CARBONDATA-782]support SORT_COLUMNS

2017-04-01 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/715
  
@jackylk thanks for your reply. In my opinion, it's better to create a branch-1.1 to prepare for the 1.1 release and keep the master branch as the development branch.




[GitHub] incubator-carbondata issue #715: [CARBONDATA-782]support SORT_COLUMNS

2017-03-31 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/715
  
@QiangCai @jackylk will this PR be merged into the master branch?




[GitHub] incubator-carbondata issue #506: [CARBONDATA-608]Fixed compilation issue in ...

2017-01-08 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/506
  
I compiled CarbonData with Spark 1.6 and this PR; the build was successful. Good.




[GitHub] incubator-carbondata issue #492: [CARBONDATA-440] Providing the update and d...

2017-01-06 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/492
  
@ravikiran23 @jackylk there are some errors when compiling with Spark 1.6. I have pointed them out in the comments; please take a look, thanks.




[GitHub] incubator-carbondata pull request #492: [CARBONDATA-440] Providing the updat...

2017-01-06 Thread zzcclp
Github user zzcclp commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/492#discussion_r94986558
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonAnalysisRules.scala ---

[GitHub] incubator-carbondata pull request #492: [CARBONDATA-440] Providing the updat...

2017-01-06 Thread zzcclp
Github user zzcclp commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/492#discussion_r94986306
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonAnalysisRules.scala ---

[GitHub] incubator-carbondata pull request #492: [CARBONDATA-440] Providing the updat...

2017-01-06 Thread zzcclp
Github user zzcclp commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/492#discussion_r94986095
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/hive/CarbonAnalysisRules.scala ---
@@ -0,0 +1,173 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.spark.sql.hive
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.CarbonTableIdentifierImplicit
+import org.apache.spark.sql.catalyst.analysis.{UnresolvedAlias, UnresolvedFunction, UnresolvedRelation, UnresolvedStar}
+import org.apache.spark.sql.catalyst.expressions.Alias
+import org.apache.spark.sql.catalyst.plans.Inner
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules._
+import org.apache.spark.sql.execution.command.ProjectForDeleteCommand
+import org.apache.spark.sql.execution.datasources.LogicalRelation
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+
+/**
+ * Insert into carbon table from other source
+ */
+object CarbonPreInsertionCasts extends Rule[LogicalPlan] {
+  def apply(plan: LogicalPlan): LogicalPlan = plan.transform {
+    // Wait until children are resolved.
+    case p: LogicalPlan if !p.childrenResolved => p
+
+    case p @ InsertIntoTable(relation: LogicalRelation, _, child, _, _)
+      if relation.relation.isInstanceOf[CarbonDatasourceRelation] =>
+      castChildOutput(p, relation.relation.asInstanceOf[CarbonDatasourceRelation], child)
+  }
+
+  def castChildOutput(p: InsertIntoTable, relation: CarbonDatasourceRelation, child: LogicalPlan)
+  : LogicalPlan = {
+    if (relation.carbonRelation.output.size > CarbonCommonConstants
+      .DEFAULT_MAX_NUMBER_OF_COLUMNS) {
+      sys.error("Maximum supported column by carbon is:" + CarbonCommonConstants
+        .DEFAULT_MAX_NUMBER_OF_COLUMNS)
+    }
+    if (child.output.size >= relation.carbonRelation.output.size) {
+      InsertIntoCarbonTable(relation, p.partition, p.child, p.overwrite, p.ifNotExists)
+    } else {
+      sys.error("Cannot insert into target table because column number are different")
+    }
+  }
+}
+
+object CarbonIUDAnalysisRule extends Rule[LogicalPlan] {
+
+  var sqlContext: SQLContext = _
+
+  def init(sqlContext: SQLContext) {
+    this.sqlContext = sqlContext
+  }
+
+  private def processUpdateQuery(
+      table: UnresolvedRelation,
+      columns: List[String],
+      selectStmt: String,
+      filter: String): LogicalPlan = {
+    var includedDestColumns = false
+    var includedDestRelation = false
+    var addedTupleId = false
+
+    def prepareTargetReleation(relation: UnresolvedRelation): Subquery = {
+      val tupleId = UnresolvedAlias(Alias(UnresolvedFunction("getTupleId",
+        Seq.empty, isDistinct = false), "tupleId")())
+      val projList = Seq(
+        UnresolvedAlias(UnresolvedStar(table.alias)), tupleId)
--- End diff --

@ravikiran23 @jackylk UnresolvedStar's constructor takes an Option[String] in Spark 1.5, while in Spark 1.6 it takes an Option[Seq[String]].
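
For reference, a minimal sketch of the two constructor shapes (tableAlias below is a hypothetical stand-in for table.alias; only the signatures described above are assumed):

    import org.apache.spark.sql.catalyst.analysis.UnresolvedStar

    // hypothetical alias of the target table, e.g. Some("t")
    val tableAlias: Option[String] = Some("t")

    // Spark 1.5.x: UnresolvedStar takes an Option[String]
    // val projAll = UnresolvedStar(tableAlias)

    // Spark 1.6.x: UnresolvedStar takes an Option[Seq[String]],
    // so the alias has to be wrapped in a Seq
    val projAll = UnresolvedStar(tableAlias.map(Seq(_)))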




[GitHub] incubator-carbondata pull request #492: [CARBONDATA-440] Providing the updat...

2017-01-06 Thread zzcclp
Github user zzcclp commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/492#discussion_r94985585
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/optimizer/CarbonOptimizer.scala ---
@@ -72,23 +74,71 @@ object CarbonOptimizer {
 class ResolveCarbonFunctions(relations: Seq[CarbonDecoderRelation])
   extends Rule[LogicalPlan] with PredicateHelper {
   val LOGGER = LogServiceFactory.getLogService(this.getClass.getName)
-  def apply(plan: LogicalPlan): LogicalPlan = {
-    if (relations.nonEmpty && !isOptimized(plan)) {
+  def apply(logicalPlan: LogicalPlan): LogicalPlan = {
+    if (relations.nonEmpty && !isOptimized(logicalPlan)) {
+      val plan = processPlan(logicalPlan)
+      val udfTransformedPlan = pushDownUDFToJoinLeftRelation(plan)
       LOGGER.info("Starting to optimize plan")
       val recorder = CarbonTimeStatisticsFactory.createExecutorRecorder("")
       val queryStatistic = new QueryStatistic()
-      val result = transformCarbonPlan(plan, relations)
+      val result = transformCarbonPlan(udfTransformedPlan, relations)
       queryStatistic.addStatistics("Time taken for Carbon Optimizer to optimize: ",
         System.currentTimeMillis)
       recorder.recordStatistics(queryStatistic)
       recorder.logStatistics()
       result
     } else {
       LOGGER.info("Skip CarbonOptimizer")
-      plan
+      logicalPlan
     }
   }

+  private def processPlan(plan: LogicalPlan): LogicalPlan = {
+    plan transform {
+      case ProjectForUpdate(table, cols, Seq(updatePlan)) =>
+        var isTransformed = false
+        val newPlan = updatePlan transform {
+          case Project(pList, child) if (!isTransformed) =>
+            val (dest: Seq[NamedExpression], source: Seq[NamedExpression]) = pList
+              .splitAt(pList.size - cols.size)
+            val diff = cols.diff(dest.map(_.name))
+            if (diff.size > 0) {
+              sys.error(s"Unknown column(s) ${diff.mkString(",")} in table ${table.tableName}")
+            }
+            isTransformed = true
+            Project(dest.filter(a => !cols.contains(a.name)) ++ source, child)
+        }
+        ProjectForUpdateCommand(newPlan, table.tableIdentifier)
--- End diff --

@ravikiran23 @jackylk UnresolvedRelation.tableIdentifier is a Seq[String] in Spark 1.5, while in Spark 1.6 it is a TableIdentifier, so this does not compile with Spark 1.6.
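
For reference, a minimal sketch of the type difference (assuming the Spark 1.6 API; the relation below is hypothetical):

    import org.apache.spark.sql.catalyst.TableIdentifier
    import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation

    // hypothetical target relation for table "t" in database "db"
    val rel = UnresolvedRelation(TableIdentifier("t", Some("db")), alias = None)

    // Spark 1.6.x: tableIdentifier is a TableIdentifier
    val id: TableIdentifier = rel.tableIdentifier

    // Spark 1.5.x exposed tableIdentifier as a Seq[String] (e.g. Seq("db", "t")),
    // so code written against the 1.5 shape does not type-check on 1.6.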




[GitHub] incubator-carbondata issue #498: [CARBONDATA-568][Minor][Follow-Up] clean up...

2017-01-04 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/498
  
@jackylk please take a look, thanks.




[GitHub] incubator-carbondata pull request #498: [CARBONDATA-568][Minor][Follow-Up] c...

2017-01-04 Thread zzcclp
GitHub user zzcclp opened a pull request:

https://github.com/apache/incubator-carbondata/pull/498

[CARBONDATA-568][Minor][Follow-Up] clean up code for carbon-core module

using "new java.util.LinkedHashSet" instead of "new util.LinkedHashSet"



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zzcclp/incubator-carbondata cleancore-followup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/498.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #498


commit 109d3833b5e9fd4ed9c2f231145c149be71903a5
Author: Zhang Zhichao <441586...@qq.com>
Date:   2017-01-04T16:03:41Z

[CARBONDATA-568][Follow-Up] clean up code for carbon-core module

using "new java.util.LinkedHashSet" instead of "new util.LinkedHashSet"






[GitHub] incubator-carbondata issue #426: [MINOR-FIX]change the declared package of t...

2016-12-13 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/426
  
thanks, @jackylk.




[GitHub] incubator-carbondata issue #373: [CARBONDATA-474] test case for columnar pac...

2016-12-12 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/373
  
@anuragknoldus @ravipesala I have created a PR (https://github.com/apache/incubator-carbondata/pull/426) to fix the wrong declared package.




[GitHub] incubator-carbondata pull request #426: [MINOR-FIX]change the declared packa...

2016-12-12 Thread zzcclp
GitHub user zzcclp opened a pull request:

https://github.com/apache/incubator-carbondata/pull/426

[MINOR-FIX] change the declared package of these four java files

The declared package of these four java files must be modified to
"org.apache.carbondata.core.datastorage.store.filesystem":
AlluxioCarbonFileTest.java
HDFSCarbonFileTest.java
LocalCarbonFileTest.java
ViewFsCarbonFileTest.java

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zzcclp/incubator-carbondata change_declared_package

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/426.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #426


commit 59d17168f8f015acd10f7b540b5928a50d6166d9
Author: Zhang Zhichao <441586...@qq.com>
Date:   2016-12-13T06:28:25Z

The declared package of these four java files must be modified to
"org.apache.carbondata.core.datastorage.store.filesystem"

minor fix






[GitHub] incubator-carbondata issue #373: [CARBONDATA-474] test case for columnar pac...

2016-12-12 Thread zzcclp
Github user zzcclp commented on the issue:

https://github.com/apache/incubator-carbondata/pull/373
  
ping @anuragknoldus @ravipesala  

