[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14148
  
**[Test build #62144 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62144/consoleFull)** for PR 14148 at commit [`a05383c`](https://github.com/apache/spark/commit/a05383c8ff4483dacdf34070173b965ab6f7d4ca).





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14148
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62141/
Test PASSed.





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14148
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14148
  
**[Test build #62141 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62141/consoleFull)** for PR 14148 at commit [`d92ebcd`](https://github.com/apache/spark/commit/d92ebcdfd7e525499e0c8b491eeab416ad12ecfd).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14148
  
**[Test build #62143 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62143/consoleFull)** for PR 14148 at commit [`473b27d`](https://github.com/apache/spark/commit/473b27deeb49096ddd38f1b4d4ca03207aa9e025).





[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70380596
  
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuerySuite.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.plans.logical.LocalRelation
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.test.SharedSQLContext
+
+class OptimizeMetadataOnlyQuerySuite extends QueryTest with SharedSQLContext {
+  import testImplicits._
+
+  override def beforeAll(): Unit = {
+    super.beforeAll()
+    val data = (1 to 10).map(i => (i, s"data-$i", i % 2, if ((i % 2) == 0) "even" else "odd"))
+      .toDF("col1", "col2", "partcol1", "partcol2")
+    data.write.partitionBy("partcol1", "partcol2").mode("append").saveAsTable("srcpart")
+  }
+
+  override protected def afterAll(): Unit = {
+    try {
+      sql("DROP TABLE IF EXISTS srcpart")
+    } finally {
+      super.afterAll()
+    }
+  }
+
+  private def assertMetadataOnlyQuery(df: DataFrame): Unit = {
+    val localRelations = df.queryExecution.optimizedPlan.collect {
+      case l @ LocalRelation(_, _) => l
+    }
+    assert(localRelations.size == 1)
+  }
+
+  private def assertNotMetadataOnlyQuery(df: DataFrame): Unit = {
+    val localRelations = df.queryExecution.optimizedPlan.collect {
+      case l @ LocalRelation(_, _) => l
+    }
+    assert(localRelations.size == 0)
+  }
+
+  private def testMetadataOnly(name: String, sqls: String*): Unit = {
+    test(name) {
+      withSQLConf(SQLConf.OPTIMIZER_METADATA_ONLY.key -> "true") {
+        sqls.foreach { case q => assertMetadataOnlyQuery(sql(q)) }
+      }
+      withSQLConf(SQLConf.OPTIMIZER_METADATA_ONLY.key -> "false") {
+        sqls.foreach { case q => assertNotMetadataOnlyQuery(sql(q)) }
+      }
+    }
+  }
+
+  private def testNotMetadataOnly(name: String, sqls: String*): Unit = {
+    test(name) {
+      withSQLConf(SQLConf.OPTIMIZER_METADATA_ONLY.key -> "true") {
+        sqls.foreach { case q => assertNotMetadataOnlyQuery(sql(q)) }
+      }
+      withSQLConf(SQLConf.OPTIMIZER_METADATA_ONLY.key -> "false") {
+        sqls.foreach { case q => assertNotMetadataOnlyQuery(sql(q)) }
+      }
+    }
+  }
+
+  testMetadataOnly(
+    "OptimizeMetadataOnlyQuery test: aggregate expression is partition columns",
--- End diff --

I think we can remove the prefix: `OptimizeMetadataOnlyQuery test`. The 
test report will print the name of this test suite for these tests.
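
For illustration, a sketch of the suggested rename. The SQL strings are hypothetical stand-ins (the diff is cut off before the test bodies); they use the `srcpart` table and `partcol1`/`partcol2` columns defined earlier in this suite:

```scala
// Drop the redundant prefix; the test report already prints the suite name.
testMetadataOnly(
  "aggregate expression is partition columns",
  // hypothetical example queries over the srcpart table created in beforeAll()
  "SELECT partcol1 FROM srcpart GROUP BY partcol1",
  "SELECT partcol1, partcol2 FROM srcpart GROUP BY partcol1, partcol2")
```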





[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70380410
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala ---
@@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, SessionCatalog}
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, LogicalRelation}
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This rule optimizes the execution of queries that can be answered by looking only at
+ * partition-level metadata. This applies when all the columns scanned are partition columns, and
+ * the query has an aggregate operator that satisfies the following conditions:
+ * 1. aggregate expression is partition columns.
+ *  e.g. SELECT col FROM tbl GROUP BY col.
+ * 2. aggregate function on partition columns with DISTINCT.
+ *  e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
+ * 3. aggregate function on partition columns which have same result w or w/o DISTINCT keyword.
+ *  e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.
+ */
+case class OptimizeMetadataOnlyQuery(
+    catalog: SessionCatalog,
+    conf: SQLConf) extends Rule[LogicalPlan] {
+
+  def apply(plan: LogicalPlan): LogicalPlan = {
+    if (!conf.optimizerMetadataOnly) {
+      return plan
+    }
+
+    plan.transform {
+      case a @ Aggregate(_, aggExprs, child @ PartitionedRelation(partAttrs, relation)) =>
+        // We only apply this optimization when only partitioned attributes are scanned.
+        if (a.references.subsetOf(partAttrs)) {
+          val aggFunctions = aggExprs.flatMap(_.collect {
+            case agg: AggregateExpression => agg
+          })
+          val isAllDistinctAgg = aggFunctions.forall { agg =>
+            agg.isDistinct || (agg.aggregateFunction match {
+              // `Max`, `Min`, `First` and `Last` are always distinct aggregate functions no matter
+              // they have DISTINCT keyword or not, as the result will be same.
+              case _: Max => true
+              case _: Min => true
+              case _: First => true
+              case _: Last => true
+              case _ => false
+            })
+          }
+          if (isAllDistinctAgg) {
+            a.withNewChildren(Seq(replaceTableScanWithPartitionMetadata(child, relation)))
+          } else {
+            a
+          }
+        } else {
+          a
+        }
+    }
+  }
+
+  /**
+   * Returns the partition attributes of the table relation plan.
+   */
+  private def getPartitionAttrs(
+      partitionColumnNames: Seq[String],
+      relation: LogicalPlan): Seq[Attribute] = {
+    val partColumns = partitionColumnNames.map(_.toLowerCase).toSet
+    relation.output.filter(a => partColumns.contains(a.name.toLowerCase))
+  }
+
+  /**
+   * Transform the given plan, find its table scan nodes that matches the given relation, and then
+   * replace the table scan node with its corresponding partition values.
+   */
+  private def replaceTableScanWithPartitionMetadata(
+      child: LogicalPlan,
+      relation: LogicalPlan): LogicalPlan = {
+    child transform {
+      case plan if plan eq relation =>
+        relation match {
+          case l @ LogicalRelation(fsRelation: HadoopFsRelation, _, _) =>
+            val partAttrs = getPartitionAttrs(fsRelation.partitionSchema.map(_.name), l)
+            val partitionData =
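
To make the three conditions above concrete, a usage sketch against the `srcpart` table from the test suite earlier in this thread. It assumes a `SparkSession` named `spark` and that `SQLConf.OPTIMIZER_METADATA_ONLY` is exposed as the key `spark.sql.optimizer.metadataOnly`; both names are assumptions, not part of the diff:

```scala
// With the rule enabled, each query below can be answered from
// partition-level metadata instead of scanning data files.
spark.conf.set("spark.sql.optimizer.metadataOnly", "true")  // assumed key

// Condition 1: the aggregate expressions are partition columns.
spark.sql("SELECT partcol1 FROM srcpart GROUP BY partcol1").show()

// Condition 2: DISTINCT aggregate function over a partition column.
spark.sql(
  "SELECT partcol1, count(DISTINCT partcol2) FROM srcpart GROUP BY partcol1").show()

// Condition 3: max/min/first/last yield the same result with or without
// DISTINCT, so they qualify as well.
spark.sql("SELECT partcol1, max(partcol2) FROM srcpart GROUP BY partcol1").show()
```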

[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/14148
  
Did a quick check. My understanding is wrong. We did the schema inference 
when creating the table. Let me fix it. Thanks!





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14146
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread sameeragarwal
Github user sameeragarwal commented on the issue:

https://github.com/apache/spark/pull/14146
  
LGTM





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14146
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62139/
Test PASSed.





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14146
  
**[Test build #62139 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62139/consoleFull)** for PR 14146 at commit [`222e868`](https://github.com/apache/spark/commit/222e868969c3f89ba62ebb97a6d99d046c00b6c8).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread gatorsmile
Github user gatorsmile commented on the issue:

https://github.com/apache/spark/pull/14148
  
@rxin The created table could be empty. Thus, we are unable to cover all 
the cases even if we try schema inference when creating tables. You know, this 
is just my understanding. No clue about the original intention. : )
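
A sketch of the case being described, with a hypothetical table name and path: the table is created over a location that holds no files yet, so there is nothing to infer a schema from at creation time.

```scala
// Hypothetical: a data source table over a directory that is still empty.
spark.sql(
  """CREATE TABLE empty_src
    |USING parquet
    |OPTIONS (path '/tmp/landing/empty_dir')""".stripMargin)

// Files land in the directory later; only a lookup at query time can
// infer the real schema, so inference at creation cannot cover this case.
```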





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14116
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62140/
Test FAILed.





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14116
  
**[Test build #62140 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62140/consoleFull)** for PR 14116 at commit [`2a753aa`](https://github.com/apache/spark/commit/2a753aa40c8663c8b5fd0b28c0bec962556930af).
 * This patch **fails PySpark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14116
  
Merged build finished. Test FAILed.





[GitHub] spark pull request #14120: [SPARK-16199][SQL] Add a method to list the refer...

2016-07-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14120





[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14080
  
**[Test build #3185 has finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3185/consoleFull)** for PR 14080 at commit [`4884084`](https://github.com/apache/spark/commit/488408479858b026cf67d9a04e7b0fe1aad8934d).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request #13901: [SPARK-16199][SQL] Add a method to list the refer...

2016-07-11 Thread rxin
Github user rxin closed the pull request at:

https://github.com/apache/spark/pull/13901





[GitHub] spark issue #14120: [SPARK-16199][SQL] Add a method to list the referenced c...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14120
  
Merging in master.






[GitHub] spark issue #14143: [SPARK-16430][SQL][STREAMING] Fixed bug in the maxFilesP...

2016-07-11 Thread zsxwing
Github user zsxwing commented on the issue:

https://github.com/apache/spark/pull/14143
  
LGTM!





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14148
  
Shouldn't schema inference run as soon as the table is created?






[GitHub] spark issue #13494: [SPARK-15752] [SQL] Optimize metadata only query that ha...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/13494
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62137/
Test PASSed.





[GitHub] spark issue #13494: [SPARK-15752] [SQL] Optimize metadata only query that ha...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/13494
  
Merged build finished. Test PASSed.





[GitHub] spark issue #13494: [SPARK-15752] [SQL] Optimize metadata only query that ha...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/13494
  
**[Test build #62137 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62137/consoleFull)** for PR 13494 at commit [`ff16509`](https://github.com/apache/spark/commit/ff1650987b901825d3828de972fa4434466c951f).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14139
  
Merged build finished. Test FAILed.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14139
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62142/
Test FAILed.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14139
  
**[Test build #62142 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62142/consoleFull)** for PR 14139 at commit [`5aaa96b`](https://github.com/apache/spark/commit/5aaa96b9fa413aed119f7dffb0e3661249db5788).
 * This patch **fails to build**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request #11317: [SPARK-12639] [SQL] Mark Filters Fully Handled By...

2016-07-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/11317





[GitHub] spark issue #14028: [SPARK-16351][SQL] Avoid per-record type dispatch in JSO...

2016-07-11 Thread HyukjinKwon
Github user HyukjinKwon commented on the issue:

https://github.com/apache/spark/pull/14028
  
@yhuai Could you take another look?





[GitHub] spark issue #11317: [SPARK-12639] [SQL] Mark Filters Fully Handled By Source...

2016-07-11 Thread yhuai
Github user yhuai commented on the issue:

https://github.com/apache/spark/pull/11317
  
lgtm. Merging to master.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14139
  
**[Test build #62142 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62142/consoleFull)** for PR 14139 at commit [`5aaa96b`](https://github.com/apache/spark/commit/5aaa96b9fa413aed119f7dffb0e3661249db5788).





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14144
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14144
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62135/
Test PASSed.





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14144
  
**[Test build #62135 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62135/consoleFull)** for PR 14144 at commit [`8b2639f`](https://github.com/apache/spark/commit/8b2639f60975f07157790309052949ff0daabe38).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14139
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14139
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62136/
Test PASSed.





[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14139
  
**[Test build #62136 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62136/consoleFull)** for PR 14139 at commit [`b98ee09`](https://github.com/apache/spark/commit/b98ee098d5d71f4f2a60f49be615bcf1632e2c9c).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14079: [SPARK-8425][CORE] New Blacklist Mechanism

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14079
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14079: [SPARK-8425][CORE] New Blacklist Mechanism

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14079
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62134/
Test PASSed.





[GitHub] spark issue #14079: [SPARK-8425][CORE] New Blacklist Mechanism

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14079
  
**[Test build #62134 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62134/consoleFull)** for PR 14079 at commit [`c22aaad`](https://github.com/apache/spark/commit/c22aaad76f07cbe58ea455d18959470e7afb1498).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14148: [SPARK-16482] [SQL] Describe Table Command for Tables Re...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14148
  
**[Test build #62141 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62141/consoleFull)** for PR 14148 at commit [`d92ebcd`](https://github.com/apache/spark/commit/d92ebcdfd7e525499e0c8b491eeab416ad12ecfd).





[GitHub] spark pull request #14148: [SPARK-16482] [SQL] Describe Table Command for Ta...

2016-07-11 Thread gatorsmile
GitHub user gatorsmile opened a pull request:

https://github.com/apache/spark/pull/14148

[SPARK-16482] [SQL] Describe Table Command for Tables Requiring Runtime Inferred Schema

## What changes were proposed in this pull request?
If we create a table pointing to a parquet/json dataset without specifying the schema, the describe table command does not show the schema at all. It only shows `# Schema of this table is inferred at runtime`. In 1.6, describe table does show the schema of such a table.

For data source tables, inferring the schema requires loading the data source table at runtime. Thus, this PR calls the function `lookupRelation`, as sketched below.
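
A sketch of the behavior change, with a hypothetical table name and path:

```scala
// A table over existing parquet data, created without an explicit schema.
spark.sql(
  """CREATE TABLE events
    |USING parquet
    |OPTIONS (path '/data/events')""".stripMargin)

// Before this PR, DESCRIBE printed only
//   "# Schema of this table is inferred at runtime".
// With this PR, describe goes through lookupRelation, which loads the data
// source and shows the inferred columns, matching the 1.6 behavior.
spark.sql("DESCRIBE events").show()
```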

## How was this patch tested?
Added test cases.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gatorsmile/spark describeSchema

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14148.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14148


commit 57893bdf55146c4ecd0a6d72c69ec3d3e85b5207
Author: gatorsmile 
Date:   2016-07-11T22:30:11Z

fix

commit 6f2deb3405b119aff1c88cab19d3953a7ede0408
Author: gatorsmile 
Date:   2016-07-11T22:55:18Z

another fix way

commit d92ebcdfd7e525499e0c8b491eeab416ad12ecfd
Author: gatorsmile 
Date:   2016-07-12T04:00:20Z

another fix way







[GitHub] spark issue #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14138
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14138
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62133/
Test PASSed.





[GitHub] spark pull request #14052: [SPARK-15440] [Core] [Deploy] Add CSRF Filter for...

2016-07-11 Thread yanboliang
Github user yanboliang closed the pull request at:

https://github.com/apache/spark/pull/14052





[GitHub] spark issue #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14138
  
**[Test build #62133 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62133/consoleFull)** for PR 14138 at commit [`ccfb9c4`](https://github.com/apache/spark/commit/ccfb9c4511bdd6d9655ef5ef56fb387b9fab06dd).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14116
  
**[Test build #62140 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62140/consoleFull)** for PR 14116 at commit [`2a753aa`](https://github.com/apache/spark/commit/2a753aa40c8663c8b5fd0b28c0bec962556930af).





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14146
  
**[Test build #62139 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62139/consoleFull)** for PR 14146 at commit [`222e868`](https://github.com/apache/spark/commit/222e868969c3f89ba62ebb97a6d99d046c00b6c8).





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14146
  
cc @sameeragarwal 





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14116
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62138/
Test FAILed.





[GitHub] spark issue #14116: [SPARK-16452][SQL] Support basic INFORMATION_SCHEMA

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14116
  
Merged build finished. Test FAILed.





[GitHub] spark pull request #14144: [SPARK-16488] Fix codegen variable namespace coll...

2016-07-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14144





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14144
  
Merging in master/2.0.






[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread lovexi
Github user lovexi commented on the issue:

https://github.com/apache/spark/pull/14080
  
Thank you for reminding me of this. I've already updated the PR description and added more details.
cc @rxin 





[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14080
  
**[Test build #3185 has started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3185/consoleFull)** for PR 14080 at commit [`4884084`](https://github.com/apache/spark/commit/488408479858b026cf67d9a04e7b0fe1aad8934d).





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14144
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62129/
Test PASSed.





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14144
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14080
  
Can you update the pull request description? It is now outdated.






[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14144
  
**[Test build #62129 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62129/consoleFull)** for PR 14144 at commit [`8e7b05f`](https://github.com/apache/spark/commit/8e7b05f0f59d58bf6482e069b31c102eff5a1b83).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread lovexi
Github user lovexi commented on the issue:

https://github.com/apache/spark/pull/14080
  
Restart test, please





[GitHub] spark issue #14080: [SPARK-16405] Add metrics and source for external shuffl...

2016-07-11 Thread lovexi
Github user lovexi commented on the issue:

https://github.com/apache/spark/pull/14080
  
@rxin Thank you for letting me know this. That saves me a lot of time on testing.





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14146
  
**[Test build #3184 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3184/consoleFull)**
 for PR 14146 at commit 
[`0df6a99`](https://github.com/apache/spark/commit/0df6a99e02fa7208bb5a540d703f0938152181fe).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `class ExpressionEvalHelperSuite extends SparkFunSuite with 
ExpressionEvalHelper `
  * `case class BadCodegenExpression() extends LeafExpression `





[GitHub] spark issue #14045: [SPARK-16362][SQL] Support ArrayType and StructType in v...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14045
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62130/
Test PASSed.





[GitHub] spark issue #14045: [SPARK-16362][SQL] Support ArrayType and StructType in v...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14045
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14045: [SPARK-16362][SQL] Support ArrayType and StructType in v...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14045
  
**[Test build #62130 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62130/consoleFull)**
 for PR 14045 at commit 
[`42f53de`](https://github.com/apache/spark/commit/42f53de2af894f961468300b250907a7775e9aac).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14146: [SPARK-16489][SQL] Guard against variable reuse mistakes...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14146
  
**[Test build #3183 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3183/consoleFull)**
 for PR 14146 at commit 
[`0df6a99`](https://github.com/apache/spark/commit/0df6a99e02fa7208bb5a540d703f0938152181fe).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds the following public classes _(experimental)_:
  * `class ExpressionEvalHelperSuite extends SparkFunSuite with 
ExpressionEvalHelper `
  * `case class BadCodegenExpression() extends LeafExpression `





[GitHub] spark issue #13494: [SPARK-15752] [SQL] Optimize metadata only query that ha...

2016-07-11 Thread lianhuiwang
Github user lianhuiwang commented on the issue:

https://github.com/apache/spark/pull/13494
  
@cloud-fan @hvanhovell Regarding getPartitionAttrs(): one possible improvement would be to define it on the relation node itself, but the relation node does not have such a function yet. How about adding it in a follow-up PR? A rough sketch of the idea is below. Thanks.
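
For illustration only, here is a minimal sketch of that direction, with a hypothetical trait and member names that are not part of this PR:

```scala
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

// Hypothetical: let partitioned relations expose their partition attributes
// directly, so rules no longer need a standalone getPartitionAttrs() helper.
trait HasPartitionAttrs { self: LogicalPlan =>
  /** Partition column names, supplied by the concrete relation. */
  def partitionColumnNames: Seq[String]

  /** Partition columns resolved against this relation's output. */
  def partitionAttrs: Seq[Attribute] = {
    val partCols = partitionColumnNames.map(_.toLowerCase).toSet
    output.filter(a => partCols.contains(a.name.toLowerCase))
  }
}
```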





[GitHub] spark issue #13494: [SPARK-15752] [SQL] Optimize metadata only query that ha...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/13494
  
**[Test build #62137 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62137/consoleFull)**
 for PR 13494 at commit 
[`ff16509`](https://github.com/apache/spark/commit/ff1650987b901825d3828de972fa4434466c951f).





[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread lianhuiwang
Github user lianhuiwang commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70370384
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
 ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, 
SessionCatalog}
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, 
LogicalRelation}
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This rule optimizes the execution of queries that can be answered by 
looking only at
+ * partition-level metadata. This applies when all the columns scanned are 
partition columns, and
+ * the query has an aggregate operator that satisfies the following 
conditions:
+ * 1. aggregate expression is partition columns.
+ *  e.g. SELECT col FROM tbl GROUP BY col.
+ * 2. aggregate function on partition columns with DISTINCT.
+ *  e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
+ * 3. aggregate function on partition columns which have same result w or 
w/o DISTINCT keyword.
+ *  e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.
+ */
+case class OptimizeMetadataOnlyQuery(
+catalog: SessionCatalog,
+conf: SQLConf) extends Rule[LogicalPlan] {
+
+  def apply(plan: LogicalPlan): LogicalPlan = {
+if (!conf.optimizerMetadataOnly) {
+  return plan
+}
+
+plan.transform {
+  case a @ Aggregate(_, aggExprs, child @ 
PartitionedRelation(partAttrs, relation)) =>
+// We only apply this optimization when only partitioned 
attributes are scanned.
+if (a.references.subsetOf(partAttrs)) {
+  val aggFunctions = aggExprs.flatMap(_.collect {
+case agg: AggregateExpression => agg
+  })
+  val isAllDistinctAgg = aggFunctions.forall { agg =>
+agg.isDistinct || (agg.aggregateFunction match {
+  // `Max`, `Min`, `First` and `Last` are always distinct 
aggregate functions no matter
+  // they have DISTINCT keyword or not, as the result will be 
same.
+  case _: Max => true
+  case _: Min => true
+  case _: First => true
+  case _: Last => true
+  case _ => false
+})
+  }
+  if (isAllDistinctAgg) {
+
a.withNewChildren(Seq(replaceTableScanWithPartitionMetadata(child, relation)))
+  } else {
+a
+  }
+} else {
+  a
+}
+}
+  }
+
+  /**
+   * Transform the given plan, find its table scan nodes that matches the 
given relation, and then
+   * replace the table scan node with its corresponding partition values.
+   */
+  private def replaceTableScanWithPartitionMetadata(
+  child: LogicalPlan,
+  relation: LogicalPlan): LogicalPlan = {
+child transform {
+  case plan if plan eq relation =>
+relation match {
+  case l @ LogicalRelation(fsRelation: HadoopFsRelation, _, _) =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
--- End diff --

@cloud-fan I will define two functions for getPartitionAttrs(). In the future, I think we can move getPartitionAttrs() into the relation plan itself. If there is any problem with that, please tell me. Thanks.
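
To make the target queries concrete, here is a small end-to-end sketch (hypothetical path and table name; I am assuming the flag checked via `conf.optimizerMetadataOnly` is exposed as `spark.sql.optimizer.metadataOnly`):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("metadata-only-sketch").getOrCreate()
spark.conf.set("spark.sql.optimizer.metadataOnly", "true")

// Hypothetical partitioned table; the three aggregate shapes from the scaladoc
// can then be answered from partition metadata alone, without a file scan.
spark.range(100).selectExpr("id AS col", "id % 5 AS part")
  .write.partitionBy("part").parquet("/tmp/metadata_only_tbl")
spark.read.parquet("/tmp/metadata_only_tbl").createOrReplaceTempView("tbl")

spark.sql("SELECT part FROM tbl GROUP BY part").show()      // condition 1
spark.sql("SELECT count(DISTINCT part) FROM tbl").show()    // condition 2
spark.sql("SELECT max(part) FROM tbl").show()               // condition 3
```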



[GitHub] spark pull request #14012: [SPARK-16343][SQL] Improve the PushDownPredicate ...

2016-07-11 Thread jiangxb1987
Github user jiangxb1987 commented on a diff in the pull request:

https://github.com/apache/spark/pull/14012#discussion_r70370161
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
 ---
@@ -1086,6 +1086,28 @@ object PruneFilters extends Rule[LogicalPlan] with 
PredicateHelper {
  * This heuristic is valid assuming the expression evaluation cost is 
minimal.
  */
 object PushDownPredicate extends Rule[LogicalPlan] with PredicateHelper {
+  /**
+   * Splits condition expressions into pushDown predicates and stayUp 
predicates based on
+   * specific rules. Parts of the predicate that can be pushed beneath 
must satisfy the following
+   * conditions:
+   * 1. Deterministic.
+   * 2. Placed before any non-deterministic predicates.
+   * 3. Other specific rules.
+   *
+   * @return (pushDown, stayUp)
+   */
+  private def splitPushdownPredicates(
+  condition: Expression)(specificRules: (Expression) => Boolean) = {
--- End diff --

I have reverted this part, thanks!
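
For context, the pushDown/stayUp split that this helper generalized can already be expressed with the `PredicateHelper` utilities; a minimal sketch, not the PR's exact code:

```scala
import org.apache.spark.sql.catalyst.expressions.{Expression, PredicateHelper}

object PushdownSplitSketch extends PredicateHelper {
  // Conjuncts up to the first non-deterministic predicate may be pushed beneath
  // the child operator; `span` keeps the original evaluation order intact.
  def split(condition: Expression): (Seq[Expression], Seq[Expression]) =
    splitConjunctivePredicates(condition).span(_.deterministic)
}
```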





[GitHub] spark issue #14141: [SPARK-16375] [Web UI] Fixed misassigned var: numComplet...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14141
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62127/
Test PASSed.





[GitHub] spark issue #14141: [SPARK-16375] [Web UI] Fixed misassigned var: numComplet...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14141
  
Merged build finished. Test PASSed.





[GitHub] spark issue #14141: [SPARK-16375] [Web UI] Fixed misassigned var: numComplet...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14141
  
**[Test build #62127 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62127/consoleFull)**
 for PR 14141 at commit 
[`5122a1a`](https://github.com/apache/spark/commit/5122a1ac559a9e152942da2c752633d118f74667).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread lianhuiwang
Github user lianhuiwang commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70369134
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
 ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, 
SessionCatalog}
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, 
LogicalRelation}
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This rule optimizes the execution of queries that can be answered by 
looking only at
+ * partition-level metadata. This applies when all the columns scanned are 
partition columns, and
+ * the query has an aggregate operator that satisfies the following 
conditions:
+ * 1. aggregate expression is partition columns.
+ *  e.g. SELECT col FROM tbl GROUP BY col.
+ * 2. aggregate function on partition columns with DISTINCT.
+ *  e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
+ * 3. aggregate function on partition columns which have same result w or 
w/o DISTINCT keyword.
+ *  e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.
+ */
+case class OptimizeMetadataOnlyQuery(
+catalog: SessionCatalog,
+conf: SQLConf) extends Rule[LogicalPlan] {
+
+  def apply(plan: LogicalPlan): LogicalPlan = {
+if (!conf.optimizerMetadataOnly) {
+  return plan
+}
+
+plan.transform {
+  case a @ Aggregate(_, aggExprs, child @ 
PartitionedRelation(partAttrs, relation)) =>
+// We only apply this optimization when only partitioned 
attributes are scanned.
+if (a.references.subsetOf(partAttrs)) {
+  val aggFunctions = aggExprs.flatMap(_.collect {
+case agg: AggregateExpression => agg
+  })
+  val isAllDistinctAgg = aggFunctions.forall { agg =>
+agg.isDistinct || (agg.aggregateFunction match {
+  // `Max`, `Min`, `First` and `Last` are always distinct 
aggregate functions no matter
+  // they have DISTINCT keyword or not, as the result will be 
same.
+  case _: Max => true
+  case _: Min => true
+  case _: First => true
+  case _: Last => true
+  case _ => false
+})
+  }
+  if (isAllDistinctAgg) {
+
a.withNewChildren(Seq(replaceTableScanWithPartitionMetadata(child, relation)))
+  } else {
+a
+  }
+} else {
+  a
+}
+}
+  }
+
+  /**
+   * Transform the given plan, find its table scan nodes that matches the 
given relation, and then
+   * replace the table scan node with its corresponding partition values.
+   */
+  private def replaceTableScanWithPartitionMetadata(
+  child: LogicalPlan,
+  relation: LogicalPlan): LogicalPlan = {
+child transform {
+  case plan if plan eq relation =>
+relation match {
+  case l @ LogicalRelation(fsRelation: HadoopFsRelation, _, _) =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
+  fsRelation.partitionSchema.map(_.name), l)
+val partitionData = fsRelation.location.listFiles(filters = 
Nil)
+LocalRelation(partAttrs, partitionData.map(_.values))
+
+  case relation: CatalogRelation =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
+  relation.catalogTable.partitionColumnNames, relation)
+val partitionData = 

[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread lianhuiwang
Github user lianhuiwang commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70368905
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
 ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, 
SessionCatalog}
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, 
LogicalRelation}
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This rule optimizes the execution of queries that can be answered by 
looking only at
+ * partition-level metadata. This applies when all the columns scanned are 
partition columns, and
+ * the query has an aggregate operator that satisfies the following 
conditions:
+ * 1. aggregate expression is partition columns.
+ *  e.g. SELECT col FROM tbl GROUP BY col.
+ * 2. aggregate function on partition columns with DISTINCT.
+ *  e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
+ * 3. aggregate function on partition columns which have same result w or 
w/o DISTINCT keyword.
+ *  e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.
+ */
+case class OptimizeMetadataOnlyQuery(
+catalog: SessionCatalog,
+conf: SQLConf) extends Rule[LogicalPlan] {
+
+  def apply(plan: LogicalPlan): LogicalPlan = {
+if (!conf.optimizerMetadataOnly) {
+  return plan
+}
+
+plan.transform {
+  case a @ Aggregate(_, aggExprs, child @ 
PartitionedRelation(partAttrs, relation)) =>
+// We only apply this optimization when only partitioned 
attributes are scanned.
+if (a.references.subsetOf(partAttrs)) {
+  val aggFunctions = aggExprs.flatMap(_.collect {
+case agg: AggregateExpression => agg
+  })
+  val isAllDistinctAgg = aggFunctions.forall { agg =>
+agg.isDistinct || (agg.aggregateFunction match {
+  // `Max`, `Min`, `First` and `Last` are always distinct 
aggregate functions no matter
+  // they have DISTINCT keyword or not, as the result will be 
same.
+  case _: Max => true
+  case _: Min => true
+  case _: First => true
+  case _: Last => true
+  case _ => false
+})
+  }
+  if (isAllDistinctAgg) {
+
a.withNewChildren(Seq(replaceTableScanWithPartitionMetadata(child, relation)))
+  } else {
+a
+  }
+} else {
+  a
+}
+}
+  }
+
+  /**
+   * Transform the given plan, find its table scan nodes that matches the 
given relation, and then
+   * replace the table scan node with its corresponding partition values.
+   */
+  private def replaceTableScanWithPartitionMetadata(
+  child: LogicalPlan,
+  relation: LogicalPlan): LogicalPlan = {
+child transform {
+  case plan if plan eq relation =>
+relation match {
+  case l @ LogicalRelation(fsRelation: HadoopFsRelation, _, _) =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
+  fsRelation.partitionSchema.map(_.name), l)
+val partitionData = fsRelation.location.listFiles(filters = 
Nil)
+LocalRelation(partAttrs, partitionData.map(_.values))
+
+  case relation: CatalogRelation =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
+  relation.catalogTable.partitionColumnNames, relation)
+val partitionData = 

[GitHub] spark pull request #13494: [SPARK-15752] [SQL] Optimize metadata only query ...

2016-07-11 Thread lianhuiwang
Github user lianhuiwang commented on a diff in the pull request:

https://github.com/apache/spark/pull/13494#discussion_r70368861
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/OptimizeMetadataOnlyQuery.scala
 ---
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.catalog.{CatalogRelation, 
SessionCatalog}
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.expressions.aggregate._
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.Rule
+import org.apache.spark.sql.execution.datasources.{HadoopFsRelation, 
LogicalRelation}
+import org.apache.spark.sql.internal.SQLConf
+
+/**
+ * This rule optimizes the execution of queries that can be answered by 
looking only at
+ * partition-level metadata. This applies when all the columns scanned are 
partition columns, and
+ * the query has an aggregate operator that satisfies the following 
conditions:
+ * 1. aggregate expression is partition columns.
+ *  e.g. SELECT col FROM tbl GROUP BY col.
+ * 2. aggregate function on partition columns with DISTINCT.
+ *  e.g. SELECT col1, count(DISTINCT col2) FROM tbl GROUP BY col1.
+ * 3. aggregate function on partition columns which have same result w or 
w/o DISTINCT keyword.
+ *  e.g. SELECT col1, Max(col2) FROM tbl GROUP BY col1.
+ */
+case class OptimizeMetadataOnlyQuery(
+catalog: SessionCatalog,
+conf: SQLConf) extends Rule[LogicalPlan] {
+
+  def apply(plan: LogicalPlan): LogicalPlan = {
+if (!conf.optimizerMetadataOnly) {
+  return plan
+}
+
+plan.transform {
+  case a @ Aggregate(_, aggExprs, child @ 
PartitionedRelation(partAttrs, relation)) =>
+// We only apply this optimization when only partitioned 
attributes are scanned.
+if (a.references.subsetOf(partAttrs)) {
+  val aggFunctions = aggExprs.flatMap(_.collect {
+case agg: AggregateExpression => agg
+  })
+  val isAllDistinctAgg = aggFunctions.forall { agg =>
+agg.isDistinct || (agg.aggregateFunction match {
+  // `Max`, `Min`, `First` and `Last` are always distinct 
aggregate functions no matter
+  // they have DISTINCT keyword or not, as the result will be 
same.
+  case _: Max => true
+  case _: Min => true
+  case _: First => true
+  case _: Last => true
+  case _ => false
+})
+  }
+  if (isAllDistinctAgg) {
+
a.withNewChildren(Seq(replaceTableScanWithPartitionMetadata(child, relation)))
+  } else {
+a
+  }
+} else {
+  a
+}
+}
+  }
+
+  /**
+   * Transform the given plan, find its table scan nodes that matches the 
given relation, and then
+   * replace the table scan node with its corresponding partition values.
+   */
+  private def replaceTableScanWithPartitionMetadata(
+  child: LogicalPlan,
+  relation: LogicalPlan): LogicalPlan = {
+child transform {
+  case plan if plan eq relation =>
+relation match {
+  case l @ LogicalRelation(fsRelation: HadoopFsRelation, _, _) =>
+val partAttrs = PartitionedRelation.getPartitionAttrs(
--- End diff --

Because the object PartitionedRelation also uses getPartitionAttrs, for now I just define it in PartitionedRelation. If it were defined as a private method in the class OptimizeMetadataOnlyQuery, there would be two identical getPartitionAttrs() functions, one in PartitionedRelation and one in OptimizeMetadataOnlyQuery. That is why PartitionedRelation.getPartitionAttrs is used here; see the sketch after this comment.
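
In other words, the shared shape is roughly the following (a sketch, not the exact code under review):

```scala
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

object PartitionedRelationSketch {
  // One shared definition, callable from both the extractor object and the
  // OptimizeMetadataOnlyQuery rule, instead of two identical private copies.
  def getPartitionAttrs(partColNames: Seq[String], relation: LogicalPlan): Seq[Attribute] = {
    val partCols = partColNames.map(_.toLowerCase).toSet
    relation.output.filter(a => partCols.contains(a.name.toLowerCase))
  }
}
```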



[GitHub] spark issue #14139: [SPARK-16313][SQL][BRANCH-1.6] Spark should not silently...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14139
  
**[Test build #62136 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62136/consoleFull)**
 for PR 14139 at commit 
[`b98ee09`](https://github.com/apache/spark/commit/b98ee098d5d71f4f2a60f49be615bcf1632e2c9c).





[GitHub] spark pull request #13704: [SPARK-15985][SQL] Reduce runtime overhead of a p...

2016-07-11 Thread kiszk
Github user kiszk commented on a diff in the pull request:

https://github.com/apache/spark/pull/13704#discussion_r70368410
  
--- Diff: 
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/SimplifyCastsSuite.scala
 ---
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.optimizer
+
+import org.apache.spark.sql.catalyst.dsl.expressions._
+import org.apache.spark.sql.catalyst.dsl.plans._
+import org.apache.spark.sql.catalyst.expressions._
+import org.apache.spark.sql.catalyst.plans.PlanTest
+import org.apache.spark.sql.catalyst.plans.logical._
+import org.apache.spark.sql.catalyst.rules.RuleExecutor
+import org.apache.spark.sql.types._
+
+class SimplifyCastsSuite extends PlanTest {
+
+  object Optimize extends RuleExecutor[LogicalPlan] {
+val batches = Batch("SimplifyCasts", FixedPoint(50), SimplifyCasts) :: 
Nil
+  }
+
+  test("non-nullable to non-nullable array cast") {
+val input = LocalRelation('a.array(ArrayType(IntegerType, false)))
+val array_intPrimitive = Literal.create(
+  Seq(1, 2, 3, 4, 5), ArrayType(IntegerType, false))
+val plan = input.select(array_intPrimitive
--- End diff --

@cloud-fan This statement cannot be optimized, probably due to missing information on `a`. As a result, it causes an assertion error. Did I make a mistake?
```
  test("non-nullable to non-nullable array cast") {
    val input = LocalRelation('a.array(ArrayType(IntegerType, false)))
    val plan = input.select(
      'a.cast(ArrayType(IntegerType, false)).as('casted)).analyze
    val optimized = Optimize.execute(plan)
    val expected = input.select('a.as('casted)).analyze
    print(s"optimized: $plan")
    print(s"optimized: $optimized")
    comparePlans(optimized, expected)
  }

optimized: 'Project [cast(a#0 as array<int>) AS casted#1]
+- LocalRelation <empty>, [a#0]
optimized: 'Project [cast(a#0 as array<int>) AS casted#1]
+- LocalRelation <empty>, [a#0]


== FAIL: Plans do not match ===
!'Project [cast(a#0 as array<int>) AS casted#0]   Project [a#0 AS casted#0]
 +- LocalRelation <empty>, [a#0]                  +- LocalRelation <empty>, [a#0]
```

The original one
```
  test("non-nullable to non-nullable array cast") {
    val input = LocalRelation('a.array(ArrayType(IntegerType, false)))
    val array_intPrimitive = Literal.create(
      Seq(1, 2, 3, 4, 5), ArrayType(IntegerType, false))
    val plan = input.select(array_intPrimitive
      .cast(ArrayType(IntegerType, false)).as('a)).analyze
    val optimized = Optimize.execute(plan)
    val expected = input.select(array_intPrimitive.as('a)).analyze
    print(s"optimized: $plan")
    print(s"optimized: $optimized")
    comparePlans(optimized, expected)
  }

optimized: Project [cast([1,2,3,4,5] as array<int>) AS a#1]
+- LocalRelation <empty>, [a#0]
optimized: Project [[1,2,3,4,5] AS a#1]
+- LocalRelation <empty>, [a#0]
```
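
For reference, the core of SimplifyCasts is essentially no-op-cast elimination: a cast is dropped only when the child's data type already equals the target type. Note that both plans above still print as `'Project`, i.e. unresolved, which is consistent with the pattern never matching for the attribute-based variant. A simplified sketch of the rule (not the exact source):

```scala
import org.apache.spark.sql.catalyst.expressions.Cast
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Simplified sketch: drop a cast whose target type equals the child's type.
// Nullability is part of ArrayType equality, so array<int> with
// containsNull = true does not match one with containsNull = false.
object SimplifyCastsSketch extends Rule[LogicalPlan] {
  def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
    case Cast(e, dataType) if e.dataType == dataType => e
  }
}
```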





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14144
  
**[Test build #62135 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62135/consoleFull)**
 for PR 14144 at commit 
[`8b2639f`](https://github.com/apache/spark/commit/8b2639f60975f07157790309052949ff0daabe38).





[GitHub] spark issue #14144: [SPARK-16488] Fix codegen variable namespace collision i...

2016-07-11 Thread rxin
Github user rxin commented on the issue:

https://github.com/apache/spark/pull/14144
  
LGTM pending Jenkins.






[GitHub] spark pull request #13704: [SPARK-15985][SQL] Reduce runtime overhead of a p...

2016-07-11 Thread kiszk
Github user kiszk commented on a diff in the pull request:

https://github.com/apache/spark/pull/13704#discussion_r70367470
  
--- Diff: 
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/PrimitiveArrayBenchmark.scala
 ---
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.benchmark
+
+import java.util.{Arrays, Comparator, Random}
+
+import org.apache.spark.sql.SQLContext
+import org.apache.spark.sql.internal.SQLConf
+import org.apache.spark.sql.test.SharedSQLContext
+import org.apache.spark.unsafe.array.LongArray
+import org.apache.spark.unsafe.memory.MemoryBlock
+import org.apache.spark.util.Benchmark
+import org.apache.spark.util.collection.Sorter
+import org.apache.spark.util.collection.unsafe.sort._
+
+/**
+ * Benchmark to measure performance for accessing primitive arrays
+ * To run this:
+ *  1. Replace ignore(...) with test(...)
+ *  2. build/sbt "sql/test-only *benchmark.PrimitiveArrayBenchmark"
+ *
+ * Benchmarks in this file are skipped in normal builds.
+ */
+class PrimitiveArrayBenchmark extends BenchmarkBase {
--- End diff --

Got it. In the future, it would be good to set up criteria in the Wiki that require benchmark results whenever anyone creates a PR for the optimizers.





[GitHub] spark issue #14079: [SPARK-8425][CORE] New Blacklist Mechanism

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14079
  
**[Test build #62134 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62134/consoleFull)**
 for PR 14079 at commit 
[`c22aaad`](https://github.com/apache/spark/commit/c22aaad76f07cbe58ea455d18959470e7afb1498).





[GitHub] spark issue #14079: [SPARK-8425][CORE] New Blacklist Mechanism

2016-07-11 Thread squito
Github user squito commented on the issue:

https://github.com/apache/spark/pull/14079
  
discussed this offline with @vanzin, realized that actually with the latest 
design, it doesn't make sense to have so many maps inside one 
`BlacklistTracker` -- a lot of info is now intended to only be tied to one 
`TaskSet`, so I should push that info back into the `TaskSetManager` (where the 
old blacklist lived).  I'll do that refactoring, which will hopefully clean 
things up.  No behavior changes planned, though.





[GitHub] spark pull request #13894: [SPARK-15254][DOC] Improve ML pipeline Cross Vali...

2016-07-11 Thread jkbradley
Github user jkbradley commented on a diff in the pull request:

https://github.com/apache/spark/pull/13894#discussion_r70367155
  
--- Diff: 
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -56,7 +56,10 @@ private[ml] trait CrossValidatorParams extends 
ValidatorParams {
 
 /**
  * :: Experimental ::
- * K-fold cross validation.
+ * CrossValidator begins by splitting the dataset into a set of 
non-overlapping randomly
--- End diff --

I like the improved description, but can we please keep the phrase "k-fold cross validation"? It's a very common phrase and will be useful for people using keyword search. Thanks!
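
For anyone arriving via that keyword, a minimal k-fold usage sketch (hypothetical `training` DataFrame; not part of this doc change):

```scala
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

val lr = new LogisticRegression()
val grid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .build()

// k = 3: the dataset is split into 3 non-overlapping folds; each fold serves
// once as the validation set while the other two are used for training.
val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(grid)
  .setNumFolds(3)

// val cvModel = cv.fit(training)  // `training` is an assumed DataFrame
```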





[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread petermaxlee
Github user petermaxlee commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70366868
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflection.scala
 ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import java.lang.reflect.{Method, Modifier}
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
+import 
org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{TypeCheckFailure, 
TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.codegen.CodegenFallback
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+import org.apache.spark.util.Utils
+
+/**
+ * An expression that invokes a method on a class via reflection.
+ *
+ * For now, only types defined in `Reflect.typeMapping` are supported 
(basically primitives
+ * and string) as input types, and the output is turned automatically to a 
string.
+ *
+ * Note that unlike Hive's reflect function, this expression calls only 
static methods
+ * (i.e. does not support calling non-static methods).
+ *
+ * We should also look into how to consolidate this expression with
+ * [[org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke]] in 
the future.
+ *
+ * @param children the first element should be a literal string for the 
class name,
+ * and the second element should be a literal string for 
the method name,
+ * and the remaining are input arguments to the Java 
method.
+ */
+// scalastyle:off line.size.limit
+@ExpressionDescription(
+  usage = "_FUNC_(class,method[,arg1[,arg2..]]) calls method with 
reflection",
+  extended = "> SELECT _FUNC_('java.util.UUID', 'randomUUID');\n 
c33fb387-8500-4bfa-81d2-6e0e3e930df2")
+// scalastyle:on line.size.limit
+case class CallMethodViaReflection(children: Seq[Expression])
+  extends Expression with CodegenFallback {
+
+  override def prettyName: String = "reflect"
+
+  override def checkInputDataTypes(): TypeCheckResult = {
+if (children.size < 2) {
+  TypeCheckFailure("requires at least two arguments")
+} else if (!children.take(2).forall(e => e.dataType == StringType && 
e.foldable)) {
+  // The first two arguments must be string type.
+  TypeCheckFailure("first two arguments should be string literals")
+} else if (!classExists) {
+  TypeCheckFailure(s"class $className not found")
+} else if (method == null) {
+  TypeCheckFailure(s"cannot find a static method that matches the 
argument types in $className")
+} else {
+  TypeCheckSuccess
+}
+  }
+
+  override def deterministic: Boolean = false
+  override def nullable: Boolean = true
+  override val dataType: DataType = StringType
+
+  override def eval(input: InternalRow): Any = {
+var i = 0
+while (i < argExprs.length) {
+  buffer(i) = argExprs(i).eval(input).asInstanceOf[Object]
+  // Convert if necessary. Based on the types defined in typeMapping, 
string is the only
+  // type that needs conversion. If we support timestamps, dates, 
decimals, arrays, or maps
+  // in the future, proper conversion needs to happen here too.
+  if (buffer(i).isInstanceOf[UTF8String]) {
+buffer(i) = buffer(i).toString
+  }
+  i += 1
+}
+val ret = method.invoke(null, buffer : _*)
+UTF8String.fromString(String.valueOf(ret))
+  }
+
+  @transient private lazy val argExprs: Array[Expression] = 
children.drop(2).toArray
+
+  /** Name of the class -- this has to be called after we verify children 
has at least two exprs. */
+  @transient private lazy val className = 
children(0).eval().asInstanceOf[UTF8String].toString
--- End diff --


[GitHub] spark issue #14147: [SPARK-14812][ML][MLLIB][PYTHON] Experimental, Developer...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14147
  
**[Test build #62132 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62132/consoleFull)**
 for PR 14147 at commit 
[`f86ea5a`](https://github.com/apache/spark/commit/f86ea5aaf15523c944582a56b93fc3b1ee3b58a0).
 * This patch **fails Scala style tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark issue #14147: [SPARK-14812][ML][MLLIB][PYTHON] Experimental, Developer...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14147
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62132/
Test FAILed.





[GitHub] spark issue #14147: [SPARK-14812][ML][MLLIB][PYTHON] Experimental, Developer...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14147
  
Merged build finished. Test FAILed.





[GitHub] spark pull request #14143: [SPARK-16430][SQL][STREAMING] Fixed bug in the ma...

2016-07-11 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/14143





[GitHub] spark issue #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14138
  
**[Test build #62133 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62133/consoleFull)**
 for PR 14138 at commit 
[`ccfb9c4`](https://github.com/apache/spark/commit/ccfb9c4511bdd6d9655ef5ef56fb387b9fab06dd).





[GitHub] spark issue #14147: [SPARK-14812][ML][MLLIB][PYTHON] Experimental, Developer...

2016-07-11 Thread SparkQA
Github user SparkQA commented on the issue:

https://github.com/apache/spark/pull/14147
  
**[Test build #62132 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62132/consoleFull)**
 for PR 14147 at commit 
[`f86ea5a`](https://github.com/apache/spark/commit/f86ea5aaf15523c944582a56b93fc3b1ee3b58a0).





[GitHub] spark pull request #14147: [SPARK-14812][ML][MLLIB][PYTHON] Experimental, De...

2016-07-11 Thread jkbradley
GitHub user jkbradley opened a pull request:

https://github.com/apache/spark/pull/14147

[SPARK-14812][ML][MLLIB][PYTHON] Experimental, DeveloperApi annotation 
audit for ML

## What changes were proposed in this pull request?

General decisions to follow, except where noted:
* spark.mllib, pyspark.mllib: Remove all Experimental annotations.  Leave 
DeveloperApi annotations alone.
* spark.ml, pyspark.ml
** Annotate Estimator-Model pairs of classes and companion objects the same 
way.
** For all algorithms marked Experimental with Since tag <= 1.6, remove 
Experimental annotation.
** For all algorithms marked Experimental with Since tag = 2.0, leave 
Experimental annotation.
* DeveloperApi annotations are left alone, except where noted.
* No changes to which types are sealed.

Exceptions where I am leaving items Experimental in spark.ml, pyspark.ml, 
mainly because the items are new:
* Model Summary classes
* MLWriter, MLReader, MLWritable, MLReadable
* Evaluator and subclasses: There is discussion of changes around 
evaluating multiple metrics at once for efficiency.
* RFormula: Its behavior may need to change slightly to match R in edge 
cases.
* AFTSurvivalRegression
* MultilayerPerceptronClassifier

DeveloperApi changes:
* ml.tree.Node, ml.tree.Split, and subclasses should no longer be 
DeveloperApi

## How was this patch tested?

N/A

Note to reviewers:
* spark.ml.clustering.LDA underwent significant changes (additional 
methods), so let me know if you want me to leave it Experimental.
* Be careful to check for cases where a class should no longer be 
Experimental but has an Experimental method, val, or other feature.  I did not 
find such cases, but please verify.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jkbradley/spark experimental-audit

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/14147.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #14147


commit a8beb430de18db7171badea37f12d365b8006467
Author: Joseph K. Bradley 
Date:   2016-07-12T00:39:15Z

Removed Experimental annotations from spark.mllib, pyspark.mllib

commit f86ea5aaf15523c944582a56b93fc3b1ee3b58a0
Author: Joseph K. Bradley 
Date:   2016-07-12T01:33:56Z

Audited Experimental, DeveloperApi annotations in .ml







[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70366108
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflection.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import java.lang.reflect.{Method, Modifier}
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{TypeCheckFailure, TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.codegen.CodegenFallback
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+import org.apache.spark.util.Utils
+
+/**
+ * An expression that invokes a method on a class via reflection.
+ *
+ * For now, only types defined in `Reflect.typeMapping` are supported (basically primitives
+ * and string) as input types, and the output is turned automatically to a string.
+ *
+ * Note that unlike Hive's reflect function, this expression calls only static methods
+ * (i.e. does not support calling non-static methods).
+ *
+ * We should also look into how to consolidate this expression with
+ * [[org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke]] in the future.
+ *
+ * @param children the first element should be a literal string for the class name,
+ *                 and the second element should be a literal string for the method name,
+ *                 and the remaining are input arguments to the Java method.
+ */
+// scalastyle:off line.size.limit
+@ExpressionDescription(
+  usage = "_FUNC_(class,method[,arg1[,arg2..]]) calls method with reflection",
+  extended = "> SELECT _FUNC_('java.util.UUID', 'randomUUID');\n c33fb387-8500-4bfa-81d2-6e0e3e930df2")
+// scalastyle:on line.size.limit
+case class CallMethodViaReflection(children: Seq[Expression])
+  extends Expression with CodegenFallback {
+
+  override def prettyName: String = "reflect"
+
+  override def checkInputDataTypes(): TypeCheckResult = {
+    if (children.size < 2) {
+      TypeCheckFailure("requires at least two arguments")
+    } else if (!children.take(2).forall(e => e.dataType == StringType && e.foldable)) {
+      // The first two arguments must be string type.
+      TypeCheckFailure("first two arguments should be string literals")
+    } else if (!classExists) {
+      TypeCheckFailure(s"class $className not found")
+    } else if (method == null) {
+      TypeCheckFailure(s"cannot find a static method that matches the argument types in $className")
+    } else {
+      TypeCheckSuccess
+    }
+  }
+
+  override def deterministic: Boolean = false
+  override def nullable: Boolean = true
+  override val dataType: DataType = StringType
+
+  override def eval(input: InternalRow): Any = {
+    var i = 0
+    while (i < argExprs.length) {
+      buffer(i) = argExprs(i).eval(input).asInstanceOf[Object]
+      // Convert if necessary. Based on the types defined in typeMapping, string is the only
+      // type that needs conversion. If we support timestamps, dates, decimals, arrays, or maps
+      // in the future, proper conversion needs to happen here too.
+      if (buffer(i).isInstanceOf[UTF8String]) {
+        buffer(i) = buffer(i).toString
+      }
+      i += 1
+    }
+    val ret = method.invoke(null, buffer : _*)
+    UTF8String.fromString(String.valueOf(ret))
+  }
+
+  @transient private lazy val argExprs: Array[Expression] = children.drop(2).toArray
+
+  /** Name of the class -- this has to be called after we verify children has at least two exprs. */
+  @transient private lazy val className = children(0).eval().asInstanceOf[UTF8String].toString
--- End diff --
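
A minimal sketch (not part of the PR; it assumes spark-catalyst on the classpath and
uses the standard Catalyst Literal factory and TypeCheckResult.isSuccess flag) of how
the literal-string requirement in the quoted checkInputDataTypes plays out when the
expression is constructed directly:

    import org.apache.spark.sql.catalyst.expressions.{CallMethodViaReflection, Literal}

    // Both leading arguments are foldable string literals, java.util.UUID resolves,
    // and randomUUID() is a static zero-argument method, so the check succeeds.
    val ok = CallMethodViaReflection(Seq(Literal("java.util.UUID"), Literal("randomUUID")))
    assert(ok.checkInputDataTypes().isSuccess)

    // A non-foldable first argument (e.g. a column reference) would instead trip the
    // e.foldable guard and yield
    // TypeCheckFailure("first two arguments should be string literals").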

  

[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread petermaxlee
Github user petermaxlee commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70366065
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflectionSuite.scala ---
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
+import org.apache.spark.sql.types.{IntegerType, StringType}
+
+/** A static class for testing purpose. */
+object ReflectStaticClass {
+  def method1(): String = "m1"
+  def method2(v1: Int): String = "m" + v1
+  def method3(v1: java.lang.Integer): String = "m" + v1
+  def method4(v1: Int, v2: String): String = "m" + v1 + v2
+}
+
+/** A non-static class for testing purpose. */
+class ReflectDynamicClass {
--- End diff --

This is used to make sure we report the correct error message.
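
A hedged sketch of the kind of assertion this enables; it assumes ReflectDynamicClass
declares an instance method named "method1" (the class body is cut off in the quote
above), and the PR's actual test may differ:

    import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
    import org.apache.spark.sql.catalyst.expressions.{CallMethodViaReflection, Literal}

    val expr = CallMethodViaReflection(Seq(
      Literal("org.apache.spark.sql.catalyst.expressions.ReflectDynamicClass"),
      Literal("method1")))

    // Instance methods are filtered out by the static-method lookup, so the type
    // check fails with the "cannot find a static method" message rather than an
    // obscure reflection error.
    expr.checkInputDataTypes() match {
      case TypeCheckFailure(msg) => assert(msg.contains("cannot find a static method"))
      case other => throw new AssertionError(s"expected TypeCheckFailure, got $other")
    }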






[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread petermaxlee
Github user petermaxlee commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70366073
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflection.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import java.lang.reflect.{Method, Modifier}
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{TypeCheckFailure, TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.codegen.CodegenFallback
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+import org.apache.spark.util.Utils
+
+/**
+ * An expression that invokes a method on a class via reflection.
+ *
+ * For now, only types defined in `Reflect.typeMapping` are supported (basically primitives
+ * and string) as input types, and the output is turned automatically to a string.
+ *
+ * Note that unlike Hive's reflect function, this expression calls only static methods
+ * (i.e. does not support calling non-static methods).
+ *
+ * We should also look into how to consolidate this expression with
+ * [[org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke]] in the future.
+ *
+ * @param children the first element should be a literal string for the class name,
+ *                 and the second element should be a literal string for the method name,
+ *                 and the remaining are input arguments to the Java method.
+ */
+// scalastyle:off line.size.limit
+@ExpressionDescription(
+  usage = "_FUNC_(class,method[,arg1[,arg2..]]) calls method with reflection",
+  extended = "> SELECT _FUNC_('java.util.UUID', 'randomUUID');\n c33fb387-8500-4bfa-81d2-6e0e3e930df2")
+// scalastyle:on line.size.limit
+case class CallMethodViaReflection(children: Seq[Expression])
+  extends Expression with CodegenFallback {
+
+  override def prettyName: String = "reflect"
+
+  override def checkInputDataTypes(): TypeCheckResult = {
+    if (children.size < 2) {
+      TypeCheckFailure("requires at least two arguments")
+    } else if (!children.take(2).forall(e => e.dataType == StringType && e.foldable)) {
+      // The first two arguments must be string type.
+      TypeCheckFailure("first two arguments should be string literals")
+    } else if (!classExists) {
+      TypeCheckFailure(s"class $className not found")
+    } else if (method == null) {
+      TypeCheckFailure(s"cannot find a static method that matches the argument types in $className")
+    } else {
+      TypeCheckSuccess
+    }
+  }
+
+  override def deterministic: Boolean = false
+  override def nullable: Boolean = true
+  override val dataType: DataType = StringType
+
+  override def eval(input: InternalRow): Any = {
+    var i = 0
+    while (i < argExprs.length) {
+      buffer(i) = argExprs(i).eval(input).asInstanceOf[Object]
+      // Convert if necessary. Based on the types defined in typeMapping, string is the only
+      // type that needs conversion. If we support timestamps, dates, decimals, arrays, or maps
+      // in the future, proper conversion needs to happen here too.
+      if (buffer(i).isInstanceOf[UTF8String]) {
+        buffer(i) = buffer(i).toString
+      }
+      i += 1
+    }
+    val ret = method.invoke(null, buffer : _*)
+    UTF8String.fromString(String.valueOf(ret))
+  }
+
+  @transient private lazy val argExprs: Array[Expression] = children.drop(2).toArray
+
+  /** Name of the class -- this has to be called after we verify children has at least two exprs. */
+  @transient private lazy val className = children(0).eval().asInstanceOf[UTF8String].toString
+
+  /** True if
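
One detail of the eval path quoted above that a standalone sketch makes concrete:
arguments are boxed into an Object buffer, UTF8String inputs are converted to
java.lang.String before invocation, and even a null return is stringified by
String.valueOf. The helper name invokeStatic below is illustrative, not from the PR:

    import java.lang.reflect.Method
    import org.apache.spark.unsafe.types.UTF8String

    def invokeStatic(method: Method, args: Array[Any]): UTF8String = {
      val buffer = new Array[Object](args.length)
      var i = 0
      while (i < args.length) {
        buffer(i) = args(i) match {
          case s: UTF8String => s.toString          // Java reflection expects java.lang.String
          case other => other.asInstanceOf[Object]  // primitives are boxed here
        }
        i += 1
      }
      // Receiver is null because only static methods are supported; String.valueOf
      // maps a null result to the string "null".
      UTF8String.fromString(String.valueOf(method.invoke(null, buffer: _*)))
    }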

[GitHub] spark issue #14143: [SPARK-16430][SQL][STREAMING] Fixed bug in the maxFilesP...

2016-07-11 Thread tdas
Github user tdas commented on the issue:

https://github.com/apache/spark/pull/14143
  
@zsxwing 
I am merging this critical bug fix to master and 2.0. Feel free to leave reviews, and I will address them in follow-up PRs.







[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70365738
  
--- Diff: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflectionSuite.scala ---
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import org.apache.spark.SparkFunSuite
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.TypeCheckFailure
+import org.apache.spark.sql.types.{IntegerType, StringType}
+
+/** A static class for testing purpose. */
+object ReflectStaticClass {
+  def method1(): String = "m1"
+  def method2(v1: Int): String = "m" + v1
+  def method3(v1: java.lang.Integer): String = "m" + v1
+  def method4(v1: Int, v2: String): String = "m" + v1 + v2
+}
+
+/** A non-static class for testing purpose. */
+class ReflectDynamicClass {
--- End diff --

no need to test it?





[GitHub] spark pull request #14138: [SPARK-16284][SQL] Implement reflect SQL function

2016-07-11 Thread cloud-fan
Github user cloud-fan commented on a diff in the pull request:

https://github.com/apache/spark/pull/14138#discussion_r70365507
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CallMethodViaReflection.scala ---
@@ -0,0 +1,166 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.catalyst.expressions
+
+import java.lang.reflect.{Method, Modifier}
+
+import org.apache.spark.sql.catalyst.InternalRow
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult
+import org.apache.spark.sql.catalyst.analysis.TypeCheckResult.{TypeCheckFailure, TypeCheckSuccess}
+import org.apache.spark.sql.catalyst.expressions.codegen.CodegenFallback
+import org.apache.spark.sql.types._
+import org.apache.spark.unsafe.types.UTF8String
+import org.apache.spark.util.Utils
+
+/**
+ * An expression that invokes a method on a class via reflection.
+ *
+ * For now, only types defined in `Reflect.typeMapping` are supported (basically primitives
+ * and string) as input types, and the output is turned automatically to a string.
+ *
+ * Note that unlike Hive's reflect function, this expression calls only static methods
+ * (i.e. does not support calling non-static methods).
+ *
+ * We should also look into how to consolidate this expression with
+ * [[org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke]] in the future.
+ *
+ * @param children the first element should be a literal string for the class name,
+ *                 and the second element should be a literal string for the method name,
+ *                 and the remaining are input arguments to the Java method.
+ */
+// scalastyle:off line.size.limit
+@ExpressionDescription(
+  usage = "_FUNC_(class,method[,arg1[,arg2..]]) calls method with reflection",
+  extended = "> SELECT _FUNC_('java.util.UUID', 'randomUUID');\n c33fb387-8500-4bfa-81d2-6e0e3e930df2")
+// scalastyle:on line.size.limit
+case class CallMethodViaReflection(children: Seq[Expression])
+  extends Expression with CodegenFallback {
+
+  override def prettyName: String = "reflect"
+
+  override def checkInputDataTypes(): TypeCheckResult = {
+    if (children.size < 2) {
+      TypeCheckFailure("requires at least two arguments")
+    } else if (!children.take(2).forall(e => e.dataType == StringType && e.foldable)) {
+      // The first two arguments must be string type.
+      TypeCheckFailure("first two arguments should be string literals")
+    } else if (!classExists) {
+      TypeCheckFailure(s"class $className not found")
+    } else if (method == null) {
+      TypeCheckFailure(s"cannot find a static method that matches the argument types in $className")
+    } else {
+      TypeCheckSuccess
+    }
+  }
+
+  override def deterministic: Boolean = false
+  override def nullable: Boolean = true
+  override val dataType: DataType = StringType
+
+  override def eval(input: InternalRow): Any = {
+    var i = 0
+    while (i < argExprs.length) {
+      buffer(i) = argExprs(i).eval(input).asInstanceOf[Object]
+      // Convert if necessary. Based on the types defined in typeMapping, string is the only
+      // type that needs conversion. If we support timestamps, dates, decimals, arrays, or maps
+      // in the future, proper conversion needs to happen here too.
+      if (buffer(i).isInstanceOf[UTF8String]) {
+        buffer(i) = buffer(i).toString
+      }
+      i += 1
+    }
+    val ret = method.invoke(null, buffer : _*)
+    UTF8String.fromString(String.valueOf(ret))
+  }
+
+  @transient private lazy val argExprs: Array[Expression] = children.drop(2).toArray
+
+  /** Name of the class -- this has to be called after we verify children has at least two exprs. */
+  @transient private lazy val className = children(0).eval().asInstanceOf[UTF8String].toString
+
+  /** True if

[GitHub] spark issue #14088: [SPARK-16414] [YARN] Fix bugs for "Can not get user conf...

2016-07-11 Thread AmplabJenkins
Github user AmplabJenkins commented on the issue:

https://github.com/apache/spark/pull/14088
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62128/
Test PASSed.




