[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214939215
  
@liancheng Thank you very much, Lian!


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/1


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread liancheng
Github user liancheng commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214938170
  
@dilipbiswal Great! LGTM now, I'm merging this one to master. Thanks!


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214876293
  
Merged build finished. Test PASSed.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214876307
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/57014/


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214876009
  
**[Test build #57014 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57014/consoleFull)** for PR 1 at commit [`54ff5e4`](https://github.com/apache/spark/commit/54ff5e4a444f93b785a31dc927152461d0c9dfca).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214847459
  
@liancheng Hi Lian, I just pushed my branch, which has the new test case for your reference. It also has all your comments addressed :-)


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214846747
  
**[Test build #57014 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57014/consoleFull)** for PR 1 at commit [`54ff5e4`](https://github.com/apache/spark/commit/54ff5e4a444f93b785a31dc927152461d0c9dfca).


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214815027
  
@liancheng I had addressed all the comments and left the tests running last night; I was about to push today :-). Anyway, thanks! I will take a look at your PR.



[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-26 Thread liancheng
Github user liancheng commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-214803024
  
Hi @dilipbiswal, we did a bunch of major refactoring on the master branch, and it's pretty close to the 2.0 code freeze, so I took this over based on your version and opened PR #12703. Would you mind taking a look at it? Thanks.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61027608
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala ---
@@ -122,4 +134,105 @@ class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton
       checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
     }
   }
+
+  test("show columns") {
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN default.parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+      Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+    val message = intercept[NoSuchTableException] {
+      sql("SHOW COLUMNS IN badtable FROM default")
+    }.getMessage
+    assert(message.contains("badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+    checkAnswer(
+      sql("show partitions parquet_tab4"),
+      Row("year=2015/month=1") ::
+        Row("year=2015/month=2") ::
+        Row("year=2016/month=2") ::
+        Row("year=2016/month=3") :: Nil)
--- End diff --

@liancheng Thanks! You are right: having >= 5 partition keys does expose the problem. Any advice on how to handle this? Can we change TablePartitionSpec to a LinkedHashMap instead?


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61011875
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala ---
@@ -424,7 +424,7 @@ private[sql] object PartitioningUtils {
     path.foreach { c =>
       if (needsEscaping(c)) {
         builder.append('%')
-        builder.append(f"${c.asInstanceOf[Int]}%02x")
+        builder.append(f"${c.asInstanceOf[Int]}%02X")
--- End diff --

Makes sense. Thanks for the explanation.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61006959
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala ---
@@ -424,7 +424,7 @@ private[sql] object PartitioningUtils {
     path.foreach { c =>
       if (needsEscaping(c)) {
         builder.append('%')
-        builder.append(f"${c.asInstanceOf[Int]}%02x")
+        builder.append(f"${c.asInstanceOf[Int]}%02X")
--- End diff --

@liancheng So I was comparing the output of our implementation against Hive, and Hive reports the escaped name in upper case. I looked at FileUtils.escapePathName() in Hive as a reference before changing the code here. Please let me know what you think.
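
For reference, a standalone sketch of the difference (illustrative only): the `%02X` format directive emits upper-case hex digits, which is what Hive's FileUtils.escapePathName produces, while `%02x` would emit lower-case:

```scala
// Escape a single character the way the quoted diff does.
def escapeChar(c: Char): String = "%" + f"${c.toInt}%02X"

println(escapeChar(':'))  // "%3A" (with %02x this would be "%3a")
println(escapeChar(' '))  // "%20"
```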


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61004951
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
--- End diff --

@liancheng I followed other commands, and some of them mention the SQL syntax. Do you want it removed from all the other commands or just these two?


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61004685
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
--- End diff --

ok.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r61003756
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala ---
@@ -277,9 +277,18 @@ class InMemoryCatalog extends ExternalCatalog {
     catalog(db).tables(table).partitions(spec)
   }
 
+  /**
+   * List the metadata of all partitions that belong to the specified table, assuming it exists.
+   *
+   * A partial partition spec may optionally be provided to filter the partitions returned.
+   * For instance, if there exist partitions (a='1', b='2'), (a='1', b='3') and (a='2', b='4'),
+   * then a partial spec of (a='1') will return the first two only.
--- End diff --

Sure. I will remove the comments.
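
As an aside for readers of the thread, the filtering semantics that doc comment describes boil down to a subset check. A minimal standalone sketch, with names assumed rather than taken from the PR:

```scala
// A partition matches a partial spec when it agrees on every key the spec mentions.
def matchesPartialSpec(partial: Map[String, String], spec: Map[String, String]): Boolean =
  partial.forall { case (k, v) => spec.get(k).contains(v) }

val partitions = Seq(
  Map("a" -> "1", "b" -> "2"),
  Map("a" -> "1", "b" -> "3"),
  Map("a" -> "2", "b" -> "4"))

// Keeps the first two partitions only.
println(partitions.filter(matchesPartialSpec(Map("a" -> "1"), _)))
```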


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60937575
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala ---
@@ -36,12 +38,22 @@ class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton
         |STORED AS PARQUET
         |TBLPROPERTIES('prop1Key'="prop1Val", '`prop2Key`'="prop2Val")
       """.stripMargin)
+    sql("CREATE TABLE parquet_tab3(col1 int, `col 2` int)")
+    sql("CREATE TABLE parquet_tab4 (price int, qty int) partitioned by (year int, month int)")
+    sql("INSERT INTO parquet_tab4 PARTITION(year = 2015, month=1) SELECT 1,1")
+    sql("INSERT INTO parquet_tab4 PARTITION(year = 2015, month=2) SELECT 2,2")
+    sql("INSERT INTO parquet_tab4 PARTITION(year = 2016, month=2) SELECT 3,3")
+    sql("INSERT INTO parquet_tab4 PARTITION(year = 2016, month=3) SELECT 3,3")
--- End diff --

Nit: Spaces around `=`


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60937498
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala ---
@@ -17,11 +17,13 @@
 
 package org.apache.spark.sql.hive.execution
 
-import org.apache.spark.sql.{AnalysisException, QueryTest, Row}
-import org.apache.spark.sql.hive.test.TestHiveSingleton
+import org.apache.spark.sql.{AnalysisException, QueryTest, Row, SaveMode}
+import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
+import org.apache.spark.sql.hive.test.{TestHive, TestHiveSingleton}
 import org.apache.spark.sql.test.SQLTestUtils
 
 class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
+  import testImplicits._
   protected override def beforeAll(): Unit = {
--- End diff --

Please help fix this indentation, thanks!


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60934317
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala ---
@@ -424,7 +424,7 @@ private[sql] object PartitioningUtils {
     path.foreach { c =>
       if (needsEscaping(c)) {
         builder.append('%')
-        builder.append(f"${c.asInstanceOf[Int]}%02x")
+        builder.append(f"${c.asInstanceOf[Int]}%02X")
--- End diff --

Why is this change necessary?


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60934120
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
+      PartitioningUtils.escapePathName(s._1) + "=" + PartitioningUtils.escapePathName(s._2)
+    }.mkString("/")
+  }
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val catalog = sqlContext.sessionState.catalog
+    val db = table.database.getOrElse(catalog.getCurrentDatabase)
+    if (catalog.isTemporaryTable(table)) {
+      throw new AnalysisException("SHOW PARTITIONS is not allowed on a temporary table: " +
+          s"${table.unquotedString}")
+    } else {
+      val tab = catalog.getTableMetadata(table)
+      /**
+       * Validate and throws an [[AnalysisException]] exception under the following conditions:
+       * 1. If the table is not partitioned.
+       * 2. If it is a datasource table.
+       * 3. If it is a view or index table.
+       */
+      if (tab.tableType == CatalogTableType.VIRTUAL_VIEW ||
+        tab.tableType == CatalogTableType.INDEX_TABLE) {
+        throw new AnalysisException("SHOW PARTITIONS is not allowed on a view or index table: " +
+          s"${tab.qualifiedName}")
+      }
+      if (!DDLUtils.isTablePartitioned(tab)) {
+        throw new AnalysisException("SHOW PARTITIONS is not allowed on a table that is not " +
+          s"partitioned: ${tab.qualifiedName}")
+      }
+      if (DDLUtils.isDatasourceTable(tab)) {
+        throw new AnalysisException("SHOW PARTITIONS is not allowed on a datasource table: " +
+          s"${tab.qualifiedName}")
+      }
+      /**
+       * Validate the partitioning spec by making sure all the referenced columns are
+       * defined as partitioning columns in table definition. An AnalysisException exception is
+       * thrown if the partitioning spec is invalid.
+       */
+      if (spec.isDefined) {
+        val badColumns = spec.get.keySet.filterNot(tab.partitionColumns.map(_.name).contains)
+        if (badColumns.nonEmpty) {
+          throw new AnalysisException(
+            s"Non-partitioned column(s) [${badColumns.mkString(", ")}] are " +
--- End diff --

Nit: Non-partitioning


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60931135
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala ---
@@ -122,4 +134,105 @@ class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton
       checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
     }
   }
+
+  test("show columns") {
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN default.parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+      Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+    val message = intercept[NoSuchTableException] {
+      sql("SHOW COLUMNS IN badtable FROM default")
+    }.getMessage
+    assert(message.contains("badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+    checkAnswer(
+      sql("show partitions parquet_tab4"),
+      Row("year=2015/month=1") ::
+        Row("year=2015/month=2") ::
+        Row("year=2016/month=2") ::
+        Row("year=2016/month=3") :: Nil)
--- End diff --

As a simple experiment, order is preserved for maps with fewer than 5 elements:

```scala
scala> Map("year" -> 1) foreach println
(year,1)

scala> Map("year" -> 1, "month" -> 2) foreach println
(year,1)
(month,2)

scala> Map("year" -> 1, "month" -> 2, "day" -> 3) foreach println
(year,1)
(month,2)
(day,3)

scala> Map("year" -> 1, "month" -> 2, "day" -> 3, "hour" -> 4) foreach 
println
(year,1)
(month,2)
(day,3)
(hour,4)

scala> Map("year" -> 1, "month" -> 2, "day" -> 3, "hour" -> 4, "minute" -> 
5) foreach println
(minute,5)
(year,1)
(hour,4)
(day,3)
(month,2)
```


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60930580
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala ---
@@ -122,4 +134,105 @@ class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton
       checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
     }
   }
+
+  test("show columns") {
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN default.parquet_tab3"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+      Row("col1") :: Row("col 2") :: Nil)
+
+    checkAnswer(
+      sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+      Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+    val message = intercept[NoSuchTableException] {
+      sql("SHOW COLUMNS IN badtable FROM default")
+    }.getMessage
+    assert(message.contains("badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+    checkAnswer(
+      sql("show partitions parquet_tab4"),
+      Row("year=2015/month=1") ::
+        Row("year=2015/month=2") ::
+        Row("year=2016/month=2") ::
+        Row("year=2016/month=3") :: Nil)
--- End diff --

Could you please add test cases where the test table contains >= 5 partition columns? I believe the Scala standard library provides specialized classes for maps containing <= 4 elements, which masks the out-of-order bug.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60928725
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
+      PartitioningUtils.escapePathName(s._1) + "=" + PartitioningUtils.escapePathName(s._2)
+    }.mkString("/")
--- End diff --

@rxin I'm thinking it might not be a good idea to use `Map` to represent the partition spec in `CatalogTablePartition`, since it doesn't preserve partition column order. What do you think?


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60927623
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
+      PartitioningUtils.escapePathName(s._1) + "=" + PartitioningUtils.escapePathName(s._2)
+    }.mkString("/")
+  }
--- End diff --

New line here please.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60927757
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
+      PartitioningUtils.escapePathName(s._1) + "=" + PartitioningUtils.escapePathName(s._2)
+    }.mkString("/")
+  }
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val catalog = sqlContext.sessionState.catalog
+    val db = table.database.getOrElse(catalog.getCurrentDatabase)
+    if (catalog.isTemporaryTable(table)) {
+      throw new AnalysisException("SHOW PARTITIONS is not allowed on a temporary table: " +
+          s"${table.unquotedString}")
--- End diff --

Wrong indentation here.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60927505
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
--- End diff --

Mark this as private.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60927399
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
+      PartitioningUtils.escapePathName(s._1) + "=" + PartitioningUtils.escapePathName(s._2)
+    }.mkString("/")
--- End diff --

I don't think we can simply join all the parts here, since `TablePartitionSpec` is a `Map` rather than a `Seq`: the order of partition columns is not preserved.
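
One possible way around this, sketched here for illustration (not taken from the PR): render the spec following the table's declared partition-column order instead of the Map's iteration order:

```scala
// Renders "col=value" segments in the declared partition-column order, so the
// Map's unspecified iteration order no longer affects the result.
def partName(partitionColumns: Seq[String], spec: Map[String, String]): String =
  partitionColumns.flatMap(col => spec.get(col).map(v => s"$col=$v")).mkString("/")

// partName(Seq("year", "month"), Map("month" -> "1", "year" -> "2015"))
// returns "year=2015/month=1"
```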


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60926964
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { c =>
+      Row(c.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    spec: Option[TablePartitionSpec]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  def getPartName(spec: TablePartitionSpec): String = {
+    spec.map {s =>
--- End diff --

Space after `{`


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60926196
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
--- End diff --

We don't need to mention the SQL syntax here.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60925953
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +426,102 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
--- End diff --

"A command to list column names of a given table."


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60923441
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala ---
@@ -277,9 +277,18 @@ class InMemoryCatalog extends ExternalCatalog {
     catalog(db).tables(table).partitions(spec)
   }
 
+  /**
+   * List the metadata of all partitions that belong to the specified table, assuming it exists.
+   *
+   * A partial partition spec may optionally be provided to filter the partitions returned.
+   * For instance, if there exist partitions (a='1', b='2'), (a='1', b='3') and (a='2', b='4'),
+   * then a partial spec of (a='1') will return the first two only.
+   * TODO: Currently partialSpec is not used for memory catalog and it returns all the partitions.
--- End diff --

Can we throw an exception instead of returning all the partitions when a partition spec is given to an in-memory catalog? Silently returning a wrong answer can be dangerous.
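
A minimal sketch of the suggested guard, assuming the signature shown in the quoted diffs (not the code that was eventually committed):

```scala
// Fail fast instead of silently ignoring the partial spec.
override def listPartitions(
    db: String,
    table: String,
    partialSpec: Option[TablePartitionSpec] = None): Seq[CatalogTablePartition] = {
  if (partialSpec.isDefined) {
    throw new UnsupportedOperationException(
      "listPartitions with a partial partition spec is not supported by InMemoryCatalog")
  }
  catalog(db).tables(table).partitions.values.toSeq
}
```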


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60923122
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala ---
@@ -427,13 +427,18 @@ class SessionCatalog(
   }
 
   /**
-   * List all partitions in a table, assuming it exists.
-   * If no database is specified, assume the table is in the current database.
+   * List the metadata of all partitions that belong to the specified table, assuming it exists.
+   *
+   * A partial partition spec may optionally be provided to filter the partitions returned.
+   * For instance, if there exist partitions (a='1', b='2'), (a='1', b='3') and (a='2', b='4'),
+   * then a partial spec of (a='1') will return the first two only.
    */
-  def listPartitions(tableName: TableIdentifier): Seq[CatalogTablePartition] = {
+  def listPartitions(
--- End diff --

Nit: Missing `override`


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60923039
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala ---
@@ -427,13 +427,18 @@ class SessionCatalog(
   }
 
   /**
-   * List all partitions in a table, assuming it exists.
-   * If no database is specified, assume the table is in the current database.
+   * List the metadata of all partitions that belong to the specified table, assuming it exists.
+   *
+   * A partial partition spec may optionally be provided to filter the partitions returned.
+   * For instance, if there exist partitions (a='1', b='2'), (a='1', b='3') and (a='2', b='4'),
+   * then a partial spec of (a='1') will return the first two only.
--- End diff --

Same as above.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-25 Thread liancheng
Github user liancheng commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r60922965
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala ---
@@ -277,9 +277,18 @@ class InMemoryCatalog extends ExternalCatalog {
     catalog(db).tables(table).partitions(spec)
   }
 
+  /**
+   * List the metadata of all partitions that belong to the specified table, assuming it exists.
+   *
+   * A partial partition spec may optionally be provided to filter the partitions returned.
+   * For instance, if there exist partitions (a='1', b='2'), (a='1', b='3') and (a='2', b='4'),
+   * then a partial spec of (a='1') will return the first two only.
--- End diff --

Nit: No need to repeat the comment here since it's automatically inherited 
from the parent class/trait.
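
For context, a tiny illustration of that Scaladoc behavior, with hypothetical names:

```scala
trait CatalogLike {
  /** Lists the metadata of all partitions of the given table. */
  def listPartitions(db: String, table: String): Seq[String]
}

class InMemoryCatalogLike extends CatalogLike {
  // No doc comment needed here: Scaladoc inherits the one from the trait.
  override def listPartitions(db: String, table: String): Seq[String] = Nil
}
```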


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210346927
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/55905/


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210346924
  
Merged build finished. Test PASSed.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210346717
  
**[Test build #55905 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55905/consoleFull)** for PR 1 at commit [`587263b`](https://github.com/apache/spark/commit/587263b2fd6b33d4d59e71417539b094791e431b).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210317592
  
Merged build finished. Test PASSed.


[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210317598
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/55901/
Test PASSed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210317168
  
**[Test build #55901 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55901/consoleFull)** for PR 1 at commit [`1171a90`](https://github.com/apache/spark/commit/1171a90b8e2df9f8a95bc289e1d813ded9b441c2).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-15 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210304187
  
**[Test build #55905 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55905/consoleFull)** for PR 1 at commit [`587263b`](https://github.com/apache/spark/commit/587263b2fd6b33d4d59e71417539b094791e431b).





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210294481
  
@andrewor14 Thank you for the detailed review. I have implemented your 
comments. A few points/questions:
1) Currently InMemoryCatalog does not make use of the partition spec. If it 
is OK with you, can I work on it as a follow-up?
2) In the listPartitions API, the optional partition spec has a default 
value to avoid changing the callers (a small sketch follows). Is that an OK 
thing to do? If not, I will change the callers to pass None where applicable.
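For reference, a minimal sketch of the shape described in 2); the names 
(`CatalogSketch`, `TablePartitionSpec`) and the stubbed bodies are 
illustrative assumptions, not the actual Spark API:

```
object CatalogSketch {
  type TablePartitionSpec = Map[String, String]

  // An optional partition spec with a default value keeps every existing
  // call site of listPartitions compiling unchanged.
  def listPartitions(
      db: String,
      table: String,
      partialSpec: Option[TablePartitionSpec] = None): Seq[String] = {
    partialSpec match {
      case None    => Seq("year=2015/month=1", "year=2015/month=2") // all partitions (stubbed)
      case Some(_) => Seq("year=2015/month=1")                      // matching partitions (stubbed)
    }
  }
}

// Existing callers keep working:
//   CatalogSketch.listPartitions("default", "parquet_tab4")
// New callers can filter:
//   CatalogSketch.listPartitions("default", "parquet_tab4", Some(Map("year" -> "2015")))
```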

Thanks again for your time and help !!






[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-210292905
  
**[Test build #55901 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55901/consoleFull)** for PR 1 at commit [`1171a90`](https://github.com/apache/spark/commit/1171a90b8e2df9f8a95bc289e1d813ded9b441c2).





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59791626
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
--- End diff --

@andrewor14 Thanks Andrew. Let me re-base now :-)





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59790218
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
--- End diff --

it does now





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59781546
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
--- End diff --

@andrewor14 Hi Andrew, actually I remember I had tried this before. The 
table metadata's schema does not include the partition columns; that's why I 
used lookupRelation.

Please let me know what you think.
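To make the distinction concrete, a small self-contained sketch with 
stand-in types; whether the raw table metadata's schema already folds in 
the partition columns is exactly the open question here:

```
object SchemaSketch extends App {
  case class Column(name: String)
  // Stand-in for table metadata that keeps data and partition columns apart.
  case class CatalogTableStub(schema: Seq[Column], partitionColumns: Seq[Column])

  val meta = CatalogTableStub(
    schema = Seq(Column("price"), Column("qty")),
    partitionColumns = Seq(Column("year"), Column("month")))

  // SHOW COLUMNS should list data AND partition columns. The resolved
  // relation from lookupRelation exposes them as one flat schema:
  val allColumns = meta.schema ++ meta.partitionColumns
  allColumns.foreach(c => println(c.name)) // price, qty, year, month
}
```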





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59688027
  
--- Diff: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala 
---
@@ -404,6 +404,28 @@ private[hive] class HiveClientImpl(
 Option(hivePartition).map(fromHivePartition)
   }
 
+  /**
+   * Returns the partition names from hive metastore for a given table in a database.
+   */
+  override def getPartitionNames(
+      db: String,
+      table: String,
+      range: Short): Seq[String] = withHiveState {
--- End diff --

Sorry, I should be passing the range to Hive's API. I had it hard-coded to 
-1 initially and changed it later, so that the caller of this API can pass in 
a range to restrict the number of partitions returned to the client. I will 
make the change to pass the supplied range to Hive.
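A self-contained sketch of that fix; the Hive client signature modeled 
below is an assumption (a `short` max where a negative value means "no 
limit"):

```
import scala.collection.JavaConverters._

// Stand-in for the Hive metastore client (assumed shape, for illustration).
trait HiveShim {
  def getPartitionNames(db: String, table: String, max: Short): java.util.List[String]
}

class HiveClientSketch(client: HiveShim) {
  // Forward the caller's range instead of a hard-coded -1, so callers
  // can cap how many partition names come back.
  def getPartitionNames(db: String, table: String, range: Short): Seq[String] =
    client.getPartitionNames(db, table, range).asScala
}
```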





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687762
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
 ---
@@ -409,6 +409,25 @@ class SessionCatalog(
   }
 
   /**
+   * Returns the partition names from the catalog for a given table in a database.
+   */
+  def getPartitionNames(db: String, table: String, range: Short): Seq[String] = {
+    externalCatalog.getPartitionNames(db, table, range)
+  }
+
+  /**
+   * Returns the partition names that match the partition spec for a given table in a database.
+   * When no match is found, an empty sequence is returned.
+   */
+  def getPartitionNames(
--- End diff --

Sure Andrew. 





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687495
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
--- End diff --

ok.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687465
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. An [[AnalysisException]] is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non-partitioned table.
+ * 2. If the partition spec refers to columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the referenced columns are
+   * defined as partitioning columns in the table definition. An AnalysisException is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
+        s"non-partition columns")
+    }
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] under the following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+    if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+        table.tableType == CatalogTableType.INDEX_TABLE) {
+      throw new AnalysisException("Operation not allowed: view or index table")
+    } else if (!DDLUtils.isTablePartitioned(table)) {
+      throw new AnalysisException(s"Table ${table.qualifiedName} is not a partitioned table")
+    } else if (DDLUtils.isDatasourceTable(table)) {
+      throw new AnalysisException("Operation not allowed: datasource table")
+    }
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val catalog = sqlContext.sessionState.catalog
+    val db = table.database.getOrElse(catalog.getCurrentDatabase)
+    if (catalog.isTemporaryTable(table)) {
+      Seq.empty[Row]
--- End diff --

OK. will make the change.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687345
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, 
String]): Unit = {
+if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+  throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", 
", ")")} contains " +
+s"non-partition columns")
+}
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the 
following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+  table.tableType == CatalogTableType.INDEX_TABLE) {
+  throw new AnalysisException("Operation not allowed: view or index 
table")
+} else if (!DDLUtils.isTablePartitioned(table)) {
+  throw new AnalysisException(s"Table ${table.qualifiedName} is not a 
partitioned table")
+} else if (DDLUtils.isDatasourceTable(table)) {
+  throw new AnalysisException("Operation not allowed: datasource 
table")
--- End diff --

Will make the change





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687188
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
--- End diff --

Sure. Will make the change.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59687220
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, 
String]): Unit = {
+if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+  throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", 
", ")")} contains " +
+s"non-partition columns")
+}
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the 
following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
--- End diff --

ok.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59686964
  
--- Diff: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
 ---
@@ -122,4 +133,103 @@ class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton
   checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
 }
   }
+
+  test("show columns") {
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN default.parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+  Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+val message = intercept[NoSuchTableException] {
+  sql("SHOW COLUMNS IN badtable FROM default")
+}.getMessage
+assert(message.contains("Table badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+checkAnswer(
+  sql("show partitions parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+  }
+
+  test("show partitions - filter") {
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015)"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015, 
month=1)"),
+  Row("year=2015/month=1") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(month=2)"),
+  Row("year=2015/month=2") ::
+Row("year=2016/month=2") :: Nil)
+  }
+
+  test("show partitions - empty row") {
+withTempTable("parquet_temp") {
+  sql(
+"""
+  |CREATE TEMPORARY TABLE parquet_temp (c1 INT, c2 STRING)
+  |USING org.apache.spark.sql.parquet.DefaultSource
+""".stripMargin)
+  // An empty sequence of row is returned for session temporary table.
+  checkAnswer(sql("SHOW PARTITIONS parquet_temp"), Nil)
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab3")
+  }.getMessage
+  assert(message1.contains("is not a partitioned table"))
+
+  val message2 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab4 PARTITION(abcd=2015, xyz=1)")
+  }.getMessage
+  assert(message2.contains("Partition spec (abcd -> 2015, xyz -> 1) contains " +
+    "non-partition columns"))
+
+  val message3 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_view1")
+  }.getMessage
+  assert(message3.contains("Operation not allowed: view or index table"))
+}
+  }
+
+  test("show partitions - datasource") {
+import sqlContext.implicits._
+withTable("part_datasrc") {
+  val df = (1 to 3).map(i => (i, s"val_$i", i * 2)).toDF("a", "b", "c")
+  df.write
+.partitionBy("a")
+.format("parquet")
+.mode(SaveMode.Overwrite)
+.saveAsTable("part_datasrc")
+
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS part_datasrc")
+  }.getMessage
+  assert(message1.contains("Operation not allowed: datasource table"))
--- End diff --

ok





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread dilipbiswal
Github user dilipbiswal commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59686977
  
--- Diff: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
 ---
@@ -122,4 +133,103 @@ class HiveCommandSuite extends QueryTest with 
SQLTestUtils with TestHiveSingleto
   checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
 }
   }
+
+  test("show columns") {
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN default.parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+  Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+val message = intercept[NoSuchTableException] {
+  sql("SHOW COLUMNS IN badtable FROM default")
+}.getMessage
+assert(message.contains("Table badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+checkAnswer(
+  sql("show partitions parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+  }
+
+  test("show partitions - filter") {
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015)"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015, 
month=1)"),
+  Row("year=2015/month=1") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(month=2)"),
+  Row("year=2015/month=2") ::
+Row("year=2016/month=2") :: Nil)
+  }
+
+  test("show partitions - empty row") {
+withTempTable("parquet_temp") {
+  sql(
+"""
+  |CREATE TEMPORARY TABLE parquet_temp (c1 INT, c2 STRING)
+  |USING org.apache.spark.sql.parquet.DefaultSource
+""".stripMargin)
+  // An empty sequence of row is returned for session temporary table.
+  checkAnswer(sql("SHOW PARTITIONS parquet_temp"), Nil)
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab3")
+  }.getMessage
+  assert(message1.contains("is not a partitioned table"))
+
+  val message2 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab4 PARTITION(abcd=2015, xyz=1)")
+  }.getMessage
+  assert(message2.contains("Partition spec (abcd -> 2015, xyz -> 1) 
contains " +
+"non-partition columns"))
+
+  val message3 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_view1")
+  }.getMessage
+  assert(message3.contains("Operation not allowed: view or index 
table"))
+}
+  }
+
+  test("show partitions - datasource") {
+import sqlContext.implicits._
--- End diff --

ok





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-14 Thread viirya
Github user viirya commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59675036
  
--- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
 ---
@@ -409,6 +409,25 @@ class SessionCatalog(
   }
 
   /**
+   * Returns the partition names from the catalog for a given table in a database.
+   */
+  def getPartitionNames(db: String, table: String, range: Short): Seq[String] = {
+    externalCatalog.getPartitionNames(db, table, range)
+  }
+
+  /**
+   * Returns the partition names that match the partition spec for a given table in a database.
+   * When no match is found, an empty sequence is returned.
+   */
+  def getPartitionNames(
--- End diff --

+1, we should follow the current catalog API, especially since we already 
have the `listPartitions` method.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-209673821
  
@dilipbiswal Thanks for working on this. I think it looks pretty good 
overall. The existing API for SHOW PARTITIONS is a little too tied to Hive, 
so I suggested an alternative to make it more consistent with the rest of 
the catalog. Other than that, there weren't really any major issues.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59636620
  
--- Diff: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
 ---
@@ -122,4 +133,103 @@ class HiveCommandSuite extends QueryTest with 
SQLTestUtils with TestHiveSingleto
   checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
 }
   }
+
+  test("show columns") {
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN default.parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+  Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+val message = intercept[NoSuchTableException] {
+  sql("SHOW COLUMNS IN badtable FROM default")
+}.getMessage
+assert(message.contains("Table badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+checkAnswer(
+  sql("show partitions parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+  }
+
+  test("show partitions - filter") {
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015)"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015, 
month=1)"),
+  Row("year=2015/month=1") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(month=2)"),
+  Row("year=2015/month=2") ::
+Row("year=2016/month=2") :: Nil)
+  }
+
+  test("show partitions - empty row") {
+withTempTable("parquet_temp") {
+  sql(
+"""
+  |CREATE TEMPORARY TABLE parquet_temp (c1 INT, c2 STRING)
+  |USING org.apache.spark.sql.parquet.DefaultSource
+""".stripMargin)
+  // An empty sequence of row is returned for session temporary table.
+  checkAnswer(sql("SHOW PARTITIONS parquet_temp"), Nil)
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab3")
+  }.getMessage
+  assert(message1.contains("is not a partitioned table"))
+
+  val message2 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab4 PARTITION(abcd=2015, xyz=1)")
+  }.getMessage
+  assert(message2.contains("Partition spec (abcd -> 2015, xyz -> 1) 
contains " +
+"non-partition columns"))
+
+  val message3 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_view1")
+  }.getMessage
+  assert(message3.contains("Operation not allowed: view or index 
table"))
+}
+  }
+
+  test("show partitions - datasource") {
+import sqlContext.implicits._
+withTable("part_datasrc") {
+  val df = (1 to 3).map(i => (i, s"val_$i", i * 2)).toDF("a", "b", "c")
+  df.write
+.partitionBy("a")
+.format("parquet")
+.mode(SaveMode.Overwrite)
+.saveAsTable("part_datasrc")
+
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS part_datasrc")
+  }.getMessage
+  assert(message1.contains("Operation not allowed: datasource table"))
--- End diff --

this assert is way too specific; just make it lower case and grep for 
"operation not allowed" (see the sketch below)





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59636516
  
--- Diff: 
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
 ---
@@ -122,4 +133,103 @@ class HiveCommandSuite extends QueryTest with 
SQLTestUtils with TestHiveSingleto
   checkAnswer(sql("SHOW TBLPROPERTIES parquet_temp"), Nil)
 }
   }
+
+  test("show columns") {
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN default.parquet_tab3"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab3 FROM default"),
+  Row("col1") :: Row("col 2") :: Nil)
+
+checkAnswer(
+  sql("SHOW COLUMNS IN parquet_tab4 IN default"),
+  Row("price") :: Row("qty") :: Row("year") :: Row("month") :: Nil)
+
+val message = intercept[NoSuchTableException] {
+  sql("SHOW COLUMNS IN badtable FROM default")
+}.getMessage
+assert(message.contains("Table badtable not found in database"))
+  }
+
+  test("show partitions - show everything") {
+checkAnswer(
+  sql("show partitions parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") ::
+Row("year=2016/month=2") ::
+Row("year=2016/month=3") :: Nil)
+  }
+
+  test("show partitions - filter") {
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015)"),
+  Row("year=2015/month=1") ::
+Row("year=2015/month=2") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(year=2015, 
month=1)"),
+  Row("year=2015/month=1") :: Nil)
+
+checkAnswer(
+  sql("show partitions default.parquet_tab4 PARTITION(month=2)"),
+  Row("year=2015/month=2") ::
+Row("year=2016/month=2") :: Nil)
+  }
+
+  test("show partitions - empty row") {
+withTempTable("parquet_temp") {
+  sql(
+"""
+  |CREATE TEMPORARY TABLE parquet_temp (c1 INT, c2 STRING)
+  |USING org.apache.spark.sql.parquet.DefaultSource
+""".stripMargin)
+  // An empty sequence of row is returned for session temporary table.
+  checkAnswer(sql("SHOW PARTITIONS parquet_temp"), Nil)
+  val message1 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab3")
+  }.getMessage
+  assert(message1.contains("is not a partitioned table"))
+
+  val message2 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_tab4 PARTITION(abcd=2015, xyz=1)")
+  }.getMessage
+  assert(message2.contains("Partition spec (abcd -> 2015, xyz -> 1) 
contains " +
+"non-partition columns"))
+
+  val message3 = intercept[AnalysisException] {
+sql("SHOW PARTITIONS parquet_view1")
+  }.getMessage
+  assert(message3.contains("Operation not allowed: view or index 
table"))
+}
+  }
+
+  test("show partitions - datasource") {
+import sqlContext.implicits._
--- End diff --

just `import testImplicits._` at the top
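That is, something along these lines; a sketch, assuming `testImplicits` 
comes from the test utilities the suite already mixes in:

```
class HiveCommandSuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
  import testImplicits._ // one suite-wide import replaces per-test sqlContext.implicits._

  test("show partitions - datasource") {
    // toDF is in scope here without any further imports
    val df = (1 to 3).map(i => (i, s"val_$i", i * 2)).toDF("a", "b", "c")
    // ...
  }
}
```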





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59636373
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, 
String]): Unit = {
+if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+  throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", 
", ")")} contains " +
+s"non-partition columns")
+}
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the 
following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+  table.tableType == CatalogTableType.INDEX_TABLE) {
+  throw new AnalysisException("Operation not allowed: view or index 
table")
+} else if (!DDLUtils.isTablePartitioned(table)) {
+  throw new AnalysisException(s"Table ${table.qualifiedName} is not a 
partitioned table")
+} else if (DDLUtils.isDatasourceTable(table)) {
+  throw new AnalysisException("Operation not allowed: datasource 
table")
+}
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val catalog = sqlContext.sessionState.catalog
+val db = table.database.getOrElse(catalog.getCurrentDatabase)
+if (catalog.isTemporaryTable(table)) {
+  Seq.empty[Row]
+} else {
+  val tab = catalog.getTable(table)
+  checkRequirements(tab)
+      val partNames = partitionSpec match {
+        case None => catalog.getPartitionNames(db, table.identifier, -1.asInstanceOf[Short])
+        case Some(spec) =>
+          validatePartitionSpec(tab, spec)
+          catalog.getPartitionNames(db, table.identifier, spec, -1.asInstanceOf[Short])
+      }
--- End diff --

if you still want to validate the partition spec, just do it in an Option 
foreach or something (see the sketch below)
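A possible shape, assuming the catalog overload that takes the spec as an 
Option (as suggested in the sibling comment):

```
// Validate only when a spec was actually supplied, then hand the Option through:
partitionSpec.foreach(spec => validatePartitionSpec(tab, spec))
val partNames =
  catalog.getPartitionNames(db, table.identifier, partitionSpec, -1.asInstanceOf[Short])
```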



[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59636345
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, 
String]): Unit = {
+if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+  throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", 
", ")")} contains " +
+s"non-partition columns")
+}
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the 
following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+  table.tableType == CatalogTableType.INDEX_TABLE) {
+  throw new AnalysisException("Operation not allowed: view or index 
table")
+} else if (!DDLUtils.isTablePartitioned(table)) {
+  throw new AnalysisException(s"Table ${table.qualifiedName} is not a 
partitioned table")
+} else if (DDLUtils.isDatasourceTable(table)) {
+  throw new AnalysisException("Operation not allowed: datasource 
table")
+}
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val catalog = sqlContext.sessionState.catalog
+val db = table.database.getOrElse(catalog.getCurrentDatabase)
+if (catalog.isTemporaryTable(table)) {
+  Seq.empty[Row]
+} else {
+  val tab = catalog.getTable(table)
+  checkRequirements(tab)
+      val partNames = partitionSpec match {
+        case None => catalog.getPartitionNames(db, table.identifier, -1.asInstanceOf[Short])
+        case Some(spec) =>
+          validatePartitionSpec(tab, spec)
+          catalog.getPartitionNames(db, table.identifier, spec, -1.asInstanceOf[Short])
+      }
--- End diff --

if you take my suggestion elsewhere, then you don't need to do this match. 
Just pass the Option into the catalog (a signature sketch follows).
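Hypothetically, something like this; the names and defaults are 
illustrative, not the final API:

```
trait PartitionListing {
  // None means "all partitions"; a negative range means "no limit".
  // ShowPartitionsCommand can then pass its Option[Map[String, String]] straight through.
  def getPartitionNames(
      db: String,
      table: String,
      partialSpec: Option[Map[String, String]] = None,
      range: Short = -1): Seq[String]
}
```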



[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59636244
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the referenced columns are
+   * defined as partitioning columns in table definition. An AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
--- End diff --

To provide a better error message, I would do:
```
val badColumns = spec.keySet.filterNot(table.partitionColumnNames.contains)
throw new AnalysisException(
  s"Non-partitioned column(s) ${badColumns.mkString(", ")} are specified for SHOW PARTITIONS")
```





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59635926
  
--- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala 
---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function 
creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends 
RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = sqlContext.sessionState.catalog.lookupRelation(table, 
None)
+relation.schema.fields.map { field =>
+  Row(field.name)
+}
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the 
partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] 
exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as 
partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+table: TableIdentifier,
+partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the 
referenced columns are
+   * defined as partitioning columns in table definition. An 
AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, 
String]): Unit = {
+if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+  throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", 
", ")")} contains " +
+s"non-partition columns")
+}
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the 
following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+  table.tableType == CatalogTableType.INDEX_TABLE) {
+  throw new AnalysisException("Operation not allowed: view or index 
table")
+} else if (!DDLUtils.isTablePartitioned(table)) {
+  throw new AnalysisException(s"Table ${table.qualifiedName} is not a 
partitioned table")
+} else if (DDLUtils.isDatasourceTable(table)) {
+  throw new AnalysisException("Operation not allowed: datasource 
table")
--- End diff --

Also, since these all throw exceptions anyway, I would just use `if`s instead of `else if`s.
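
Concretely, the quoted method would then read roughly as follows (a sketch of the suggested shape, not the merged patch):

```
private def checkRequirements(table: CatalogTable): Unit = {
  // Every branch throws, so independent `if`s behave exactly like the
  // `else if` chain but read as three separate requirements.
  if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
      table.tableType == CatalogTableType.INDEX_TABLE) {
    throw new AnalysisException("Operation not allowed: view or index table")
  }
  if (!DDLUtils.isTablePartitioned(table)) {
    throw new AnalysisException(s"Table ${table.qualifiedName} is not a partitioned table")
  }
  if (DDLUtils.isDatasourceTable(table)) {
    throw new AnalysisException("Operation not allowed: datasource table")
  }
}
```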





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59635881
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the referenced columns are
+   * defined as partitioning columns in table definition. An AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
+        s"non-partition columns")
+    }
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
--- End diff --

This is only called in one place. I would just inline it.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59635840
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the referenced columns are
+   * defined as partitioning columns in table definition. An AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
+        s"non-partition columns")
+    }
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+    if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+      table.tableType == CatalogTableType.INDEX_TABLE) {
+      throw new AnalysisException("Operation not allowed: view or index table")
+    } else if (!DDLUtils.isTablePartitioned(table)) {
+      throw new AnalysisException(s"Table ${table.qualifiedName} is not a partitioned table")
+    } else if (DDLUtils.isDatasourceTable(table)) {
+      throw new AnalysisException("Operation not allowed: datasource table")
+    }
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val catalog = sqlContext.sessionState.catalog
+    val db = table.database.getOrElse(catalog.getCurrentDatabase)
+    if (catalog.isTemporaryTable(table)) {
+      Seq.empty[Row]
--- End diff --

I think we should throw an exception instead if it's a temporary table
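
In code, that would look roughly like the following (a sketch; the exact message text and the `table.unquotedString` rendering of the name are illustrative):

```
if (catalog.isTemporaryTable(table)) {
  // Fail loudly instead of silently returning no rows.
  throw new AnalysisException(
    s"SHOW PARTITIONS is not allowed on a temporary table: ${table.unquotedString}")
}
```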





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59635781
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
+  // The result of SHOW PARTITIONS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  /**
+   * This function validates the partitioning spec by making sure all the referenced columns are
+   * defined as partitioning columns in table definition. An AnalysisException exception is
+   * thrown if the partitioning spec is invalid.
+   */
+  private def validatePartitionSpec(table: CatalogTable, spec: Map[String, String]): Unit = {
+    if (!spec.keySet.forall(table.partitionColumns.map(_.name).contains)) {
+      throw new AnalysisException(s"Partition spec ${spec.mkString("(", ", ", ")")} contains " +
+        s"non-partition columns")
+    }
+  }
+
+  /**
+   * Validates and throws an [[AnalysisException]] exception under the following conditions:
+   * 1. If the table is not partitioned.
+   * 2. If it is a datasource table.
+   * 3. If it is a view or index table.
+   */
+  private def checkRequirements(table: CatalogTable): Unit = {
+    if (table.tableType == CatalogTableType.VIRTUAL_VIEW ||
+      table.tableType == CatalogTableType.INDEX_TABLE) {
+      throw new AnalysisException("Operation not allowed: view or index table")
+    } else if (!DDLUtils.isTablePartitioned(table)) {
+      throw new AnalysisException(s"Table ${table.qualifiedName} is not a partitioned table")
+    } else if (DDLUtils.isDatasourceTable(table)) {
+      throw new AnalysisException("Operation not allowed: datasource table")
--- End diff --

these error messages are too terse; "datasource table" is not an operation. 
I think it should sound more like:
```
SHOW PARTITIONS is not allowed on a view or index table: $name
SHOW PARTITIONS is not allowed on a table that is not partitioned: $name
SHOW PARTITIONS is not allowed on a datasource table: $name
```





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59635435
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
--- End diff --

It's better to get this information from the table metadata. 1 line:
```
sqlContext.sessionState.catalog.getTableMetadata(table).schema.map(_.name)
```
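
Since `run` must return `Seq[Row]`, that one-liner would presumably be wrapped like so (sketch):

```
override def run(sqlContext: SQLContext): Seq[Row] = {
  // Reads column names from the catalog metadata instead of resolving the relation.
  sqlContext.sessionState.catalog.getTableMetadata(table).schema.map { column =>
    Row(column.name)
  }
}
```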





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59634825
  
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala ---
@@ -409,6 +409,25 @@ class SessionCatalog(
   }
 
   /**
+   * Returns the partition names from catalog for a given table in a database.
+   */
+  def getPartitionNames(db: String, table: String, range: Short): Seq[String] = {
+    externalCatalog.getPartitionNames(db, table, range)
+  }
+
+  /**
+   * Returns the partition names that match the partition spec for a given table in a database.
+   * When no match is found, an empty sequence is returned.
+   */
+  def getPartitionNames(
--- End diff --

I think a better API is:
```
def listPartitions(
    db: String,
    table: String,
    partialSpec: Option[TablePartitionSpec]): Seq[CatalogTablePartition]
```

Note that we already have a `listPartitions`, but it doesn't allow us to 
match anything. Once we have this we can do the formatting ourselves in the 
`ShowPartitions` DDL. This leads to a more consistent catalog API and allows us 
to provide a less awkward implementation in `InMemoryCatalog`.
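
With that API, the `ShowPartitions` command could format the names itself in Hive's `key1=val1/key2=val2` style, something like (a sketch, assuming `CatalogTablePartition.spec` exposes the partition's key/value map):

```
val partNames = catalog.listPartitions(db, table.identifier, partitionSpec).map { p =>
  p.spec.map { case (k, v) => s"$k=$v" }.mkString("/")
}
```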





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59633351
  
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala ---
@@ -404,6 +404,28 @@ private[hive] class HiveClientImpl(
     Option(hivePartition).map(fromHivePartition)
   }
 
+  /**
+   * Returns the partition names from hive metastore for a given table in a database.
+   */
+  override def getPartitionNames(
+      db: String,
+      table: String,
+      range: Short): Seq[String] = withHiveState {
--- End diff --

what is this `range` thing? It seems that it's never used?





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-13 Thread andrewor14
Github user andrewor14 commented on a diff in the pull request:

https://github.com/apache/spark/pull/1#discussion_r59631726
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/commands.scala ---
@@ -423,6 +424,100 @@ case class ShowTablePropertiesCommand(
 }
 
 /**
+ * A command for users to list the column names for a table. This function creates a
+ * [[ShowColumnsCommand]] logical plan.
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database];
+ * }}}
+ */
+case class ShowColumnsCommand(table: TableIdentifier) extends RunnableCommand {
+  // The result of SHOW COLUMNS has one column called 'result'
+  override val output: Seq[Attribute] = {
+    AttributeReference("result", StringType, nullable = false)() :: Nil
+  }
+
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+    val relation = sqlContext.sessionState.catalog.lookupRelation(table, None)
+    relation.schema.fields.map { field =>
+      Row(field.name)
+    }
+  }
+}
+
+/**
+ * A command for users to list the partition names of a table. If the partition spec is specified,
+ * partitions that match the spec are returned. [[AnalysisException]] exception is thrown under
+ * the following conditions:
+ *
+ * 1. If the command is called for a non partitioned table.
+ * 2. If the partition spec refers to the columns that are not defined as partitioning columns.
+ *
+ * This function creates a [[ShowPartitionsCommand]] logical plan
+ *
+ * The syntax of using this command in SQL is:
+ * {{{
+ *   SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
+ * }}}
+ */
+case class ShowPartitionsCommand(
+    table: TableIdentifier,
+    partitionSpec: Option[Map[String, String]]) extends RunnableCommand {
--- End diff --

please use `ExternalCatalog.TablePartitionSpec` here
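
That is, with `TablePartitionSpec` being the `Map[String, String]` alias defined on `ExternalCatalog`, the signature would read:

```
case class ShowPartitionsCommand(
    table: TableIdentifier,
    partitionSpec: Option[ExternalCatalog.TablePartitionSpec]) extends RunnableCommand
```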





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-07 Thread gatorsmile
Github user gatorsmile commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-207148007
  
cc @yhuai @andrewor14 @hvanhovell Could you review this PR too? Thanks!





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206658923
  
Test PASSed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/55169/
Test PASSed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206658922
  
Merged build finished. Test PASSed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206658766
  
**[Test build #55169 has 
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55169/consoleFull)**
 for PR 1 at commit 
[`7e93a9a`](https://github.com/apache/spark/commit/7e93a9a9d23b130db5354b8e73971ec17cf952a0).
 * This patch passes all tests.
 * This patch merges cleanly.
 * This patch adds no public classes.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread SparkQA
Github user SparkQA commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206636777
  
**[Test build #55169 has 
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/55169/consoleFull)**
 for PR 1 at commit 
[`7e93a9a`](https://github.com/apache/spark/commit/7e93a9a9d23b130db5354b8e73971ec17cf952a0).





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206636426
  
 retest this please





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206628150
  
Merged build finished. Test FAILed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206628154
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/55167/
Test FAILed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206625736
  
Test FAILed.
Refer to this link for build results (access rights to CI server needed): 
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/55165/
Test FAILed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread AmplabJenkins
Github user AmplabJenkins commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206625734
  
Build finished. Test FAILed.





[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread dilipbiswal
Github user dilipbiswal commented on the pull request:

https://github.com/apache/spark/pull/1#issuecomment-206625534
  
Currently I throw an exception for data source tables since, in my understanding, the partition spec metadata is not available in the Hive metastore for them. I saw [PR-12204](https://github.com/apache/spark/pull/12204) from @viirya. If that gives us the partitioning metadata from the metastore, then we can remove this restriction.






[GitHub] spark pull request: [SPARK-14445][SQL] Support native execution of...

2016-04-06 Thread dilipbiswal
GitHub user dilipbiswal opened a pull request:

https://github.com/apache/spark/pull/1

[SPARK-14445][SQL] Support native execution of SHOW COLUMNS and SHOW 
PARTITIONS

## What changes were proposed in this pull request?
This PR adds native execution of the SHOW COLUMNS and SHOW PARTITIONS commands.

Command Syntax:
``` SQL
SHOW COLUMNS (FROM | IN) table_identifier [(FROM | IN) database]
```
``` SQL
SHOW PARTITIONS [db_name.]table_name [PARTITION(partition_spec)]
```
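
For example (table and database names are illustrative):
``` SQL
SHOW COLUMNS IN employee IN hr_db;
SHOW PARTITIONS hr_db.employee PARTITION (year='2016', month='04');
```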

## How was this patch tested?

Added test cases in HiveCommandSuite to verify execution and in DDLCommandSuite to verify plans.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dilipbiswal/spark dkb_show_columns

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/1.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1


commit 1ef6b8b3949abad1949e06454dd68a7f8d3a4df1
Author: Dilip Biswal 
Date:   2016-04-01T17:25:41Z

[SPARK-14445] Support native execution of SHOW COLUMNS and SHOW PARTITIONS



