[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/15168


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-27 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80794752
  
--- Diff: sql/core/src/test/resources/sql-tests/inputs/describe.sql ---
@@ -0,0 +1,24 @@
+CREATE TABLE t (a STRING, b INT) PARTITIONED BY (c STRING, d STRING);
--- End diff --

Done. Thank you!





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-27 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80792407
  
--- Diff: sql/core/src/test/resources/sql-tests/inputs/describe.sql ---
@@ -0,0 +1,24 @@
+CREATE TABLE t (a STRING, b INT) PARTITIONED BY (c STRING, d STRING);
--- End diff --

Drop the table?





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-27 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80771190
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -276,13 +276,26 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder {
* Create a [[DescribeTableCommand]] logical plan.
*/
   override def visitDescribeTable(ctx: DescribeTableContext): LogicalPlan = withOrigin(ctx) {
-// Describe partition and column are not supported yet. Return null and let the parser decide
+// Describe column are not supported yet. Return null and let the parser decide
 // what to do with this (create an exception or pass it on to a different system).
-if (ctx.describeColName != null || ctx.partitionSpec != null) {
+if (ctx.describeColName != null) {
   null
 } else {
+  val partitionSpec = if (ctx.partitionSpec != null) {
+// According to the syntax, visitPartitionSpec returns `Map[String, Option[String]]`.
+val (valid, nonValid) = visitPartitionSpec(ctx.partitionSpec).partition(_._2.isDefined)
+if (nonValid.nonEmpty) {
+  // For non-valid specification for `DESC` command, this raises a ParseException.
+  return null
--- End diff --

Done. Now, Jenkins is running.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-27 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80763632
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -276,13 +276,26 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder {
* Create a [[DescribeTableCommand]] logical plan.
*/
   override def visitDescribeTable(ctx: DescribeTableContext): LogicalPlan = withOrigin(ctx) {
-// Describe partition and column are not supported yet. Return null and let the parser decide
+// Describe column are not supported yet. Return null and let the parser decide
 // what to do with this (create an exception or pass it on to a different system).
-if (ctx.describeColName != null || ctx.partitionSpec != null) {
+if (ctx.describeColName != null) {
   null
 } else {
+  val partitionSpec = if (ctx.partitionSpec != null) {
+// According to the syntax, visitPartitionSpec returns `Map[String, Option[String]]`.
+val (valid, nonValid) = visitPartitionSpec(ctx.partitionSpec).partition(_._2.isDefined)
+if (nonValid.nonEmpty) {
+  // For non-valid specification for `DESC` command, this raises a ParseException.
+  return null
--- End diff --

Thank you. I'll fix soon.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-27 Thread hvanhovell
Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80759513
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -276,13 +276,26 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder {
* Create a [[DescribeTableCommand]] logical plan.
*/
   override def visitDescribeTable(ctx: DescribeTableContext): LogicalPlan = withOrigin(ctx) {
-// Describe partition and column are not supported yet. Return null and let the parser decide
+// Describe column are not supported yet. Return null and let the parser decide
 // what to do with this (create an exception or pass it on to a different system).
-if (ctx.describeColName != null || ctx.partitionSpec != null) {
+if (ctx.describeColName != null) {
   null
 } else {
+  val partitionSpec = if (ctx.partitionSpec != null) {
+// According to the syntax, visitPartitionSpec returns `Map[String, Option[String]]`.
+val (valid, nonValid) = visitPartitionSpec(ctx.partitionSpec).partition(_._2.isDefined)
+if (nonValid.nonEmpty) {
+  // For non-valid specification for `DESC` command, this raises a ParseException.
+  return null
--- End diff --

Just throw a meaningful parse exception and simplify this logic.
```scala
visitPartitionSpec(ctx.partitionSpec).map {
  case (key, Some(value)) => key -> value
  case  _ => throw new ParseException("...", ctx)
}
```
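The pattern suggested above can be sketched in a self-contained way against a plain `Map[String, Option[String]]`; here Spark's `ParseException` is stood in for by `IllegalArgumentException` so the snippet compiles outside the Spark tree, and all names are illustrative.

```scala
// Minimal sketch of the suggested validation: map a partition spec to a
// strict key -> value form, failing fast on any key without a value.
object PartitionSpecValidation {
  def toStrictSpec(spec: Map[String, Option[String]]): Map[String, String] =
    spec.map {
      case (key, Some(value)) => key -> value
      case (key, None) =>
        // Spark would throw ParseException(msg, ctx) here instead.
        throw new IllegalArgumentException(
          s"PARTITION specification is incomplete: missing value for key `$key`")
    }

  def main(args: Array[String]): Unit = {
    // A fully specified spec maps cleanly.
    println(PartitionSpecValidation.toStrictSpec(Map("c" -> Some("Us"), "d" -> Some("1"))))
    // An incomplete spec such as Map("d" -> None) throws instead of
    // silently returning null from the visitor.
  }
}
```

This replaces the two-step `partition` plus `return null` with a single total match, which is the simplification being asked for.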





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80426728
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +506,53 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeDetailPartitionInfo(
--- End diff --

Functions are renamed.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80424231
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +506,53 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeDetailPartitionInfo(
--- End diff --

Oh, I see. Thanks!





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80422821
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +506,53 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeDetailPartitionInfo(
--- End diff --

`describeDetailPartitionInfo` -> `describeDetailedPartitionInfo`

`detail` is not an adjective, and I noticed it is used in multiple function names. Could you correct them? Thanks!





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80419919
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -420,17 +424,40 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 val catalog = sparkSession.sessionState.catalog

 if (catalog.isTemporaryTable(table)) {
+  if (partitionSpec.nonEmpty) {
+throw new AnalysisException(
+  s"DESC PARTITION is not allowed on a temporary view: ${table.identifier}")
+  }
   describeSchema(catalog.lookupRelation(table).schema, result)
 } else {
   val metadata = catalog.getTableMetadata(table)
   describeSchema(metadata.schema, result)

-  if (isExtended) {
-describeExtended(metadata, result)
-  } else if (isFormatted) {
-describeFormatted(metadata, result)
+  describePartitionInfo(metadata, result)
+
+  if (partitionSpec.isEmpty) {
+if (isExtended) {
+  describeExtendedTableInfo(metadata, result)
+} else if (isFormatted) {
+  describeFormattedTableInfo(metadata, result)
+  describeStorageInfo(metadata, result)
+}
   } else {
-describePartitionInfo(metadata, result)
+if (metadata.tableType == CatalogTableType.VIEW) {
+  throw new AnalysisException(
+s"DESC PARTITION is not allowed on a view: ${table.identifier}")
+}
+if (DDLUtils.isDatasourceTable(metadata)) {
+  throw new AnalysisException(
+s"DESC PARTITION is not allowed on a datasource table: ${table.identifier}")
+}
+val partition = catalog.getPartition(table, partitionSpec)
+if (isExtended) {
+  describeExtendedDetailPartitionInfo(table, metadata, partition, result)
+} else if (isFormatted) {
+  describeFormattedDetailPartitionInfo(table, metadata, partition, result)
+  describeStorageInfo(metadata, result)
+}
--- End diff --

Yep. I'll refactor them into another function.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80419521
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +341,74 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
 }
   }

+  test("describe partition") {
+withTable("partitioned_table", "datasource_table") {
+  sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+  sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+  checkKeywordsExist(sql("DESC partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name")
+
+  checkKeywordsExist(sql("DESC EXTENDED partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name",
+"Detailed Partition Information CatalogPartition(",
+"Partition Values: [Us, 1]",
+"Storage(Location:",
+"Partition Parameters")
+
+  checkKeywordsExist(sql("DESC FORMATTED partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name",
+"# Detailed Partition Information",
+"Partition Value:",
+"Database:",
+"Table:",
+"Location:",
+"Partition Parameters:",
+"# Storage Information")
+
+  val m = intercept[NoSuchPartitionException] {
+sql("DESC partitioned_table PARTITION (c='Us', d=2)")
+  }.getMessage()
+  assert(m.contains("Partition not found in table"))
+
+  val m2 = intercept[AnalysisException] {
+sql("DESC partitioned_table PARTITION (c='Us')")
+  }.getMessage()
+  assert(m2.contains("Partition spec is invalid"))
+
+  val m3 = intercept[ParseException] {
+sql("DESC partitioned_table PARTITION (c='Us', d)")
+  }.getMessage()
+  assert(m3.contains("Unsupported SQL statement"))
+
+  spark
+.range(1).select('id as 'a, 'id as 'b, 'id as 'c, 'id as 'd).write
+.partitionBy("d")
+.saveAsTable("datasource_table")
+  val m4 = intercept[AnalysisException] {
+sql("DESC datasource_table PARTITION (d=2)")
+  }.getMessage()
+  assert(m4.contains("DESC PARTITION is not allowed on a datasource table"))
+
+  val m5 = intercept[AnalysisException] {
+spark.range(10).select('id as 'a, 'id as 'b).createTempView("view1")
+sql("DESC view1 PARTITION (c='Us', d=1)")
+  }.getMessage()
+  assert(m5.contains("DESC PARTITION is not allowed on a temporary view"))
+
+  withView("permanent_view") {
+val m = intercept[AnalysisException] {
+  sql("CREATE VIEW permanent_view AS SELECT * FROM partitioned_table")
+  sql("DESC permanent_view PARTITION (c='Us', d=1)")
+}.getMessage()
+assert(m.contains("DESC PARTITION is not allowed on a view"))
+  }
+}
+  }
--- End diff --

No problem!





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80419524
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -420,17 +424,40 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 val catalog = sparkSession.sessionState.catalog

 if (catalog.isTemporaryTable(table)) {
+  if (partitionSpec.nonEmpty) {
+throw new AnalysisException(
+  s"DESC PARTITION is not allowed on a temporary view: ${table.identifier}")
+  }
   describeSchema(catalog.lookupRelation(table).schema, result)
 } else {
   val metadata = catalog.getTableMetadata(table)
   describeSchema(metadata.schema, result)

-  if (isExtended) {
-describeExtended(metadata, result)
-  } else if (isFormatted) {
-describeFormatted(metadata, result)
+  describePartitionInfo(metadata, result)
+
+  if (partitionSpec.isEmpty) {
+if (isExtended) {
+  describeExtendedTableInfo(metadata, result)
+} else if (isFormatted) {
+  describeFormattedTableInfo(metadata, result)
+  describeStorageInfo(metadata, result)
+}
   } else {
-describePartitionInfo(metadata, result)
+if (metadata.tableType == CatalogTableType.VIEW) {
+  throw new AnalysisException(
+s"DESC PARTITION is not allowed on a view: ${table.identifier}")
+}
+if (DDLUtils.isDatasourceTable(metadata)) {
+  throw new AnalysisException(
+s"DESC PARTITION is not allowed on a datasource table: ${table.identifier}")
+}
+val partition = catalog.getPartition(table, partitionSpec)
+if (isExtended) {
+  describeExtendedDetailPartitionInfo(table, metadata, partition, result)
+} else if (isFormatted) {
+  describeFormattedDetailPartitionInfo(table, metadata, partition, result)
+  describeStorageInfo(metadata, result)
+}
--- End diff --

Could you create another function for LOC 446 to LOC 460? Compared with the existing one, the new implementation looks a bit cumbersome.






[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-26 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80418906
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +341,74 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
 }
   }

+  test("describe partition") {
+withTable("partitioned_table", "datasource_table") {
+  sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+  sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+  checkKeywordsExist(sql("DESC partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name")
+
+  checkKeywordsExist(sql("DESC EXTENDED partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name",
+"Detailed Partition Information CatalogPartition(",
+"Partition Values: [Us, 1]",
+"Storage(Location:",
+"Partition Parameters")
+
+  checkKeywordsExist(sql("DESC FORMATTED partitioned_table PARTITION (c='Us', d=1)"),
+"# Partition Information",
+"# col_name",
+"# Detailed Partition Information",
+"Partition Value:",
+"Database:",
+"Table:",
+"Location:",
+"Partition Parameters:",
+"# Storage Information")
+
+  val m = intercept[NoSuchPartitionException] {
+sql("DESC partitioned_table PARTITION (c='Us', d=2)")
+  }.getMessage()
+  assert(m.contains("Partition not found in table"))
+
+  val m2 = intercept[AnalysisException] {
+sql("DESC partitioned_table PARTITION (c='Us')")
+  }.getMessage()
+  assert(m2.contains("Partition spec is invalid"))
+
+  val m3 = intercept[ParseException] {
+sql("DESC partitioned_table PARTITION (c='Us', d)")
+  }.getMessage()
+  assert(m3.contains("Unsupported SQL statement"))
+
+  spark
+.range(1).select('id as 'a, 'id as 'b, 'id as 'c, 'id as 'd).write
+.partitionBy("d")
+.saveAsTable("datasource_table")
+  val m4 = intercept[AnalysisException] {
+sql("DESC datasource_table PARTITION (d=2)")
+  }.getMessage()
+  assert(m4.contains("DESC PARTITION is not allowed on a datasource table"))
+
+  val m5 = intercept[AnalysisException] {
+spark.range(10).select('id as 'a, 'id as 'b).createTempView("view1")
+sql("DESC view1 PARTITION (c='Us', d=1)")
+  }.getMessage()
+  assert(m5.contains("DESC PARTITION is not allowed on a temporary view"))
+
+  withView("permanent_view") {
+val m = intercept[AnalysisException] {
+  sql("CREATE VIEW permanent_view AS SELECT * FROM partitioned_table")
+  sql("DESC permanent_view PARTITION (c='Us', d=1)")
+}.getMessage()
+assert(m.contains("DESC PARTITION is not allowed on a view"))
+  }
+}
+  }
--- End diff --

Could you split the test case into two: one covering the positive cases, the other covering the negative cases? We normally avoid large test cases.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-25 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80413543
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeExtendedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "Detailed Partition Information " + partition.toString, "", "")
+  }
+
+  private def describeFormattedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "# Detailed Partition Information", "", "")
+append(buffer, "Partition Value:", s"[${partition.spec.values.mkString(", ")}]", "")
+append(buffer, "Database:", table.database, "")
+append(buffer, "Table:", tableIdentifier.table, "")
+append(buffer, "Create Time:", "UNKNOWN", "")
+append(buffer, "Last Access Time:", "UNKNOWN", "")
+append(buffer, "Protect Mode:", "None", "")
--- End diff --

Yep. I agree! Let's save the bits.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-25 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80413281
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeExtendedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "Detailed Partition Information " + partition.toString, "", "")
+  }
+
+  private def describeFormattedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "# Detailed Partition Information", "", "")
+append(buffer, "Partition Value:", s"[${partition.spec.values.mkString(", ")}]", "")
+append(buffer, "Database:", table.database, "")
+append(buffer, "Table:", tableIdentifier.table, "")
+append(buffer, "Create Time:", "UNKNOWN", "")
+append(buffer, "Last Access Time:", "UNKNOWN", "")
+append(buffer, "Protect Mode:", "None", "")
--- End diff --

IMO, if it does not offer any useful info to external users, we should remove it and keep the output concise.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-25 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80412806
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeExtendedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "Detailed Partition Information " + partition.toString, "", "")
+  }
+
+  private def describeFormattedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "# Detailed Partition Information", "", "")
+append(buffer, "Partition Value:", s"[${partition.spec.values.mkString(", ")}]", "")
+append(buffer, "Database:", table.database, "")
+append(buffer, "Table:", tableIdentifier.table, "")
+append(buffer, "Create Time:", "UNKNOWN", "")
+append(buffer, "Last Access Time:", "UNKNOWN", "")
+append(buffer, "Protect Mode:", "None", "")
--- End diff --

Yep, right. But it's a placeholder kept to look similar to the existing behavior. Of course, the column names are changed for consistency with Spark 2.0, e.g. `CreateTime` -> `Create Time`.
Should we remove them entirely?





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-25 Thread gatorsmile
Github user gatorsmile commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r80412638
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -499,6 +516,35 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
 }
   }

+  private def describeExtendedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "Detailed Partition Information " + partition.toString, "", "")
+  }
+
+  private def describeFormattedDetailPartitionInfo(
+  tableIdentifier: TableIdentifier,
+  table: CatalogTable,
+  partition: CatalogTablePartition,
+  buffer: ArrayBuffer[Row]): Unit = {
+append(buffer, "", "", "")
+append(buffer, "# Detailed Partition Information", "", "")
+append(buffer, "Partition Value:", s"[${partition.spec.values.mkString(", ")}]", "")
+append(buffer, "Database:", table.database, "")
+append(buffer, "Table:", tableIdentifier.table, "")
+append(buffer, "Create Time:", "UNKNOWN", "")
+append(buffer, "Last Access Time:", "UNKNOWN", "")
+append(buffer, "Protect Mode:", "None", "")
--- End diff --

We do not need to output these fields if their values are always `UNKNOWN` or `None`.
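The suggestion above can be sketched with a small guard that skips placeholder rows before they reach the output buffer; the `Row` shape is simplified to a tuple here, and all names are assumptions for illustration, not Spark's actual API.

```scala
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch: only append a (name, value) row when the value
// carries real information, so fixed placeholders never reach the output.
object ConcisePartitionOutput {
  private val placeholders = Set("UNKNOWN", "None")

  def appendIfMeaningful(buffer: ArrayBuffer[(String, String)],
                         name: String, value: String): Unit = {
    if (!placeholders.contains(value)) buffer.append((name, value))
  }

  def main(args: Array[String]): Unit = {
    val buffer = ArrayBuffer.empty[(String, String)]
    appendIfMeaningful(buffer, "Database:", "default")
    appendIfMeaningful(buffer, "Create Time:", "UNKNOWN") // dropped
    println(buffer)
  }
}
```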





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-21 Thread hvanhovell
Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79947817
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -276,13 +276,19 @@ class SparkSqlAstBuilder(conf: SQLConf) extends AstBuilder {
* Create a [[DescribeTableCommand]] logical plan.
*/
   override def visitDescribeTable(ctx: DescribeTableContext): LogicalPlan = withOrigin(ctx) {
-// Describe partition and column are not supported yet. Return null and let the parser decide
+// Describe column are not supported yet. Return null and let the parser decide
 // what to do with this (create an exception or pass it on to a different system).
-if (ctx.describeColName != null || ctx.partitionSpec != null) {
+if (ctx.describeColName != null) {
   null
 } else {
+  val partitionSpec = if (ctx.partitionSpec != null) {
+visitPartitionSpec(ctx.partitionSpec).filter(_._2.isDefined).map(x => (x._1, x._2.get))
--- End diff --

Could you use `collect` here and add some names? It is a bit cryptic at the moment.
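The readability fix being requested can be sketched side by side on plain Scala collections: the positional `filter`/`map` chain versus a `collect` with a named pattern. Both produce the same strict map; names here are illustrative.

```scala
// Comparing the cryptic positional form with the named `collect` form.
object PartitionSpecNames {
  // Original, positional form: hard to read at a glance.
  def cryptic(spec: Map[String, Option[String]]): Map[String, String] =
    spec.filter(_._2.isDefined).map(x => (x._1, x._2.get))

  // Equivalent form with names bound in the pattern.
  def named(spec: Map[String, Option[String]]): Map[String, String] =
    spec.collect { case (key, Some(value)) => key -> value }
}
```

Note that `collect` also drops the undefined entries, so the separate `filter` step disappears.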





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-21 Thread hvanhovell
Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79948051
  
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -419,6 +423,10 @@ case class DescribeTableCommand(table: TableIdentifier, isExtended: Boolean, isF
     val result = new ArrayBuffer[Row]
     val catalog = sparkSession.sessionState.catalog
 
+    if (partitionSpec.nonEmpty) {
+      catalog.getPartition(table, partitionSpec)
--- End diff --

Does this touch the result? How is the partition information added to the 
result?
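
For context, the pattern under discussion is validate-by-lookup: the return value is discarded, and only the side effect (throwing when the partition is absent) matters. A minimal self-contained sketch (the `CatalogSketch` and exception classes below are hypothetical stand-ins, not Spark's actual `SessionCatalog` API):

```scala
// Hypothetical stand-in for a catalog whose getPartition throws when
// the partition does not exist: calling it and ignoring the result
// acts purely as validation and never touches the command's output.
class NoSuchPartitionException(spec: Map[String, String])
  extends RuntimeException(s"Partition not found: $spec")

class CatalogSketch(known: Set[Map[String, String]]) {
  def getPartition(spec: Map[String, String]): Map[String, String] = {
    if (!known.contains(spec)) throw new NoSuchPartitionException(spec)
    spec
  }
}

object ValidateByLookup {
  def main(args: Array[String]): Unit = {
    val catalog = new CatalogSketch(Set(Map("c" -> "Us", "d" -> "1")))
    // Existing partition: the call succeeds and its result is ignored.
    catalog.getPartition(Map("c" -> "Us", "d" -> "1"))
    // Missing partition: the call throws, which is the whole point.
    val rejected =
      try { catalog.getPartition(Map("c" -> "Us", "d" -> "2")); false }
      catch { case _: NoSuchPartitionException => true }
    println(s"missing partition rejected: $rejected")
  }
}
```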




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-21 Thread hvanhovell
Github user hvanhovell commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79948294
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
--- End diff --

Is it possible to test the output with `SQLQueryTestSuite`? Or does that have portability issues?




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-21 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79900014
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

@skambha. Let's wait for some feedback from committers. Before putting more effort into this, I want to know whether this kind of PR about the `DESC` regression will be merged into master.




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread skambha
Github user skambha commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79746339
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

Hive supports DESCRIBE with a partition specification and will list the details for that particular partition when used with the FORMATTED or EXTENDED option, e.g.
DESCRIBE FORMATTED part_table PARTITION (d='abc')

https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Describe

@dongjoon-hyun, this might be beyond the scope of this PR, but it would be useful when there are a lot of partitions and we want to find the details for a given one. What do you think? Thanks.





[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79717996
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

Actually, as you can see in the PR description, the output of the `DESC` command with a `PARTITION` spec is the same as without one. In other words, the output does not depend on the given spec; the command only raises an exception if the given partition does not exist.

> the output result points out the partition columns correctly.




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread skambha
Github user skambha commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79716333
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

I was thinking of verifying that the output identifies the partition columns correctly.
Is there a test case that covers this already? If so, that is fine.




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread dongjoon-hyun
Github user dongjoon-hyun commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79713201
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

Hi, @skambha.
Thank you for the review. BTW, what kind of validation do you want? If we already have `DESC` command output validation test cases, this one would be the same.




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread skambha
Github user skambha commented on a diff in the pull request:

https://github.com/apache/spark/pull/15168#discussion_r79710335
  
--- Diff: sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala ---
@@ -341,6 +342,25 @@ class SQLQuerySuite extends QueryTest with SQLTestUtils with TestHiveSingleton {
     }
   }
 
+  test("describe partition") {
+    withTable("partitioned_table") {
+      sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
+      sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
+
+      sql("DESC partitioned_table PARTITION (c='Us', d=1)")
--- End diff --

Can you add some validation for the output returned.
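
For example, the assertion could check the row shape along these lines (illustrative only — the rows below are hard-coded to look like `DESC` output rather than produced by Spark, since no SparkSession is available in this snippet):

```scala
// Sketch of the kind of output validation being requested, run
// against hard-coded (col_name, data_type) pairs shaped like the
// DESC output from the PR description.
object DescOutputValidationSketch {
  val rows: Seq[(String, String)] = Seq(
    "a" -> "string", "b" -> "int", "c" -> "string", "d" -> "string",
    "# Partition Information" -> "", "# col_name" -> "data_type",
    "c" -> "string", "d" -> "string")

  def validate(rows: Seq[(String, String)]): Boolean = {
    // The partition section must be present, and each partition column
    // must appear twice: once in the schema, once under the section.
    rows.map(_._1).contains("# Partition Information") &&
      rows.count(_ == ("c" -> "string")) == 2 &&
      rows.count(_ == ("d" -> "string")) == 2
  }

  def main(args: Array[String]): Unit =
    println(s"partition columns surfaced correctly: ${validate(rows)}")
}
```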




[GitHub] spark pull request #15168: [SPARK-17612][SQL] Support `DESCRIBE table PARTIT...

2016-09-20 Thread dongjoon-hyun
GitHub user dongjoon-hyun opened a pull request:

https://github.com/apache/spark/pull/15168

[SPARK-17612][SQL] Support `DESCRIBE table PARTITION` SQL syntax

## What changes were proposed in this pull request?

This PR implements `DESCRIBE table PARTITION` SQL syntax again. It was supported until Spark 1.6.2, but has been dropped since 2.0.0.

**Spark 1.6.2**
```
scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
res1: org.apache.spark.sql.DataFrame = [result: string]

scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
res2: org.apache.spark.sql.DataFrame = [result: string]

scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
+--------------------------------------------+
|result                                      |
+--------------------------------------------+
|a                      string               |
|b                      int                  |
|c                      string               |
|d                      string               |
|                                            |
|# Partition Information                     |
|# col_name             data_type     comment|
|                                            |
|c                      string               |
|d                      string               |
+--------------------------------------------+
```

**Spark 2.0**
- **Before**
```
scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
res0: org.apache.spark.sql.DataFrame = []

scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
res1: org.apache.spark.sql.DataFrame = []

scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
org.apache.spark.sql.catalyst.parser.ParseException:
Unsupported SQL statement
== SQL ==
DESC partitioned_table PARTITION (c='Us', d=1)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:58)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
  at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:45)
  at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:573)
  ... 48 elided
```

- **After**
```
scala> sql("CREATE TABLE partitioned_table (a STRING, b INT) PARTITIONED BY (c STRING, d STRING)")
res0: org.apache.spark.sql.DataFrame = []

scala> sql("ALTER TABLE partitioned_table ADD PARTITION (c='Us', d=1)")
res1: org.apache.spark.sql.DataFrame = []

scala> sql("DESC partitioned_table PARTITION (c='Us', d=1)").show(false)
+-----------------------+---------+-------+
|col_name               |data_type|comment|
+-----------------------+---------+-------+
|a                      |string   |null   |
|b                      |int      |null   |
|c                      |string   |null   |
|d                      |string   |null   |
|# Partition Information|         |       |
|# col_name             |data_type|comment|
|c                      |string   |null   |
|d                      |string   |null   |
+-----------------------+---------+-------+
```

## How was this patch tested?

Pass the Jenkins tests with a new test case.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dongjoon-hyun/spark SPARK-17612

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/15168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #15168


commit 1396eb10db67382fbb99d0f28e39722ef6758d3b
Author: Dongjoon Hyun 
Date:   2016-09-20T19:57:47Z

[SPARK-17612][SQL] Support `DESCRIBE table PARTITION` SQL syntax



