spark git commit: [SPARK-12492][SQL] Add missing SQLExecution.withNewExecutionId for hiveResultString

2016-06-15 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/master 6e0b3d795 -> 3e6d567a4


[SPARK-12492][SQL] Add missing SQLExecution.withNewExecutionId for hiveResultString

## What changes were proposed in this pull request?

Add the missing `SQLExecution.withNewExecutionId` wrapper in `hiveResultString` so that queries run through `spark-sql` are shown in the Web UI.

Closes #13115

## How was this patch tested?

Existing unit tests.

Author: KaiXinXiaoLei 

Closes #13689 from zsxwing/pr13115.
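The helper being added, `SQLExecution.withNewExecutionId`, ties the work done in its body to a fresh execution id (kept in a thread-local) so the Web UI can attribute the collected rows to a tracked query. A minimal, self-contained sketch of that idea, with hypothetical names, is below; it is not the actual Spark helper, which additionally posts execution start/end events to a listener bus:

```scala
import java.util.concurrent.atomic.AtomicLong

// Illustrative sketch of a withNewExecutionId-style helper (hypothetical
// object name): allocate a fresh id, expose it to the body via a
// thread-local, and always clear it afterwards.
object SQLExecutionSketch {
  private val nextExecutionId = new AtomicLong(0)

  // Id of the execution currently in progress on this thread, if any.
  private val currentExecutionId = new ThreadLocal[Option[Long]] {
    override def initialValue(): Option[Long] = None
  }

  def withNewExecutionId[T](body: => T): T = {
    currentExecutionId.get() match {
      case Some(_) =>
        // Nested call: reuse the enclosing execution instead of starting
        // a new one, so one user action maps to one tracked execution.
        body
      case None =>
        val executionId = nextExecutionId.getAndIncrement()
        currentExecutionId.set(Some(executionId))
        try {
          // The real helper would post an "execution started" event here
          // so the UI can begin tracking, and an "execution ended" event
          // in the finally block below.
          body
        } finally {
          currentExecutionId.set(None)
        }
    }
  }

  def currentId: Option[Long] = currentExecutionId.get()
}
```

The bug fixed by this patch is exactly the "no enclosing id" case: `hiveResultString` collected results without entering such a wrapper, so the work was invisible to the UI's execution tracking.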


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3e6d567a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3e6d567a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3e6d567a

Branch: refs/heads/master
Commit: 3e6d567a4688f064f2a2259c8e436b7c628a431c
Parents: 6e0b3d7
Author: KaiXinXiaoLei 
Authored: Wed Jun 15 16:11:46 2016 -0700
Committer: Shixiong Zhu 
Committed: Wed Jun 15 16:11:46 2016 -0700

----------------------------------------------------------------------
 .../spark/sql/execution/QueryExecution.scala | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3e6d567a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
index e6dc50a..5b9af26 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
@@ -113,24 +113,27 @@ class QueryExecution(val sparkSession: SparkSession, val logical: LogicalPlan) {
    */
   def hiveResultString(): Seq[String] = executedPlan match {
     case ExecutedCommandExec(desc: DescribeTableCommand) =>
-      // If it is a describe command for a Hive table, we want to have the output format
-      // be similar with Hive.
-      desc.run(sparkSession).map {
-        case Row(name: String, dataType: String, comment) =>
-          Seq(name, dataType,
-            Option(comment.asInstanceOf[String]).getOrElse(""))
-            .map(s => String.format(s"%-20s", s))
-            .mkString("\t")
+      SQLExecution.withNewExecutionId(sparkSession, this) {
+        // If it is a describe command for a Hive table, we want to have the output format
+        // be similar with Hive.
+        desc.run(sparkSession).map {
+          case Row(name: String, dataType: String, comment) =>
+            Seq(name, dataType,
+              Option(comment.asInstanceOf[String]).getOrElse(""))
+              .map(s => String.format(s"%-20s", s))
+              .mkString("\t")
+        }
       }
     case command: ExecutedCommandExec =>
       command.executeCollect().map(_.getString(0))
-
     case other =>
-      val result: Seq[Seq[Any]] = other.executeCollectPublic().map(_.toSeq).toSeq
-      // We need the types so we can output struct field names
-      val types = analyzed.output.map(_.dataType)
-      // Reformat to match hive tab delimited output.
-      result.map(_.zip(types).map(toHiveString)).map(_.mkString("\t")).toSeq
+      SQLExecution.withNewExecutionId(sparkSession, this) {
+        val result: Seq[Seq[Any]] = other.executeCollectPublic().map(_.toSeq).toSeq
+        // We need the types so we can output struct field names
+        val types = analyzed.output.map(_.dataType)
+        // Reformat to match hive tab delimited output.
+        result.map(_.zip(types).map(toHiveString)).map(_.mkString("\t")).toSeq
+      }
   }
 
   /** Formats a datum (based on the given data type) and returns the string representation. */


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-12492][SQL] Add missing SQLExecution.withNewExecutionId for hiveResultString

2016-06-15 Thread zsxwing
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 382735c41 -> bc83b09ee


[SPARK-12492][SQL] Add missing SQLExecution.withNewExecutionId for hiveResultString

## What changes were proposed in this pull request?

Add the missing `SQLExecution.withNewExecutionId` wrapper in `hiveResultString` so that queries run through `spark-sql` are shown in the Web UI.

Closes #13115

## How was this patch tested?

Existing unit tests.

Author: KaiXinXiaoLei 

Closes #13689 from zsxwing/pr13115.

(cherry picked from commit 3e6d567a4688f064f2a2259c8e436b7c628a431c)
Signed-off-by: Shixiong Zhu 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/bc83b09e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/bc83b09e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/bc83b09e

Branch: refs/heads/branch-2.0
Commit: bc83b09ee653615306e45566012d42d7917d265f
Parents: 382735c
Author: KaiXinXiaoLei 
Authored: Wed Jun 15 16:11:46 2016 -0700
Committer: Shixiong Zhu 
Committed: Wed Jun 15 16:11:55 2016 -0700

----------------------------------------------------------------------
 .../spark/sql/execution/QueryExecution.scala | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/bc83b09e/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
index a2d4502..ba23323 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
@@ -111,24 +111,27 @@ class QueryExecution(val sparkSession: SparkSession, val logical: LogicalPlan) {
    */
   def hiveResultString(): Seq[String] = executedPlan match {
     case ExecutedCommandExec(desc: DescribeTableCommand) =>
-      // If it is a describe command for a Hive table, we want to have the output format
-      // be similar with Hive.
-      desc.run(sparkSession).map {
-        case Row(name: String, dataType: String, comment) =>
-          Seq(name, dataType,
-            Option(comment.asInstanceOf[String]).getOrElse(""))
-            .map(s => String.format(s"%-20s", s))
-            .mkString("\t")
+      SQLExecution.withNewExecutionId(sparkSession, this) {
+        // If it is a describe command for a Hive table, we want to have the output format
+        // be similar with Hive.
+        desc.run(sparkSession).map {
+          case Row(name: String, dataType: String, comment) =>
+            Seq(name, dataType,
+              Option(comment.asInstanceOf[String]).getOrElse(""))
+              .map(s => String.format(s"%-20s", s))
+              .mkString("\t")
+        }
       }
     case command: ExecutedCommandExec =>
       command.executeCollect().map(_.getString(0))
-
     case other =>
-      val result: Seq[Seq[Any]] = other.executeCollectPublic().map(_.toSeq).toSeq
-      // We need the types so we can output struct field names
-      val types = analyzed.output.map(_.dataType)
-      // Reformat to match hive tab delimited output.
-      result.map(_.zip(types).map(toHiveString)).map(_.mkString("\t")).toSeq
+      SQLExecution.withNewExecutionId(sparkSession, this) {
+        val result: Seq[Seq[Any]] = other.executeCollectPublic().map(_.toSeq).toSeq
+        // We need the types so we can output struct field names
+        val types = analyzed.output.map(_.dataType)
+        // Reformat to match hive tab delimited output.
+        result.map(_.zip(types).map(toHiveString)).map(_.mkString("\t")).toSeq
+      }
   }
 
   /** Formats a datum (based on the given data type) and returns the string representation. */

