[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user asfgit closed the pull request at: https://github.com/apache/spark/pull/17348
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107408028

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
+      assert(df.schema == expected)
```

--- End diff --

Sure.
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107382834

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
+      assert(df.schema == expected)
```

--- End diff --

can we add a `checkAnswer` to make sure the value is also `tsWithZone`?
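A minimal sketch of what that check could look like, reusing `df` and `tsWithZone` from the test above (this assumes `QueryTest`'s `checkAnswer` helper; casting the group-by column to a string should surface the same timezone-adjusted rendering as the generated column name):

```scala
// Hedged sketch: verify the value, not just the pivoted column name,
// renders with the session-local timezone applied.
checkAnswer(df.select($"a".cast(StringType)), Row(tsWithZone))
```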
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107337310

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

Few more tests with string cast ...

```scala
scala> val timestamp = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")
timestamp: java.sql.Timestamp = 2012-12-31 16:00:10.011

scala> Seq(timestamp).toDF("a").groupBy("a").pivot("a").count().selectExpr("cast(a as string)", "`2012-12-31 16:00:10.011`").show(false)
+-----------------------+-----------------------+
|a                      |2012-12-31 16:00:10.011|
+-----------------------+-----------------------+
|2012-12-31 16:00:10.011|1                      |
+-----------------------+-----------------------+
```

```scala
scala> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

scala> val timestamp = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")
timestamp: java.sql.Timestamp = 2012-12-31 16:00:10.011

scala> Seq(timestamp).toDF("a").groupBy("a").pivot("a").count().selectExpr("cast(a as string)", "`2012-12-30 23:00:10.011`").show(false)
+-----------------------+-----------------------+
|a                      |2012-12-30 23:00:10.011|
+-----------------------+-----------------------+
|2012-12-30 23:00:10.011|1                      |
+-----------------------+-----------------------+
```
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107335518

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

With the default timezone ...

```scala
scala> val timestamp = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")
timestamp: java.sql.Timestamp = 2012-12-31 16:00:10.011

scala> Seq(timestamp).toDF("a").groupBy("a").pivot("a").count().show(false)
+-----------------------+-----------------------+
|a                      |2012-12-31 16:00:10.011|
+-----------------------+-----------------------+
|2012-12-31 16:00:10.011|1                      |
+-----------------------+-----------------------+
```
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107335421

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

Ah, sure.

```scala
scala> val timestamp = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")
timestamp: java.sql.Timestamp = 2012-12-31 16:00:10.011

scala> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

scala> Seq(timestamp).toDF("a").groupBy("a").pivot("a").count().show(false)
+-----------------------+-----------------------+
|a                      |2012-12-30 23:00:10.011|
+-----------------------+-----------------------+
|2012-12-30 23:00:10.011|1                      |
+-----------------------+-----------------------+
```
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107335293

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

the column name changes with timezone, but what about the value? can you also check the result?
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107330581

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

Yea, I was confused by it too because the original values are apparently rendered differently. However, it seems intended.

```scala
scala> spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

scala> val timestamp = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")
timestamp: java.sql.Timestamp = 2012-12-31 16:00:10.011

scala> Seq(timestamp).toDF("a").show()
+--------------------+
|                   a|
+--------------------+
|2012-12-30 23:00:...|
+--------------------+
```

The internal values stay as they are; it seems only the human-readable format changes according to the given timezone. I guess this is as described in https://github.com/apache/spark/pull/16308
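A minimal sketch of the point about internal values (assuming a spark-shell session as above): casting the timestamp to a long yields the stored epoch seconds, which do not change with `spark.sql.session.timeZone`; only the rendered string does.

```scala
val ts = java.sql.Timestamp.valueOf("2012-12-31 16:00:10.011")

spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")
val laEpoch = Seq(ts).toDF("a").selectExpr("cast(a as long)").head.getLong(0)

spark.conf.set("spark.sql.session.timeZone", "GMT")
val gmtEpoch = Seq(ts).toDF("a").selectExpr("cast(a as long)").head.getLong(0)

// The stored instant is identical; only show()'s rendering differs.
assert(laEpoch == gmtEpoch)
```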
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user cloud-fan commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107329860

--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala ---

```diff
@@ -230,4 +230,17 @@ class DataFramePivotSuite extends QueryTest with SharedSQLContext{
         .groupBy($"a").pivot("a").agg(min($"b")),
       Row(null, Seq(null, 7), null) :: Row(1, null, Seq(1, 7)) :: Nil)
   }
+
+  test("pivot with timestamp and count should not print internal representation") {
+    val ts = "2012-12-31 16:00:10.011"
+    val tsWithZone = "2013-01-01 00:00:10.011"
+
+    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> "GMT") {
+      val df = Seq(java.sql.Timestamp.valueOf(ts)).toDF("a").groupBy("a").pivot("a").count()
+      val expected = StructType(
+        StructField("a", TimestampType) ::
+        StructField(tsWithZone, LongType) :: Nil)
```

--- End diff --

is it expected? users will see different values now
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107326950

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8Value = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

Thank you for your confirmation.
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user ueshin commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107325633

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
      val singleAgg = aggregates.size == 1
      def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8Value = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

Yes, it looks good.
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107319664

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8Value = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

BTW, is this a correct way of handling the timezone, @ueshin?
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107319641

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8Value = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

It seems we can cast into `StringType` from any type - https://github.com/apache/spark/blob/e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala#L41
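A small sketch of what the linked line implies (assuming the catalyst `Cast` companion object is importable as below): `StringType` is a valid cast target for any source type, which is what makes the string cast in `outputName` safe for arbitrary pivot values.

```scala
import org.apache.spark.sql.catalyst.expressions.Cast
import org.apache.spark.sql.types.{LongType, NullType, StringType, TimestampType}

// Hedged sketch: per Cast.scala#L41, casting to StringType is allowed
// from any source type.
assert(Cast.canCast(TimestampType, StringType))
assert(Cast.canCast(LongType, StringType))
assert(Cast.canCast(NullType, StringType))
```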
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107319052

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8val = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

BTW, is this a correct way of handling the timezone, @ueshin?
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107318998

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val utf8val = Cast(value, StringType, Some(conf.sessionLocalTimeZone)).eval(EmptyRow)
```

--- End diff --

It seems we can cast into `StringType` from any type - https://github.com/apache/spark/blob/e9e2c612d58a19ddcb4b6abfb7389a4b0f7ef6f8/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala#L41
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user HyukjinKwon commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107312948

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val scalaValue = CatalystTypeConverters.convertToScala(value.value, value.dataType)
+        val stringValue = Option(scalaValue).getOrElse("null").toString
```

--- End diff --

Maybe; I thought https://github.com/HyukjinKwon/spark/blob/3c619dfb94723bd7a7d6a0811ab6329bf107f81b/sql/core/src/test/scala/org/apache/spark/sql/DataFramePivotSuite.scala#L220-L232 covers this. `Literal.toString` handled the `null` case before. If we remove `Option(...).getOrElse("null")` here, it throws an NPE in those tests.
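A self-contained sketch of the NPE being avoided (plain Scala, no Spark needed): calling `toString` on a null reference throws, so the `Option(...).getOrElse("null")` wrapper reproduces what `Literal.toString` did for null pivot values.

```scala
val scalaValue: Any = null
// scalaValue.toString would throw a NullPointerException here.
val stringValue = Option(scalaValue).getOrElse("null").toString
assert(stringValue == "null")
```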
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
Github user gatorsmile commented on a diff in the pull request: https://github.com/apache/spark/pull/17348#discussion_r107311172

--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala ---

```diff
@@ -486,14 +486,16 @@ class Analyzer(
     case Pivot(groupByExprs, pivotColumn, pivotValues, aggregates, child) =>
       val singleAgg = aggregates.size == 1
       def outputName(value: Literal, aggregate: Expression): String = {
+        val scalaValue = CatalystTypeConverters.convertToScala(value.value, value.dataType)
+        val stringValue = Option(scalaValue).getOrElse("null").toString
```

--- End diff --

The impact is not only on the data type `timestamp`. Any test case to cover `null`?
[GitHub] spark pull request #17348: [SPARK-20018][SQL] Pivot with timestamp and count...
GitHub user HyukjinKwon opened a pull request: https://github.com/apache/spark/pull/17348

[SPARK-20018][SQL] Pivot with timestamp and count should not print internal representation

## What changes were proposed in this pull request?

Currently, when we pivot on a timestamp column and count, the internal representation (microseconds since the epoch) is printed as the column name, as below:

```scala
Seq(new java.sql.Timestamp(1)).toDF("a").groupBy("a").pivot("a").count().show()
```

```
+--------------------+----+
|                   a|1000|
+--------------------+----+
|1969-12-31 16:00:...|   1|
+--------------------+----+
```

This PR proposes to use the external Scala value instead of the internal representation in the column names, as below:

```
+--------------------+-----------------------+
|                   a|1969-12-31 16:00:00.001|
+--------------------+-----------------------+
|1969-12-31 16:00:...|                      1|
+--------------------+-----------------------+
```

## How was this patch tested?

Unit test in `DataFramePivotSuite` and manual tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/HyukjinKwon/spark SPARK-20018

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/17348.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #17348

commit d64a2b13e56330c55dac29abc3a9ccb6085ce73f
Author: hyukjinkwon
Date: 2017-03-19T13:42:25Z

    Pivot with timestamp and count should not print internal representation