cloud-fan commented on a change in pull request #31473:
URL: https://github.com/apache/spark/pull/31473#discussion_r570204612



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -470,6 +455,27 @@ object JdbcUtils extends Logging {
        // TODO(davies): use getBytes for better performance, if the encoding is UTF-8
         row.update(pos, UTF8String.fromString(rs.getString(pos + 1)))
 
+    // SPARK-34357 - SQL TIME type is represented as a zero-epoch timestamp.
+    // It maps to Spark TimestampType with the day fixed at 1970-01-01; the
+    // time portion is the time of day, with no reference to a particular
+    // calendar, time zone or date, and with microsecond precision.
+    // It stores the number of microseconds after midnight, 00:00:00.000000.
+    case TimestampType if metadata.contains("logical_time_type") =>
+      (rs: ResultSet, row: InternalRow, pos: Int) => {
+        val rawTime = rs.getTime(pos + 1)
+        if (rawTime != null) {
+          val localTimeMicro = TimeUnit.NANOSECONDS.toMicros(rawTime.toLocalTime().toNanoOfDay())
+          val localTimeMillis = DateTimeUtils.microsToMillis(localTimeMicro)
+          val timeZoneOffset = TimeZone.getDefault match {

Review comment:
       In most cases the session time zone is the same as the JVM default time zone, but ideally we should use the session time zone for datetime operations: `SQLConf.get.sessionLocalTimeZone`
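
For reference, a minimal sketch of resolving the session time zone instead of relying on the JVM default; `SQLConf.get.sessionLocalTimeZone` and `DateTimeUtils.getZoneId` are the standard Spark helpers, while the val names here are only illustrative:

```scala
// Sketch only: prefer the Spark session time zone over TimeZone.getDefault.
import java.time.ZoneId

import org.apache.spark.sql.catalyst.util.DateTimeUtils
import org.apache.spark.sql.internal.SQLConf

// Session-level setting (spark.sql.session.timeZone), e.g. "America/Los_Angeles".
val sessionTimeZoneId: String = SQLConf.get.sessionLocalTimeZone
// Resolve it to a ZoneId for datetime arithmetic instead of the JVM default zone.
val sessionZoneId: ZoneId = DateTimeUtils.getZoneId(sessionTimeZoneId)
```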

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
##########
@@ -470,6 +455,27 @@ object JdbcUtils extends Logging {
        // TODO(davies): use getBytes for better performance, if the encoding is UTF-8
         row.update(pos, UTF8String.fromString(rs.getString(pos + 1)))
 
+    // SPARK-34357 - SQL TIME type is represented as a zero-epoch timestamp.
+    // It maps to Spark TimestampType with the day fixed at 1970-01-01; the
+    // time portion is the time of day, with no reference to a particular
+    // calendar, time zone or date, and with microsecond precision.
+    // It stores the number of microseconds after midnight, 00:00:00.000000.
+    case TimestampType if metadata.contains("logical_time_type") =>
+      (rs: ResultSet, row: InternalRow, pos: Int) => {
+        val rawTime = rs.getTime(pos + 1)
+        if (rawTime != null) {
+          val localTimeMicro = TimeUnit.NANOSECONDS.toMicros(rawTime.toLocalTime().toNanoOfDay())
+          val localTimeMillis = DateTimeUtils.microsToMillis(localTimeMicro)
+          val timeZoneOffset = TimeZone.getDefault match {
+            case zoneInfo: ZoneInfo => zoneInfo.getOffsetsByWall(localTimeMillis, null)
+            case timeZone: TimeZone => timeZone.getOffset(localTimeMillis - timeZone.getRawOffset)
+          }

Review comment:
       Seems we can just do `DateTimeUtils.toUTCTime(localTimeMicro, SQLConf.get.sessionLocalTimeZone)`
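
For illustration, a minimal sketch of what the simplified getter could look like with that call; the null handling and the use of `row.setLong` for the TimestampType slot are assumptions here, not taken from the patch, and the existing JdbcUtils imports (`TimeUnit`, `DateTimeUtils`, `SQLConf`) are assumed:

```scala
// Sketch only: same TIME getter, but delegating the offset math to
// DateTimeUtils.toUTCTime with the session time zone instead of computing
// the offset manually from TimeZone.getDefault.
case TimestampType if metadata.contains("logical_time_type") =>
  (rs: ResultSet, row: InternalRow, pos: Int) => {
    val rawTime = rs.getTime(pos + 1)
    if (rawTime != null) {
      // Microseconds since midnight, taken from the JDBC TIME value.
      val localTimeMicro = TimeUnit.NANOSECONDS.toMicros(rawTime.toLocalTime().toNanoOfDay())
      // Shift from the session-local time zone to UTC in one call.
      val utcMicros = DateTimeUtils.toUTCTime(localTimeMicro, SQLConf.get.sessionLocalTimeZone)
      row.setLong(pos, utcMicros)                       // assumed writer for TimestampType
    } else {
      row.update(pos, null)                             // assumed null handling
    }
  }
```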



