bart-samwel commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r429199929



##########
File path: 
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
       .checkValue(_ > 0, "The timeout value must be positive")
       .createWithDefault(10L)
 
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+    buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+      .doc("When true, legacy numeric-to-timestamp conversion is allowed.")
+      .version("3.0.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+    buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+      .internal()
+      .doc("This legacy behavior only takes effect when " +
+        "LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+        "When true, the numeric value is interpreted as seconds, following Spark style; " +
+        "when false, it is interpreted as milliseconds, following Hive style.")

Review comment:
       FWIW, I think there's a viable solution for you that doesn't involve 
changes to Spark.
   
   1. You add a UDF to Hive for TIMESTAMP_MILLIS().
   2. You search all your users' workloads for "CAST(... AS TIMESTAMP)" (case 
insensitive, allowing multiple spaces, newlines, and multiline matches) 
and migrate them to TIMESTAMP_MILLIS(). For the most common cases this can likely 
be done with a "sed" script, assuming the workloads are in files.
   
   After that, you have a workload that will run in both systems.
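
   As a rough illustration of step 2, a regex-based rewrite along these lines could handle the simple cases. (`CastMigration` and its pattern are hypothetical, not part of Spark; casts whose argument itself contains parentheses would still need manual attention or a real SQL parser.)

```scala
// Hypothetical sketch of the step-2 migration: rewrite
// "CAST(expr AS TIMESTAMP)" to "TIMESTAMP_MILLIS(expr)" in SQL text.
object CastMigration {
  // (?i) makes the match case-insensitive; (?s) lets it span newlines.
  // Only handles casts whose argument contains no parentheses.
  private val castPattern =
    """(?is)CAST\s*\(\s*([^()]*?)\s+AS\s+TIMESTAMP\s*\)""".r

  def migrate(sql: String): String =
    castPattern.replaceAllIn(sql, m => s"TIMESTAMP_MILLIS(${m.group(1).trim})")
}
```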
   
   I think forbidding the cast in Spark is a good thing and it could be used as 
a safety net (i.e., if you missed a cast in a query somewhere then it'll fail 
in Spark), but that's not required here.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
