HyukjinKwon commented on a change in pull request #28593:
URL: https://github.com/apache/spark/pull/28593#discussion_r428488834



##########
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala
##########
@@ -2586,6 +2586,22 @@ object SQLConf {
       .checkValue(_ > 0, "The timeout value must be positive")
       .createWithDefault(10L)
 
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE =
+    buildConf("spark.sql.legacy.numericConvertToTimestampEnable")
+      .doc("When true, enables the legacy behaviour of casting numeric values to timestamp.")
+      .version("3.0.0")
+      .booleanConf
+      .createWithDefault(false)
+
+  val LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_IN_SECONDS =
+    buildConf("spark.sql.legacy.numericConvertToTimestampInSeconds")
+      .internal()
+      .doc("This config only takes effect when " +
+        "LEGACY_NUMERIC_CONVERT_TO_TIMESTAMP_ENABLE is true. " +
+        "When true, the numeric value is interpreted as seconds, following Spark's style; " +
+        "when false, it is interpreted as milliseconds, following Hive's style.")

Review comment:
       This kind of compatibility isn't fully guaranteed in Spark; see also [Compatibility with Apache Hive](https://spark.apache.org/docs/latest/sql-migration-guide-hive-compatibility.html). Simply following Hive doesn't justify this PR.
   
   There are already a bunch of mismatched behaviours, and I don't want to target more compatibility, in particular by changing basic functionality such as cast and adding more switches to maintain. Why is it difficult to just add `ts / 1000`? The workaround seems very easy.
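   To illustrate with a sketch (not from the PR; the `events` table and its long column `ts` holding epoch milliseconds are hypothetical):

```scala
// Hypothetical: `events` has a LONG column `ts` storing epoch milliseconds.
// Spark's cast reads a numeric value as *seconds* since the epoch, so dividing
// by 1000 first gives the Hive-style (milliseconds) reading with no new config:
spark.sql("SELECT CAST(ts / 1000 AS TIMESTAMP) AS event_time FROM events")

// Default Spark behaviour for comparison (value read as seconds):
spark.sql("SELECT CAST(ts AS TIMESTAMP) AS event_time FROM events")
```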
   
   If the goal is to move away from `cast(ts as long)`, adding separate functions is fine because it doesn't affect existing users, and other DBMSs generally take the same route with dedicated functions. It looks like we will have workarounds once the functions from https://github.com/apache/spark/pull/28534 are merged, and you can leverage those functions as an alternative as well.
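   Once merged, usage could look roughly like this (a sketch; the names `timestamp_seconds` and `timestamp_millis` are my assumption about what #28534 adds, not confirmed here):

```scala
// Assumed helper functions from #28534 (names are an assumption):
// each one is explicit about the input unit, so no legacy switch is needed.
spark.sql("SELECT timestamp_seconds(ts) FROM events") // `ts` read as seconds (Spark style)
spark.sql("SELECT timestamp_millis(ts) FROM events")  // `ts` read as milliseconds (Hive style)
```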
   
   I would say no to changing a basic functionality to gain yet another variant of non-standard behaviour, which could potentially trigger another set of non-standard behaviours in many other places in Spark.



