cloud-fan commented on pull request #28568:
URL: https://github.com/apache/spark/pull/28568#issuecomment-630582721


   > we need the same SQL to run on Hive/Spark during migration, in case Spark 
fails or behaves unexpectedly. So with a compatibility flag, as you said, we can 
easily migrate them without changing users' SQL.
   
   We did something similar before, with the pgsql dialect. That project was 
canceled because it's too much effort to keep two systems behaving exactly the 
same. And people may keep adding other dialects, which could increase 
maintenance costs dramatically.
   
   Hive is a bit different, as Spark already provides a lot of Hive 
compatibility. But still, it's not the right direction for Spark to aim for 
100% compatibility with another system.
   
   For this particular case, I agree with @bart-samwel that we can fail by 
default when casting long to timestamp, and provide a legacy config to allow it 
with either Spark or Hive behavior. Allowing a cast from long to timestamp is 
non-standard and surprising, so in the long term we do want to forbid it, with a 
clear error message suggesting the `TIMESTAMP_MILLIS` or `TIMESTAMP_MICROS` 
functions instead.
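   To illustrate why the cast is ambiguous, here is a minimal Python sketch 
(outside Spark, using only the standard library) showing how the same raw long 
produces wildly different timestamps depending on whether it is read as seconds 
or as milliseconds since the epoch -- which is why an explicit function naming 
the unit is clearer than a bare cast. The sample value is made up for 
illustration:

```python
from datetime import datetime, timezone

raw = 1_589_760_000  # a long column value; the unit is not self-describing

# Interpreted as SECONDS since the Unix epoch:
as_seconds = datetime.fromtimestamp(raw, tz=timezone.utc)
# Interpreted as MILLISECONDS since the Unix epoch:
as_millis = datetime.fromtimestamp(raw / 1000, tz=timezone.utc)

print(as_seconds)  # 2020-05-18 00:00:00+00:00
print(as_millis)   # 1970-01-19 09:36:00+00:00
```

   The two readings differ by half a century, so silently picking one in a cast 
is a migration hazard; `TIMESTAMP_MILLIS`/`TIMESTAMP_MICROS` put the unit in 
the query text.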

