Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19250#discussion_r143288711
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
    @@ -266,6 +267,10 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {
        * @since 1.4.0
        */
       def insertInto(tableName: String): Unit = {
    +    extraOptions.get(TimestampTableTimeZone.TIMEZONE_PROPERTY).foreach { tz =>
    --- End diff ---
    
    Hmm. I tried a couple of things, and while it may be possible to remove some of these checks and replace them with a check in `DateTimeUtils.computeTimeZone`, that doesn't cover all cases. For example, you could use `ALTER TABLE` with an invalid time zone and that wouldn't trigger the check.
    
    So given the spec I'm inclined to leave the checks as-is, unless @zivanfi is open to making the spec more lax in that area. (`TimeZone.getTimeZone(invalidId)` doesn't throw; it actually falls back to the GMT time zone, as unexpected as that behavior may be, so things won't necessarily break without the checks.)
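
    That JDK fallback is easy to verify in isolation. Below is a standalone sketch (the class name and `isValidTimeZone` helper are made up for illustration, not Spark code) showing both the silent GMT fallback and one way to check an ID explicitly:

    ```java
    import java.util.Arrays;
    import java.util.TimeZone;

    // Standalone demo of why an explicit validity check is needed:
    // the JDK silently maps unknown zone IDs to GMT instead of throwing.
    public class TzFallbackDemo {
        // One possible check: accept only IDs the JDK actually knows about.
        // Caveat: this rejects custom offset IDs like "GMT+05:00", which
        // TimeZone.getTimeZone would accept.
        static boolean isValidTimeZone(String id) {
            return Arrays.asList(TimeZone.getAvailableIDs()).contains(id);
        }

        public static void main(String[] args) {
            // Unknown IDs do not throw; they silently resolve to GMT.
            System.out.println(TimeZone.getTimeZone("Not/AZone").getID());  // GMT

            System.out.println(isValidTimeZone("Not/AZone"));            // false
            System.out.println(isValidTimeZone("America/Los_Angeles"));  // true
        }
    }
    ```

    So dropping the checks wouldn't crash anything, but a typo'd time zone would silently behave as GMT, which is arguably worse than failing fast.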


---
