GitHub user HyukjinKwon commented on the issue:

    https://github.com/apache/spark/pull/19607
  
    > Do we need the config "spark.sql.execution.pandas.respectSessionTimeZone"?
    
    I don't think we need it in this case. If this discussion goes on without
reaching a conclusion, we had better open a thread on the mailing list so we can
avoid repeating the same discussion in the future.
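
    By the way, just to make the discussion concrete, here is a rough sketch of
how the flag would be exercised, assuming my understanding of this PR is right
(the time zone and the query below are only examples):

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Render timestamps in the session time zone rather than the JVM default.
    spark.conf.set("spark.sql.session.timeZone", "America/Los_Angeles")

    # The flag under discussion: "true" makes toPandas() honor the session
    # time zone set above; "false" keeps the old behavior based on the
    # system local time zone.
    spark.conf.set("spark.sql.execution.pandas.respectSessionTimeZone", "true")

    df = spark.sql("SELECT timestamp '2017-10-30 00:00:00' AS ts")
    pdf = df.toPandas()  # 'ts' comes back adjusted to America/Los_Angeles
    print(pdf["ts"])
    ```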
    
    > What version of Pandas should we support?
    
    I don't think this blocks this PR, and I don't have a strong opinion on it.
Just FYI, here is some information that might help:
    
    Pandas 0.19.2 was [released on December 24,
2016](https://pandas.pydata.org/pandas-docs/stable/release.html#pandas-0-19-2), and
    Spark 2.1.0 was [released on December 28,
2016](https://spark.apache.org/news/index.html).
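
    If we do settle on a minimum version, a simple runtime guard like the sketch
below could enforce it (the `MINIMUM_PANDAS_VERSION` constant and the `0.19.2`
value here are only illustrative, not a proposal):

    ```python
    from distutils.version import LooseVersion

    import pandas

    # Hypothetical minimum; 0.19.2 is only the candidate mentioned above.
    MINIMUM_PANDAS_VERSION = "0.19.2"

    # Fail fast with a clear message instead of hitting obscure errors later.
    if LooseVersion(pandas.__version__) < LooseVersion(MINIMUM_PANDAS_VERSION):
        raise ImportError(
            "Pandas >= %s must be installed; however, your version is %s."
            % (MINIMUM_PANDAS_VERSION, pandas.__version__))
    ```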

