[GitHub] spark pull request #22891: [SPARK-25881][pyspark] df.toPandas() convert deci...

2018-10-31 Thread 351zyf
Github user 351zyf closed the pull request at: https://github.com/apache/spark/pull/22891 --- - To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org For additional commands, e-mail: reviews-h...@spark.apache.org

[GitHub] spark pull request #22891: SPARK-25881

2018-10-30 Thread 351zyf
GitHub user 351zyf opened a pull request: https://github.com/apache/spark/pull/22891 SPARK-25881 ## What changes were proposed in this pull request? https://github.com/apache/spark/pull/22888 A decimal type should be considered a number, not an object (string

[GitHub] spark pull request #22888: SPARK-25881

2018-10-30 Thread 351zyf
Github user 351zyf closed the pull request at: https://github.com/apache/spark/pull/22888

[GitHub] spark issue #22888: SPARK-25881

2018-10-30 Thread 351zyf
Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/22888 OK

[GitHub] spark issue #22888: SPARK-25881

2018-10-30 Thread 351zyf
Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/22888 > Then, you can convert the type into double or float in the Spark DataFrame. This is easy to work around in either the Pandas DataFrame or Spark's DataFrame. I don't think we should add this f
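The workaround discussed above can be sketched in a few lines. This is a minimal, pandas-only illustration: the hand-built DataFrame stands in for the output of `df.toPandas()`, since the behavior being worked around is simply that decimal columns arrive with dtype `object` (the column name `amount` is made up for the example).

```python
from decimal import Decimal

import pandas as pd

# Stand-in for df.toPandas() output: a decimal column arrives with
# dtype 'object' because each cell holds a decimal.Decimal instance.
pdf = pd.DataFrame({"amount": [Decimal("3.14"), Decimal("2.72")]})
assert pdf["amount"].dtype == object

# Pandas-side workaround: cast the column to float64 after conversion.
pdf["amount"] = pdf["amount"].astype("float64")

# Spark-side alternative (not executed here): cast before converting,
#   df = df.withColumn("amount", df["amount"].cast("double"))
# so that toPandas() already receives doubles.
```

Either direction avoids carrying `object`-dtype decimal columns into downstream numeric pandas code.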

[GitHub] spark issue #22888: SPARK-25881

2018-10-30 Thread 351zyf
Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/22888 And this also has no effect on timestamp values; tested.

[GitHub] spark issue #22888: SPARK-25881

2018-10-30 Thread 351zyf
Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/22888 > I think you can just manually convert from the Pandas DataFrame, no? If I'm using the function toPandas, I don't think converting decimal to object is right. Aren't decimal values usually a va

[GitHub] spark pull request #22888: SPARK-25881

2018-10-30 Thread 351zyf
GitHub user 351zyf opened a pull request: https://github.com/apache/spark/pull/22888 SPARK-25881 add parameter coerce_float https://issues.apache.org/jira/browse/SPARK-25881 ## What changes were proposed in this pull request? when using pyspark dataframe.toPandas
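For context on what the proposed parameter does: `coerce_float` is the documented `pandas.DataFrame.from_records` switch for converting non-string, non-numeric objects such as `decimal.Decimal` to floating point, and (in the non-Arrow path of the Spark versions this PR targets) `toPandas()` builds its result via `from_records`. A minimal sketch of the difference, with made-up column names:

```python
from decimal import Decimal

import pandas as pd

rows = [(Decimal("1.10"), "a"), (Decimal("2.20"), "b")]

# Default: decimal.Decimal cells are kept as Python objects.
pdf_obj = pd.DataFrame.from_records(rows, columns=["price", "label"])
assert pdf_obj["price"].dtype == object

# With coerce_float=True, pandas converts the Decimal column to float64.
pdf_num = pd.DataFrame.from_records(rows, columns=["price", "label"],
                                    coerce_float=True)
assert str(pdf_num["price"].dtype) == "float64"
```

The trade-off the reviewers raise in the thread is that this coercion is lossy (float64 cannot represent every decimal exactly), which is why it is a flag rather than the default.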

[GitHub] spark issue #16485: [SPARK-19099] correct the wrong time display in history ...

2017-01-06 Thread 351zyf
Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/16485 But the time displayed on the history server web UI is not correct. It is 8 hours earlier than the actual time here. Am I using the wrong configuration?

[GitHub] spark pull request #16485: [SPARK-19099] correct the wrong time display in h...

2017-01-05 Thread 351zyf
GitHub user 351zyf opened a pull request: https://github.com/apache/spark/pull/16485 [SPARK-19099] correct the wrong time display in history server web UI JIRA Issue: https://issues.apache.org/jira/browse/SPARK-19099 Correct the wrong job start/end time display in spark
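The "8 hours earlier" symptom reported above is consistent with an epoch timestamp being rendered as UTC wall-clock time for a user in a UTC+8 zone such as Asia/Shanghai. A small sketch of the offset (the timestamp value is arbitrary):

```python
from datetime import datetime, timedelta, timezone

epoch_ms = 0  # arbitrary event time in epoch milliseconds

# The same instant, viewed in UTC and in UTC+8.
utc = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
cst = utc.astimezone(timezone(timedelta(hours=8)))  # UTC+8

# Rendering the UTC wall-clock time for a UTC+8 user makes every
# displayed time appear 8 hours earlier than the local time.
assert (cst.hour - utc.hour) % 24 == 8
```

The fix belongs in the UI's rendering of timestamps (converting to the viewer's zone), not in the stored epoch values themselves, which are zone-independent.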