Github user 351zyf commented on the issue: https://github.com/apache/spark/pull/22888

> Then, you can convert the type into double or floats in Spark DataFrame. This is super easily able to work around at Pandas DataFrame or Spark's DataFrame. I don't think we should add this flag.
>
> BTW, the same feature should be added to when Arrow optimization is enabled as well.

Or can we correct this conversion in the function `dataframe._to_corrected_pandas_type`? Converting the decimal type manually every time does not seem like a good approach.
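For context, the manual workaround being discussed might look like the sketch below. The `price` column and values are made up for illustration: when a Spark `DecimalType` column comes back through `toPandas()` without a built-in coercion, it typically arrives as Python `decimal.Decimal` objects in an `object`-dtype column, which the caller then casts to `float64` by hand (or, on the Spark side, with `col("price").cast("double")` before calling `toPandas()`).

```python
import decimal
import pandas as pd

# Hypothetical result of toPandas() on a DataFrame with a DecimalType column:
# the values are decimal.Decimal objects, so the dtype is 'object', not a float.
pdf = pd.DataFrame({"price": [decimal.Decimal("1.50"), decimal.Decimal("2.25")]})
assert pdf["price"].dtype == object

# The manual conversion the comment calls tedious: cast to float64 by hand.
pdf["price"] = pdf["price"].astype("float64")
```

The suggestion in the comment is that this coercion could instead happen once, inside `dataframe._to_corrected_pandas_type`, rather than in every caller.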