Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20403#discussion_r164288478
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -1043,11 +1043,11 @@ object SQLConf {
     
       val ARROW_EXECUTION_ENABLE =
         buildConf("spark.sql.execution.arrow.enabled")
    -      .internal()
    -      .doc("Make use of Apache Arrow for columnar data transfers. Currently available " +
    -        "for use with pyspark.sql.DataFrame.toPandas with the following data types: " +
    -        "StringType, BinaryType, BooleanType, DoubleType, FloatType, ByteType, IntegerType, " +
    -        "LongType, ShortType")
    +      .doc("When true, make use of Apache Arrow for columnar data transfers. Currently available " +
    +        "for use with pyspark.sql.DataFrame.toPandas, and " +
    +        "pyspark.sql.SparkSession.createDataFrame when its input is a Pandas DataFrame. " +
    +        "The following data types are unsupported: " +
    +        "MapType, ArrayType of TimestampType, and nested StructType.")
           .booleanConf
           .createWithDefault(false)
    --- End diff ---
    
    Yup. Let me 
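    For context, a minimal PySpark sketch of the behavior this flag controls (not part of the diff; it assumes a running SparkSession named `spark` with pandas and pyarrow available):

        import pandas as pd

        # Enable the Arrow-based columnar transfer path described in the doc above
        spark.conf.set("spark.sql.execution.arrow.enabled", "true")

        pdf = pd.DataFrame({"id": [1, 2, 3], "value": [0.1, 0.2, 0.3]})

        # createDataFrame takes the Arrow path when its input is a Pandas DataFrame
        sdf = spark.createDataFrame(pdf)

        # toPandas also goes through Arrow; the types listed as unsupported in
        # the doc (MapType, ArrayType of TimestampType, nested StructType) are
        # not handled by the Arrow path
        result = sdf.toPandas()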



---
