Hyukjin Kwon created SPARK-27834:
------------------------------------

             Summary: Make separate PySpark/SparkR vectorization configurations
                 Key: SPARK-27834
                 URL: https://issues.apache.org/jira/browse/SPARK-27834
             Project: Spark
          Issue Type: Improvement
          Components: PySpark, SparkR, SQL
    Affects Versions: 3.0.0
            Reporter: Hyukjin Kwon


{{spark.sql.execution.arrow.enabled}} was added when PySpark Arrow 
optimization was introduced.
Later, in the current master, SparkR Arrow optimization was added, and it is 
controlled by the same configuration, {{spark.sql.execution.arrow.enabled}}.

There seem to be two issues with this:

1. {{spark.sql.execution.arrow.enabled}} in PySpark has existed since 2.3.0, 
whereas the SparkR optimization is being added in 3.0.0. Their maturity differs, 
so it is problematic if we want to change the default value for only one of the 
two optimizations.

2. Suppose users share one JVM between PySpark and SparkR. They are 
currently forced to enable the optimization for both or for neither.
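To illustrate the second issue, here is a minimal sketch of how separate per-language keys could be resolved, with a fallback to the shared legacy key. The key names {{spark.sql.execution.arrow.pyspark.enabled}} and {{spark.sql.execution.arrow.sparkr.enabled}} are assumptions for illustration, not configurations that exist in the current master:

```python
# Sketch only: resolves a hypothetical language-specific Arrow config,
# falling back to the existing shared key when the specific one is unset.

LEGACY_KEY = "spark.sql.execution.arrow.enabled"

def arrow_enabled(conf: dict, language: str) -> bool:
    """Return whether Arrow optimization is on for 'pyspark' or 'sparkr'."""
    # Hypothetical per-language key, e.g. spark.sql.execution.arrow.pyspark.enabled
    specific_key = f"spark.sql.execution.arrow.{language}.enabled"
    # Prefer the language-specific key; fall back to the shared legacy key.
    value = conf.get(specific_key, conf.get(LEGACY_KEY, "false"))
    return str(value).lower() == "true"

# One shared session config enabling Arrow for PySpark but not SparkR,
# which the single shared key cannot express today.
conf = {
    "spark.sql.execution.arrow.pyspark.enabled": "true",
    "spark.sql.execution.arrow.sparkr.enabled": "false",
}
print(arrow_enabled(conf, "pyspark"))  # True
print(arrow_enabled(conf, "sparkr"))   # False
```

With only the single shared key, the last two calls would necessarily return the same value; separate keys let the two language frontends diverge within one JVM.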




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
