[ https://issues.apache.org/jira/browse/SPARK-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435405#comment-16435405 ]
Naresh Sundararaji commented on SPARK-12635:
--------------------------------------------

I can see the status is "In Progress". Is there a tentative date by which the fix will be released?

> More efficient (column batch) serialization for Python/R
> --------------------------------------------------------
>
>                 Key: SPARK-12635
>                 URL: https://issues.apache.org/jira/browse/SPARK-12635
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark, SparkR, SQL
>            Reporter: Reynold Xin
>            Priority: Major
>
> Serialization between Scala / Python / R is pretty slow. Python and R both
> work pretty well with a column batch interface (e.g. numpy arrays). Technically
> we should be able to just pass column batches around with minimal
> serialization (maybe even zero-copy memory).
>
> Note that this depends on some internal refactoring to use a column batch
> interface in Spark SQL.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)