[ https://issues.apache.org/jira/browse/SPARK-22221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195736#comment-16195736 ]
Li Jin edited comment on SPARK-22221 at 10/7/17 2:49 PM:
---------------------------------------------------------

-I think we should also add to the document what the behavior differences of Arrow vs. non-Arrow serialization are (if any).- (This is in the description already.)

Just as a reminder: in the current state, there are behavior differences for array and struct types between the Arrow and non-Arrow versions.

Array:
{code:java}
non-Arrow:

In [47]: type(df2.toPandas().array[0])
Out[47]: list

Arrow:

In [45]: type(df2.toPandas().array[0])
Out[45]: numpy.ndarray
{code}

Struct:
{code:java}
non-Arrow:

In [35]: type(df.toPandas().struct[0])
Out[35]: pyspark.sql.types.Row

Arrow:

In [37]: type(df.toPandas().struct[0])
Out[37]: dict
{code}

> Add User Documentation for Working with Arrow in Spark
> ------------------------------------------------------
>
>                 Key: SPARK-22221
>                 URL: https://issues.apache.org/jira/browse/SPARK-22221
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark, SQL
>    Affects Versions: 2.3.0
>            Reporter: Bryan Cutler
>
> There needs to be user-facing documentation that shows how to enable/use
> Arrow with Spark, what the user should expect, and describes any differences
> from similar existing functionality.
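Because of the array/struct differences shown in the comment above, downstream pandas code may receive either type depending on whether Arrow is enabled. A minimal sketch of normalizing helpers that work under both settings (the helper names are illustrative, not part of Spark's or pandas' API; the `asDict()` call assumes a `pyspark.sql.types.Row` cell):

```python
import numpy as np


def normalize_array_cell(cell):
    # Arrow-enabled toPandas() yields numpy.ndarray for array columns;
    # the non-Arrow path yields a Python list. Coerce both to list.
    if isinstance(cell, np.ndarray):
        return cell.tolist()
    return list(cell)


def normalize_struct_cell(cell):
    # Arrow-enabled toPandas() yields dict for struct columns; the
    # non-Arrow path yields pyspark.sql.types.Row, which supports
    # asDict(). Coerce both to a plain dict.
    if isinstance(cell, dict):
        return dict(cell)
    return cell.asDict()
```

Code written this way keeps working regardless of which serialization path produced the DataFrame.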
> A comment from Xiao Li on https://github.com/apache/spark/pull/18664
>
> Given that users/applications have Timestamp columns in their Datasets, and
> their processing algorithms also contain code based on the corresponding
> time-zone-related assumptions:
> * For new users/applications: if they first enable Arrow and later hit an
> Arrow bug, can they simply turn off spark.sql.execution.arrow.enable? If not,
> what should they do?
> * For existing users/applications that want to utilize Arrow for better
> performance: can they just turn on spark.sql.execution.arrow.enable? What
> should they do?
>
> Note: Hopefully the guides/solutions are user-friendly. That means they must
> be very simple for most users to understand.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
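As a sketch of the enable/disable path the questions above ask about, the config can be set at session creation or toggled at runtime. This uses the property name as written in this thread (spark.sql.execution.arrow.enable); confirm the exact key against the documentation for your Spark release:

```python
from pyspark.sql import SparkSession

# Opt in to Arrow-based conversion at session creation
# (config key as quoted in this discussion; it may differ by release).
spark = (SparkSession.builder
         .config("spark.sql.execution.arrow.enable", "true")
         .getOrCreate())

# If an Arrow bug is hit, fall back to the non-Arrow path at runtime:
spark.conf.set("spark.sql.execution.arrow.enable", "false")
```

Whether such a runtime fallback is fully equivalent to never enabling Arrow (given the behavior differences above) is exactly what the user documentation needs to spell out.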