[ https://issues.apache.org/jira/browse/SPARK-22221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195750#comment-16195750 ]

Li Jin edited comment on SPARK-22221 at 10/7/17 3:32 PM:
---------------------------------------------------------

Per [~leif]'s comment here about struct types:
https://issues.apache.org/jira/browse/SPARK-21187?focusedCommentId=16098522&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16098522

I think we should seriously consider turning a Spark struct into a pandas 
MultiIndex: they are semantically very similar, and a MultiIndex is much 
faster than, say, a map object in pandas. A sketch of a possible mapping 
follows the examples below.

Pandas:

{code:python}
In [29]: df
Out[29]: 
      name        
     first    last
0  Reynold     Xin
1    Bryan  Cutler

In [28]: df.assign(full_name=df['name']['first'] + df['name']['last'])
Out[28]: 
      name            full_name
     first    last             
0  Reynold     Xin   ReynoldXin
1    Bryan  Cutler  BryanCutler
{code}
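
For reference, a DataFrame with this shape can be constructed directly with a 
column MultiIndex. This is a minimal sketch; the names and values just mirror 
the output above:

{code:python}
import pandas as pd

# A column MultiIndex equivalent to a struct<first, last> named 'name'
columns = pd.MultiIndex.from_tuples([('name', 'first'), ('name', 'last')])
df = pd.DataFrame([['Reynold', 'Xin'], ['Bryan', 'Cutler']], columns=columns)

# df['name']['first'] then selects the nested field, as shown above
{code}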

Spark:
{code:python}
In [27]: result.printSchema()
root
 |-- name: struct (nullable = false)
 |    |-- first: string (nullable = true)
 |    |-- last: string (nullable = true)

In [31]: result.withColumn('full_name', concat(result['name']['first'], 
result['name']['last'])).show()
+--------------+-----------+
|          name|  full_name|
+--------------+-----------+
| [Reynold,Xin]| ReynoldXin|
|[Bryan,Cutler]|BryanCutler|
+--------------+-----------+

{code}
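
To make the proposed mapping concrete, here is a rough sketch of a 
struct-to-MultiIndex conversion. The helper {{struct_to_multiindex}} is 
hypothetical (not an existing Spark or pandas API), and it assumes Spark 2.x 
behavior where {{toPandas()}} returns struct columns as Row objects:

{code:python}
import pandas as pd
from pyspark.sql.types import StructType

def struct_to_multiindex(sdf):
    # Hypothetical helper: collect a Spark DataFrame and expand each
    # struct column into one level of a pandas column MultiIndex.
    pdf = sdf.toPandas()
    columns, series = [], []
    for field in sdf.schema.fields:
        if isinstance(field.dataType, StructType):
            for sub in field.dataType.fields:
                columns.append((field.name, sub.name))
                # toPandas() gives struct values back as Row objects here
                series.append(pdf[field.name].map(lambda r, n=sub.name: r[n]))
        else:
            columns.append((field.name, ''))
            series.append(pdf[field.name])
    out = pd.concat(series, axis=1)
    out.columns = pd.MultiIndex.from_tuples(columns)
    return out

# struct_to_multiindex(result)['name']['first'] would then behave like
# the pandas example above.
{code}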



> Add User Documentation for Working with Arrow in Spark
> ------------------------------------------------------
>
>                 Key: SPARK-22221
>                 URL: https://issues.apache.org/jira/browse/SPARK-22221
>             Project: Spark
>          Issue Type: Sub-task
>          Components: PySpark, SQL
>    Affects Versions: 2.3.0
>            Reporter: Bryan Cutler
>
> There needs to be user-facing documentation that shows how to enable and 
> use Arrow with Spark, what the user should expect, and any differences from 
> similar existing functionality.
> A comment from Xiao Li on https://github.com/apache/spark/pull/18664:
> Suppose users/applications have Timestamp columns in their Datasets, and 
> their processing code depends on the corresponding time-zone related 
> assumptions.
> * New users/applications: they enable Arrow first and later hit an Arrow 
> bug. Can they simply turn off spark.sql.execution.arrow.enable? If not, 
> what should they do?
> * Existing users/applications: they want to use Arrow for better 
> performance. Can they just turn on spark.sql.execution.arrow.enable? What 
> else should they do?
> Note: hopefully the guides/solutions are user-friendly; that is, they 
> should be very simple for most users to understand.
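
For what it's worth, the switch in question is a runtime SQL conf, so 
toggling it is a one-liner. A minimal sketch, assuming an active {{spark}} 
session and using the property name from the PR discussion above (the name 
that finally ships may differ):

{code:python}
# Opt in to the Arrow-based toPandas() path; property name as quoted above.
spark.conf.set("spark.sql.execution.arrow.enable", "true")
pdf_fast = df.toPandas()

# Fall back to the row-based path, e.g. after hitting an Arrow bug.
spark.conf.set("spark.sql.execution.arrow.enable", "false")
pdf_safe = df.toPandas()
{code}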


