Github user shivaram commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14090#discussion_r70920785
  
    --- Diff: docs/sparkr.md ---
    @@ -316,6 +314,139 @@ head(ldf, 3)
     {% endhighlight %}
     </div>
     
    +#### Run a given function on a large dataset, grouping by input column(s), using `gapply` or `gapplyCollect`
    +
    +##### gapply
    +Apply a function to each group of a `SparkDataFrame`. The function takes exactly two
    +parameters: a grouping key and an R `data.frame` corresponding to that key. The groups are
    +determined by the specified column(s) of the `SparkDataFrame`, and the function must return a
    +`data.frame`. A schema specifies the row format of the resulting `SparkDataFrame`: it must
    +describe the R function's output in terms of Spark data types, and the column names of the
    +returned `data.frame` are set by the user. The data type mapping between R and Spark is shown
    +below.
    +
    +#### Data type mapping between R and Spark
    +<table class="table">
    +<tr><th>R</th><th>Spark</th></tr>
    +<tr>
    +  <td>byte</td>
    +  <td>byte</td>
    +</tr>
    +<tr>
    +  <td>integer</td>
    +  <td>integer</td>
    +</tr>
    +<tr>
    +  <td>float</td>
    +  <td>float</td>
    +</tr>
    +<tr>
    +  <td>double</td>
    +  <td>double</td>
    +</tr>
    +<tr>
    +  <td>numeric</td>
    +  <td>double</td>
    +</tr>
    +<tr>
    +  <td>character</td>
    +  <td>string</td>
    +</tr>
    +<tr>
    +  <td>string</td>
    +  <td>string</td>
    +</tr>
    +<tr>
    +  <td>binary</td>
    +  <td>binary</td>
    +</tr>
    +<tr>
    +  <td>raw</td>
    +  <td>binary</td>
    +</tr>
    +<tr>
    +  <td>logical</td>
    +  <td>boolean</td>
    +</tr>
    +<tr>
    +  <td>timestamp</td>
    +  <td>timestamp</td>
    +</tr>
    +<tr>
    +  <td>date</td>
    +  <td>date</td>
    +</tr>
    +<tr>
    +  <td>array</td>
    +  <td>array</td>
    +</tr>
    +<tr>
    +  <td>list</td>
    +  <td>array</td>
    +</tr>
    +<tr>
    +  <td>map</td>
    +  <td>map</td>
    +</tr>
    +<tr>
    +  <td>env</td>
    +  <td>map</td>
    +</tr>
    +<tr>
    +  <td>struct</td>
    --- End diff --
    
    That's a good point - so users can create a schema with `struct`, and that maps to a corresponding SQL type, but they can't create any R objects that will be parsed as `struct`. The main reason our schema is more flexible than our serialization / deserialization support is that the schema can also be used to, say, read JSON files or JDBC tables.
    
    For the use case here, where users are returning a `data.frame` from a UDF, I don't think there is any valid mapping for `struct` from R.
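    For reference, the usage pattern this doc section describes looks roughly like the sketch below (it assumes an active SparkR session; `faithful` is R's built-in dataset and the schema field names are just illustrative):
    
    {% highlight r %}
    # Sketch only -- requires a running SparkR session.
    df <- createDataFrame(faithful)
    
    # The result schema must be expressed in Spark data types (see the table above).
    schema <- structType(structField("waiting", "double"),
                         structField("max_eruption", "double"))
    
    # The UDF receives the grouping key and a plain R data.frame for that group,
    # and must itself return a data.frame matching the schema.
    result <- gapply(df, "waiting",
                     function(key, x) {
                       data.frame(key, max(x$eruptions))
                     },
                     schema)
    head(result)
    {% endhighlight %}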

