Github user sun-rui commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13660#discussion_r67881287
  
    --- Diff: docs/sparkr.md ---
    @@ -262,6 +262,83 @@ head(df)
     {% endhighlight %}
     </div>
     
    +### Applying User-Defined Functions
    +In SparkR, we support several kinds of user-defined functions:
    +
    +#### Run a given function on a large dataset using `dapply` or `dapplyCollect`
    +
    +##### dapply
    +Apply a function to each partition of a `SparkDataFrame`. The function to be applied to each partition
    +should have only one parameter, to which the `data.frame` corresponding to that partition will be passed.
    +The output of the function should be a `data.frame`. `schema` specifies the row format of the resulting
    +`SparkDataFrame`; it must match the R function's output.
    +<div data-lang="r"  markdown="1">
    +{% highlight r %}
    +
    +# Convert waiting time from minutes to seconds.
    +# Note that we can apply a UDF to a DataFrame.
    +schema <- structType(structField("eruptions", "double"), structField("waiting", "double"),
    +                     structField("waiting_secs", "double"))
    +df1 <- dapply(df, function(x) {x <- cbind(x, x$waiting * 60)}, schema)
    +head(collect(df1))
    +##  eruptions waiting waiting_secs
    +##1     3.600      79         4740
    +##2     1.800      54         3240
    +##3     3.333      74         4440
    +##4     2.283      62         3720
    +##5     4.533      85         5100
    +##6     2.883      55         3300
    +{% endhighlight %}
    +</div>
    +
    +##### dapplyCollect
    +Like `dapply`, apply a function to each partition of a `SparkDataFrame` and collect the result back
    +to the driver. The output of the function should be a `data.frame`, but no schema is required to be
    +passed. Note that `dapplyCollect` can be used only if the output of the UDF on all partitions fits
    +in the driver's memory.
    +<div data-lang="r"  markdown="1">
    +{% highlight r %}
    +
    +# Convert waiting time from minutes to seconds.
    +# Note that we can apply a UDF to a DataFrame and return an R data.frame.
    +ldf <- dapplyCollect(
    +         df,
    +         function(x) {
    +           x <- cbind(x, "waiting_secs"=x$waiting * 60)
    +         })
    +head(ldf, 3)
    +##  eruptions waiting waiting_secs
    +##1     3.600      79         4740
    +##2     1.800      54         3240
    +##3     3.333      74         4440
    +
    +{% endhighlight %}
    +</div>
    +
    +#### Run many functions in parallel using `spark.lapply`
    +
    +##### spark.lapply
    --- End diff --
    
    One thought about spark.lapply() is that documenting it here means our commitment to it. This
    is a case demonstrating the need to support Dataset in SparkR. Maybe as a next step we can
    consider replacing RDD with Dataset in SparkR.
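    
    For reference, a minimal sketch of the kind of `spark.lapply` usage being documented (illustrative
    only: the `families` list and `train` function are hypothetical, assuming the Spark 2.0
    `spark.lapply(list, func)` API):
    
    {% highlight r %}
    # Run a local R function over each element of a list in parallel on the
    # cluster, and collect the results back as a local R list. The combined
    # results of all the computations must fit on the driver.
    # (Hypothetical example; not part of the diff above.)
    families <- c("gaussian", "poisson")
    train <- function(family) {
      # Plain R glm() runs on each worker; iris is a base R dataset
      model <- glm(Sepal.Length ~ Sepal.Width + Species, data = iris, family = family)
      summary(model)
    }
    # model.summaries is a local R list with one summary per family
    model.summaries <- spark.lapply(families, train)
    {% endhighlight %}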

