Github user shivaram commented on a diff in the pull request:

    https://github.com/apache/spark/pull/13660#discussion_r67772111
  
    --- Diff: docs/sparkr.md ---
    @@ -262,6 +262,79 @@ head(df)
     {% endhighlight %}
     </div>
     
    +### Applying User-defined Functions
    +In SparkR, we support several kinds of User-defined Functions:
    +
    +#### Run a given function on a large dataset using `dapply` or `dapplyCollect`
    +
    +##### dapply
    +Apply a function to each partition of a `SparkDataFrame`. The function to be applied to each partition
    +should have exactly one parameter, to which a `data.frame` corresponding to that partition will be passed.
    +The output of the function should be a `data.frame`. Schema specifies the row format of the resulting
    +`SparkDataFrame`. It must match the R function's output.
    +<div data-lang="r"  markdown="1">
    +{% highlight r %}
    +
    +# Convert waiting time from minutes to seconds.
    +# Note that we can apply a UDF to the DataFrame.
    +schema <- structType(structField("eruptions", "double"), structField("waiting", "double"),
    +                     structField("waiting_secs", "double"))
    +df1 <- dapply(df, function(x) {x <- cbind(x, x$waiting * 60)}, schema)
    +head(collect(df1))
    +##  eruptions waiting waiting_secs
    +##1     3.600      79         4740
    +##2     1.800      54         3240
    +##3     3.333      74         4440
    +##4     2.283      62         3720
    +##5     4.533      85         5100
    +##6     2.883      55         3300
    +{% endhighlight %}
    +</div>
    +
    +##### dapplyCollect
    +Like `dapply`, apply a function to each partition of a `SparkDataFrame` and collect the result back as a local R `data.frame`.
    +<div data-lang="r"  markdown="1">
    +{% highlight r %}
    +
    +# Convert waiting time from minutes to seconds.
    +# Note that we can apply a UDF to the DataFrame and return an R `data.frame`.
    +ldf <- dapplyCollect(
    +         df,
    +         function(x) {
    +           x <- cbind(x, "waiting_secs"=x$waiting * 60)
    +         })
    +head(ldf, 3)
    +##  eruptions waiting waiting_secs
    +##1     3.600      79         4740
    +##2     1.800      54         3240
    +##3     3.333      74         4440
    +
    +{% endhighlight %}
    +</div>
    +
    +#### Run many functions in parallel using `spark.lapply`
    +
    +##### spark.lapply
    +Similar to `lapply` in native R, `spark.lapply` runs a function over a list of elements and distributes
    +the computations with Spark. It applies a function to the elements of a list in a manner similar to
    +`doParallel` or `lapply`.
    --- End diff ---
    
    Similar to the above, it would be good to add a line here saying that the
    results of all the computations should fit on a single machine -- and that if
    that is not the case, they can do something like `df <- createDataFrame(list)`
    and then use `dapply`. Sketches of both patterns follow below.
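
    As a reference point, a minimal sketch of the `spark.lapply` pattern (assuming
    the `spark.lapply(list, func)` signature; the `families` / `train` names are
    illustrative, not from the PR):

    ```r
    # Train one generalized linear model per family. spark.lapply distributes
    # the calls across the cluster and collects every result back to the
    # driver, so the combined results must fit on a single machine.
    families <- c("gaussian", "poisson")
    train <- function(family) {
      model <- glm(Sepal.Length ~ Sepal.Width + Species, iris, family = family)
      summary(model)
    }
    model.summaries <- spark.lapply(families, train)
    ```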
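
    And a sketch of the suggested fallback when the results do not fit on one
    machine (the column names and schema here are hypothetical):

    ```r
    # Distribute the inputs as a SparkDataFrame instead of a local list, so the
    # results stay distributed instead of being collected to the driver.
    df <- createDataFrame(data.frame(x = 1:100000))
    schema <- structType(structField("x", "integer"), structField("x2", "double"))
    result <- dapply(df, function(p) { cbind(p, p$x ^ 2) }, schema)
    head(result)  # result is still a distributed SparkDataFrame
    ```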

