Github user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17981#discussion_r117601533
  
    --- Diff: R/pkg/R/mllib_tree.R ---
    @@ -499,3 +543,199 @@ setMethod("write.ml", signature(object = "RandomForestClassificationModel", path
               function(object, path, overwrite = FALSE) {
                 write_internal(object, path, overwrite)
               })
    +
    +#' Decision Tree Model for Regression and Classification
    +#'
    +#' \code{spark.decisionTree} fits a Decision Tree Regression model or Classification model on
    +#' a SparkDataFrame. Users can call \code{summary} to get a summary of the fitted Decision Tree
    +#' model, \code{predict} to make predictions on new data, and \code{write.ml}/\code{read.ml} to
    +#' save/load fitted models.
    +#' For more details, see
    +#' \href{http://spark.apache.org/docs/latest/ml-classification-regression.html#decision-tree-regression}{
    +#' Decision Tree Regression} and
    +#' \href{http://spark.apache.org/docs/latest/ml-classification-regression.html#decision-tree-classifier}{
    +#' Decision Tree Classification}
    +#'
    +#' @param data a SparkDataFrame for training.
    +#' @param formula a symbolic description of the model to be fitted. Currently only a few formula
    +#'                operators are supported, including '~', ':', '+', and '-'.
    +#' @param type type of model, one of "regression" or "classification", to fit
    +#' @param maxDepth Maximum depth of the tree (>= 0).
    +#' @param maxBins Maximum number of bins used for discretizing continuous features and for choosing
    +#'                how to split on features at each node. More bins give higher granularity. Must be
    +#'                >= 2 and >= number of categories in any categorical feature.
    +#' @param impurity Criterion used for information gain calculation.
    +#'                 For regression, must be "variance". For classification, must be one of
    +#'                 "entropy" and "gini", default is "gini".
    +#' @param seed integer seed for random number generation.
    +#' @param minInstancesPerNode Minimum number of instances each child must have after split.
    +#' @param minInfoGain Minimum information gain for a split to be considered at a tree node.
    +#' @param checkpointInterval Param for set checkpoint interval (>= 1) or disable checkpoint (-1).
    +#' @param maxMemoryInMB Maximum memory in MB allocated to histogram aggregation.
    +#' @param cacheNodeIds If FALSE, the algorithm will pass trees to executors to match instances with
    +#'                     nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
    +#'                     can speed up training of deeper trees. Users can set how often should the
    +#'                     cache be checkpointed or disable it by setting checkpointInterval.
    --- End diff --
    
    The wording could be improved a bit, I guess, but it matches the Scaladoc...
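    
    As a side note, here is a minimal usage sketch of the API this block documents. It assumes the
    spark.decisionTree signature exactly as written in the roxygen above; the longley data frame
    and the temp file path are just placeholders for illustration:
    
        # A minimal sketch (not part of this patch), assuming the spark.decisionTree
        # signature exactly as documented in the roxygen block above; longley and the
        # temp path are placeholders for illustration only.
        library(SparkR)
        sparkR.session()
    
        # longley is a small numeric dataset shipped with base R, used here as a
        # stand-in training SparkDataFrame.
        df <- createDataFrame(longley)
    
        # Fit a regression tree; "variance" is the only impurity allowed for regression.
        model <- spark.decisionTree(df, Employed ~ ., type = "regression",
                                    maxDepth = 5, maxBins = 16, impurity = "variance")
    
        # Summarize the fitted tree and score the (training) data.
        summary(model)
        head(predict(model, df))
    
        # Persist and reload the model with write.ml / read.ml.
        modelPath <- tempfile(pattern = "spark-decisionTree", fileext = ".tmp")
        write.ml(model, modelPath)
        reloaded <- read.ml(modelPath)
        summary(reloaded)
    
        sparkR.session.stop()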

