Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19643#discussion_r149087406
  
    --- Diff: R/pkg/R/context.R ---
    @@ -319,6 +319,27 @@ spark.addFile <- function(path, recursive = FALSE) {
       invisible(callJMethod(sc, "addFile", suppressWarnings(normalizePath(path)), recursive))
     }
     
    +#' Adds a JAR dependency for Spark tasks to be executed in the future.
    +#'
    +#' The \code{path} passed can be either a local file, a file in HDFS (or other Hadoop-supported
    +#' filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
    +#' If \code{addToCurrentClassLoader} is true, add the jar to the current driver.
    --- End diff --
    
    Yup, probably that's better wording. Let me update it after waiting a bit more for other review comments. @mariusvniekerk, I am okay with closing this one if you happen to have time to proceed with yours now, or I can proceed here. Either way works. Up to you :)
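    
    For context, a minimal usage sketch in SparkR of the API documented in the diff above. The function name spark.addJar and the signature shown here are assumptions inferred from the documentation in this excerpt, not confirmed by it:
    
        # Hypothetical usage sketch; spark.addJar and its defaults are assumed
        # from the roxygen doc in the diff, not confirmed by this excerpt.
        library(SparkR)
        sparkR.session()
    
        # Distribute a jar so that Spark tasks executed in the future can use it.
        spark.addJar("/path/to/my-udfs.jar")
    
        # Additionally load the jar into the current driver's class loader,
        # e.g. so its classes are visible to JVM calls made from the driver.
        spark.addJar("hdfs:///libs/my-udfs.jar", addToCurrentClassLoader = TRUE)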


---
