Github user felixcheung commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19643#discussion_r148710943
  
    --- Diff: R/pkg/R/context.R ---
    @@ -319,6 +319,27 @@ spark.addFile <- function(path, recursive = FALSE) {
       invisible(callJMethod(sc, "addFile", suppressWarnings(normalizePath(path)), recursive))
     }
     
    +#' Adds a JAR dependency for Spark tasks to be executed in the future.
    +#'
    +#' The \code{path} passed can be either a local file, a file in HDFS (or other Hadoop-supported
    +#' filesystems), an HTTP, HTTPS or FTP URI, or local:/path for a file on every worker node.
    +#' If \code{addToCurrentClassLoader} is true, add the jar to the current driver.
    +#'
    +#' @rdname spark.addJar
    +#' @param path The path of the jar to be added
    +#' @param addToCurrentClassLoader Whether to add the jar to the current driver class loader.
    +#' @export
    +#' @examples
    +#'\dontrun{
    +#' spark.addJar("/path/to/something.jar", TRUE)
    +#'}
    +#' @note spark.addJar since 2.3.0
    +spark.addJar <- function(path, addToCurrentClassLoader = FALSE) {
    +  normalizedPath <- suppressWarnings(normalizePath(path))
    --- End diff --
    
    yeah, normalizePath wouldn't handle a URL...
    https://stat.ethz.ch/R-manual/R-devel/library/base/html/normalizePath.html
    
    I think we should require absolute paths in their canonical form here and just pass them through unchanged; a rough sketch of that idea is below.
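
    To make the pass-through idea concrete, here is a minimal sketch (not the PR's code; isURIPath and resolveJarPath are hypothetical helper names) that canonicalizes only plain local filesystem paths and leaves anything carrying a URI scheme untouched:

        # Paths with a URI scheme (hdfs://, http://, https://, ftp://, local:/,
        # file:/) should not go through normalizePath, which only understands
        # filesystem paths.
        isURIPath <- function(path) {
          grepl("^(hdfs|https?|ftp|local|file):", path)
        }

        resolveJarPath <- function(path) {
          if (isURIPath(path)) {
            path                                   # URI: pass through unchanged
          } else {
            suppressWarnings(normalizePath(path))  # local path: canonicalize
          }
        }

    spark.addJar could then call resolveJarPath(path) instead of normalizePath directly, assuming callers supply URIs in their canonical form.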


---
