Github user NarineK commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8920#discussion_r40693133
  
    --- Diff: R/pkg/R/DataFrame.R ---
    @@ -1848,3 +1848,78 @@ setMethod("crosstab",
                 sct <- callJMethod(statFunctions, "crosstab", col1, col2)
                 collect(dataFrame(sct))
               })
    +
    +#' Sort
    +#'
    +#' Sort a DataFrame by the specified column(s).
    +#'
    +#' @param x A DataFrame to be sorted.
    +#' @param col A character column name or a Column object indicating the field to
    +#'           sort on. If the sorting column is a Column object, wrap it with the
    +#'           asc() or desc() function. The 'decreasing' argument applies only to
    +#'           character column names, not to Column objects.
    +#' @param decreasing A logical vector giving the sort order for each sorting column
    +#' @param ... Additional sorting fields
    +#' @return A DataFrame where elements are sorted by input sorting columns.
    +#' @rdname sort
    +#' @name sort
    +#' @aliases orderby
    +#' @export
    +#' @examples
    +#'\dontrun{
    +#' sc <- sparkR.init()
    +#' sqlContext <- sparkRSQL.init(sc)
    +#' path <- "path/to/file.json"
    +#' df <- jsonFile(sqlContext, path)
    +#' sort(df, col="col1")
    +#' sort(df, decreasing=FALSE, "col2")
    +#' sort(df, decreasing=TRUE, "col1")
    +#' sort(df, c(TRUE,FALSE), "col1","col2")
    +#' sort(df, col=list(asc(df$col1), desc(df$col2)))
    +#' sort(df, col=desc(df$col1))
    +#' }
    +setMethod("sort",
    +          signature(x = "DataFrame"),
    +          function(x, decreasing=FALSE, col, ...) {
    +
    +            # all sorting columns
    +            by <- c(col, ...)
    +
    +            if (class(by) == "character") {
    +              if (length(decreasing) == 1) {
    +                # if only one 'decreasing' value is specified, it is recycled
    +                # for all sorting columns
    +                decreasing <- rep(decreasing, length(by))
    +              } else if (length(decreasing) != length(by)) {
    +                stop("Arguments 'col' and 'decreasing' must have the same length")
    +              }
    +
    +              # creates a character vector by mapping TRUE/FALSE to "desc"/"asc"
    +              sortOrder <- ifelse(decreasing == FALSE, "asc", decreasing)
    +              sortOrder <- ifelse(decreasing == TRUE, "desc", sortOrder)
    +
    +              # prefixes each column name with the DataFrame variable,
    +              # example: c("x$Species", "x$Petal_Width")
    +              colDFConcat <- paste("x", by, sep = "$")
    +
    +              # wraps each column in its sort order and joins them into one
    +              # comma-separated string,
    +              # example: "asc ( x$Species ),desc ( x$Petal_Length )"
    +              colDFOrderConcat <- paste(sortOrder, "(", colDFConcat, ")", collapse = ",")
    +
    +              # wraps the ordered columns in a list() expression,
    +              # example: "list( asc ( x$Species ),desc ( x$Petal_Length ) )"
    +              colDFOrderConcatList <- paste("list(", colDFOrderConcat, ")", collapse = "")
    +
    +              # evaluates the expression into a list of Column objects,
    +              # example: [[1]] Column Species ASC
    +              #          [[2]] Column Petal_Length DESC
    +              resCols <- eval(parse(text = colDFOrderConcatList))
    --- End diff --
    
    Hi sun-rui,
    
    thank you for your comments. As I mentioned in my previous notes, I'm not overriding any R functions. We can still call sort(iris) and it will dispatch to the base R function. (But I guess some base R functions are already being overridden by SparkR, e.g. summary ;) )
    We need the sort function because our customers who have worked with R for a long time cannot find the functions and signatures they are used to. Also, arrange doesn't let me specify the sort order when the column names are strings.
    
    Another advantage of having the 'decreasing' argument is that I can write something like this:
    
    sort(df, TRUE, "col1", "col2", "col3", .... "col500")
    
    and it will apply to all columns, so I don't have to manually add a desc or asc prefix to all 500 columns, which is time-consuming.
    
    Thanks for your suggestion about lapply. I'll give it a try.
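    For example, something along these lines might work (just a rough, untested sketch; it assumes a character vector `by` and a recycled logical vector `decreasing` of the same length, as in the diff above, and that `x[[name]]` returns the Column object for a column name):
    
    ```r
    # rough sketch: build the list of ordered Columns with lapply
    # instead of eval(parse(...)); asc()/desc() wrap each Column
    # with an explicit sort order
    resCols <- lapply(seq_along(by), function(i) {
      colObj <- x[[by[[i]]]]
      if (decreasing[[i]]) desc(colObj) else asc(colObj)
    })
    ```
    
    That would avoid the string building and the eval(parse(...)) round trip entirely.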
    
    I also think that sort accepting only character column names would be better. Users can work with arrange if they use Column objects, especially since, as you said, Column objects make the 'decreasing' parameter useless.
    
    Let me know what you think, so that I can proceed.
    
    Thanks,
    Narine

