Github user dilipbiswal commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22455#discussion_r219388335
  
    --- Diff: R/pkg/R/DataFrame.R ---
    @@ -244,11 +245,15 @@ setMethod("showDF",
     #' @note show(SparkDataFrame) since 1.4.0
     setMethod("show", "SparkDataFrame",
               function(object) {
    -            cols <- lapply(dtypes(object), function(l) {
    -              paste(l, collapse = ":")
    -            })
    -            s <- paste(cols, collapse = ", ")
    -            cat(paste(class(object), "[", s, "]\n", sep = ""))
    +            if (identical(sparkR.conf("spark.sql.repl.eagerEval.enabled", "false")[[1]], "true")) {
    --- End diff ---
    
    @adrian555 Thanks for the explanation. 
    > However, my second point is that I don't think these two configs matter much or are that important/necessary. Since eager execution is just meant to show a snippet of the SparkDataFrame's data, our defaults of numRows = 20 and truncate = TRUE are good enough IMO. If users want to see more or fewer rows, they should call showDF().
    
    So I just wanted to check whether it's possible to have parity with how this works in Python. It seems to me that in Python, we just read the two configs and call the showString method.
    
    > And if we think that showDF() can ignore the eager execution setting but still want show() to observe the eager execution config, we can certainly just grab the maxNumRows and truncate settings and pass them to the showDF() call.
    
    What would happen if we grabbed these configs in show() when eager execution is enabled and then called showDF(), passing them as parameters?
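    To make the question concrete, here is a rough sketch of what I have in mind (not the PR's actual code; the "20" defaults are assumptions mirroring the Scala/Python defaults for these configs, and passing an integer truncate assumes showDF() accepts numeric truncation):

        setMethod("show", "SparkDataFrame",
                  function(object) {
                    eagerEval <- sparkR.conf("spark.sql.repl.eagerEval.enabled", "false")[[1]]
                    if (identical(eagerEval, "true")) {
                      # Grab both eager-eval configs and hand them to showDF().
                      numRows <- as.integer(sparkR.conf("spark.sql.repl.eagerEval.maxNumRows", "20")[[1]])
                      truncate <- as.integer(sparkR.conf("spark.sql.repl.eagerEval.truncate", "20")[[1]])
                      showDF(object, numRows = numRows, truncate = truncate)
                    } else {
                      # Existing behavior: print only the schema summary.
                      cols <- lapply(dtypes(object), function(l) paste(l, collapse = ":"))
                      s <- paste(cols, collapse = ", ")
                      cat(paste(class(object), "[", s, "]\n", sep = ""))
                    }
                  })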


