This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-2.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-2.4 by this push:
     new cd6a0c4  [MINOR][DOCS] Fix documentation for slide function
cd6a0c4 is described below

commit cd6a0c4dffc86cf6b666ba5e723630641f1190a5
Author: Boris Boutkov <boris.bout...@gmail.com>
AuthorDate: Mon Dec 16 16:29:09 2019 +0900

    [MINOR][DOCS] Fix documentation for slide function
    
    ### What changes were proposed in this pull request?
    
    This PR proposes to fix documentation for slide function. Fixed the spacing issue and added some parameter related info.
    
    ### Why are the changes needed?
    
    Documentation improvement
    
    ### Does this PR introduce any user-facing change?
    
    No (doc-only change).
    
    ### How was this patch tested?
    
    Manually tested by documentation build.
    
    Closes #26896 from bboutkov/pyspark_doc_fix.
    
    Authored-by: Boris Boutkov <boris.bout...@gmail.com>
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
    (cherry picked from commit 3bf5498b4a58ebf39662ee717d3538af8b838e2c)
    Signed-off-by: HyukjinKwon <gurwls...@apache.org>
---
 R/pkg/R/functions.R                                          | 4 ++--
 python/pyspark/sql/functions.py                              | 5 +++++
 sql/core/src/main/scala/org/apache/spark/sql/functions.scala | 5 +++++
 3 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/R/pkg/R/functions.R b/R/pkg/R/functions.R
index 7137faa..e914dd3 100644
--- a/R/pkg/R/functions.R
+++ b/R/pkg/R/functions.R
@@ -3340,8 +3340,8 @@ setMethod("size",
 #' (array indices start at 1, or from the end if start is negative) with the specified length.
 #'
 #' @rdname column_collection_functions
-#' @param start an index indicating the first element occurring in the result.
-#' @param length a number of consecutive elements chosen to the result.
+#' @param start the starting index
+#' @param length the length of the slice
 #' @aliases slice slice,Column-method
 #' @note slice since 2.4.0
 setMethod("slice",
diff --git a/python/pyspark/sql/functions.py b/python/pyspark/sql/functions.py
index 069354e..b964980 100644
--- a/python/pyspark/sql/functions.py
+++ b/python/pyspark/sql/functions.py
@@ -1908,6 +1908,11 @@ def slice(x, start, length):
     """
     Collection function: returns an array containing  all the elements in `x` from index `start`
     (array indices start at 1, or from the end if `start` is negative) with the specified `length`.
+
+    :param x: the array to be sliced
+    :param start: the starting index
+    :param length: the length of the slice
+
     >>> df = spark.createDataFrame([([1, 2, 3],), ([4, 5],)], ['x'])
     >>> df.select(slice(df.x, 2, 2).alias("sliced")).collect()
     [Row(sliced=[2, 3]), Row(sliced=[5])]
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
index ac34ba6..f059add 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/functions.scala
@@ -3265,6 +3265,11 @@ object functions {
   /**
   * Returns an array containing all the elements in `x` from index `start` (or starting from the
    * end if `start` is negative) with the specified `length`.
+   *
+   * @param x the array column to be sliced
+   * @param start the starting index
+   * @param length the length of the slice
+   *
    * @group collection_funcs
    * @since 2.4.0
    */
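
For reference, a minimal Scala sketch (not part of this commit) showing the documented slice(x, start, length) signature in use; the DataFrame contents and object name below are illustrative, mirroring the PySpark doctest shown in the diff:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.slice

    object SliceExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .master("local[*]")
          .appName("slice-example")
          .getOrCreate()
        import spark.implicits._

        // Two array rows, matching the example data in the PySpark doctest
        val df = Seq(Seq(1, 2, 3), Seq(4, 5)).toDF("x")

        // start = 2 (array indices are 1-based), length = 2
        df.select(slice($"x", 2, 2).as("sliced")).show()
        // expected rows: [2, 3] and [5]

        spark.stop()
      }
    }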


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
