Github user huaxingao commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20400#discussion_r165270774
  
    --- Diff: python/pyspark/sql/window.py ---
    @@ -129,11 +131,34 @@ def rangeBetween(start, end):
             :param end: boundary end, inclusive.
                         The frame is unbounded if this is ``Window.unboundedFollowing``, or
                         any value greater than or equal to min(sys.maxsize, 9223372036854775807).
    +
    +        >>> from pyspark.sql import functions as F, SparkSession, Window
    +        >>> spark = SparkSession.builder.getOrCreate()
    +        >>> df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "a"), (1, "b"), (2, "b"),
    +        ... (3, "b")], ["id", "category"])
    +        >>> window = Window.orderBy("id").partitionBy("category").rangeBetween(F.currentRow(),
    +        ... F.lit(1))
    +        >>> df.withColumn("sum", F.sum("id").over(window)).show()
    +        +---+--------+---+
    +        | id|category|sum|
    +        +---+--------+---+
    +        |  1|       b|  3|
    +        |  2|       b|  5|
    +        |  3|       b|  3|
    +        |  1|       a|  4|
    +        |  1|       a|  4|
    +        |  2|       a|  2|
    +        +---+--------+---+
    +        <BLANKLINE>
    --- End diff ---
    
    It seems to me this <BLANKLINE> is required.
    I will change the rest except this one.
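
    For reference, doctest represents an expected empty output line with the <BLANKLINE> marker, since a genuinely blank line would otherwise end the expected-output block; df.show() prints a trailing newline, so the marker is needed to match it. A minimal standalone sketch (not part of this diff) illustrating the behavior:

        def greet():
            """Print a greeting followed by an empty line.

            >>> greet()
            hello
            <BLANKLINE>
            """
            # print() appends one newline and the literal "\n" adds another,
            # so the captured output ends with a blank line that the expected
            # doctest output has to represent with <BLANKLINE>.
            print("hello\n")

        if __name__ == "__main__":
            import doctest
            doctest.testmod()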

