Repository: spark
Updated Branches:
  refs/heads/master 7a3d0aad2 -> 66738d29c


[SPARK-23069][DOCS][SPARKR] fix R doc for describe missing text

## What changes were proposed in this pull request?

Fix truncated text in the R API docs: escape literal `%` in the roxygen/Rd comments (an unescaped `%` starts an Rd comment, so the rest of the line was dropped from the rendered help), and remove the stray `{-}` after `\item` in the `withWatermark` docs.

## How was this patch tested?

manually

Author: Felix Cheung <felixcheun...@hotmail.com>

Closes #20263 from felixcheung/r23docfix.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/66738d29
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/66738d29
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/66738d29

Branch: refs/heads/master
Commit: 66738d29c59871b29d26fc3756772b95ef536248
Parents: 7a3d0aa
Author: Felix Cheung <felixcheun...@hotmail.com>
Authored: Sun Jan 14 19:43:10 2018 +0900
Committer: hyukjinkwon <gurwls...@gmail.com>
Committed: Sun Jan 14 19:43:10 2018 +0900

----------------------------------------------------------------------
 R/pkg/R/DataFrame.R | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/66738d29/R/pkg/R/DataFrame.R
----------------------------------------------------------------------
diff --git a/R/pkg/R/DataFrame.R b/R/pkg/R/DataFrame.R
index 9956f7e..6caa125 100644
--- a/R/pkg/R/DataFrame.R
+++ b/R/pkg/R/DataFrame.R
@@ -3054,10 +3054,10 @@ setMethod("describe",
 #' \item stddev
 #' \item min
 #' \item max
-#' \item arbitrary approximate percentiles specified as a percentage (eg, "75%")
+#' \item arbitrary approximate percentiles specified as a percentage (eg, "75\%")
 #' }
 #' If no statistics are given, this function computes count, mean, stddev, min,
-#' approximate quartiles (percentiles at 25%, 50%, and 75%), and max.
+#' approximate quartiles (percentiles at 25\%, 50\%, and 75\%), and max.
 #' This function is meant for exploratory data analysis, as we make no guarantee about the
 #' backward compatibility of the schema of the resulting Dataset. If you want to
 #' programmatically compute summary statistics, use the \code{agg} function instead.
@@ -4019,9 +4019,9 @@ setMethod("broadcast",
 #'
 #' Spark will use this watermark for several purposes:
 #' \itemize{
-#'  \item{-} To know when a given time window aggregation can be finalized and thus can be emitted
+#'  \item To know when a given time window aggregation can be finalized and thus can be emitted
 #' when using output modes that do not allow updates.
-#'  \item{-} To minimize the amount of state that we need to keep for on-going aggregations.
+#'  \item To minimize the amount of state that we need to keep for on-going aggregations.
 #' }
 #' The current watermark is computed by looking at the \code{MAX(eventTime)} seen across
 #' all of the partitions in the query minus a user specified \code{delayThreshold}. Due to the cost
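
For context, a minimal SparkR sketch of the summary statistics described by the first hunk; the session setup and the use of the built-in faithful dataset are illustrative assumptions, not part of the patch.

library(SparkR)
sparkR.session(appName = "summary-percentiles-example")

df <- createDataFrame(faithful)

# With no statistics given: count, mean, stddev, min,
# approximate quartiles (25%, 50%, 75%), and max.
head(summary(df))

# Arbitrary approximate percentiles can be requested as percentage strings.
head(summary(df, "min", "25%", "75%", "max"))

sparkR.session.stop()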


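Similarly, a hedged sketch of the watermark behavior the second hunk documents; the socket source, the includeTimestamp option, and the window and column names are assumptions for illustration only.

library(SparkR)
sparkR.session(appName = "watermark-example")

# Streaming lines tagged with an event-time column named "timestamp".
events <- read.stream("socket", host = "localhost", port = 9999, includeTimestamp = "true")

# Events up to 10 minutes late may still update their window; older state can be
# dropped, and a finalized window can be emitted in append mode.
withEventTime <- withWatermark(events, "timestamp", "10 minutes")
counts <- count(groupBy(withEventTime, window(withEventTime$timestamp, "5 minutes")))

query <- write.stream(counts, "console", outputMode = "append")
awaitTermination(query)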