[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/spark/pull/18366


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-27 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r124453861
  
--- Diff: R/pkg/R/functions.R ---
@@ -90,8 +90,11 @@ NULL
 #'
 #' String functions defined for \code{Column}.
 #'
-#' @param x Column to compute on. In \code{instr}, it is the substring to check. In \code{format_number},
-#'  it is the number of decimal place to format to.
+#' @param x Column to compute on except in the following methods:
+#'  \itemize{
+#'  \item \code{instr}: class "character", the substring to check.
--- End diff --

the form `class "character"` might not be commonly used around here.
How about, as in
https://stat.ethz.ch/R-manual/R-devel/library/utils/html/read.table.html, just
saying
```
instr: character, the substring to check.
```





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-26 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123934637
  
--- Diff: R/pkg/R/functions.R ---
@@ -1503,18 +1491,12 @@ setMethod("skewness",
 column(jc)
   })
 
-#' soundex
-#'
-#' Return the soundex code for the specified expression.
-#'
-#' @param x Column to compute on.
+#' @details
+#' \code{soundex}: Returns the soundex code for the specified expression.
 #'
-#' @rdname soundex
-#' @name soundex
-#' @family string functions
-#' @aliases soundex,Column-method
+#' @rdname column_string_functions
+#' @aliases soundex soundex,Column-method
 #' @export
-#' @examples \dontrun{soundex(df$c)}
--- End diff --

Thanks for catching this. Updated with an example.
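For readers following the thread, `soundex` refers to the standard American Soundex phonetic code (first letter plus three digits). A plain-Python sketch of that algorithm, illustrative only and not the SparkR implementation; it assumes a non-empty alphabetic input:

```python
def soundex(name: str) -> str:
    """American Soundex: first letter + 3 digits, zero-padded."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    name = name.lower()
    first = name[0].upper()
    digits = []
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":
            continue  # h/w do not separate letters with the same code
        code = codes.get(ch, "")   # vowels map to "" and reset prev
        if code and code != prev:
            digits.append(code)
        prev = code
    return (first + "".join(digits) + "000")[:4]

print(soundex("Robert"))   # R163
```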





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-26 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123934615
  
--- Diff: R/pkg/R/functions.R ---
@@ -635,20 +652,16 @@ setMethod("dayofyear",
 column(jc)
   })
 
-#' decode
-#'
-#' Computes the first argument into a string from a binary using the provided character set
-#' (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+#' @details
+#' \code{decode}: Computes the first argument into a string from a binary using the provided
+#' character set.
 #'
-#' @param x Column to compute on.
-#' @param charset Character set to use
+#' @param charset Character set to use (one of "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE",
+#'"UTF-16LE", "UTF-16").
--- End diff --

Would leave it as is. 





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-25 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123925897
  
--- Diff: R/pkg/R/functions.R ---
@@ -635,20 +652,16 @@ setMethod("dayofyear",
 column(jc)
   })
 
-#' decode
-#'
-#' Computes the first argument into a string from a binary using the provided character set
-#' (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+#' @details
+#' \code{decode}: Computes the first argument into a string from a binary using the provided
+#' character set.
 #'
-#' @param x Column to compute on.
-#' @param charset Character set to use
+#' @param charset Character set to use (one of "US-ASCII", "ISO-8859-1", "UTF-8", "UTF-16BE",
+#'"UTF-16LE", "UTF-16").
--- End diff --

Not a big deal, as they contain the same information - so this is just a weak
opinion: IMHO it'd be nicer if we matched this to Scala/Python too, or just left it
as is. It's also fine to me as is.





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-25 Thread HyukjinKwon
Github user HyukjinKwon commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123926400
  
--- Diff: R/pkg/R/functions.R ---
@@ -1503,18 +1491,12 @@ setMethod("skewness",
 column(jc)
   })
 
-#' soundex
-#'
-#' Return the soundex code for the specified expression.
-#'
-#' @param x Column to compute on.
+#' @details
+#' \code{soundex}: Returns the soundex code for the specified expression.
 #'
-#' @rdname soundex
-#' @name soundex
-#' @family string functions
-#' @aliases soundex,Column-method
+#' @rdname column_string_functions
+#' @aliases soundex soundex,Column-method
 #' @export
-#' @examples \dontrun{soundex(df$c)}
--- End diff --

It looks like this example was dropped.





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-25 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123891161
  
--- Diff: R/pkg/R/functions.R ---
@@ -86,6 +86,22 @@ NULL
 #' df <- createDataFrame(data.frame(time = as.POSIXct(dts), y = y))}
 NULL
 
+#' String functions for Column operations
+#'
+#' String functions defined for \code{Column}.
+#'
+#' @param x Column to compute on. In \code{instr}, it is the substring to check. In \code{format_number},
--- End diff --

Hmm, I see. I think we need to be clearer here, since the columns carry meaning
and their order is significant. For instance,

http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.functions$@instr(str:org.apache.spark.sql.Column,substring:String):org.apache.spark.sql.Column
```
instr(str: Column, substring: String)
```
the 2nd argument is the substring to look for. Maybe it's confusing to list
column x first?

I wish there were a way to rename a parameter in a backward-compatible way, but
I'm not sure there is. Perhaps
```
setMethod("instr", signature(y = "Column", substring = "character", x = "character"),
```
but that sort of breaks the generic. Maybe
```
setMethod("instr", signature(y = "Column", x = "character"),
setMethod("instr", signature(y = "Column", substring = "character"),
```
?

Or maybe we should just give instr and format_number their own Rd?

What do you think?
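To make the argument-order concern above concrete: Spark's `instr(str, substring)` returns the 1-based position of the first occurrence of the substring, or 0 when it is absent. A plain-Python model of that behavior (a sketch of the semantics, not the SparkR implementation):

```python
def instr(s: str, substring: str) -> int:
    """1-based position of the first occurrence of substring in s; 0 if absent."""
    # str.find is 0-based and returns -1 when not found,
    # so adding 1 yields exactly the Spark-style result.
    return s.find(substring) + 1

print(instr("hello world", "world"))  # 7
print(instr("hello world", "xyz"))    # 0
```

The string being searched comes first and the substring second, which is why documenting `x` (the substring in SparkR's `instr(y, x)`) ahead of the column can read confusingly.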





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123757421
  
--- Diff: R/pkg/R/functions.R ---
@@ -635,20 +651,16 @@ setMethod("dayofyear",
 column(jc)
   })
 
-#' decode
-#'
-#' Computes the first argument into a string from a binary using the provided character set
-#' (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+#' @details
+#' \code{decode}: Computes the first argument into a string from a binary using the provided
+#' character set.
 #'
-#' @param x Column to compute on.
-#' @param charset Character set to use
+#' @param charset Character set to use (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE',
+#''UTF-16LE', 'UTF-16').
--- End diff --

Done.





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123758738
  
--- Diff: R/pkg/R/functions.R ---
@@ -833,21 +838,21 @@ setMethod("hour",
 column(jc)
   })
 
-#' initcap
-#'
-#' Returns a new string column by converting the first letter of each word to uppercase.
-#' Words are delimited by whitespace.
-#'
-#' For example, "hello world" will become "Hello World".
-#'
-#' @param x Column to compute on.
+#' @details
+#' \code{initcap}: Returns a new string column by converting the first letter of
+#' each word to uppercase. Words are delimited by whitespace. For example, "hello world"
+#' will become "Hello World".
 #'
-#' @rdname initcap
-#' @name initcap
-#' @family string functions
-#' @aliases initcap,Column-method
+#' @rdname column_string_functions
+#' @aliases initcap initcap,Column-method
 #' @export
-#' @examples \dontrun{initcap(df$c)}
+#' @examples
+#'
+#' \dontrun{
+#' tmp <- mutate(df, SexLower = lower(df$Sex), AgeUpper = upper(df$age))
+#' head(tmp)
+#' tmp2 <- mutate(tmp, s1 = initcap(tmp$SexLower), s2 = reverse(df$Sex))
--- End diff --

Great catch, thanks. Added an example with multiple words:
`initcap(concat_ws(" ", lower(df$sex), lower(df$age)))`
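A multi-word input is needed because `initcap` only shows its effect across word boundaries: it uppercases the first letter of each whitespace-delimited word and lowercases the rest. Roughly, in plain Python (a sketch of the semantics, not SparkR code; it splits on single spaces for simplicity):

```python
def initcap(s: str) -> str:
    """Capitalize the first letter of each space-delimited word, lowercase the rest."""
    return " ".join(w[:1].upper() + w[1:].lower() for w in s.split(" "))

print(initcap("hello world"))  # Hello World
print(initcap("spark SQL"))    # Spark Sql
```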





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123760642
  
--- Diff: R/pkg/R/functions.R ---
@@ -2700,19 +2656,14 @@ setMethod("expr", signature(x = "character"),
 column(jc)
   })
 
-#' format_string
-#'
-#' Formats the arguments in printf-style and returns the result as a string column.
+#' @details
+#' \code{format_string}: Formats the arguments in printf-style and returns the result
+#' as a string column.
 #'
 #' @param format a character object of format strings.
-#' @param x a Column.
-#' @param ... additional Column(s).
-#' @family string functions
-#' @rdname format_string
-#' @name format_string
-#' @aliases format_string,character,Column-method
+#' @rdname column_string_functions
+#' @aliases format_string format_string,character,Column-method
 #' @export
-#' @examples \dontrun{format_string('%d %s', df$a, df$b)}
--- End diff --

Added this back in the example for `format_number`.
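For reference, `format_string` applies a printf-style template to each row's column values. A plain-Python model of the per-row behavior (illustrative only; the list-of-values framing is an assumption standing in for Spark columns):

```python
def format_string(fmt: str, *cols):
    """Apply a printf-style format element-wise, one result per row."""
    # zip pairs up the i-th value of every input "column".
    return [fmt % vals for vals in zip(*cols)]

print(format_string("%d %s", [1, 2], ["apple", "banana"]))
# ['1 apple', '2 banana']
```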





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123760974
  
--- Diff: R/pkg/R/functions.R ---
@@ -2976,19 +2918,12 @@ setMethod("regexp_replace",
 column(jc)
   })
 
-#' rpad
-#'
-#' Right-padded with pad to a length of len.
+#' @details
+#' \code{rpad}: Right-padded with pad to a length of len.
 #'
-#' @param x the string Column to be right-padded.
-#' @param len maximum length of each output result.
-#' @param pad a character string to be padded with.
--- End diff --

Yes, they are duplicates of L2798.
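For readers following along, the three parameters in question correspond to Spark's `rpad(x, len, pad)`, which right-pads to the target length and truncates strings that are already longer. A plain-Python sketch of those semantics (not SparkR code):

```python
def rpad(s: str, length: int, pad: str) -> str:
    """Right-pad s with pad (repeated as needed) to exactly `length` characters;
    strings longer than `length` are truncated."""
    if len(s) >= length:
        return s[:length]
    return (s + pad * length)[:length]

print(rpad("hi", 5, "*"))           # hi***
print(rpad("hello world", 5, "*"))  # hello
```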





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread actuaryzhang
Github user actuaryzhang commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123756917
  
--- Diff: R/pkg/R/functions.R ---
@@ -86,6 +86,22 @@ NULL
 #' df <- createDataFrame(data.frame(time = as.POSIXct(dts), y = y))}
 NULL
 
+#' String functions for Column operations
+#'
+#' String functions defined for \code{Column}.
+#'
+#' @param x Column to compute on. In \code{instr}, it is the substring to check. In \code{format_number},
--- End diff --

In the string functions, `instr` and `format_number` are the only two methods
that have the `(y, x)` signature. And yes, there was doc on the `y` parameter
further down; I've now moved it up and documented it here.





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123688817
  
--- Diff: R/pkg/R/functions.R ---
@@ -833,21 +838,21 @@ setMethod("hour",
 column(jc)
   })
 
-#' initcap
-#'
-#' Returns a new string column by converting the first letter of each word to uppercase.
-#' Words are delimited by whitespace.
-#'
-#' For example, "hello world" will become "Hello World".
-#'
-#' @param x Column to compute on.
+#' @details
+#' \code{initcap}: Returns a new string column by converting the first letter of
+#' each word to uppercase. Words are delimited by whitespace. For example, "hello world"
+#' will become "Hello World".
 #'
-#' @rdname initcap
-#' @name initcap
-#' @family string functions
-#' @aliases initcap,Column-method
+#' @rdname column_string_functions
+#' @aliases initcap initcap,Column-method
 #' @export
-#' @examples \dontrun{initcap(df$c)}
+#' @examples
+#'
+#' \dontrun{
+#' tmp <- mutate(df, SexLower = lower(df$Sex), AgeUpper = upper(df$age))
+#' head(tmp)
+#' tmp2 <- mutate(tmp, s1 = initcap(tmp$SexLower), s2 = reverse(df$Sex))
--- End diff --

Since `SexLower` is one word, I think? Is there another column with multiple
words?





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123688686
  
--- Diff: R/pkg/R/functions.R ---
@@ -635,20 +651,16 @@ setMethod("dayofyear",
 column(jc)
   })
 
-#' decode
-#'
-#' Computes the first argument into a string from a binary using the provided character set
-#' (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+#' @details
+#' \code{decode}: Computes the first argument into a string from a binary using the provided
+#' character set.
 #'
-#' @param x Column to compute on.
-#' @param charset Character set to use
+#' @param charset Character set to use (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE',
+#''UTF-16LE', 'UTF-16').
--- End diff --

Actually, it's in the example at L284 too.





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123690298
  
--- Diff: R/pkg/R/functions.R ---
@@ -2976,19 +2918,12 @@ setMethod("regexp_replace",
 column(jc)
   })
 
-#' rpad
-#'
-#' Right-padded with pad to a length of len.
+#' @details
+#' \code{rpad}: Right-padded with pad to a length of len.
 #'
-#' @param x the string Column to be right-padded.
-#' @param len maximum length of each output result.
-#' @param pad a character string to be padded with.
--- End diff --

Are these removed somehow?
```
#' @param x the string Column to be right-padded.
#' @param len maximum length of each output result.
#' @param pad a character string to be padded with.
```





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123688605
  
--- Diff: R/pkg/R/functions.R ---
@@ -635,20 +651,16 @@ setMethod("dayofyear",
 column(jc)
   })
 
-#' decode
-#'
-#' Computes the first argument into a string from a binary using the provided character set
-#' (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE', 'UTF-16LE', 'UTF-16').
+#' @details
+#' \code{decode}: Computes the first argument into a string from a binary using the provided
+#' character set.
 #'
-#' @param x Column to compute on.
-#' @param charset Character set to use
+#' @param charset Character set to use (one of 'US-ASCII', 'ISO-8859-1', 'UTF-8', 'UTF-16BE',
+#''UTF-16LE', 'UTF-16').
--- End diff --

Very nit: could you change `'` to `"` for consistency with the other strings?





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r12369
  
--- Diff: R/pkg/R/functions.R ---
@@ -2700,19 +2656,14 @@ setMethod("expr", signature(x = "character"),
 column(jc)
   })
 
-#' format_string
-#'
-#' Formats the arguments in printf-style and returns the result as a string column.
+#' @details
+#' \code{format_string}: Formats the arguments in printf-style and returns the result
+#' as a string column.
 #'
 #' @param format a character object of format strings.
-#' @param x a Column.
-#' @param ... additional Column(s).
-#' @family string functions
-#' @rdname format_string
-#' @name format_string
-#' @aliases format_string,character,Column-method
+#' @rdname column_string_functions
+#' @aliases format_string format_string,character,Column-method
 #' @export
-#' @examples \dontrun{format_string('%d %s', df$a, df$b)}
--- End diff --

can you add back this example?





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-23 Thread felixcheung
Github user felixcheung commented on a diff in the pull request:

https://github.com/apache/spark/pull/18366#discussion_r123690523
  
--- Diff: R/pkg/R/functions.R ---
@@ -86,6 +86,22 @@ NULL
 #' df <- createDataFrame(data.frame(time = as.POSIXct(dts), y = y))}
 NULL
 
+#' String functions for Column operations
+#'
+#' String functions defined for \code{Column}.
+#'
+#' @param x Column to compute on. In \code{instr}, it is the substring to check. In \code{format_number},
--- End diff --

`instr` seems to be the odd one out here: it's `instr(y, x)`. Should there be
doc on `y` first/too?





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-22 Thread actuaryzhang
Github user actuaryzhang closed the pull request at:

https://github.com/apache/spark/pull/18366





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-22 Thread actuaryzhang
GitHub user actuaryzhang reopened a pull request:

https://github.com/apache/spark/pull/18366

[SPARK-20889][SparkR] Grouped documentation for STRING column methods

## What changes were proposed in this pull request?

Grouped documentation for string column methods.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/actuaryzhang/spark sparkRDocString

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18366


commit 524c84aba5eeefddb2d139be76924a4cc88ca8de
Author: actuaryzhang 
Date:   2017-06-20T06:28:42Z

update doc for string functions

commit 516a5536eb4b06c0faa8b6f47ca4ee0e36f0699e
Author: actuaryzhang 
Date:   2017-06-20T07:42:35Z

add examples

commit a1de1c0ce0b1e324b9e84d4bf32f16a3ff18425c
Author: actuaryzhang 
Date:   2017-06-20T17:12:32Z

add more examples

commit 4c0e112c0b27f7ba635a4366e0575bce846a1b15
Author: actuaryzhang 
Date:   2017-06-22T06:05:05Z

fix example style issue

commit dd707d89bc08301c038562f9c1ebf2d3032ee0d4
Author: Wayne Zhang 
Date:   2017-06-22T17:47:01Z

Merge branch 'master' into sparkRDocString







[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-22 Thread actuaryzhang
GitHub user actuaryzhang reopened a pull request:

https://github.com/apache/spark/pull/18366

[SPARK-20889][SparkR] Grouped documentation for STRING column methods

## What changes were proposed in this pull request?

Grouped documentation for string column methods.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/actuaryzhang/spark sparkRDocString

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18366


commit 524c84aba5eeefddb2d139be76924a4cc88ca8de
Author: actuaryzhang 
Date:   2017-06-20T06:28:42Z

update doc for string functions

commit 516a5536eb4b06c0faa8b6f47ca4ee0e36f0699e
Author: actuaryzhang 
Date:   2017-06-20T07:42:35Z

add examples

commit a1de1c0ce0b1e324b9e84d4bf32f16a3ff18425c
Author: actuaryzhang 
Date:   2017-06-20T17:12:32Z

add more examples

commit 4c0e112c0b27f7ba635a4366e0575bce846a1b15
Author: actuaryzhang 
Date:   2017-06-22T06:05:05Z

fix example style issue







[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-22 Thread actuaryzhang
Github user actuaryzhang closed the pull request at:

https://github.com/apache/spark/pull/18366





[GitHub] spark pull request #18366: [SPARK-20889][SparkR] Grouped documentation for S...

2017-06-20 Thread actuaryzhang
GitHub user actuaryzhang opened a pull request:

https://github.com/apache/spark/pull/18366

[SPARK-20889][SparkR] Grouped documentation for STRING column methods

## What changes were proposed in this pull request?

Grouped documentation for string column methods.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/actuaryzhang/spark sparkRDocString

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/18366.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #18366


commit 524c84aba5eeefddb2d139be76924a4cc88ca8de
Author: actuaryzhang 
Date:   2017-06-20T06:28:42Z

update doc for string functions

commit 516a5536eb4b06c0faa8b6f47ca4ee0e36f0699e
Author: actuaryzhang 
Date:   2017-06-20T07:42:35Z

add examples

commit d2c5b8d6993e9292020d19e95b555f1988a1efc4
Author: actuaryzhang 
Date:   2017-06-20T17:12:32Z

add more examples



