Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21427#discussion_r191040210
--- Diff: python/pyspark/sql/tests.py ---
@@ -4931,6 +4931,63 @@ def foo3(key, pdf):
expected4 = udf3.func((), pdf
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21402
while we are here, could we also add or at least propose some metrics
around this, such as the number of open block failures, or even the number
of block threads?
we have suffered a lot from
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2975
hmmm - agreed maybe shade is the way to go
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21368#discussion_r189781578
--- Diff: python/pyspark/sql/session.py ---
@@ -547,6 +547,40 @@ def _create_from_pandas_with_arrow(self, pdf, schema,
timezone):
df
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21368#discussion_r189781240
--- Diff:
repl/scala-2.12/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -37,7 +37,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21382
@susanxhuynh
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21067
Jenkins, ok to test
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21342#discussion_r189471396
--- Diff:
core/src/main/java/org/apache/spark/memory/SparkOutOfMemoryException.java ---
@@ -0,0 +1,38 @@
+/*
+ * Licensed to the Apache
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21294
looked into it a bit, not sure why..
- no related changes to testthat
- nothing in the R releases around this
- no config about default type of NA etc.
- tested this out
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r189447446
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -78,6 +78,12 @@ def __init__(self, jdf, sql_ctx):
self.is_cached = False
self
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r189447423
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -3056,7 +3059,6 @@ class Dataset[T] private[sql](
* view, e.g
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r189447465
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -237,9 +236,13 @@ class Dataset[T] private[sql](
* @param truncate
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21370#discussion_r189447477
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -237,9 +236,13 @@ class Dataset[T] private[sql](
* @param truncate
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21370
we will wait for the tests to be fixed first.
@xuanyuanking could you update the PR description to clarify which is
"before" which
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21370
@HyukjinKwon @holdenk @ueshin
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21374
Jenkins, retest this please
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21368#discussion_r189427017
--- Diff:
repl/scala-2.11/src/main/scala/org/apache/spark/repl/SparkILoop.scala ---
@@ -44,7 +44,14 @@ class SparkILoop(in0: Option[BufferedReader
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21294
hmm, not sure yet, will need to look into it
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21294
What are the versions of R you are running with?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21288#discussion_r189175140
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/FilterPushdownBenchmark.scala ---
@@ -32,14 +32,14 @@ import org.apache.spark.util.{Benchmark
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21349
Jenkins, retest this please
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21350#discussion_r188846130
--- Diff: python/pyspark/cloudpickle.py ---
@@ -801,10 +801,10 @@ def save_ellipsis(self, obj):
def save_not_implemented(self, obj
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2969
can you update the PR description, include link to JIRA
https://issues.apache.org/jira/browse/ZEPPELIN/ etc.?
---
Github user felixcheung closed the pull request at:
https://github.com/apache/spark/pull/21325
---
Closes #21325 from felixcheung/rlintfix22.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8c223b65
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/8c223b65
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/8c223b65
Bra
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21325
also need this in branch-2.1
---
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/21325
[R] backport lint fix
## What changes were proposed in this pull request?
backport part of the commit that addresses lintr issue
You can merge this pull request into a Git repository
## How was this patch tested?
manual test, unit tests
Author: Felix Cheung <felixcheun...@hotmail.com>
Closes #21315 from felixcheung/googvis.
(cherry picked from commit 9059f1ee6ae13c8636c9b7fdbb708a349256fb8e)
Signed-off-by: Felix Cheung <felixche...@apache.org>
Project: http://git-wip-us.apache.org/repos
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21315
merged to master/2.3
---
## How was this patch tested?
manual test, unit tests
Author: Felix Cheung <felixcheun...@hotmail.com>
Closes #21315 from felixcheung/googvis.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9059f1ee
Tree: http://git-wip-us.apache.org/repos/asf/s
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187817762
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21313
sure, let's try 4-5
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187816505
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21315
@shivaram
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187815502
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/21315
[SPARK-23780][R] Failed to use googleVis library with new SparkR
## What changes were proposed in this pull request?
change generic to get it to work with googleVis
also fix lintr
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21315
the main change is
https://github.com/apache/spark/compare/master...felixcheung:googvis?expand=1#diff-8e3d61ff66c9ffcd6ffb7a8eedc08409
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21314
@vanzin FYI - need to fix this before release
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21314
@shivaram
---
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/21314
[SPARK-24263] SparkR java check breaks with openjdk
## What changes were proposed in this pull request?
Change text to grep for.
## How was this patch tested?
manual
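The check that SPARK-24263 fixes greps the `java -version` banner, whose prefix differs by vendor: Oracle JDK prints `java version "1.8.0_171"` while OpenJDK prints `openjdk version "1.8.0_171"`, so a check anchored on the literal prefix `java version` breaks. The actual fix is R code in R/pkg/R/client.R; as a hedged illustration only, vendor-neutral parsing can be sketched in Python:

```python
import re

# Illustrative sketch only; SparkR's real check is R code in R/pkg/R/client.R.
# Oracle JDK banner:  java version "1.8.0_171"
# OpenJDK banner:     openjdk version "1.8.0_171"
# Matching on `version "` rather than `java version "` covers both vendors.
VERSION_RE = re.compile(r'version "(\d+)(?:\.(\d+))?')

def parse_java_major(banner_line):
    """Return the major Java version from a `java -version` banner line."""
    m = VERSION_RE.search(banner_line)
    if m is None:
        raise ValueError("Java version check failed: " + banner_line)
    first, second = int(m.group(1)), m.group(2)
    # Banners before Java 9 use the legacy "1.x" scheme (1.8 means Java 8).
    return int(second) if first == 1 and second else first
```

For example, `parse_java_major('openjdk version "1.8.0_171"')` and `parse_java_major('java version "1.8.0_171"')` both return 8, while `parse_java_major('java version "10.0.1"')` returns 10.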
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21313#discussion_r187810848
--- Diff: R/pkg/R/functions.R ---
@@ -3006,6 +3008,28 @@ setMethod("array_contains",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21307#discussion_r187788406
--- Diff: R/pkg/R/functions.R ---
@@ -2055,20 +2058,10 @@ setMethod("countDistinct",
#' @details
#' \code{concat}: Concatenate
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21288#discussion_r187764083
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/FilterPushdownBenchmark.scala ---
@@ -32,14 +32,14 @@ import org.apache.spark.util.{Benchmark
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21298#discussion_r187763906
--- Diff: R/pkg/R/functions.R ---
@@ -3138,6 +3139,23 @@ setMethod("size",
column(jc)
})
+#
Repository: zeppelin
Updated Branches:
refs/heads/branch-0.8 de1a25c73 -> 1b4d37639
[Zeppelin 3388] Correcting documentation link to zeppelin-context documentation
### What is this PR for?
A small change was needed in the documentation of the JDBC interpreter also,
but had been missed.
The
Repository: zeppelin
Updated Branches:
refs/heads/master 3712ce697 -> 941ccbbbd
[Zeppelin 3388] Correcting documentation link to zeppelin-context documentation
### What is this PR for?
A small change was needed in the documentation of the JDBC interpreter also,
but had been missed.
The
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2960
merged to master and 0.8
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187514621
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21298#discussion_r187514450
--- Diff: R/pkg/R/functions.R ---
@@ -3124,6 +3125,23 @@ setMethod("size",
column(jc)
})
+#
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2960
@zjffdu can I merge this to branch-0.8?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21294#discussion_r187470081
--- Diff: R/pkg/R/functions.R ---
@@ -208,6 +208,7 @@ NULL
#' head(select(tmp, array_contains(tmp$v1, 21), size(tmp$v1)))
#' head(select(tmp
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187469392
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187469216
--- Diff: R/pkg/DESCRIPTION ---
@@ -13,6 +13,7 @@ Authors@R: c(person("Shivaram", "Venkataraman", role =
c("aut", "
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21255
Yes
From: Hyukjin Kwon <notificati...@github.com>
Sent: Thursday, May 10, 2018 9:16:28 AM
To: apache/spark
Cc: Felix Cheung; M
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187238820
--- Diff: R/pkg/DESCRIPTION ---
@@ -13,6 +13,7 @@ Authors@R: c(person("Shivaram", "Venkataraman", role =
c("aut", "
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r187238518
--- Diff: R/pkg/R/functions.R ---
@@ -219,7 +219,8 @@ NULL
#' head(select(tmp3, map_values(tmp3$v3)))
#' head(select(tmp3, element_at(tmp3$v3
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187237615
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187237024
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
nice I like it... they also say
```
When specifying a minimum Java version please use the official version
names, which are (confusingly)
1.1 1.2 1.3 1.4 5.0 6 7 8 9 10
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
it fails with
```
Quitting from lines 65-67 (sparkr-vignettes.Rmd)
Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
Java version check failed. Please
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
I see yes, maybe grep for java version
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
```
Quitting from lines 65-67 (sparkr-vignettes.Rmd)
Error: processing vignette 'sparkr-vignettes.Rmd' failed with diagnostics:
Java version check failed
Execution halted
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186942863
--- Diff: R/pkg/R/functions.R ---
@@ -2047,17 +2049,15 @@ setMethod("countDistinct",
#' \code{concat}: Concatenates multiple input column
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186942776
--- Diff: R/pkg/R/functions.R ---
@@ -2047,17 +2049,15 @@ setMethod("countDistinct",
#' \code{concat}: Concatenates multiple input column
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186940485
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,39 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186940268
--- Diff: R/pkg/R/sparkR.R ---
@@ -163,6 +163,10 @@ sparkR.sparkContext <- function(
submitOps <- getClientModeSparkSubm
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186939722
--- Diff: R/pkg/R/utils.R ---
@@ -756,7 +756,7 @@ launchScript <- function(script, combinedArgs, wait =
FALSE) {
# stdout = F means disc
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186940332
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,39 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186940416
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,39 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r186939913
--- Diff: R/pkg/R/sparkR.R ---
@@ -163,6 +163,10 @@ sparkR.sparkContext <- function(
submitOps <- getClientModeSparkSubm
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
yea, I don't think Spark builds on Java 9 (at least from what I've seen)
I see the package is gone from CRAN so the test results are brief but maybe
related to
https://cran.r
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21278
also I think the test fails on Java 9 - is there a way to exclude it? the doc
sounds like it's a minimum version
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186634471
--- Diff: R/pkg/R/functions.R ---
@@ -2043,34 +2033,6 @@ setMethod("countDistinct",
column(jc)
})
-#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186633965
--- Diff: R/pkg/R/functions.R ---
@@ -1253,19 +1256,6 @@ setMethod("quarter",
column(jc)
})
-#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186624915
--- Diff: R/pkg/R/functions.R ---
@@ -2043,34 +2033,6 @@ setMethod("countDistinct",
column(jc)
})
-#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186624766
--- Diff: R/pkg/R/functions.R ---
@@ -1253,19 +1256,6 @@ setMethod("quarter",
column(jc)
})
-#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21255#discussion_r186624379
--- Diff: R/pkg/R/functions.R ---
@@ -209,6 +209,7 @@ NULL
#' head(select(tmp, array_max(tmp$v1), array_min(tmp$v1)))
#' head(select(tmp
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21092#discussion_r186326478
--- Diff:
resource-managers/kubernetes/docker/src/main/dockerfiles/spark/bindings/python/Dockerfile
---
@@ -0,0 +1,34 @@
+#
+# Licensed
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21198
yes, for R, during package check it can only write to the R tempdir()
(which can be changed in a number of ways
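The tempdir() constraint above comes from CRAN's check policy: during `R CMD check`, a package's tests may only write under the per-session temporary directory. As a sketch of the same pattern in Python (an assumed analogue, not SparkR code), `tempfile.gettempdir()` plays the role of R's `tempdir()`:

```python
import os
import tempfile

# Assumed analogue of the R pattern described above: confine test
# artifacts to the session temp directory (R's tempdir()).
def scratch_path(name):
    """Build a path for a throwaway test artifact inside the temp dir."""
    return os.path.join(tempfile.gettempdir(), name)

path = scratch_path("sparkr-test-output.txt")
with open(path, "w") as f:
    f.write("ok")
```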
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2960
merging if no more comments
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21244#discussion_r186278752
--- Diff: R/pkg/R/generics.R ---
@@ -918,6 +918,10 @@ setGeneric("explode_outer", function(x) {
standardGeneric("explode_outer") }
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186278707
--- Diff: R/pkg/R/functions.R ---
@@ -679,6 +679,19 @@ setMethod("hash",
column(jc)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186278702
--- Diff: R/pkg/R/functions.R ---
@@ -679,6 +679,19 @@ setMethod("hash",
column(jc)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186278685
--- Diff: R/pkg/R/functions.R ---
@@ -679,6 +679,19 @@ setMethod("hash",
column(jc)
})
+#
Github user felixcheung commented on the issue:
https://github.com/apache/zeppelin/pull/2555
hi any update on this?
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21217#discussion_r186252704
--- Diff: docs/sql-programming-guide.md ---
@@ -1812,6 +1812,7 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21241#discussion_r186252592
--- Diff: docs/running-on-kubernetes.md ---
@@ -561,6 +561,13 @@ specific to Spark on Kubernetes.
This is distinct from spark.executor.cores
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21241#discussion_r186252598
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21228#discussion_r186252477
--- Diff: R/pkg/R/functions.R ---
@@ -818,6 +818,7 @@ setMethod("factorial",
#' first(df$c, TRUE)
#' }
#' @note first(charact
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21228#discussion_r186252480
--- Diff: R/pkg/R/functions.R ---
@@ -963,6 +964,7 @@ setMethod("kurtosis",
#' last(df$c, TRUE)
#' }
#' @note last s
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186252381
--- Diff: R/pkg/R/functions.R ---
@@ -679,6 +679,19 @@ setMethod("hash",
column(jc)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186252423
--- Diff: R/pkg/NAMESPACE ---
@@ -236,6 +236,7 @@ exportMethods("%<=>%",
"current_date",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r186252398
--- Diff: R/pkg/R/functions.R ---
@@ -679,6 +679,19 @@ setMethod("hash",
column(jc)
})
+#
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/21198#discussion_r185705560
--- Diff: python/run-tests.py ---
@@ -77,13 +79,33 @@ def run_individual_python_test(test_name,
pyspark_python):
'PYSPARK_PYTHON': which
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21201
Jenkins, retest this please
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18447#discussion_r185703059
--- Diff: R/pkg/R/functions.R ---
@@ -653,6 +653,25 @@ setMethod("hash",
column(jc)
})
+#'
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18447
can you rebase? (there is a conflict)
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/18447
ok to test
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21197
Is this the error? Seems like an intermittent problem from CRAN. Let me know
if you see this again.
Also it's just the log text repeated, but the tests ran.
checking CRAN incoming
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/21200
Jenkins, add to whitelist
---