Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/22589
LGTM. Thanks @felixcheung
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21920
LGTM. Thanks @HyukjinKwon
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21666
Thanks @felixcheung and sorry for the delay in looking at this. I think the
fix looks good. Overall it looks like we need to use system2 for the java
version check as otherwise it runs inside
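(A side note on the mechanics discussed above: `java -version` prints its banner to stderr, which is why capturing it in R needs `system2(..., stderr = TRUE)` rather than a plain `system()` call. Below is a rough Python analogue of such a version check -- the function names and regex are illustrative, not SparkR's actual implementation.)

```python
import re
import subprocess

def parse_major_version(banner):
    """Extract the Java major version from a `java -version` banner.

    Version strings look like `java version "1.8.0_171"` (Java 8) or
    `openjdk version "11.0.2"` (Java 9+ dropped the leading "1.").
    """
    m = re.search(r'version "(\d+)\.(\d+)', banner)
    if m is None:
        raise ValueError("could not parse java version from: " + banner)
    major, minor = int(m.group(1)), int(m.group(2))
    return minor if major == 1 else major  # "1.8" -> 8, "11.0" -> 11

def check_java_version(required_major=8):
    # Note: `java -version` writes to stderr, not stdout -- the same
    # reason the R code needs system2() with stderr capture.
    banner = subprocess.run(["java", "-version"],
                            capture_output=True, text=True).stderr
    return parse_major_version(banner) == required_major
```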
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21666#discussion_r199224949
--- Diff: R/pkg/R/client.R ---
@@ -61,6 +61,11 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, p
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21338
Can we check this with the appropriate Apache group (is it infra?)? It seems
odd that the policy would require removing them when Nexus requires them
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21338
If I follow this correctly, this is a partial revert only for the Nexus
artifacts ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21314
Yes @felixcheung or @vanzin can you merge this ?
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187816811
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21315
LGTM. Let's wait till #21314 is merged?
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187815762
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21314#discussion_r187815145
--- Diff: R/pkg/R/client.R ---
@@ -82,7 +82,7 @@ checkJavaVersion <- function() {
})
javaVersionFilter <-
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
Merging this to master and branch-2.3
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187470922
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187407193
--- Diff: R/pkg/DESCRIPTION ---
@@ -13,6 +13,7 @@ Authors@R: c(person("Shivaram", "Venkataraman", role =
c("aut", "
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187405804
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187405812
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,48 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
Looking at http://r-pkgs.had.co.nz/description.html - `... the
SystemRequirements field. But this is just a plain text field and is not
automatically checked.` I think using `== 8` is probably
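As the quoted passage notes, `SystemRequirements` is free text that R does not check automatically, which is why a runtime check is needed as well. A declaration of the kind under discussion would look roughly like this in `R/pkg/DESCRIPTION` (illustrative, not necessarily the exact line that was merged):

```
SystemRequirements: Java (== 8)
```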
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187169358
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,39 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187169209
--- Diff: R/pkg/R/client.R ---
@@ -60,13 +60,39 @@ generateSparkSubmitArgs <- function(args, sparkHome,
jars, sparkSubmitOpts, pack
combinedA
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187169097
--- Diff: R/pkg/R/sparkR.R ---
@@ -163,6 +163,10 @@ sparkR.sparkContext <- function(
submitOps <- getClientModeSparkSubm
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/21278#discussion_r187168952
--- Diff: R/pkg/R/utils.R ---
@@ -756,7 +756,7 @@ launchScript <- function(script, combinedArgs, wait =
FALSE) {
# stdout = F means disc
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
Ah, I know the problem with the vignettes - if you have _JAVA_OPTIONS set,
then the line numbers change, i.e. the output looks like
```
shivaram@localhost ~ » export _JAVA_OPTIONS="
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
Ah got it - Thanks @HyukjinKwon . I'll check if `== 1.8` is supported by R
syntax
@felixcheung I moved the logic into a `checkJavaVersion` function now. Let
me know if this looks better
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
That's a fair question -- I initially created a script to handle the Windows
calls, but I think we can do some of the splitting inside R. Let me try that
out.
Regarding Java 9, do you
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/21278
The need for both the Requirements field and the runtime check is
documented at
https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Writing-portable-packages
(Search for `Make sure
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/21278
[SPARKR] Require Java 8 for SparkR
This change updates the SystemRequirements and also includes a runtime
check if the JVM is being launched by R. The runtime check is done by querying
`java
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20770
@AjithShetty2489 I'm not sure just changing these two maps is sufficient?
For example, createResultStage could in turn create all the parent stages, and
the parent stages could be ShuffleMapStage
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20770
cc @kayousterhout @markhamstra
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20464
I think @felixcheung has the most context here, so I'd suggest we wait for
his comments.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20464
Thanks for clarifying @viirya. Is the PR description accurate ? I read it
as `..SQL's substr also accepts zero-based starting position` while R uses a
1-based starting position
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20464
One thing to keep in mind is what the user's perception of the API is. If R
users are going to use 1-based indexing then this might not be the right fix ?
http://stat.ethz.ch/R-manual/R-devel
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20414
@jiangxb1987 @mridulm Could we have a special case of using the sort-based
approach when the RDD type is comparable ? I think that should cover a bunch of
the common cases and the hash version
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20393
I'm fine with merging this -- I just don't want this issue to be
forgotten for RDDs, as I think it's a major correctness issue.
@mridulm @sameeragarwal Let's continue the discussion
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20393
@sameeragarwal I think we should wait for the RDD fix for 2.3 as well ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20393
@jiangxb1987 If I'm not wrong this problem will also happen with RDD
repartition ? Will this fix also cover
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20352
Thanks @neilalex - Change LGTM. Let's also see if @felixcheung has any
comments.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20352
@neilalex Can you add the code snippet in the PR description as a new test
case ? That way we will ensure this behavior is tested going forward
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20352
Jenkins, ok to test
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19290
The minimum R version supported is something that we can revisit, though. I
think we do this for Python and Java versions as well in the project
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20118
LGTM. I think withinPartitions sounds better than global
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20060
cc @felixcheung
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/20060
[SPARK-22889][SPARKR] Set overwrite=T when installing SparkR in tests
## What changes were proposed in this pull request?
Since all CRAN checks go through the same machine
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19959#discussion_r158558712
--- Diff: dev/lint-r.R ---
@@ -27,10 +27,11 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# Installs lintr from Github
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19959#discussion_r157028493
--- Diff: dev/lint-r.R ---
@@ -27,10 +27,11 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# Installs lintr from Github
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19959#discussion_r156808529
--- Diff: dev/lint-r.R ---
@@ -27,10 +27,11 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# Installs lintr from Github
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19959#discussion_r156564046
--- Diff: dev/lint-r.R ---
@@ -27,10 +27,11 @@ if (! library(SparkR, lib.loc = LOCAL_LIB_LOC,
logical.return = TRUE)) {
# Installs lintr from Github
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19657
AppVeyor still has an error
```
1. Failure: traverseParentDirs (@test_utils.R#252)
-
`dirs` not equal to `expect`.
1/4 mismatches
x[1]: "c:\\
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19657
Thanks @felixcheung -- The AppVeyor test seems to have failed with the
following err
```
1. Failure: traverseParentDirs (@test_utils.R#255)
-
`dirs
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r149436164
--- Diff: R/pkg/tests/fulltests/test_utils.R ---
@@ -236,4 +236,23 @@ test_that("basenameSansExtFromUrl", {
expect_equal(basenameSansEx
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r149174223
--- Diff: R/pkg/R/install.R ---
@@ -152,6 +152,9 @@ install.spark <- function(hadoopVersion = "2.7",
mir
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r149140297
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -1183,3 +1183,24 @@ env | map
```{r, echo=FALSE}
sparkR.session.stop
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r148979368
--- Diff: R/pkg/R/install.R ---
@@ -152,6 +152,9 @@ install.spark <- function(hadoopVersion = "2.7",
mir
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r148979827
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -1183,3 +1183,24 @@ env | map
```{r, echo=FALSE}
sparkR.session.stop
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19657#discussion_r148979924
--- Diff: R/pkg/tests/run-all.R ---
@@ -60,3 +60,22 @@ if (identical(Sys.getenv("NOT_CRAN"), "true")) {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19624#discussion_r148320092
--- Diff: R/pkg/R/sparkR.R ---
@@ -420,6 +420,18 @@ sparkR.session <- function(
enableHiveSupport)
ass
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/19624
[SPARKR][SPARK-22315] Warn if SparkR package version doesn't match
SparkContext
## What changes were proposed in this pull request?
This PR adds a check between the R package version
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19589
Merging to master, branch-2.2 and branch-2.1
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19589
cc @felixcheung -- It'll be great if you could independently test this as
well !
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/19589
[SPARKR][SPARK-22344] Set java.io.tmpdir for SparkR tests
This PR sets the java.io.tmpdir for CRAN checks and also disables the
hsperfdata for the JVM when running CRAN checks. Together
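(For context, hedged rather than quoted from the PR: the JVM knobs involved are typically of this form -- `-Djava.io.tmpdir=...` redirects the JVM's temporary directory, and `-XX:-UsePerfData` disables the `hsperfdata_*` files.)

```
-Djava.io.tmpdir=/path/to/writable/tmp -XX:-UsePerfData
```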
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19557
LGTM.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19557#discussion_r146760035
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3249,9 +3249,12 @@ setMethod("as.data.frame",
#' @note attach since 1.6.0
setMeth
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19557
@felixcheung Are the docs on this version good ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19557
Is there a reason we can't use the same glm trick for attach ? I guess this
was explained above but I'm wondering if there is a reason the base::attach is
not compiled in the same way
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19550
LGTM. Thanks @felixcheung
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19514
Good point. I'm not sure it counteracts it completely. We should run it to
see the behavior I guess.
I am not a big fan of mucking with Jenkins versions because it
fundamentally looks
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19514
We didn't foresee this but it looks like `R CMD check --as-cran` throws
this error if we try to build a package with a version number older than the
one uploaded to CRAN
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19342#discussion_r140920250
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3250,6 +3250,7 @@ setMethod("attach",
function(what, pos = 2, name = deparse(subst
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19290
@HyukjinKwon Thanks for looking at this. The 5 min addition seems
unfortunate though -- does that also happen with lintr-1.0.1 ? I wonder if we
are seeing some specific performance slowdown
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19111
Should we set this before the call to `test_package` ? It'll be good to
have it for the CRAN tests as well
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19016
That's great! I will also run this by winbuilder later today.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19016
Sure - change LGTM. Let's see if @HyukjinKwon has any more comments? If not
we can merge to master, branch-2.2 and then do some more tests.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19016#discussion_r134815709
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -27,6 +27,17 @@ vignette: >
limitations under the License.
-->
+```{r
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19016
Ah I see. Yeah, the failed tests make sense. We can also try to submit a
custom tar.gz to r-hub to test it with the PDF and a different version number?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19016
Thanks @felixcheung ! Are the warnings about the missing PDF unavoidable ?
I see something like
```
checking package vignettes in 'inst/doc' ... WARNING
Package vignette without
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r128301290
--- Diff: R/pkg/R/backend.R ---
@@ -108,13 +108,27 @@ invokeJava <- function(isStatic, objId, methodName,
...) {
conn <- get("
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18465
@felixcheung are these failures happening from the gapply tests ? Also do
we have a way to map the error code to an error reason ?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
Compared to introducing a new API, I think @falaki's idea of adding a
non-default option is better
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18465
Thanks @HyukjinKwon - I will try to look at this later tonight
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15821
cc @shaneknapp
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18320
LGTM. The update looks good. Thanks for the thorough testing. We could also
post a note on the dev list about this change and especially ask people who use
`dapply` or `gapply` or the old `RDD
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r123564501
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,40 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <-
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r123569536
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,40 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <-
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18320
Thanks! LGTM. Let's also wait to see if @felixcheung has anything more
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122875289
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -30,8 +30,42 @@ port <- as.integer(Sys.getenv("SPARKR_WORKER_PORT"))
inputCon <-
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122847863
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking =
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122804718
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,30 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking =
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/18320#discussion_r122803981
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -31,7 +31,15 @@ inputCon <- socketConnection(
port = port, open = "rb", blocking =
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
AFAIK this was dependent on #14742, but @NarineK may know better
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18104
LGTM. Thanks @felixcheung for the update and @marmbrus for the ping
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/18104
@felixcheung This is very cool. Let me try this on a windows VM and
winbuilder and get back to you.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
Thanks, I'll try to kick off the winbuilder build soon (I'm out of town till
tomorrow). One more thing we might need to fix is that winbuilder has a 10 or
20 minute time limit for tests (not sure
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
@felixcheung Unfortunately I'm out traveling and haven't been able to do
the windows tests yet -- Would you have a chance to do that ? Also what are
your thoughts on merging this while we test
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
@felixcheung I made the change - I'm right now going to test this in my
Windows VM. Will update this PR with the results
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
Sorry I've been out traveling -- I'll try to update this by tonight
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
@HyukjinKwon Do we know why things sometimes queue for a long time on
AppVeyor? Like this PR has been queued for around 5 hours right now.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
Actually thinking more about this, I think we should be checking for
availability of `hadoop` library / binaries rather than `is_cran`. For example
I just found that win-builder only runs `R CMD
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
This is SPARK-20727 - I just happened to have the other JIRA also open and
pasted it incorrectly
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
Sorry @vanzin I got the wrong JIRA number. Fixing it now
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/17966
cc @felixcheung
FWIW it might be easier to view the diff by adding `w=1` to the URL, i.e.
https://github.com/apache/spark/pull/17966/files?w=1
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/17966
[SPARK-20666] Skip tests that use Hadoop utils on CRAN Windows
## What changes were proposed in this pull request?
This change skips tests that use the Hadoop libraries while running