GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/16229
[MINOR][SPARKR][PYSPARK] Copy PySpark, SparkR archives to latest/
This change copies the pip- and CRAN-compatible source archives to latest/
during the release build.
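The patch itself isn't quoted in this archive; a minimal sketch of the idea, assuming the same `lftp` upload used elsewhere in `release-build.sh`, with placeholder host, credentials, and destination:
```
# Illustrative only -- host, credentials, and $DEST_DIR are placeholders.
# After uploading the versioned artifacts, also publish the pip/CRAN
# source archives under latest/ so nightly consumers get a stable path.
LFTP="lftp -u $APACHE_USER,$APACHE_PASS sftp://home.apache.org"
$LFTP -e "mput -O $DEST_DIR/latest pyspark-*.tar.gz SparkR_*.tar.gz; bye"
```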
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16226
Merging to master, branch-2.1 - and testing again on nightly
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16226
cc @felixcheung @rxin
This should hopefully be the last one for this - FYI the source archive is
named `SparkR_$SPARK_VERSION.tar.gz` and we were only copying files named
`spark
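The comment above is cut off, but the failure mode it describes is easy to reproduce; the exact glob used in `release-build.sh` is an assumption here:
```
# A lowercase spark-* glob never matches the R source archive, whose
# name starts with a capital "SparkR_".
touch spark-2.1.1-SNAPSHOT-bin-hadoop2.6.tgz SparkR_2.1.1-SNAPSHOT.tar.gz
ls spark-*      # finds only the binary distribution tarball
ls SparkR_*     # the R source archive needs its own pattern
```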
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/16226
Copy the SparkR source package with LFTP
This PR adds a line in release-build.sh to copy the SparkR source archive
using LFTP
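A sketch of the kind of line described, reusing the hypothetical `$LFTP` helper from the sketch earlier in this archive (the real destination and file list may differ):
```
# Mirror the SparkR source tarball alongside the other release artifacts.
$LFTP -e "mput -O $DEST_DIR SparkR_*.tar.gz; bye"
```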
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16221
FYI the pip issue is fixed as you can see in the nightly build at
http://people.apache.org/~pwendell/spark-nightly/spark-branch-2.1-bin/spark-2.1.1-SNAPSHOT-2016_12_08_18_31-ef5646b-bin
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16221
Thanks @holdenk - I'm going to merge this as this script isn't tested by
Jenkins. I will manually test this by triggering a nightly build in `branch-2.1`
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16221
@holdenk this seems to work on my machine, in that the
`./python/dist/pyspark-2.1.1.dev0.tar.gz` was removed from
`spark-2.1.1-SNAPSHOT-bin-hadoop-2.6.tgz` that I built using the command[1
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16221
cc @holdenk @rxin - I also tried to fix the pip issue in this PR.
The main thing here is that we don't have `PYSPARK_VERSION` inside the
make-distribution script so I just remove all
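The sentence is truncated, but combined with the earlier note about `./python/dist/pyspark-2.1.1.dev0.tar.gz`, the approach appears to be removal by glob, since the exact version string isn't known to the script; a hedged sketch:
```
# Without $PYSPARK_VERSION available in make-distribution.sh, match the
# pip build artifacts by pattern instead of by exact name.
rm -f "$SPARK_HOME"/python/dist/pyspark-*.tar.gz
rm -rf "$SPARK_HOME"/python/build  # illustrative: other pip leftovers
```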
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
New PR in #16221
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16221
cc @felixcheung @rxin - I tested this locally by running
`./dev/make-distribution.sh --name "hadoop-2.6" --tgz --r -Phadoop-2.6 -Psparkr
-Phive -Phive-thriftserver -Pyarn -Pmesos`
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/16221
[SPARKR][SPARK-18590] Fix R source package name to match Spark version
## What changes were proposed in this pull request?
Fixes name of R source package so that the `cp` in release
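The description is truncated and the real patch isn't shown; a minimal sketch of the invariant it restores (the built R package name must match the Spark version so the later `cp` finds it):
```
# Fail fast if R CMD build produced a tarball whose name does not match
# the Spark version that release-build.sh will look for.
R_TARBALL="R/SparkR_${SPARK_VERSION}.tar.gz"
[ -f "$R_TARBALL" ] || { echo "missing $R_TARBALL" >&2; exit 1; }
```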
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91629017
--- Diff: core/src/main/scala/org/apache/spark/api/r/JVMObjectTracker.scala
---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Hmm, looking more closely I don't think the directory being deleted / being
in `dist/` is a real issue. This is because in `release-build.sh` we do `cd
..`[1] before trying to look for the file. I
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91625917
--- Diff: core/src/main/scala/org/apache/spark/api/r/JVMObjectTracker.scala
---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
I think release-tag.sh is only run while building RCs and final releases -
I don't think we run it for nightly builds - so that's not getting run as part
of my test.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
No, I found the problem - it's twofold:
- The tgz is not copied into the `dist` directory in `make-distribution.sh`
[1]. This is relatively easy to fix. It only works in Python because the entire
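A minimal sketch of the first fix, reusing the `$DISTDIR` convention visible in the `make-distribution.sh` diffs quoted elsewhere in this archive; the source path is an assumption:
```
# Stage the built SparkR source tarball inside the distribution directory
# so later release steps can find it after make-distribution.sh exits.
mkdir -p "$DISTDIR"/R
cp "$SPARK_HOME"/R/SparkR_*.tar.gz "$DISTDIR"/R/
```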
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Hmm - The problem is that the directory we want to get the file from gets
deleted in the previous step? I'm going to debug this locally now.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Hmm still doesn't work
```
* DONE (SparkR)
+ popd
+ mkdir /home/jenkins/workspace/spark-branch-2.1-package/spark-2.1.1-SNAPSHOT-bin-hadoop2.6/dist/conf
+ cp /home/jenkins
```
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16154
One minor comment about synchronized - LGTM otherwise. And thanks for the
RBackend unit test!
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91613716
--- Diff: core/src/main/scala/org/apache/spark/api/r/JVMObjectTracker.scala
---
@@ -0,0 +1,87 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16218
No worries. Since this isn't tested by Jenkins, I'm going to merge this and
test it manually.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16218
cc @felixcheung @rxin
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/16218
Change the R source build to Hadoop 2.6
This PR changes the SparkR source release tarball to be built using the
Hadoop 2.6 profile. Previously it was using the without-hadoop profile, which
leads
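A sketch of the invocation change, with flags borrowed from the test command quoted above in this archive (the without-hadoop build is assumed to correspond to the `hadoop-provided` profile):
```
# Before (assumed): ./dev/make-distribution.sh --tgz --r -Phadoop-provided ...
# After: build the SparkR source release against Hadoop 2.6.
./dev/make-distribution.sh --name hadoop-2.6 --tgz --r -Phadoop-2.6 -Psparkr
```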
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
@felixcheung I triggered a nightly build on branch-2.1 and this doesn't
work correctly in the no-hadoop build. While building the vignettes we run into
an error -- I'm going to change this to use
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
LGTM. I took another look at `release-build.sh` and I think it looks fine.
Merging this into master, branch-2.1. I'll also see if I can test this out
somehow on Jenkins
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r91583848
--- Diff: dev/create-release/release-build.sh ---
@@ -172,11 +172,30 @@ if [[ "$1" == "package" ]]; then
MVN_HOME=`$MVN -versio
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r91579011
--- Diff: dev/create-release/release-build.sh ---
@@ -221,14 +235,13 @@ if [[ "$1" == "package" ]]; then
# We increment the
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16150
Yeah. These are mostly doc fixes - I think this is fine to go into
branch-2.1. We can see which RC gets promoted etc.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91165192
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -247,7 +247,7 @@ private[sql] object SQLUtils extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91164412
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -143,12 +142,8 @@ private[r] class RBackendHandler(server: RBackend
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91141403
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -167,7 +167,7 @@ private[sql] object SQLUtils extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91136911
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRunner.scala ---
@@ -152,7 +152,7 @@ private[spark] class RRunner[U](
dataOut.writeInt
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91018557
--- Diff: core/src/main/scala/org/apache/spark/api/r/JVMObjectTracker.scala
---
@@ -0,0 +1,65 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91145714
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -247,7 +247,7 @@ private[sql] object SQLUtils extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91136204
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -143,12 +142,8 @@ private[r] class RBackendHandler(server: RBackend
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16154#discussion_r91140911
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/api/r/SQLUtils.scala
---
@@ -158,7 +158,7 @@ private[sql] object SQLUtils extends Logging
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r91135457
--- Diff: dev/create-release/release-build.sh ---
@@ -221,14 +235,13 @@ if [[ "$1" == "package" ]]; then
# We increment the
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16154
Thanks @mengxr for the change. Taking a look now
cc @felixcheung @falaki
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16141
LGTM. Thanks @dongjoon-hyun - Merging into master, branch-2.1
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Ah got it - I didn't notice that. BTW the way @rxin handled this for pip is
a bit different in
https://github.com/apache/spark/commit/37e52f8793bff306a7ae5a9aecc16f28333b70e3
Might
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16077
LGTM. Merging this to master, branch-2.1
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r90792118
--- Diff: R/check-cran.sh ---
@@ -82,4 +83,20 @@ else
# This will run tests and/or build vignettes, and require SPARK_HOME
SPARK_HOME
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r90790181
--- Diff: dev/make-distribution.sh ---
@@ -208,11 +212,24 @@ cp -r "$SPARK_HOME/data" "$DISTDIR"
# Make pip package
if [
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r90792416
--- Diff: R/check-cran.sh ---
@@ -82,4 +83,20 @@ else
# This will run tests and/or build vignettes, and require SPARK_HOME
SPARK_HOME
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r90792485
--- Diff: dev/make-distribution.sh ---
@@ -71,6 +72,9 @@ while (( "$#" )); do
--pip)
MAKE_PIP=true
;;
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16077
Just to clarify, this limits the auto-install feature to SparkR running in
an interactive shell -- is that right? I think that's mostly fine as it's the
use case we were targeting, but it might be good
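A quick way to see the distinction being asked about; `interactive()` is standard R, though its use as the gate here is an assumption:
```
# interactive() is FALSE under Rscript (batch), so an auto-install guarded
# by it is skipped in scripts; from the ./bin/sparkR shell it returns TRUE.
Rscript -e 'cat("interactive:", interactive(), "\n")'
```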
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Sorry I was caught up with some other stuff today. Will take a final look
tomorrow morning.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16014
Just to clarify the Jenkins comment, we use Jenkins to test PRs and that
doesn't run make-distribution.sh -- But the main release builds (say 2.0.2[2])
are also built on Jenkins by a separate job
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r89849402
--- Diff: R/check-cran.sh ---
@@ -82,4 +83,20 @@ else
# This will run tests and/or build vignettes, and require SPARK_HOME
SPARK_HOME
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/16014#discussion_r89850470
--- Diff: dev/make-distribution.sh ---
@@ -208,11 +212,24 @@ cp -r "$SPARK_HOME/data" "$DISTDIR"
# Make pip package
if [
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15930
cc @mengxr @jkbradley
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15910
cc @yanboliang
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15888
cc @junyangq
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
Sure - Sounds good. LGTM. Merging this to master and branch-2.1
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
So one proposal I was thinking of is to just check in a built version of
the vignette into the source tree. That way the release packaging wouldn't
need to change. The only thing to keep in mind
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87510311
--- Diff: R/create-docs.sh ---
@@ -52,21 +52,28 @@ Rscript -e 'libDir <- "../../lib"; library(SparkR,
lib.loc=libDir); library(knit
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87509314
--- Diff: R/check-cran.sh ---
@@ -36,11 +36,27 @@ if [ ! -z "$R_HOME" ]
fi
echo "USING R_HOME = $R_HOME"
-# B
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87507979
--- Diff: R/create-docs.sh ---
@@ -52,21 +52,28 @@ Rscript -e 'libDir <- "../../lib"; library(SparkR,
lib.loc=libDir); library(knit
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87507839
--- Diff: R/run-tests.sh ---
@@ -48,6 +48,7 @@ if [[ $FAILED != 0 || $NUM_TEST_WARNING != 0 ]]; then
else
# We have 2 existing NOTEs for new
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87507867
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -1,12 +1,13 @@
---
title: "SparkR - Practical Guide"
output:
- htm
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87504850
--- Diff: R/check-cran.sh ---
@@ -36,11 +36,27 @@ if [ ! -z "$R_HOME" ]
fi
echo "USING R_HOME = $R_HOME"
-# B
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
@felixcheung I noticed one more thing - We are somehow not registering the
vignette correctly with the R package. So for example if I launch
`./bin/sparkR` and then run `vignette(package="
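The command is cut off above; the standard check (plain R, not specific to this PR) is:
```
# List the vignettes registered with the installed SparkR package; an empty
# result reproduces the registration problem described above.
Rscript -e 'vignette(package = "SparkR")'
```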
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/13690
@felixcheung @vectorijk Should we close this PR?
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87448147
--- Diff: R/CRAN_RELEASE.md ---
@@ -0,0 +1,74 @@
+# SparkR CRAN Release
+
+To release SparkR as a package to CRAN, we would use the `devtools
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87448886
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -1,12 +1,13 @@
---
title: "SparkR - Practical Guide"
output:
- htm
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
Sorry I got caught up with some other stuff - Will take a look at this
today.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15746
The best sparse vector support in R comes from the `Matrix` package - But
it's a big package and I don't think we should add that as a dependency. We could
try to do a wrapper where if the user
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15790#discussion_r87048260
--- Diff: R/run-tests.sh ---
@@ -48,7 +48,8 @@ if [[ $FAILED != 0 || $NUM_TEST_WARNING != 0 ]]; then
else
# We have 2 existing NOTEs for new
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
Can you open a Spark JIRA for installing `qpdf` and cc @shaneknapp on it?
We can install it on the Jenkins machines.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15790
This is great @felixcheung - Taking a closer look now
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15746
@mengxr @yanboliang Could you review this? I'll try to take a look by the end
of this week.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15709
Yeah I'm fine with fixing a version - but as you said it's sometimes helpful
to find if we have some problems with a newly released R version. But yeah we
should have a stable download URL
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15709
This approach is fine - The other thing we could do is just use the latest
stable version as described in https://cloud.r-project.org/bin/windows/base/ -
If you see the link at the bottom it says
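The quoted page's wording is cut off; a hedged sketch of picking up the current stable Windows installer by scraping that listing (the filename pattern is assumed from the mirror layout):
```
# Download the newest R-x.y.z-win.exe named on the index page.
BASE=https://cloud.r-project.org/bin/windows/base
LATEST=$(curl -s "$BASE/" | grep -o 'R-[0-9.]*-win\.exe' | head -1)
curl -LO "$BASE/$LATEST"
```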
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15686
Yeah I think it'll be a good idea to know the error rate we hit.
@HyukjinKwon It might also be good to create an INFRA ticket to see if we can
re-trigger the AppVeyor builds something
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
Thanks @HyukjinKwon - the AppVeyor tests seem to pass as well.
The change LGTM to merge to master. @felixcheung Any other comments?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
@falaki The code change looks pretty good to me, but I'm still a bit
worried about introducing a big change in a minor release. Can we have this
disabled by default and flip the flag only
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15666
Jenkins, ok to test
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
Taking another look now
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r85247619
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -83,7 +86,29 @@ private[r] class RBackendHandler(server: RBackend
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/11336
@olarayej @felixcheung Sorry I've missed the updates to this. I'll try to
take a look at this later today.
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r84730391
--- Diff: core/src/main/scala/org/apache/spark/api/r/RRunner.scala ---
@@ -333,6 +333,8 @@ private[r] object RRunner {
var rCommand = sparkConf.get
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r84113455
--- Diff: R/pkg/inst/worker/daemon.R ---
@@ -18,6 +18,7 @@
# Worker daemon
rLibDir <- Sys.getenv("SPARKR_RLIBDIR")
+conn
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r84730154
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackend.scala ---
@@ -110,6 +115,11 @@ private[spark] object RBackend extends Logging {
val
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r84729305
--- Diff: R/pkg/inst/worker/worker.R ---
@@ -90,6 +90,7 @@ bootTime <- currentTimeSecs()
bootElap <- elapsedSecs()
rLibDir <- S
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15471#discussion_r84730780
--- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala
---
@@ -83,7 +86,29 @@ private[r] class RBackendHandler(server: RBackend
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15463
Jenkins, retest this please
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15463
Hah - I wish I had that much insight into Jenkins. I think it was just
flaky, especially given the GitHub DNS issues.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15463
Jenkins, retest this please
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
@falaki @felixcheung Since this is a big change I'd like to also take a
look at this once - Will try to get to it tonight.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
@wangmiao1981 is right - I think we can put back the `catch` in there and
add a `TODO` referring to the JIRA. Alternatively, I'll try to reproduce this on a
fresh Ubuntu 16.04 VM later tonight
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/15421#discussion_r84114214
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -125,15 +125,34 @@ private[spark] object SerDe {
}
def readDate
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
That's a good idea. @wangmiao1981 Can you create a new JIRA for this and
@falaki can we add that JIRA as a pointer in the comment close to the
`NegativeArraySizeException`? Otherwise code change
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
Sorry my question wasn't clear - Is there a source change that we can spot
that might have caused this behavior? I don't see this line changing recently
looking at history of
https://github.com
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
Did this change in a recent R version? I'm not sure why `NA` is not being
serialized. That `if` statement should only affect the value assigned to
`type`, right?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
@falaki looks like the SparkR MLlib unit tests are timing out on Jenkins.
Do they pass on your machine?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
Thanks - the lines in [3], [4] will be called if we do any operation on the
DataFrame, i.e. something like `dim(c)`. Also, can we use the same test case
that is in the test file checked
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
Cool - @falaki, let's see if @wangmiao1981's debugging finds anything in the
next day or so. The only thing that would be good is to get this in for the 2.0.2
cut if that is happening soon
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
Jenkins, retest this please
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15421
Thanks @wangmiao1981 - There are two different kinds of serializations that
happen in SparkR - one is the RPC-style serialization, where function arguments
are serialized using `writeDate
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/15471
@falaki Do we know if the test timeouts are due to this change? Or are
they unrelated?