Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14182#discussion_r74798929
--- Diff: R/pkg/R/generics.R ---
@@ -1279,6 +1279,11 @@ setGeneric("spark.naiveBayes", function(data, formula, ...) { standardGeneric("s
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14447
Yep, this looks good @felixcheung -- Feel free to merge once you think it's good to go.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14639
@zjffdu Let's discuss why this was introduced more in the JIRA. Regarding the code change, on my Mac `$HOME` is set without any custom changes on my side. Any ideas when this will not be the case?
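For illustration, a minimal R sketch of the kind of guard being discussed -- checking whether `$HOME` is actually set before using it and falling back otherwise. The fallback locations are assumptions for the sketch, not what the PR does:
```
home <- Sys.getenv("HOME")
if (!nzchar(home)) {
  # HOME can be unset in stripped-down shells, cron jobs, or some CI environments
  home <- path.expand("~")
  if (!nzchar(home) || identical(home, "~")) {
    home <- tempdir()  # last-resort location, purely illustrative
  }
}
sparkCacheDir <- file.path(home, ".cache", "spark")
```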
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14392
`spark.gaussianMixture` sounds good to me.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14641
Thanks @yanboliang - Do you know how the existing tests were passing? Should we add a new test case for this?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14182
I haven't had a chance to look at the PR, but the function name `isoreg` seems in line with https://stat.ethz.ch/R-manual/R-devel/library/stats/html/isoreg.html -- So I'd say `spark.isoreg` sounds
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14392
Yeah I am not sure `mvnormalmixEM` is very descriptive. @junyangq Any opinions on the name here?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14613
@wangmiao1981 Thanks for the PR. Could we add a couple of test cases for this? It'll also help me understand what the expected behavior is -- one of them could be for `collect` with decimals
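A hedged sketch of the kind of test case being requested -- collecting a decimal column and checking the R type. The query, column name, and expected values are assumptions rather than the PR's actual tests, and it assumes a SparkR session is already running:
```
library(SparkR)
library(testthat)

test_that("collect handles decimal columns", {
  # CAST to DECIMAL so the JVM side returns a decimal-typed column
  df <- sql("SELECT CAST(1.23 AS DECIMAL(10, 2)) AS d")
  ldf <- collect(df)
  expect_true(is.numeric(ldf$d))
  expect_equal(ldf$d, 1.23, tolerance = 1e-8)
})
```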
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
`groupingExprs` is a member of the class as I can see in [1]. Also we
convert these grouping expressions to columns in the flatMapGroupsInR function
[2] -- So we could add a new function that just
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Thanks @junyangq and @felixcheung -- LGTM. Merging this to master and
branch-2.0
We should add some tests to this and enable the checks to run on every PR.
But we can do this as a part
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14346
LGTM. Merging this to master.
@mengxr @felixcheung I didn't merge this into branch-2.0 as having Scala +
R changes could affect the CRAN package we are building to match the 2.0
release
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
I'm not sure I understand the question. Also some of the SQL committers
like @liancheng might be able to answer this better
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r74164995
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,230 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
Sure - Appending more information to the R object is fine. Also it looks like we actually have a handle to the RelationalGroupedDataset when we call groupBy on the Scala side https://github.com
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14489
LGTM. @felixcheung feel free to merge this and let me know if the commit
scripts work fine
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14489#discussion_r73789927
--- Diff: R/pkg/R/DataFrame.R ---
@@ -41,7 +41,7 @@ setOldClass("structType")
#'\dontrun{
#' sparkR.session()
#' df <- cr
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14489#discussion_r73773220
--- Diff: R/pkg/R/DataFrame.R ---
@@ -41,7 +41,7 @@ setOldClass("structType")
#'\dontrun{
#' sparkR.session()
#' df <- cr
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
I see - So I was thinking that we could merge this into master as well, as it's not going to fail any tests or affect any users building SparkR from source -- I don't think we make any promises about
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
@junyangq Is #14448 different from this PR or is it the same one on branch-2.0? I can just merge this into two branches, so we don't need a new PR I think
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
Yeah I think something like that is fine. Basically doing some
pre-processing or post-processing after the UDF has run using our own R code is
a good way to add new features
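As a rough illustration of that pattern (the helper name and the `function(key, df)` signature are assumptions for the sketch, not the PR's code), the user's function can be wrapped in our own R closure so the extra work happens around it without changing the user-facing API:
```
wrapWithKeys <- function(userFunc, keyCols) {
  function(key, df) {
    out <- userFunc(key, df)  # run the user's UDF unchanged
    # post-processing: prepend the grouping key values as columns of the result
    for (i in seq_along(keyCols)) {
      out[[keyCols[[i]]]] <- key[[i]]
    }
    out
  }
}
```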
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14392#discussion_r73079016
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +659,106 @@ setMethod("predict", signature(object = "AFTSurvivalRegressionModel"),
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14431
@NarineK Thanks for the PR. The thing I worry about is that this will break
any code users write with the 2.0 release and they'll need to change their code
if we ship this in 2.1 -- Other than
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14433
I think a better change might be to change that message if we are launching SparkR? cc @felixcheung
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r73007560
--- Diff: R/pkg/R/sparkR.R ---
@@ -365,6 +365,23 @@ sparkR.session <- function(
}
overrideEnvs(sparkConfigMap, param
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r73006612
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,232 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Not sure clean + rebuild will solve the problem here. The problem is that we load the Spark 2.0.0 JARs using `install_spark` (i.e. JARs that didn't have the fix in #14095) and we use R test code
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
@junyangq I just ran the CRAN checks locally and I see the problem you ran
into in #14357 -- The problem is that if we try to run tests which depend on a
Java-side change in master
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Thanks @junyangq - this is looking pretty good. It would be good to add a
test for this (some discussion below).
@mengxr The R package version is still set as 2.0.0 in `DESCRIPTION` - so
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r72663430
--- Diff: R/pkg/R/sparkR.R ---
@@ -365,6 +365,23 @@ sparkR.session <- function(
}
overrideEnvs(sparkConfigMap, param
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14357
I think this is related to
https://github.com/apache/spark/commit/142df4834bc33dc7b84b626c6ee3508ab1abe015
cc @dongjoon-hyun
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14357
@junyangq Any idea how the tests were passing on Jenkins before this fix?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14330
Yeah let's discuss this on JIRA / mailing lists and then get back to this
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14329
Thanks @felixcheung - LGTM. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14309
Jenkins, ok to test
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14309
I'm not sure I am a good reviewer for this as I don't fully understand the consequences inside SQL of this change. cc @liancheng @rxin
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
Thanks @sun-rui - Merging this to master and branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71451304
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14264
Jenkins, ok to test
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71405599
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r71377561
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,84 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
Added the check to the `ignore` test case as well.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
Thanks @junyangq - I'll take a look at this today. One question I had is about adding `install_spark` as a fallback option in `sparkR.session` if the JARs are not found. Can we add that in this PR
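A hedged sketch of that fallback idea: before starting the JVM, check whether any Spark JARs exist under `SPARK_HOME` and, if not, call `install_spark` to download them. The helper name and the assumption that `install_spark` returns the install directory are illustrative only:
```
ensureSparkJars <- function(sparkHome = Sys.getenv("SPARK_HOME")) {
  jars <- character(0)
  if (nzchar(sparkHome)) {
    jars <- list.files(file.path(sparkHome, "jars"), pattern = "\\.jar$")
  }
  if (length(jars) == 0) {
    # assumed behavior: install_spark downloads a matching release and
    # returns the directory it was installed into
    sparkHome <- install_spark()
  }
  sparkHome
}
```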
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14258
cc @felixcheung @mengxr @sun-rui
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14250
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/12836
@NarineK Not as far as I know
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
@sun-rui That's a good point. Right now the test will be canceled if R is not installed (from the line `assume(R.isInstalled)`). I also added a check to make sure SparkR is installed in `SPARK_HOME
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14243#discussion_r71180619
--- Diff: R/pkg/inst/tests/testthat/jarTest.R ---
@@ -16,17 +16,17 @@
#
library(SparkR)
-sparkR.session()
+sc <- sparkR.sess
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
cc @sun-rui
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
LGTM. Merging into master
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14243
cc @felixcheung
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14243
[SPARK-10683][SPARK-16510][SPARKR] Move SparkR include jar test to
SparkSubmitSuite
## What changes were proposed in this pull request?
This change moves the include jar test from R
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
@felixcheung I plan to merge this into master but skip branch-2.0, as I don't want to introduce new test errors if we have another RC. Let me know if that sounds good
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
LGTM.
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
I just realized that my local build was not using the Hive profile. If this fails on Jenkins, let's just go back to the original PR. Also I wonder if this is something we should notify the SQL
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
Hmm ok - The only difference in the patch I tried out locally is that I had the `sleep` in the loop test case. Did you remove that for some other reason?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
Cool. Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r71073655
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,10 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks @NarineK - I tried it on a fresh Ubuntu VM and it rendered fine. I think it has something to do with Ruby / Jekyll versions. The rendered docs looked fine on the Ubuntu VM
LGTM
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
@felixcheung Just to confirm, based on the discussion in SPARK-16508, is this change good to merge?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
So I just tried the setup where I only have `enableHiveMetastore=F` for the
test case we are uncommenting and the `sparkR.session.stop` added to the other
test files as in this PR. That seems
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
@krishnakalyan3 We don't need to modify the message. You can keep the
original message and just split it across two lines with something like the
`paste` function used in
https://github.com
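That is, something along these lines -- the message text below is only a placeholder for the original one, and `paste0` stands in for `paste` purely to avoid an extra separator; the point is that the source line gets split without changing what the user sees:
```
msg <- paste0("An existing SparkR backend was detected; ",
              "re-using it instead of starting a new one")
message(msg)
```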
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks @NarineK for the updates. As a final thing I just had some
formatting problems when I tested out this change locally. Let me know if you
can't reproduce them. I just ran
```
cd docs
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041878
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,135 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041809
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,135 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r71041580
--- Diff: docs/sparkr.md ---
@@ -295,8 +294,7 @@ head(collect(df1))
# dapplyCollect
Like `dapply`, apply a function to each partition
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14206
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70923795
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70922863
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70922747
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70920785
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/13868#discussion_r70896624
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -691,7 +692,8 @@ private[sql] class SQLConf extends Serializable
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14208
cc @rxin @liancheng
GitHub user shivaram opened a pull request:
https://github.com/apache/spark/pull/14208
[SPARK-16553][DOCS] Fix SQL example file name in docs
## What changes were proposed in this pull request?
Fixes a typo in the SQL programming guide
## How was this patch tested
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
Jenkins, retest this please
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r70855973
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,9 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70846132
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14192
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14195
I guess it would be good to have an additional test case, but I'm going to merge this to make sure this makes the 2.0 RCs. LGTM. We could look at ways to make this more robust in the future
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14192#discussion_r70744731
--- Diff: R/pkg/R/window.R ---
@@ -17,23 +17,23 @@
# window.R - Utility functions for defining window in DataFrames
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14192
Thanks @sun-rui for the PR. LGTM. I had a minor comment inline
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
Does the Hive metastore not shut down properly even if we do `sparkSession.stop()` in all the test files? The reason I'm trying to avoid having `enableHiveMetastore=F` in most test files
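For reference, a sketch of the per-test-file pattern being weighed here: start a session without Hive support at the top of the file and stop it at the end so state does not leak into the next file. The option name in SparkR's public API is `enableHiveSupport`; the test body is made up:
```
library(SparkR)
library(testthat)

sparkR.session(master = "local[1]", enableHiveSupport = FALSE)

test_that("an example test", {
  df <- createDataFrame(data.frame(x = 1:3))
  expect_equal(count(df), 3)
})

sparkR.session.stop()
```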
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
I managed to reproduce the issue that Jenkins was hitting. It had to do with using `@method` on an `as.DataFrame` that was creating an error during HTML generation. I just removed that and it seems
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14173
@felixcheung Can you check if you see the same error as Jenkins on your machine? On my machine the install and tests seem to pass, so I think this is an R / roxygen / devtools version problem
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14178
LGTM. Merging this to master, branch-2.0
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70717248
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -237,7 +237,7 @@ test_that("read csv as DataFrame", {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70715940
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -237,7 +237,7 @@ test_that("read csv as DataFrame", {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70709852
--- Diff: R/pkg/R/column.R ---
@@ -44,6 +44,9 @@ setMethod("initialize", "Column", function(.Object, jc) {
.Object
})
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70708101
--- Diff: R/pkg/R/column.R ---
@@ -44,6 +44,9 @@ setMethod("initialize", "Column", function(.Object, jc) {
.Object
})
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70707355
--- Diff: R/pkg/R/SQLContext.R ---
@@ -267,6 +267,10 @@ as.DataFrame.default <- function(data, schema = NULL, samplingRatio = 1.0) {
createDataFr
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70705110
--- Diff: R/pkg/R/column.R ---
@@ -235,20 +248,16 @@ setMethod("cast",
function(x, dataType) {
if (is.characte
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70704632
--- Diff: docs/sparkr.md ---
@@ -111,19 +111,17 @@ head(df)
SparkR supports operating on a variety of data sources through the
`SparkDataFrame
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14178#discussion_r70704012
--- Diff: docs/sparkr.md ---
@@ -138,6 +136,18 @@ printSchema(people)
# |-- age: long (nullable = true)
# |-- name: string (nullable = true
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14177
Is there a reason why `enableHiveSupport = F` is required in all the test cases?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14090
@felixcheung Could you take one more look at this?
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14171
Merging this to master, branch-2.0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14179
Jenkins, ok to test
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r70661541
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,9 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14179#discussion_r70661486
--- Diff: R/pkg/R/sparkR.R ---
@@ -155,6 +155,9 @@ sparkR.sparkContext <- function(
existingPort <- Sys.getenv("EXISTING_SPARKR_B
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14171
LGTM