Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
@gatorsmile, I'm able to access `groupingExprs` from `SQLUtils.scala`
through `val groupingExprs: Seq[Expression]`; however, it seems
challenging to access the name of the column from pure
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Alright, give me a couple of days to address those cases.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
I think 'prepend' sounds better. What do you think?
Yes, the `key` in `function(key, x) { x }` can be useful for some use cases,
but I also think that the user could easily prepend
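To make the idea concrete, here is a minimal pandas analogy (not SparkR itself; `with_key` is a hypothetical helper) of a per-group function that receives the grouping key along with the group's rows and prepends the key to its output, which is what the `function(key, x)` callback discussed here enables:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "x": [1, 2, 3]})

# Hypothetical helper: the per-group function gets the key alongside
# the group's rows, and prepends the key as an output column.
def with_key(key, x):
    out = x[["x"]].copy()
    out.insert(0, "g", key)  # prepend the grouping key column
    return out

parts = [with_key(key, grp) for key, grp in df.groupby("g")]
result = pd.concat(parts, ignore_index=True)
print(list(result.columns))  # -> ['g', 'x']
```

The alternative raised in this thread is to omit the key argument entirely and let the user prepend the key themselves, since they already know which columns they grouped by.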
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
I think @falaki's approach is good; I just find the key, which is passed as
an argument together with x as an input to the function, a little superfluous.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Thank you, @gatorsmile! I'll give it a try.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
@falaki, I'd be fine with a separate `gapplyWithKeys()` method too.
@shivaram, @felixcheung, what do you think? Should we add a new
`gapplyWithKeys()` method?
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14431#discussion_r124327866
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1465,10 +1464,10 @@ setMethod("dapplyCollect",
#'
#' Result
#' -
-#' Model
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Yes, but we only need read access.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14742
Yes, we can close this, but it would be great if you could help us find a way
to access the grouping columns from SparkR in #14431
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14742
Hi @gatorsmile, #14431 depends on this. Is there a way I can access the
grouping columns from `RelationalGroupedDataset`?
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Hi everyone, yes, it depends on #14742. I've been asked to close #14742.
For this PR I need to access the grouping columns. If you think there
is an alternative way of accessing
Github user NarineK closed the pull request at:
https://github.com/apache/spark/pull/10162
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
@HyukjinKwon, do you mean closing or fixing the PR? As I understand from
@gatorsmile, he wants to close it
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
I'd propose to have:
1. One input argument: suffixes[left, right] (if you want, we can have 2,
similar to pandas).
2. Default values for the suffixes (I think defaults are more convenient
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
In pandas it has 2 arguments:
lsuffix='' and rsuffix='', for the left and right sides respectively. And it
appends the suffixes to all column names regardless of whether they are in the
join condition
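For reference, a minimal sketch of the pandas behavior being described; note that in pandas, `DataFrame.join` is index-based and the suffixes disambiguate overlapping column names between the two sides:

```python
import pandas as pd

left = pd.DataFrame({"id": [1, 2], "val": [10, 20]})
right = pd.DataFrame({"id": [1, 2], "val": [30, 40]})

# join() aligns on the index; lsuffix/rsuffix disambiguate the
# overlapping column names coming from each side.
joined = left.join(right, lsuffix="_l", rsuffix="_r")
print(sorted(joined.columns))  # -> ['id_l', 'id_r', 'val_l', 'val_r']
```

Here every column name overlaps, so every name is suffixed; a column present on only one side would keep its name unchanged.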
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
Thank you for following up on this, @marmbrus !
I looked into two places: R and Pandas DataFrames.
In R it seems that they give new names to columns (columns which aren't in the
merge/join
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
I am trying different ways to solve the problem without renaming the
columns, and it seems that a better place to change the column names would be
here:
https://github.com/apache/spark/blob/master
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
I see, I can go over the pull request this weekend. Thanks for the feedback.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/10162
I'd be happy to update to the latest master if we want to review this now.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14742
@liancheng, @rxin, do you think adding `columns` to
`RelationalGroupedDataset` is reasonable, or should we find a workaround on the
R side?
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Made a pull request for grouping columns: #14742
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14742
cc: @shivaram, @liancheng
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/14742
[SPARK-17177][SQL] Make grouping columns accessible from
`RelationalGroupedDataset`
## What changes were proposed in this pull request?
Currently, once we create `RelationalGroupedDataset
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r74551161
--- Diff: R/pkg/inst/tests/testthat/test_mllib.R ---
@@ -454,4 +454,61 @@ test_that("spark.survreg", {
}
})
+test_that(
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Yes, @shivaram, that would be one way to do it.
Basically, adding a new public function to `RelationalGroupedDataset` that
will return the column names.
If it is fine from the SQL perspective
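Absent a public accessor, one possible workaround is to record the grouping column names on the caller's side at groupBy time instead of asking the engine for them later. A hypothetical sketch in Python (the names `GroupedWithKeys` and `group_by` are illustrative only, not any real Spark API):

```python
# Hypothetical sketch: wrap the grouped handle together with the
# grouping column names so they remain accessible later.
class GroupedWithKeys:
    def __init__(self, grouped, group_cols):
        self.grouped = grouped              # handle to the grouped data
        self.group_cols = list(group_cols)  # remembered grouping columns

def group_by(df, *cols):
    return GroupedWithKeys(df, cols)

g = group_by({"col1": [1], "col2": ["a"]}, "col1", "col2")
print(g.group_cols)  # -> ['col1', 'col2']
```

This mirrors what the R side could do when it creates the grouped object, at the cost of duplicating information the engine already holds.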
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
My point is the following: let's say we have
`var relationalGroupedDataset = df.groupBy("col1", "col2");`
Now, having `relationalGroupedDataset`, how can I f
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Thanks, @shivaram! Yes, we have a handle to RelationalGroupedDataset, but I
couldn't access the column fields of a RelationalGroupedDataset instance. Is
there a way to access the columns
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r73999377
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,147 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r73999041
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,147 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r73998522
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,147 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
It seems that, currently, in SparkR the `GroupedData`, which represents
Scala's GroupedData object, doesn't have any information about the grouping
keys. `RelationalGroupedDataset` has a private
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
Cool! Let me give that option a try.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14431
That's a good point, @shivaram. `worker.R` is the component that has the
keys and appends them to the output.
I don't see any elegant way of doing it in `worker.R` yet.
However, I
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14431#discussion_r72916339
--- Diff: docs/sparkr.md ---
@@ -429,19 +431,19 @@ result <- gapplyCollect(
df,
"waiting",
function(key, x) {
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14431#discussion_r72916310
--- Diff: docs/sparkr.md ---
@@ -398,23 +398,25 @@ and Spark.
{% highlight r %}
# Determine six waiting times with the largest eruption time
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/14431
[SPARK-16258][SparkR][WIP] Gapply add key attach option
## What changes were proposed in this pull request?
The following pull request addresses the new feature request described in
SPARK
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
@shivaram, @sun-rui , I was wondering if someone created a jira for the
issue described here:
https://github.com/apache/spark/pull/12836#issuecomment-225403054
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks, I've generated the docs the way you suggested, @shivaram, but I'm
not sure I see the same thing as you.
I still see some '{% highlight r %}' and some formatting issues in general.
I
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14090
Thanks @shivaram, @felixcheung for the comments. I'll address those today.
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70923645
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70921996
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70920518
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70920244
--- Diff: docs/sparkr.md ---
@@ -316,6 +314,139 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/14090
Added data type description
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70202736
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70202321
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70198331
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70194370
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/14090#discussion_r70168781
--- Diff: docs/sparkr.md ---
@@ -306,6 +306,64 @@ head(ldf, 3)
{% endhighlight %}
+ Run a given function on a large dataset grouping
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/14090
[SPARK-16112][SparkR] Programming guide for gapply/gapplyCollect
## What changes were proposed in this pull request?
Updates programming guide for spark.gapply/spark.gapplyCollect
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13760
Do you have any questions on this, @shivaram, @sun-rui?
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13760
@felixcheung, I've addressed the comments or left a comment on the
non-addressed ones.
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68571809
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1370,14 +1370,22 @@ setMethod("dapplyCollect",
#' columns with data types integer and string and the
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68571761
--- Diff: R/pkg/R/group.R ---
@@ -198,62 +198,61 @@ createMethods()
#'
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68571781
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1419,6 +1427,80 @@ setMethod("gapply",
gapply(grouped, fu
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68565249
--- Diff: R/pkg/R/group.R ---
@@ -198,62 +198,61 @@ createMethods()
#'
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68563365
--- Diff: R/pkg/R/group.R ---
@@ -198,62 +198,61 @@ createMethods()
#'
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68542491
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1370,14 +1370,22 @@ setMethod("dapplyCollect",
#' columns with data types integer and string and the
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68298040
--- Diff: R/pkg/R/group.R ---
@@ -243,17 +236,73 @@ setMethod("gapply",
signature(x = "GroupedData"),
func
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68291007
--- Diff: R/pkg/R/group.R ---
@@ -199,17 +199,10 @@ createMethods()
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68227494
--- Diff: R/pkg/R/group.R ---
@@ -199,17 +199,10 @@ createMethods()
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68116142
--- Diff: R/pkg/R/group.R ---
@@ -199,17 +199,10 @@ createMethods()
#' Applies a R function to each group in the input GroupedData
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r68114211
--- Diff: R/pkg/R/group.R ---
@@ -242,18 +235,73 @@ createMethods()
setMethod("gapply",
signature(x = &q
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13660#discussion_r67944525
--- Diff: docs/sparkr.md ---
@@ -262,6 +262,83 @@ head(df)
{% endhighlight %}
+### Applying User-defined Function
+In SparkR, we
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/13760#discussion_r67936513
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1347,6 +1347,65 @@ setMethod("gapply",
gapply(grouped, fu
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13790
@shivaram, I've noticed that I didn't associate the pull request with the
JIRA. I've just done it.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13790
cc: @sun-rui, @shivaram @felixcheung
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/13790
remove duplicated docs in dapply
## What changes were proposed in this pull request?
Removed unnecessary duplicated documentation in dapply and dapplyCollect.
In this pull request I
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13660
Hi @vectorijk, @felixcheung,
As I was looking at the documentation generated in R, I've noticed that
there is some duplicated information. I'm not sure if this is the right place
to ask about
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Thanks for the quick response. I'll create one.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
@vectorijk, should I make the pull request against the same JIRA -
https://issues.apache.org/jira/browse/SPARK-15672, or should I create a new
JIRA for the programming guide?
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/13760
[SPARK-16012][SparkR] GapplyCollect - applies a R function to each group
similar to gapply and collects the result back to R data.frame
## What changes were proposed in this pull request
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Hi @vectorijk,
Thanks for asking, I think in a separate PR. Do you think including it in
#13660 would be better?
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r67265581
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1266,6 +1266,83 @@ setMethod("dapplyCollect",
ldf
})
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Thanks, @shivaram and @sun-rui. Yes, I can work on programming guide for
gapply.
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r67264756
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1266,6 +1266,83 @@ setMethod("dapplyCollect",
ldf
})
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r67264555
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1266,6 +1266,83 @@ setMethod("dapplyCollect",
ldf
})
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r67197168
--- Diff: R/pkg/inst/worker/worker.R ---
@@ -79,75 +127,72 @@ if (numBroadcastVars > 0) {
# Timing broadcast
broadcastElap <- elaps
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Addressed your comments, @sun-rui; please let me know if you have any
further feedback.
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66745283
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala ---
@@ -325,6 +330,71 @@ case class MapGroupsExec
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66732763
--- Diff: R/pkg/R/group.R ---
@@ -142,3 +142,58 @@ createMethods <- function() {
}
createMethods()
+
+#' gap
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66717543
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -381,6 +385,50 @@ class RelationalGroupedDataset protected[sql
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Thanks @liancheng and @rxin!
With respect to your point, @rxin - "private[sql] signature in public
APIs."
dapply added that signature to `Dataset.scala` and g
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66712035
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -381,6 +385,50 @@ class RelationalGroupedDataset protected[sql
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13610
Thanks! Changed the title!
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/13610
@sun-rui , @liancheng, @shivaram
GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/13610
Overwriting stringArgs in MapPartitionsInR
## What changes were proposed in this pull request?
As discussed in https://github.com/apache/spark/pull/12836
we need to override
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66672823
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66670797
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Hi @sun-rui, hi @shivaram,
I've overridden stringArgs and pushed my changes to the following
branch. I haven't created a JIRA yet.
https://github.com/apache/spark/commit
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Sure, let me override stringArgs and give it a try.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Thank you for the quick responses, @sun-rui and @shivaram.
Here is what the `dataframe.queryExecution.toString` printout starts with:
== Parsed Logical Plan
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Do you know what exactly caused this ?
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Hi @shivaram, hi @sun-rui,
Surprisingly, the `dataframe.queryExecution.toString` for both dapply and
gapply is prepended with a huge array, which I'm not able to understand. It
seems that recent
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
I can print out the query plan on the Scala side and see what it looks like
for that example.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Not sure why it fails. It fails for my new test case on the iris dataset. The
resulting dataframe has 35x2 dimensions.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Locally, run-tests.sh runs successfully, but it fails on Jenkins...
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
@shivaram, I didn't change the code, but merged with master, because prior
to this the build was failing since some PySpark tests didn't pass.
After today's merge, when I run gapply
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Merged build finished. Test FAILed.
Github user NarineK commented on the issue:
https://github.com/apache/spark/pull/12836
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/59998/
Test FAILed