Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r74125426
--- Diff: R/pkg/R/install.R ---
@@ -0,0 +1,230 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14258#discussion_r74125149
--- Diff: R/pkg/R/sparkR.R ---
@@ -365,6 +365,23 @@ sparkR.session <- function(
}
overrideEnvs(sparkConfigMap, paramMap)
}
+ #
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74123952
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,111 @@ case class AlterTableDropPartitionCommand(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14182
**[Test build #63453 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63453/consoleFull)**
for PR 14182 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14500
LGTM
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14500
@liancheng Can you do a post-hoc review?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14182
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14182
**[Test build #63452 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63452/consoleFull)**
for PR 14182 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14182
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63452/
Test FAILed.
---
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/14175
@sun-rui Let me know if you are unable to do so. We need this in 2.0
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14182
**[Test build #63452 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63452/consoleFull)**
for PR 14182 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14452
Will the deduplication logic on conflicting attributes in the Analyzer affect
your solution?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14384
**[Test build #63451 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63451/consoleFull)**
for PR 14384 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14544
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14500
---
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r74116056
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,147 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
function(object,
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/11157
Do we need to make any changes to the documentation?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63450 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63450/consoleFull)**
for PR 11157 at commit
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/14544
Merging to master, branch-2.0, and branch-1.6.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14566
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14566
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63444/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14566
**[Test build #63444 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63444/consoleFull)**
for PR 14566 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63449 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63449/consoleFull)**
for PR 11157 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14560
I'm going to mark this as won't fix. If it is a decimal type, I actually
expect it to show me all the 0s.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14560
Hm, this is not what Postgres does:
```
rxin=# select cast(2.01 as numeric(40, 20));
 numeric
---------
  2.0100
(1 row)
```
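The same scale-preserving behavior can be seen with Python's `decimal` module, which also keeps trailing zeros once a scale is fixed (a minimal sketch for comparison only, unrelated to Spark's own `DecimalType` implementation):

```python
from decimal import Decimal

# Fixing the scale keeps the trailing zeros, which is the behavior a
# decimal type is expected to show (scale 20 here, mirroring numeric(40, 20)).
d = Decimal("2.01").quantize(Decimal("1.00000000000000000000"))
print(d)  # 2.01000000000000000000
```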
Github user junyangq commented on a diff in the pull request:
https://github.com/apache/spark/pull/14384#discussion_r74112930
--- Diff: R/pkg/R/mllib.R ---
@@ -632,3 +642,147 @@ setMethod("predict", signature(object =
"AFTSurvivalRegressionModel"),
function(object,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14547
**[Test build #63447 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63447/consoleFull)**
for PR 14547 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63448 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63448/consoleFull)**
for PR 11157 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14544
**[Test build #63446 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63446/consoleFull)**
for PR 14544 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14567
BTW this is actually a non-trivial change and would require a very careful
look, since Python imports are not side-effect free.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14468
Friendly ping.
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14542#discussion_r74111478
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -404,7 +410,8 @@ private[spark] class ApplicationMaster(
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74110917
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,111 @@ case class
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14567
Can you create a JIRA ticket for this? This is too large to go in without a
JIRA ticket.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63445 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63445/consoleFull)**
for PR 11157 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14566
LGTM
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14544#discussion_r74110547
--- Diff: docs/spark-standalone.md ---
@@ -196,6 +196,21 @@ SPARK_MASTER_OPTS supports the following system
properties:
+
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/11157
@mgummelt pls review
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14544
LGTM too.
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14544#discussion_r74109798
--- Diff: docs/spark-standalone.md ---
@@ -196,6 +196,21 @@ SPARK_MASTER_OPTS supports the following system
properties:
+
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14544#discussion_r74109736
--- Diff: docs/spark-standalone.md ---
@@ -196,6 +196,21 @@ SPARK_MASTER_OPTS supports the following system
properties:
+
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14551#discussion_r74108468
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -874,4 +874,38 @@ class UtilsSuite extends SparkFunSuite with
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/14065
All my original comments were addressed, and I won't have time to do another
review until next week, so I'm good with it if you are.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14065
Looks fine. There are some possible enhancements (e.g. what looks like some
code repetition in the HDFS provider, neither Hive nor HBase returns a token
renewal time, etc.) but those can be done
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14346
LGTM
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14558
Thank you for working on this. I've done a pass and added my notes.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74105069
--- Diff: R/pkg/R/DataFrame.R ---
@@ -177,11 +176,10 @@ setMethod("isLocal",
#'
#' Print the first numRows rows of a SparkDataFrame
#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74104886
--- Diff: R/pkg/R/DataFrame.R ---
@@ -1146,7 +1147,7 @@ setMethod("head",
#' Return the first row of a SparkDataFrame
#'
-#' @param x
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74104522
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2845,8 +2844,11 @@ setMethod("fillna",
#' Since data.frames are held in memory, ensure that you have enough
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/14065#discussion_r74104234
--- Diff: docs/running-on-yarn.md ---
@@ -525,16 +524,23 @@ token for the cluster's HDFS filesystem, and
potentially for HBase and Hive.
An HBase
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74104039
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3049,8 +3050,8 @@ setMethod("drop",
#'
#' @name histogram
#' @param nbins the number of bins
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74103250
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3184,6 +3185,7 @@ setMethod("histogram",
#' @param x A SparkDataFrame
#' @param url JDBC database url of
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/13146
A few minor things, otherwise LGTM.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74103028
--- Diff: R/pkg/R/SQLContext.R ---
@@ -257,23 +257,24 @@ createDataFrame.default <- function(data, schema =
NULL, samplingRatio = 1.0) {
}
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r74102459
--- Diff: core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala ---
@@ -37,8 +38,11 @@ object PythonRunner {
val pythonFile = args(0)
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74102403
--- Diff: R/pkg/R/SQLContext.R ---
@@ -727,6 +729,7 @@ dropTempView <- function(viewName) {
#' @param source The name of external data source
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r74102371
--- Diff: docs/configuration.md ---
@@ -427,6 +427,22 @@ Apart from these, the following properties are also
available, and may be useful
with
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r74102346
--- Diff: docs/configuration.md ---
@@ -427,6 +427,22 @@ Apart from these, the following properties are also
available, and may be useful
with
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r74102329
--- Diff: python/pyspark/context.py ---
@@ -22,22 +22,30 @@
import signal
import sys
import threading
-from threading import RLock
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14558#discussion_r74102227
--- Diff: R/pkg/R/SQLContext.R ---
@@ -840,6 +844,7 @@ createExternalTable <- function(x, ...) {
#' clause expressions used to
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/13146#discussion_r74101878
--- Diff:
core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java ---
@@ -89,6 +89,11 @@ public void testSparkArgumentHandling() throws
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r74101927
--- Diff: python/pyspark/context.py ---
@@ -22,22 +22,30 @@
import signal
import sys
import threading
-from threading import RLock
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14539#discussion_r74100376
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -769,3 +769,41 @@ case object
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r74101512
--- Diff: python/run-tests.py ---
@@ -37,11 +45,6 @@
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)),
"../dev/"))
Github user nicklavers commented on a diff in the pull request:
https://github.com/apache/spark/pull/14551#discussion_r74101470
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -874,4 +874,38 @@ class UtilsSuite extends SparkFunSuite with
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/14567
Rebased; sorry, I had to force-push this PR.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11157
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63443/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11157
Merged build finished. Test PASSed.
---
Github user davies commented on the issue:
https://github.com/apache/spark/pull/13701
LGTM, could you fix the conflict (should be trivial)?
---
Github user Stibbons commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r74100949
--- Diff: python/pyspark/context.py ---
@@ -22,22 +22,30 @@
import signal
import sys
import threading
-from threading import RLock
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63443 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63443/consoleFull)**
for PR 11157 at commit
Github user davies commented on the issue:
https://github.com/apache/spark/pull/14500
Merging into master, thanks!
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74100170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,110 @@ case class AlterTableDropPartitionCommand(
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14566
OK
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74099592
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,111 @@ case class AlterTableDropPartitionCommand(
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/14539#discussion_r74098318
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -769,3 +769,41 @@ case object
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14567#discussion_r74098782
--- Diff: python/pyspark/context.py ---
@@ -22,22 +22,30 @@
import signal
import sys
import threading
-from threading import RLock
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14567
Although there has generally been some resistance to large style-only
changes, we do enforce import order in Scala/Java, including checks, so it seems
pretty reasonable to do the same in one big go
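The Scala/Java side enforces import order through style checks; a minimal sketch of what an equivalent Python-side reordering could look like (illustrative only, not the rule set of the PR's actual validation script):

```python
# Illustrative sketch: alphabetize plain "import" lines, then "from" imports.
# This is NOT the actual rule set of the PR's validation script.
def reorder_imports(lines):
    plain = sorted(l for l in lines if l.startswith("import "))
    froms = sorted(l for l in lines if l.startswith("from "))
    return plain + froms

print(reorder_imports([
    "import threading",
    "from threading import RLock",
    "import signal",
    "import sys",
]))
# ['import signal', 'import sys', 'import threading', 'from threading import RLock']
```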
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14567
Can one of the admins verify this patch?
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14540
---
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/14180
Opened #14567 with PEP 8, import reorganisations, and editorconfig.
---
GitHub user Stibbons opened a pull request:
https://github.com/apache/spark/pull/14567
Python import reorg
## What changes were proposed in this pull request?
This patch adds a code style validation script following pep8
recommendations.
Features:
- add a
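The PR body is truncated above; as a rough illustration of the kind of check a pep8-style validation script performs (a hypothetical sketch, not the script proposed in the PR), a single line-length rule could look like:

```python
# Hypothetical sketch of one pep8-style rule: flag lines over the limit.
# Not the actual validation script proposed in the PR.
def overlong_lines(source, limit=79):
    return [i + 1 for i, line in enumerate(source.splitlines())
            if len(line) > limit]

# Line 2 is 103 characters long, so it is flagged.
src = "short = 1\n" + "long_value = " + "'x'" * 30
print(overlong_lines(src))  # [2]
```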
Github user davies commented on the issue:
https://github.com/apache/spark/pull/14540
LGTM, merging this into master, thanks!
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/14540
Ah interesting, we might want to report the bug upstream with Py4J - but
this change looks good to me :) Thanks for getting this working in Python 3 :)
cc @davies who can maybe take a look as well?
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/14539
@cloud-fan I had an offline discussion with @rxin about this. His main
point was that a larger inline table would create an extremely unreadable plan.
So I came up with this.
---
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74094542
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,110 @@ case class AlterTableDropPartitionCommand(
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/14500#discussion_r74094235
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -425,6 +430,111 @@ case class AlterTableDropPartitionCommand(
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14537#discussion_r74092780
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -287,14 +287,14 @@ private[hive] class
Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/14537#discussion_r74092457
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -287,14 +287,14 @@ private[hive] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14566
**[Test build #63444 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63444/consoleFull)**
for PR 14566 at commit
GitHub user ash211 opened a pull request:
https://github.com/apache/spark/pull/14566
Make logDir easily copy/paste-able
In many terminals, double-clicking and dragging also includes the trailing
period. Simply remove it to make the value easier to copy/paste.
Github user shubhamchopra commented on a diff in the pull request:
https://github.com/apache/spark/pull/14412#discussion_r74088323
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManagerId.scala
---
@@ -37,10 +37,11 @@ import org.apache.spark.util.Utils
class
Github user xubo245 closed the pull request at:
https://github.com/apache/spark/pull/14422
---
GitHub user xubo245 reopened a pull request:
https://github.com/apache/spark/pull/14422
Add rand(numRows: Int, numCols: Int) functions
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch
Github user xubo245 commented on the issue:
https://github.com/apache/spark/pull/14422
ok
---
Github user xubo245 closed the pull request at:
https://github.com/apache/spark/pull/14422
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14539
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63442/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14539
**[Test build #63442 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63442/consoleFull)**
for PR 14539 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11157
**[Test build #63443 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63443/consoleFull)**
for PR 11157 at commit