Github user clarkfitzg commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77763275
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
grepl(".*shell\\.R$",
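The R helper in this diff tests whether the running script's path ends in `shell.R`. A minimal sketch of the same check in Scala, with the regex copied from the diff (the sample paths in the usage note are made up):

```scala
// Does a script path end in "shell.R"? Pattern taken from the diff above.
// Note: String.matches anchors at both ends, so ".*" is needed up front
// and the trailing "$" is redundant but harmless.
def isSparkRShell(scriptPath: String): Boolean =
  scriptPath.matches(".*shell\\.R$")
```

For example, `isSparkRShell("/tmp/shell.R")` is true while `isSparkRShell("/tmp/utils.R")` is false.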
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14960
Yeap, I quickly fixed and re-ran :). Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77763061
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
grepl(".*shell\\.R$",
Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77762907
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala ---
@@ -280,6 +280,29 @@ case class StructType(fields:
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14960
seems to fail to build:
```
[INFO] Compiling 468 Scala sources and 74 Java sources to
C:\projects\spark\core\target\scala-2.11\classes...
[ERROR]
```
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77762689
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77762250
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
grepl(".*shell\\.R$",
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14960
**[Test build #65026 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65026/consoleFull)**
for PR 14960 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77762149
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -571,6 +571,44 @@ class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14960
I re-ran the test after this commit -
https://ci.appveyor.com/project/HyukjinKwon/spark/build/81-SPARK-17339-fix-r
Let's wait and see :)
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77761938
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/TempViewManager.scala
---
@@ -0,0 +1,92 @@
+/*
+ * Licensed to the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14960
@sarutak Ah, I will do this here. Thanks!
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77761381
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala ---
@@ -280,6 +280,29 @@ case class StructType(fields:
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14960
I found we can replace `FileSystem.get` in `SparkContext#hadoopFile` and
`SparkContext.newAPIHadoopFile` with `FileSystem.getLocal`, as
`SparkContext#hadoopRDD` does, so once they are replaced, we need
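For context on why `FileSystem.getLocal` matters here: `FileSystem.get` resolves whatever filesystem `fs.defaultFS` points at (typically HDFS on a cluster), while `FileSystem.getLocal` always returns the local filesystem. A hypothetical mini-model of that dispatch (the `Fs` types below are illustration only, not the Hadoop API):

```scala
sealed trait Fs { def uri: String }
case object Hdfs    extends Fs { val uri = "hdfs://namenode:8020" }
case object LocalFs extends Fs { val uri = "file:///" }

// Models FileSystem.get(conf): returns whatever the config points at,
// which on a cluster is usually HDFS.
def get(defaultFs: Fs): Fs = defaultFs

// Models FileSystem.getLocal(conf): ignores the default, always local.
def getLocal(defaultFs: Fs): Fs = LocalFs
```

So a code path that only touches local files avoids an unnecessary dependency on the configured default filesystem by calling `getLocal`.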
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14623
**[Test build #65025 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65025/consoleFull)**
for PR 14623 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14527
**[Test build #65024 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65024/consoleFull)**
for PR 14527 at commit
Github user clarkfitzg commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77760776
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
grepl(".*shell\\.R$",
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14991
**[Test build #65021 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65021/consoleFull)**
for PR 14991 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14426
**[Test build #65023 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65023/consoleFull)**
for PR 14426 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14990
**[Test build #65022 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65022/consoleFull)**
for PR 14990 at commit
GitHub user adrian-wang opened a pull request:
https://github.com/apache/spark/pull/14991
[SPARK-17427][SQL] function SIZE should return -1 when parameter is null
## What changes were proposed in this pull request?
`select size(null)` returns -1 in Hive. In order to be
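The Hive behavior the PR title describes can be sketched in plain Scala (the `sizeOf` helper below is hypothetical, not Spark's implementation):

```scala
// Hive's size() returns -1 for a NULL collection rather than NULL,
// and the PR aims to make Spark's SIZE function match that.
def sizeOf(collection: Option[Seq[Any]]): Int = collection match {
  case Some(xs) => xs.length
  case None     => -1 // size(NULL) == -1, matching Hive
}
```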
Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77760397
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala ---
@@ -280,6 +280,29 @@ case class StructType(fields:
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14962
Found a common bug in the following ALTER TABLE commands:
```
| ALTER TABLE tableIdentifier (partitionSpec)?
SET SERDE STRING (WITH SERDEPROPERTIES tablePropertyList)?
```
Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77760264
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -571,6 +571,44 @@ class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14990
**[Test build #65020 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65020/consoleFull)**
for PR 14990 at commit
GitHub user clockfly opened a pull request:
https://github.com/apache/spark/pull/14990
[SPARK-17426][SQL] Refactor `TreeNode.toJSON` to avoid OOM when converting
unknown fields to JSON
## What changes were proposed in this pull request?
This PR is a follow up of
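The idea named in the PR title, emitting a placeholder for fields of unknown type rather than recursing into arbitrarily large objects, can be sketched as follows (a simplified model; the cases and names are assumptions, not Spark's actual `TreeNode` code):

```scala
// Convert a node field to a JSON fragment. Unknown types become a
// placeholder instead of being traversed, which bounds memory use.
def fieldToJson(value: Any): String = value match {
  case s: String  => "\"" + s + "\""
  case n: Int     => n.toString
  case b: Boolean => b.toString
  case _          => "null" // unknown field type: don't recurse into it
}
```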
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14988
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65018/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14988
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14988
**[Test build #65018 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65018/consoleFull)**
for PR 14988 at commit
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/12436
@sitalkedia I was thinking about this over the weekend and I'm not sure
this is the right approach. I suspect it might be better to re-use the same
task set manager for the new stage. This
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/9#discussion_r77758918
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/KMeans.scala
---
@@ -137,6 +138,17 @@ class KMeansModel private[ml] (
@Since("1.6.0")
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14116
**[Test build #65019 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65019/consoleFull)**
for PR 14116 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14957
Also, it seems you might need to update your PR description. The last
commit you just pushed seems to behave differently from your PR description. In
addition, maybe you would need to fix the
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14957
Could you please check that the related tests pass locally? It seems this affects
all other data sources.
Also, I am not sure of the approach here. Marking nested fields by
modifying column
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14783
Sorry for the delay @clarkfitzg - The code change looks pretty good to me.
I just had one question about mixed type columns.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77757859
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -571,6 +571,44 @@ class
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14783#discussion_r77757807
--- Diff: R/pkg/R/utils.R ---
@@ -697,3 +697,18 @@ is_master_local <- function(master) {
is_sparkR_shell <- function() {
grepl(".*shell\\.R$",
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77757667
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala ---
@@ -280,6 +280,29 @@ case class StructType(fields:
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77757611
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala
---
@@ -571,6 +571,44 @@ class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77757552
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategy.scala
---
@@ -97,7 +98,16 @@ object FileSourceStrategy
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14912#discussion_r77757275
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/FilterPushdownSuite.scala
---
@@ -171,6 +172,27 @@ class FilterPushdownSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77756736
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -159,12 +171,13 @@ case class AlterTableRenameCommand(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77756537
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLViewSuite.scala
---
@@ -95,12 +95,12 @@ class SQLViewSuite extends QueryTest
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14912
The CNF exponential-expansion issue was an important concern in previous
work. Note that this patch doesn't produce a real CNF for the
predicate. I use `splitDisjunctivePredicates` to
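Catalyst's `splitDisjunctivePredicates` flattens a tree of `Or`s into its list of disjuncts. A self-contained sketch using a simplified expression model (the real version works on Catalyst `Expression`s):

```scala
sealed trait Expr
case class Or(left: Expr, right: Expr) extends Expr
case class Atom(name: String) extends Expr

// Recursively collect the disjuncts of a predicate, left to right.
def splitDisjunctivePredicates(e: Expr): Seq[Expr] = e match {
  case Or(l, r) => splitDisjunctivePredicates(l) ++ splitDisjunctivePredicates(r)
  case other    => Seq(other)
}
```

For example, `a OR (b OR c)` splits into the three atoms `a`, `b`, `c` without building a full CNF.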
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/12436
@davies - Thanks for looking into this. Updated the PR description with
details of the change. Let me know if the approach seems reasonable; I will work
on rebasing the change against latest
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77756261
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -246,33 +246,23 @@ class SessionCatalog(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14988
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14988
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65017/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14988
**[Test build #65017 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65017/consoleFull)**
for PR 14988 at commit
Github user sumansomasundar commented on a diff in the pull request:
https://github.com/apache/spark/pull/14762#discussion_r77755718
--- Diff:
common/unsafe/src/main/java/org/apache/spark/unsafe/array/ByteArrayMethods.java
---
@@ -47,13 +47,20 @@ public static int
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/14834
| numClasses | isMultinomial | coefficientMatrix size |
| --- | :-: | --: |
| 3+ | true | 3+ x numFeatures |
| 2 | true | 2 x numFeatures |
| 2 | false | 1 x numFeatures |
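The row counts in the table above reduce to a simple rule; a hedged sketch (the function name is made up, and the `3+` rows generalize to any `numClasses`):

```scala
// Number of coefficient rows for logistic regression in spark.ml:
// multinomial keeps one row per class; binomial keeps one pivoted row.
def coefficientMatrixRows(numClasses: Int, isMultinomial: Boolean): Int =
  if (isMultinomial) numClasses else 1
```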
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14931
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65016/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14931
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14931
**[Test build #65016 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65016/consoleFull)**
for PR 14931 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14912
hmm, looks like there was previous work regarding CNF but none of it was
actually merged. @gatorsmile Thanks for the context.
---
Github user watermen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14988#discussion_r77754923
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScanExec.scala
---
@@ -164,4 +164,11 @@ case class HiveTableScanExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14988
**[Test build #65018 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65018/consoleFull)**
for PR 14988 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14912
@viirya Could you please wait for the CNF predicate normalization rule?
@liancheng @yjshen did some related work before. See
https://github.com/apache/spark/pull/10444 and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14989
Can one of the admins verify this patch?
---
GitHub user vundela opened a pull request:
https://github.com/apache/spark/pull/14989
[MINOR][SQL] Fixing the typo in unit test
## What changes were proposed in this pull request?
Fixing the typo in the unit test of CodeGenerationSuite.scala
## How was this
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14887
@zhaoyunjiong , the fix you made may introduce a situation where recovery
data exists in multiple directories. I'm not sure whether this will
introduce recovery issues or other problems, since now the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77753115
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -189,31 +189,39 @@ case class DropTableCommand(
Github user xuanyuanking commented on a diff in the pull request:
https://github.com/apache/spark/pull/14957#discussion_r77753006
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/StructType.scala ---
@@ -259,8 +259,23 @@ case class StructType(fields:
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14987
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77751910
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14987
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14987
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65015/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14987
**[Test build #65015 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65015/consoleFull)**
for PR 14987 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14847
/cc @cloud-fan @rxin @davies for reviewing this. Thanks.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14850
also backport it to 2.0
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14988#discussion_r77750354
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScanExec.scala
---
@@ -164,4 +164,11 @@ case class HiveTableScanExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14988
**[Test build #65017 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65017/consoleFull)**
for PR 14988 at commit
GitHub user watermen opened a pull request:
https://github.com/apache/spark/pull/14988
[SPARK-17425][SQL] Override sameResult in HiveTableScanExec to make
ReusedExchange work in text format table
## What changes were proposed in this pull request?
The PR will override the
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14958
```
Using `mvn` from path:
/home/jenkins/workspace/spark-branch-1.6-lint/build/apache-maven-3.3.9/bin/mvn
Spark's published dependencies DO NOT MATCH the manifest file
(dev/spark-deps).
```
Github user srinathshankar commented on a diff in the pull request:
https://github.com/apache/spark/pull/14912#discussion_r77748668
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/FilterPushdownSuite.scala
---
@@ -171,6 +172,27 @@ class
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14809
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14809
thanks for the review, merging to master!
---
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/10225#discussion_r77748327
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -136,7 +136,9 @@ private[spark] class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14985
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65012/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14985
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14985
**[Test build #65012 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65012/consoleFull)**
for PR 14985 at commit
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/14712
@yhuai @hvanhovell @cloud-fan Sorry for the late response, I was out of the
office for two days.
@gatorsmile Thanks for fixing it!
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77747489
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14984
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65013/
Test PASSed.
---
Github user atronchi commented on the issue:
https://github.com/apache/spark/pull/10970
The solution mentioned in [SPARK-17424] by @rdblue fixes this issue.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14984
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14984
**[Test build #65013 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65013/consoleFull)**
for PR 14984 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77747323
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77747258
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -992,7 +992,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user clarkfitzg commented on the issue:
https://github.com/apache/spark/pull/14783
I'm presenting something related to this on Thursday; it would be nice to
tell the audience this patch made it in. Can I do anything to help this along?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14931
**[Test build #65016 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65016/consoleFull)**
for PR 14931 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65011/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14702
Merged build finished. Test PASSed.
---
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/14931#discussion_r77746289
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
---
@@ -153,7 +153,7 @@ private[spark] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14702
**[Test build #65011 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65011/consoleFull)**
for PR 14702 at commit
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r77745578
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -72,9 +72,7 @@ class SessionCatalog(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14816
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/65014/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14816
**[Test build #65014 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/65014/consoleFull)**
for PR 14816 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14931#discussion_r77745305
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/StandaloneSchedulerBackend.scala
---
@@ -153,7 +153,7 @@ private[spark] class