Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16187
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16109
LGTM. Merging this to master and 2.1
---
Github user narendrasfo commented on the issue:
https://github.com/apache/spark/pull/16198
Closing SPARK-18770.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16193
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69808/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16193
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16193
**[Test build #69808 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69808/consoleFull)**
for PR 16193 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16138
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69807/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16138
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16138
**[Test build #69807 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69807/consoleFull)**
for PR 16138 at commit
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/16122
I see. If we set both the client and server conf, and then reverted them in
a finally block, would that be sufficient? Then you can test the client falls
back correctly under various config
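The set-then-revert pattern suggested here could be sketched as below — a minimal Java sketch, assuming a plain key/value map as a stand-in for the actual Hive/Spark client and server configuration objects (`conf`, `withConf` are hypothetical names, not Spark API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ConfRevertExample {
    // Hypothetical stand-in for the client/server configuration store.
    static Map<String, String> conf = new HashMap<>();

    // Set a conf key for the duration of `body`, then revert in a finally
    // block so later tests are not affected by the temporary setting.
    static String withConf(String key, String value, Supplier<String> body) {
        String old = conf.get(key);  // remember the previous value
        conf.put(key, value);
        try {
            return body.get();
        } finally {
            if (old == null) conf.remove(key); else conf.put(key, old);
        }
    }

    public static void main(String[] args) {
        conf.put("hive.metastore.uris", "thrift://a:9083");
        String seen = withConf("hive.metastore.uris", "thrift://b:9083",
                () -> conf.get("hive.metastore.uris"));
        System.out.println(seen);                             // value inside the block
        System.out.println(conf.get("hive.metastore.uris"));  // reverted afterwards
    }
}
```

The finally block guarantees the revert runs even if the test body throws, which is what makes the approach safe for test isolation.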
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16119
**[Test build #69819 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69819/consoleFull)**
for PR 16119 at commit
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/16122
It does, yes. My concern around that test is that its behavior doesn't seem
to be independent of other tests. For example, the value of
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/8785
This needs a rebase and there are still outstanding review comments (minor
ones)
---
Github user tribbloid commented on the issue:
https://github.com/apache/spark/pull/8785
Hi Reynold,
Yes, as of 1.6.2 it caused some of the JDBC drivers (notably the one for SAP
HANA) to malfunction.
The fix is easy, though I haven't tested whether it's already fixed in 2.0+.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16119
**[Test build #69818 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69818/consoleFull)**
for PR 16119 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16179
**[Test build #69817 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69817/consoleFull)**
for PR 16179 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16179
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14638
**[Test build #69816 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69816/consoleFull)**
for PR 14638 at commit
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/16122
Hm, doesn't the setMetaConf call work in the removed test?
https://github.com/apache/spark/pull/16122/commits/c0668f086d75d170d7a0ace30831ae81b0ba
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14638
Thank you for review, @mridulm .
I updated the PR by using `isAssignableFrom`.
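The reason `isAssignableFrom` is the right check here can be illustrated with a minimal Java sketch (the `Writable` classes below are hypothetical, chosen only to show the subclass case):

```java
public class AssignableExample {
    static class Writable {}
    static class TextWritable extends Writable {}

    public static void main(String[] args) {
        Class<?> expected = Writable.class;
        Class<?> actual = TextWritable.class;
        // Identity comparison only matches the exact class...
        System.out.println(expected == actual);
        // ...while isAssignableFrom also accepts subclasses.
        System.out.println(expected.isAssignableFrom(actual));
    }
}
```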
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r91383183
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -113,6 +113,9 @@ class HadoopTableReader(
val
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/16122
There may be a way to do it, but the classloader tricks being used in the
hive client implementation are beyond my comprehension.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16189
**[Test build #69815 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69815/consoleFull)**
for PR 16189 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16179
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16179
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69805/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16179
**[Test build #69805 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69805/consoleFull)**
for PR 16179 at commit
Github user mallman commented on the issue:
https://github.com/apache/spark/pull/16122
I tried that but got a `NoSuchMethodException` in the call to `getMSC`.
---
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r91382001
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,6 +435,57 @@ private[spark] class Executor(
}
/**
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16180#discussion_r91381739
--- Diff: docs/programming-guide.md ---
@@ -1345,14 +1345,17 @@ therefore be efficiently supported in parallel.
They can be used to implement co
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16189
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69803/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16189
Merged build finished. Test FAILed.
---
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/16190
@sarutak Thanks for the quick fix!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16189
**[Test build #69803 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69803/consoleFull)**
for PR 16189 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16189
**[Test build #69814 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69814/consoleFull)**
for PR 16189 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r91379426
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -161,12 +163,7 @@ private[spark] class Executor(
* @param
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16068
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69804/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16068
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16068
**[Test build #69804 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69804/consoleFull)**
for PR 16068 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16195
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69813/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16195
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16195
**[Test build #69813 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69813/consoleFull)**
for PR 16195 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16150
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69810/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16150
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16195
**[Test build #69813 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69813/consoleFull)**
for PR 16195 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16150
**[Test build #69810 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69810/consoleFull)**
for PR 16150 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16195#discussion_r91375843
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala ---
@@ -61,11 +61,12 @@ private[spark] class ClientArguments(args:
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16195
ok to test
---
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/16122
@mallman could a test using getMSC.setMetaConf work?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16196#discussion_r91373995
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -89,7 +90,13 @@ object SizeEstimator extends Logging {
// A cache
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16196#discussion_r91374827
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -221,8 +226,13 @@ object SizeEstimator extends Logging {
case _ =>
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16196#discussion_r91375044
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -243,47 +253,59 @@ object SizeEstimator extends Logging {
arrSize +=
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16196#discussion_r91373420
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -243,47 +253,59 @@ object SizeEstimator extends Logging {
arrSize +=
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16196#discussion_r91375333
--- Diff: core/src/main/scala/org/apache/spark/util/SizeEstimator.scala ---
@@ -316,62 +338,40 @@ object SizeEstimator extends Logging {
*/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16187
**[Test build #69812 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69812/consoleFull)**
for PR 16187 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/11129
It's been a while since discussion died down here; I don't like the current
version because it will break things with a fixed YARN. So the options are
either implement Steve's suggestion, or not do
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16187
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16182
**[Test build #69811 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69811/consoleFull)**
for PR 16182 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16165
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16165
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69809/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16165
**[Test build #69809 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69809/consoleFull)**
for PR 16165 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16182
retest this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12391#discussion_r91372412
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveExternalCatalogSuite.scala
---
@@ -17,20 +17,22 @@
package
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16190
Merged to 2.0 also.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16190
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16190
Hmm, looks like the original change is in branch-2.0 also, let me try to
merge there too.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16190
merging to master / 2.1
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16150
Merged build finished. Test PASSed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16190
@mengxr that change is actually incorrect and shouldn't have been merged.
(The test changes in the same patch are ok though.)
This one LGTM.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16150
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69806/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16150
**[Test build #69806 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69806/consoleFull)**
for PR 16150 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16165#discussion_r91370018
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -461,7 +461,7 @@ private[history] class FsHistoryProvider(conf:
Github user seyfe commented on a diff in the pull request:
https://github.com/apache/spark/pull/16165#discussion_r91369539
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -461,7 +461,7 @@ private[history] class FsHistoryProvider(conf:
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16198
@narendrasfo this is not correct. The yarn module has moved, see the code I
posted in the bug you filed. Please close this.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/16198
This is definitely not what we want to do. It's enabled by a profile. Can
you close this? Please see http://spark.apache.org/contributing.html too
---
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/16135#discussion_r91369430
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -53,6 +56,10 @@ private[hive] class
Github user ericl commented on the issue:
https://github.com/apache/spark/pull/16135
Isn't it sufficient to lock around the `catalog.filterPartitions(Nil)`? Why
do we need reader locks?
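The trade-off being questioned here — one lock around the mutating call versus full reader locks — can be sketched as follows. This is a hedged illustration only; `refreshPartitions` and `listPartitions` are hypothetical stand-ins, not the actual `HiveMetastoreCatalog` or `filterPartitions` API:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CatalogLockExample {
    private final Object lock = new Object();
    private final List<String> partitions = new CopyOnWriteArrayList<>();

    // Locking only the write path can be sufficient when readers
    // tolerate a momentarily stale snapshot of the partition list.
    void refreshPartitions(List<String> fresh) {
        synchronized (lock) {  // single lock around the mutation
            partitions.clear();
            partitions.addAll(fresh);
        }
    }

    // Reads take no lock: CopyOnWriteArrayList iteration is snapshot-safe.
    List<String> listPartitions() {
        return List.copyOf(partitions);
    }

    public static void main(String[] args) {
        CatalogLockExample c = new CatalogLockExample();
        c.refreshPartitions(List.of("p=1", "p=2"));
        System.out.println(c.listPartitions().size());
    }
}
```

Reader locks would only be needed if readers must never observe the list mid-update, which the snapshot-on-write structure here already prevents.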
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16198
Can one of the admins verify this patch?
---
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16169
I don't really see the harm in letting users specify probabilityCol
beforehand, except that they may not have a good way to map the indices to
String labels. I'm OK with removing it for now
GitHub user narendrasfo opened a pull request:
https://github.com/apache/spark/pull/16198
adding yarn module in pom
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/13323
@devaraj-kavali do you mind fixing the conflicts?
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/12391
@adrian-wang do you mind fixing the conflicts? Seems like this PR was
forgotten.
---
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r91364564
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -161,12 +163,7 @@ private[spark] class Executor(
* @param
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/13944
@meknio the description in your last comment should be in the first comment
in the PR (so that it ends up in the git history).
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16150
**[Test build #69810 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69810/consoleFull)**
for PR 16150 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16165
**[Test build #69809 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69809/consoleFull)**
for PR 16165 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16193
**[Test build #69808 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69808/consoleFull)**
for PR 16193 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/13579
@steveloughran are you going to update this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16138
**[Test build #69807 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69807/consoleFull)**
for PR 16138 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/16165
retest this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16193
I also checked the plan of our 1.6.3 branch. The filter is not
appropriately pushed down, even if we have the logical node `EvaluatePython`.
```
== Parsed Logical Plan ==
'Filter
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16193
https://github.com/apache/spark/pull/12127 dropped the node `EvaluatePython
`. Based on the PR description, we removed the node for the following reasons:
>Currently we extract Python
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16193
@cloud-fan Let me do a history search and see why we dropped the logical
plan node `EvaluatePython`
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16193#discussion_r91360137
--- Diff: python/pyspark/sql/tests.py ---
@@ -360,6 +360,15 @@ def test_broadcast_in_udf(self):
[res] = self.spark.sql("SELECT
Github user anabranch commented on the issue:
https://github.com/apache/spark/pull/16138
More details are here:
https://gist.github.com/anabranch/7a42292593976878eb14e2d86a9966d4
This is completely perplexing to me.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16150
**[Test build #69806 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69806/consoleFull)**
for PR 16150 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/16186#discussion_r91356341
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamingQueryListenerBus.scala
---
@@ -35,12 +43,24 @@ class
Github user huaxingao commented on the issue:
https://github.com/apache/spark/pull/16175
@gatorsmile Thanks a lot for reviewing this. Sorry I just saw your last
comment after I pushed the change. Will make more changes for other potential
overflow issues.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16179
**[Test build #69805 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69805/consoleFull)**
for PR 16179 at commit
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16149
@srowen @sethah
I have cleaned up the change as suggested. Please review and let me know if
there is any question.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16068
**[Test build #69804 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69804/consoleFull)**
for PR 16068 at commit