Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103337724
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2512,3 +2522,67 @@ object ResolveCreateNamedStruct
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102165726
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -707,13 +709,85 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102256790
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2332,6 +2337,11 @@ class Analyzer(
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103340692
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/subquery.scala
---
@@ -40,19 +42,179 @@ abstract class PlanExpression[T
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103340272
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -123,19 +123,36 @@ case class Not(child:
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103339031
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -109,6 +109,26 @@ object TypeCoercion {
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102167746
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1110,31 +1184,24 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r103336411
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1398,42 +1399,46 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102168672
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1110,31 +1184,24 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102168299
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -707,13 +709,85 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102167233
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1110,31 +1184,24 @@ class Analyzer(
}
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16954#discussion_r102168200
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1110,31 +1184,24 @@ class Analyzer(
}
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17078#discussion_r103342591
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/LogisticRegressionSuite.scala
---
@@ -456,6 +456,32 @@ class LogisticRegressionSuite
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17078#discussion_r103342093
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -1447,7 +1447,7 @@ private class LogisticAggregator(
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17078#discussion_r103342317
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala
---
@@ -1431,7 +1431,12 @@ private class LogisticAggregator(
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16965
Github isn't handling the merge well, so you might try rebasing
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17015
**[Test build #73536 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73536/testReport)**
for PR 17015 at commit
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16819
I agree with others, this is not the way to do this. There are different
schedulers in yarn, each with different configs that could affect the actual
resources you get.
If you want to
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/14273
Sorry about the delay here. Do you still have time to work on this?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103338371
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,36 +480,79 @@ case class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16929
Thanks for your detailed look. Let me check again and address the comments!
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16811#discussion_r103338261
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/feature/Word2VecSuite.scala ---
@@ -144,6 +144,31 @@ class Word2VecSuite extends SparkFunSuite with
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16811#discussion_r103338146
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/feature/Word2VecSuite.scala ---
@@ -144,6 +144,31 @@ class Word2VecSuite extends SparkFunSuite with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103337914
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,36 +480,79 @@ case class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103337028
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions {
}
/**
- *
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103334238
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,36 +480,79 @@ case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r10990
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions {
}
/**
-
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103325403
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -34,8 +36,25 @@ import org.apache.spark.util.collection.OpenHashMap
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103330093
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -34,8 +36,25 @@ import org.apache.spark.util.collection.OpenHashMap
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103332623
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/feature/StringIndexerSuite.scala ---
@@ -75,22 +75,32 @@ class StringIndexerSuite
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103332929
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -34,8 +36,25 @@ import org.apache.spark.util.collection.OpenHashMap
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103325211
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -17,14 +17,16 @@
package org.apache.spark.ml.feature
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103331212
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -163,25 +190,28 @@ class StringIndexerModel (
}
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103330268
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -71,18 +90,22 @@ class StringIndexer @Since("1.4.0") (
def
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103330303
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -71,18 +90,22 @@ class StringIndexer @Since("1.4.0") (
def
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103330242
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -71,18 +90,22 @@ class StringIndexer @Since("1.4.0") (
def
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103331444
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -163,25 +190,28 @@ class StringIndexerModel (
}
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103332885
--- Diff: docs/ml-features.md ---
@@ -576,7 +578,22 @@ will be generated:
2 | c| 1.0
-Notice that the row containing
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16883#discussion_r103332764
--- Diff: docs/ml-features.md ---
@@ -502,7 +502,7 @@ for more details on the API.
## StringIndexer
`StringIndexer` encodes a string
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/17085
@sethah @Lewuathe @thunterdb @WeichenXu123 @jkbradley @actuaryzhang @srowen
would you be able to take a look? I've split the larger pull request into three
parts as suggested.
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/17086
@sethah @Lewuathe @thunterdb @WeichenXu123 @jkbradley @actuaryzhang @srowen
would you be able to take a look? I've split the larger pull request into three
parts as suggested.
Github user imatiach-msft commented on the issue:
https://github.com/apache/spark/pull/17084
@sethah @Lewuathe @thunterdb @WeichenXu123 @jkbradley @actuaryzhang @srowen
would you be able to take a look? I've split the larger pull request into
three parts as suggested.
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
It depends on the application. It's the amount of time you have to wait
before having the opportunity to use those resources again. But if you
explicitly revive, which we do here whenever we
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17071
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17071
thanks, merging to master!
Github user wojtek-szymanski commented on the issue:
https://github.com/apache/spark/pull/17075
Good idea @cloud-fan. I will look for usages of `changePrecision` then.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17077
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17077
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73535/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17077
**[Test build #73535 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73535/testReport)**
for PR 17077 at commit
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/15628
re-ping @dbtsai @MLnick @yanboliang I still think this is an important
patch :D
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17077
**[Test build #73535 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73535/testReport)**
for PR 17077 at commit
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16883
I'll take a look now, thanks!
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/16867
This looks like a real test failure resulting from this change
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/17088
>> fetch failure does not imply lost executor - it could be a transient
issue.
Similarly, executor loss does not imply host loss.
You are right, it could be transient, but we do have
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #73530 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73530/testReport)**
for PR 17087 at commit
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17088
fetch failure does not imply lost executor - it could be a transient issue.
Similarly, executor loss does not imply host loss.
This is quite drastic for a fetch failure : spark already
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17085
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73528/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17085
Merged build finished. Test PASSed.
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13932
test this please
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13932
**[Test build #73534 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73534/testReport)**
for PR 13932 at commit
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/17031
Ok I see. LGTM.
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/13143
That whole function is designed poorly. We need to totally change it
instead of tacking this on. We shouldn't be calling `driver.run()` in a
separate thread. We should be calling
Github user shubhamchopra commented on the issue:
https://github.com/apache/spark/pull/13932
Rebased to resolve merge conflicts.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17085
**[Test build #73528 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73528/testReport)**
for PR 17085 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16867
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16867
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73514/
Test FAILed.
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/13326
A killed driver never finished, so it shouldn't be added to the finished
set.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16867
**[Test build #73514 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73514/testReport)**
for PR 16867 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17088
**[Test build #73533 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73533/testReport)**
for PR 17088 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16959
**[Test build #73532 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73532/testReport)**
for PR 16959 at commit
GitHub user sitalkedia opened a pull request:
https://github.com/apache/spark/pull/17088
[SPARK-19753][CORE] All shuffle files on a host should be removed in …
## What changes were proposed in this pull request?
Currently, when we detect fetch failure, we only remove the
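The change this PR description begins to state can be sketched as follows. This is a simplified, hypothetical model (the names `MapStatus`, `OutputRegistry`, and `removeOutputsOnHost` are illustrative, not Spark's actual `MapOutputTracker` API): on a fetch failure, unregister every shuffle output on the failing host, rather than only the failing executor's.

```scala
// Hypothetical sketch of host-level shuffle cleanup (illustrative names,
// not Spark's MapOutputTracker API).
case class MapStatus(host: String, executorId: String, mapId: Int)

class OutputRegistry {
  private var outputs = List.empty[MapStatus]

  def register(s: MapStatus): Unit = outputs ::= s

  // Old behavior: drop only the failing executor's shuffle outputs.
  def removeOutputsOnExecutor(execId: String): Unit =
    outputs = outputs.filterNot(_.executorId == execId)

  // Behavior proposed here: drop every shuffle output on the host, since the
  // shuffle service serving that host may be the real source of the failure.
  def removeOutputsOnHost(host: String): Unit =
    outputs = outputs.filterNot(_.host == host)

  def registered: List[MapStatus] = outputs
}
```

The host-level variant avoids repeated stage retries when several executors on one bad host each fail in turn.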
Github user pwoody commented on the issue:
https://github.com/apache/spark/pull/16959
Thanks for the feedback @vanzin
Github user datumbox commented on the issue:
https://github.com/apache/spark/pull/17059
@srowen @mlnick I updated the PR based on what was discussed above and I
tested it again on Spark 2.1. I also updated the coding styles and the
exception message.
The changes requested
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16557
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16557
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73525/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16557
**[Test build #73525 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73525/testReport)**
for PR 16557 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17031
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73531/
Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #73531 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73531/testReport)**
for PR 17031 at commit
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17031#discussion_r103303266
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -582,141 +688,33 @@
Github user mgummelt commented on a diff in the pull request:
https://github.com/apache/spark/pull/17031#discussion_r103303283
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -737,13 +735,75 @@
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@skonto @susanxhuynh I've updated the solution to use a longer (120s)
default refuse timeout, instead of suppressing offers. Please re-review. Just
as the previous refuse seconds settings were
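The strategy described in this comment can be modeled roughly as below. The names here are illustrative, not Spark's `MesosClusterScheduler` API: instead of suppressing the offer stream entirely, each offer the scheduler cannot use is declined with a long refuse filter, so the Mesos master withholds that agent's resources for `refuseSeconds` before re-offering them.

```scala
// Illustrative model of declining Mesos offers with a long refuse filter
// (assumed names; not Spark's actual scheduler code).
case class Offer(id: String, cpus: Double)

class DeclineWithFilter(refuseSeconds: Double = 120.0) {
  // Records the refuse filter each declined offer was sent back with.
  var declinedFilters = Map.empty[String, Double]

  def handleOffer(offer: Offer, neededCpus: Double): Boolean =
    if (offer.cpus >= neededCpus) {
      true // accept: the scheduler would launch a task on this offer
    } else {
      // decline, asking the master not to re-offer for refuseSeconds
      declinedFilters += offer.id -> refuseSeconds
      false
    }
}
```

Unlike suppression, this keeps the scheduler subscribed to offers, so a later explicit revive is not required before resources can flow again.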
Github user BryanCutler closed the pull request at:
https://github.com/apache/spark/pull/17048
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17031
**[Test build #73531 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73531/testReport)**
for PR 17031 at commit
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17031
@skonto Cassandra supports suppress/revive
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103302035
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -480,36 +480,79 @@ case class
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/16929#discussion_r103300622
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -2969,11 +2969,27 @@ object functions {
}
/**
- *
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/15770
Yep, that's correct. Everyone, please let me know if you disagree.
Also, if we do go with Option 2 above, then the input schema could be a few
possible things:
* list of (neighbor
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17048
Can you please close this manually? Thanks!
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16782
I'm OK with the current solution, though if it's easy to check using
```inspection``` then that seems nice to do.
If there are cases in which the wrapper is still not thread-safe, then
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103296961
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -181,11 +194,19 @@ private[spark] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103296737
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -137,10 +141,15 @@ private[spark] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103295542
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -48,25 +48,28 @@ private[spark] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16959#discussion_r103297378
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/OutputCommitCoordinator.scala ---
@@ -158,13 +167,17 @@ private[spark] class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17087
**[Test build #73530 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73530/testReport)**
for PR 17087 at commit
GitHub user kiszk opened a pull request:
https://github.com/apache/spark/pull/17087
[SPARK-19372][SQL] Fix throwing a Java exception at df.filter() due to 64KB
bytecode size limit
## What changes were proposed in this pull request?
When an expression for `df.filter()` has
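For context on the limit this PR title refers to: the JVM rejects any single method whose bytecode exceeds 64KB, so code generated for a very large filter expression cannot live in one flat method body. The usual remedy is to split the generated code into many small methods. The sketch below only mimics that splitting idea with ordinary Scala functions; it is not Spark's `CodegenContext` API.

```scala
// Mimics the "split one huge conjunction into many small callable chunks"
// remedy for the JVM's 64KB-per-method bytecode limit (illustrative only).
def splitPredicates(conds: Seq[Int => Boolean], chunkSize: Int): Int => Boolean = {
  // Each chunk plays the role of one small generated method; the outer
  // function just chains the chunks together.
  val chunks: Seq[Int => Boolean] =
    conds.grouped(chunkSize).map(g => (x: Int) => g.forall(_(x))).toSeq
  (x: Int) => chunks.forall(_(x))
}
```

In real codegen the split is over emitted Java source, but the shape is the same: many small units instead of one method that blows past the limit.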
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73515/
Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16819
Merged build finished. Test PASSed.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16819
**[Test build #73515 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73515/testReport)**
for PR 16819 at commit
Github user kayousterhout commented on a diff in the pull request:
https://github.com/apache/spark/pull/16639#discussion_r103289664
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -400,8 +410,16 @@ private[spark] class Executor(