Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/13557#discussion_r90132945
--- Diff: python/pyspark/ml/clustering.py ---
@@ -349,6 +379,8 @@ class KMeans(JavaEstimator, HasFeaturesCol,
HasPredictionCol, HasMaxIter, HasTol
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/13557#discussion_r90132505
--- Diff: python/pyspark/ml/clustering.py ---
@@ -316,6 +316,36 @@ def computeCost(self, dataset):
"""
return
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16045
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16069
[WIP][SPARK-18638][BUILD] Upgrade sbt to 0.13.13
## What changes were proposed in this pull request?
This PR is to upgrade sbt from 0.13.11 to 0.13.13.
The release notes since the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16063
**[Test build #69364 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69364/consoleFull)**
for PR 16063 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15954
**[Test build #69352 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69352/consoleFull)**
for PR 15954 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15954
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69352/
Test FAILed.
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/15861
@jiangxb1987 I did a single-pass review - particularly given the
similarities in both the codepaths and the classnames, I will need to go over
it again to ensure we don't miss anything.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16048
**[Test build #69345 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69345/consoleFull)**
for PR 16048 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16048
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69345/
Test FAILed.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16066
LGTM
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/16063#discussion_r90140461
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -482,21 +482,6 @@ object TypeCoercion {
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16067#discussion_r90143591
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1697,6 +1697,12 @@ class DataFrameSuite extends QueryTest with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15255
**[Test build #69357 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69357/consoleFull)**
for PR 15255 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15255
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69357/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15255
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16069
**[Test build #69365 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69365/consoleFull)**
for PR 16069 at commit
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/16069
Upgrading to 0.13.13 is a great idea, especially since we might see
compilation speed improvements due to https://github.com/sbt/sbt/pull/2754.
Let's update the plugins separately.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15780
**[Test build #69342 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69342/consoleFull)**
for PR 15780 at commit
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r87708046
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r90119536
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r90124359
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r90129259
--- Diff:
core/src/test/scala/org/apache/spark/rdd/PairRDDFunctionsSuite.scala ---
@@ -561,7 +561,7 @@ class PairRDDFunctionsSuite extends SparkFunSuite
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r88077635
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/15861#discussion_r90121670
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -0,0 +1,408 @@
+/*
+ * Licensed to the Apache Software
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90132677
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryManager.scala
---
@@ -59,13 +62,20 @@ class StreamingQueryManager
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90133503
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala ---
@@ -64,23 +68,26 @@ trait StreamingQuery {
/**
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16065
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16065
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69347/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16048
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16048
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69351/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16048
**[Test build #69351 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69351/consoleFull)**
for PR 16048 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16044
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16066
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69349/
Test PASSed.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16052
@uncleGen Could you add the following test to StreamingContextSuite?
Otherwise, LGTM.
```Scala
test("SPARK-18560 Receiver data should be deserialized properly.") {
// Start a two
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90146740
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala ---
@@ -33,25 +35,27 @@ trait StreamingQuery {
* Returns the
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16069
Hi, @JoshRosen I am wondering whether I should also upgrade the sbt plugins in this
PR. What do you think of this upgrade? Your suggestion will be helpful. Thanks.
---
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/16044#discussion_r90083465
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala ---
@@ -575,6 +575,24 @@ class JoinSuite extends QueryTest with
SharedSQLContext {
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/15979
My only concern is that "non-flat" type is neither intuitive nor a
well-known term. In fact, this PR only prevents `Option[T <: Product]` from being a
top-level Dataset type. How about just call them
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16062
[SPARK-18629][SQL] Fix numPartition of JDBCSuite Testcase
## What changes were proposed in this pull request?
Fix numPartition of JDBCSuite Testcase.
## How was this patch tested?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16030
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15972
**[Test build #69338 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69338/consoleFull)**
for PR 15972 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/15976
@cloud-fan @dongjoon-hyun Thanks for the review!
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90084420
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQueryListener.scala
---
@@ -81,30 +83,30 @@ object StreamingQueryListener {
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90083413
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala ---
@@ -38,11 +40,11 @@ trait StreamingQuery {
def name:
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16017
LGTM
Merging with master and branch-2.1
Thanks!
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90089946
--- Diff:
core/src/test/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriterSuite.java
---
@@ -338,42 +354,60 @@ private void testMergingSpills(
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90083229
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -144,14 +144,14 @@ private[spark] class SerializerManager(
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90085605
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -337,42 +340,38 @@ void forceSorterToSpill() throws IOException {
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/16063
[SPARK-18622][SQL] Remove TypeCoercion rules for Average and Sum aggregate
functions
## What changes were proposed in this pull request?
Spark currently has special analyzer rules for the
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90095923
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -113,6 +113,10 @@ class HadoopTableReader(
val
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/16044#discussion_r90100457
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -932,7 +932,7 @@ object PushPredicateThroughJoin extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90101517
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -122,10 +126,20 @@ class HadoopTableReader(
val
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15982
**[Test build #69337 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69337/consoleFull)**
for PR 15982 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15954
**[Test build #69346 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69346/consoleFull)**
for PR 15954 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90108129
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamTest.scala ---
@@ -669,55 +658,48 @@ trait StreamTest extends QueryTest with
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16063#discussion_r90110048
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -482,21 +482,6 @@ object TypeCoercion {
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r90103384
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -189,11 +189,28 @@ abstract class ExternalCatalog {
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r90092477
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -922,6 +923,29 @@ private[spark] class HiveExternalCatalog(conf:
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r90097773
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/InMemoryCatalog.scala
---
@@ -482,6 +482,19 @@ class InMemoryCatalog(
}
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16066
LGTM other than that tiny comment.
---
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r90114390
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveCommandSuite.scala
---
@@ -408,14 +411,18 @@ class HiveCommandSuite extends
Github user ericl commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r90092813
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -922,6 +923,29 @@ private[spark] class HiveExternalCatalog(conf:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16062
**[Test build #69340 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69340/consoleFull)**
for PR 16062 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16044
**[Test build #69341 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69341/consoleFull)**
for PR 16044 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15972
Merged build finished. Test PASSed.
---
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/15910
@yanboliang @felixcheung I am back from vacation and made changes according
to your comments.
Thanks!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90098793
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -122,10 +126,20 @@ class HadoopTableReader(
val attrsWithIndex =
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16065
**[Test build #69347 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69347/consoleFull)**
for PR 16065 at commit
Github user squito commented on the issue:
https://github.com/apache/spark/pull/15505
I agree with Kay that putting in a smaller change first is better, assuming
it still has the performance gains. That doesn't preclude any further
optimizations that are bigger changes.
I'm
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/16066
[SPARK-18632][SQL] AggregateFunction should not implement
ImplicitCastInputTypes
## What changes were proposed in this pull request?
`AggregateFunction` currently implements
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16066
**[Test build #69349 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69349/consoleFull)**
for PR 16066 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16066#discussion_r90114430
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Last.scala
---
@@ -56,6 +52,20 @@ case class Last(child: Expression,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16048
**[Test build #69351 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69351/consoleFull)**
for PR 16048 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16067
**[Test build #69350 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69350/consoleFull)**
for PR 16067 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/15344
Ah yes, testing the documentation can be a bit difficult. You can take a
look at the guide under docs/README.md to see how to build the documentation
locally and verify it looks like you expect it
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90083215
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -144,14 +144,14 @@ private[spark] class SerializerManager(
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90086862
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala
---
@@ -17,9 +17,13 @@
package
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90082215
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -337,42 +340,38 @@ void forceSorterToSpill() throws IOException {
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90076997
--- Diff:
core/src/main/scala/org/apache/spark/serializer/SerializerManager.scala ---
@@ -36,7 +36,7 @@ import org.apache.spark.util.io.{ChunkedByteBuffer,
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15982#discussion_r90086837
--- Diff:
core/src/test/scala/org/apache/spark/util/collection/ExternalAppendOnlyMapSuite.scala
---
@@ -17,9 +17,13 @@
package
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14582
Can one of the admins verify this patch?
---
Github user erikerlandson commented on the issue:
https://github.com/apache/spark/pull/16061
@rxin, when you say "move all resource managers" does that mean "move
scheduler back-ends for mesos, yarn, etc, into some `resource-managers`
sub-project"?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16064
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16064
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69344/
Test PASSed.
---
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/15924#discussion_r90090753
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/HDFSMetadataLog.scala
---
@@ -129,48 +129,18 @@ class HDFSMetadataLog[T <:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16062
Merged build finished. Test PASSed.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15877
Hey guys - after looking at the pr more, I'm afraid we have gone overboard
with testing here. Most of the test cases written are just repeating each other
and doing exactly the same thing. For testing
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16030
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69336/
Test PASSed.
---
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/16009#discussion_r90088312
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -49,15 +49,13 @@ private[feature] trait ChiSqSelectorParams extends
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/16044
LGTM - pending jenkins
---
Github user erikerlandson commented on a diff in the pull request:
https://github.com/apache/spark/pull/16061#discussion_r90092345
--- Diff:
kubernetes/src/main/scala/org/apache/spark/scheduler/cluster/kubernetes/KubernetesClusterSchedulerBackend.scala
---
@@ -0,0 +1,222 @@
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/16045
LGTM
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15975#discussion_r90098792
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -76,9 +76,6 @@ class JDBCOptions(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16064
**[Test build #69344 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69344/consoleFull)**
for PR 16064 at commit
GitHub user markhamstra opened a pull request:
https://github.com/apache/spark/pull/16065
[SPARK-17064][SQL] Changed ExchangeCoordinator re-partitioning to avoid
additional data …
## What changes were proposed in this pull request?
Re-partitioning logic in
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16065
Wrong JIRA ticket?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90112125
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -113,6 +113,10 @@ class HadoopTableReader(
val tablePath
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15982
**[Test build #69348 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69348/consoleFull)**
for PR 15982 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/15954#discussion_r90112136
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/StreamingQuery.scala ---
@@ -38,11 +40,11 @@ trait StreamingQuery {
def name: String