Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140713
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction(
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88141072
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15813
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15867
Thank you!!
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15867
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15813
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68674/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15813
**[Test build #68674 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68674/consoleFull)**
for PR 15813 at commit
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140381
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction(
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/15867
LGTM. Merging to master and 2.1. Thanks!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15892
LGTM
---
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13065#discussion_r88139065
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/MiscBenchmark.scala
---
@@ -124,12 +124,124 @@ class MiscBenchmark extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15874
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68678/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15874
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15874
**[Test build #68678 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68678/consoleFull)**
for PR 15874 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15652#discussion_r88137680
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -307,15 +307,26 @@ private[spark] object JettyUtils extends Logging {
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15885
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15893
**[Test build #68679 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68679/consoleFull)**
for PR 15893 at commit
Github user sethah commented on the issue:
https://github.com/apache/spark/pull/15893
cc @MLnick @dbtsai
---
GitHub user sethah opened a pull request:
https://github.com/apache/spark/pull/15893
[SPARK-18456][ML][FOLLOWUP] Use matrix abstraction for coefficients in
LogisticRegression training
## What changes were proposed in this pull request?
This is a follow up to some of the
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/15885
LGTM, merging to master and 2.1
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/15820#discussion_r88134526
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaConsumer.scala
---
@@ -83,6 +86,139 @@ private[kafka010] case
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88134308
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -691,11 +691,11 @@ private[spark] class ApplicationMaster(
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88133474
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/BlacklistTrackerSuite.scala ---
@@ -17,10 +17,299 @@
package org.apache.spark.scheduler
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15849
LGTM
---
Github user gabrielhuang commented on a diff in the pull request:
https://github.com/apache/spark/pull/15811#discussion_r88130822
--- Diff: python/pyspark/rdd.py ---
@@ -181,6 +181,7 @@ def __init__(self, jrdd, ctx,
jrdd_deserializer=AutoBatchedSerializer(PickleSeri
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15883
try my comment suggestion - it should fix the error
re: `paste0` it looks like there are more `paste` than `paste0`? perhaps we
shouldn't change it then? what do you think?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13065
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68671/
Test PASSed.
---
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88129780
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/feature/MinHashLSHSuite.scala ---
@@ -24,7 +24,7 @@ import org.apache.spark.ml.util.DefaultReadWriteTest
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13065
Merged build finished. Test PASSed.
---
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88129663
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala ---
@@ -179,16 +211,13 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88129409
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala ---
@@ -106,22 +123,24 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
*
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13065
**[Test build #68671 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68671/consoleFull)**
for PR 13065 at commit
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128756
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -144,12 +152,12 @@ class MinHash(override val uid: String) extends
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128823
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala ---
@@ -66,10 +66,10 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
self:
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128732
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -125,11 +125,11 @@ class MinHash(override val uid: String) extends
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128687
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -74,9 +72,12 @@ class MinHashModel private[ml] (
}
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128341
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -74,9 +72,12 @@ class MinHashModel private[ml] (
}
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128287
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -46,21 +42,23 @@ import org.apache.spark.sql.types.StructType
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128199
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -31,13 +31,9 @@ import org.apache.spark.sql.types.StructType
/**
Github user Yunni commented on a diff in the pull request:
https://github.com/apache/spark/pull/15874#discussion_r88128252
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/MinHashLSH.scala
---
@@ -46,21 +42,23 @@ import org.apache.spark.sql.types.StructType
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15874
**[Test build #68678 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68678/consoleFull)**
for PR 15874 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15888#discussion_r88127026
--- Diff: R/pkg/R/sparkR.R ---
@@ -558,16 +558,18 @@ sparkCheckInstall <- function(sparkHome, master) {
message(msg)
NULL
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15891
LGTM
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15883#discussion_r88126472
--- Diff: R/pkg/R/mllib.R ---
@@ -870,7 +872,7 @@ setMethod("summary", signature(object =
"LogisticRegressionModel"),
#' @param ... additional
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15883#discussion_r88126345
--- Diff: R/pkg/R/mllib.R ---
@@ -896,9 +898,10 @@ setMethod("summary", signature(object =
"LogisticRegressionModel"),
#' summary(savedModel)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15892
**[Test build #68677 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68677/consoleFull)**
for PR 15892 at commit
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15892
cc @gatorsmile
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/15892
[SPARK-18300][SQL] Do not apply foldable propagation with expand as a child
[BRANCH-2.0]
## What changes were proposed in this pull request?
The `FoldablePropagation` optimizer rule, pulls
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15885
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/68670/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15885
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15885
**[Test build #68670 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68670/consoleFull)**
for PR 15885 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15885
**[Test build #68676 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68676/consoleFull)**
for PR 15885 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88121951
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -84,7 +85,7 @@ private[spark] class TaskSetManager(
var
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15891
**[Test build #68675 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68675/consoleFull)**
for PR 15891 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/15891
LGTM
---
Github user yuj commented on the issue:
https://github.com/apache/spark/pull/15381
Thanks for fixing this issue. It works in 2.0.2 now.
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15891
cc @gatorsmile @zsxwing
---
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/15891
[SPARK-18300][SQL] Fix scala 2.10 build for FoldablePropagation
## What changes were proposed in this pull request?
Commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15717
Will review this PR tonight. Thanks!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15717
I prefer doing `[CASCADE|RESTRICT]` in a separate PR. However, we still
need to add a test case to verify whether the behavior follows the default
`RESTRICT`.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15813
**[Test build #68674 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68674/consoleFull)**
for PR 15813 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88116003
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -17,10 +17,254 @@
package org.apache.spark.scheduler
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15811
**[Test build #68673 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68673/consoleFull)**
for PR 15811 at commit
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
Looks good, just one question.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15858
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15811#discussion_r88115031
--- Diff: python/pyspark/rdd.py ---
@@ -181,6 +181,7 @@ def __init__(self, jrdd, ctx,
jrdd_deserializer=AutoBatchedSerializer(PickleSeri
Github user andrewor14 commented on the issue:
https://github.com/apache/spark/pull/15811
ok to test
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/15858
Merging to master. But in general, please try to avoid small changes like
this that don't really change any behavior.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/15801
---
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/15833#discussion_r88113719
--- Diff: core/src/main/scala/org/apache/spark/deploy/Client.scala ---
@@ -221,7 +221,9 @@ object Client {
val conf = new SparkConf()
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/15855
@vijoshi could you close this? The bot doesn't do it for non-master PRs.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/15855
Merging to branch-2.0.
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/15801
and 2.0 as well. Seems like a small enough fix.
---
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/11122
@zsxwing and @zzcclp thank you so much. This is much appreciated. :)
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/15801
LGTM. Merging this to master and 2.1
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/11122
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/11122
Merging to master and 2.1. @aramesh117 apologize for the delay again.
Thanks a lot for your patience.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15868
**[Test build #68672 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68672/consoleFull)**
for PR 15868 at commit
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88108224
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnection
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88108090
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnection
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88102102
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -17,10 +17,254 @@
package org.apache.spark.scheduler
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15857
Thank you!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15868#discussion_r88101177
--- Diff: docs/sql-programming-guide.md ---
@@ -1087,6 +1087,13 @@ the following case-sensitive options:
+ maxConnection
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15857
I am working on a fix for 2.10.
---
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/15869
- spark.driver.memory and spark.executor.memory are good to remove from the
yarn side as they are duplicates, since they were added for others.
spark.driver.cores and spark.executor.cores could also be
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13065
**[Test build #68671 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68671/consoleFull)**
for PR 13065 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13065#discussion_r88098625
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -144,29 +162,52 @@ case class Stack(children:
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/13065#discussion_r88098485
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/GenerateExec.scala ---
@@ -99,5 +102,182 @@ case class GenerateExec(
}
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15885
**[Test build #68670 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/68670/consoleFull)**
for PR 15885 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88093001
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -17,10 +17,254 @@
package org.apache.spark.scheduler
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15857
I am going to revert this.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15740
Sounds like a great solution!
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14079#discussion_r88088492
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -17,10 +17,254 @@
package org.apache.spark.scheduler
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15857
Seems this breaks the scala 2.10 build?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15857
```
[error] [warn]
/home/jenkins/workspace/spark-master-compile-sbt-scala-2.10/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala:439:
Cannot check match for
```
Github user aditya1702 commented on the issue:
https://github.com/apache/spark/pull/15871
@holdenk Sure, I will add the tests. As per the issue that was opened,
setParams was taking in a string dict while the fit() method was not, so I
thought it would be an improvement to do that
Github user vijoshi commented on the issue:
https://github.com/apache/spark/pull/15855
@vanzin - ping
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/15740
I was hoping 2.1, but it looks like 2.1 was cut before this, so now it's
looking to be 2.2. We've backported this into 2.0.2 for DC/OS Spark, though,
and we'll do so for 2.1 as well.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15880#discussion_r88084605
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -448,7 +448,7 @@ class JsonSuite extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15884
Thank you, @cloud-fan .
It becomes much better. Could you review again please?
---