Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/10475#issuecomment-167655015
CC: @harishreedharan
@SaintBacchus could you add test cases for this change?
---
If your project is set up for it, you can reply to this email and have your
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167655997
**[Test build #48373 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48373/consoleFull)**
for PR 10499 at commit
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/10440#issuecomment-167656781
ML changes look good to me. Thanks!
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10464#issuecomment-167661660
Looks like a race condition between `restart` and `finally { ... socket.stop() ... }`.
`restart` will start a new thread and call `receiver.onStart`. So
`receiver.onStart` may
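The interleaving described above can be sketched with toy names (an editor's illustration, not Spark's actual receiver code): serialize start/stop on one lock and tag each start with a generation counter, so a stale `stop` arriving late from the old thread's `finally` block cannot undo a restart.

```scala
// Editor's sketch, hypothetical names: the hazard is `restart` spawning a
// thread that calls `onStart` while the previous thread's
// `finally { socket.stop() }` is still running, so the fresh start can be
// undone by the stale stop. A lock plus a generation counter fixes it.
class ToyReceiver {
  private val lock = new Object
  private var generation = 0   // bumped on every (re)start
  private var running = false

  def onStart(): Int = lock.synchronized {
    generation += 1
    running = true
    generation                 // caller remembers which start it owns
  }

  // A stop from an older generation arrived late; it must not undo a restart.
  def onStop(gen: Int): Unit = lock.synchronized {
    if (gen == generation) running = false
  }

  def isRunning: Boolean = lock.synchronized(running)
}
```

The stale `onStop` becomes a no-op because its generation no longer matches.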
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/10498#issuecomment-167664157
BTW in github you can use square brackets to create a checklist, e.g.
```
- [ ] item a
- [ ] item b
```
becomes
- [ ] item a
- [ ]
Github user falaki commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167664825
ping @marmbrus
Github user sarutak commented on the pull request:
https://github.com/apache/spark/pull/10253#issuecomment-167665206
LGTM. Merging this into `master` and `branch-1.6`.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167668475
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167668477
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10479#issuecomment-167668461
**[Test build #48376 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48376/consoleFull)**
for PR 10479 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10459#issuecomment-167654637
**[Test build #48372 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48372/consoleFull)**
for PR 10459 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/10459#discussion_r48505487
--- Diff: core/src/test/scala/org/apache/spark/util/AkkaUtilsSuite.scala ---
@@ -61,9 +55,14 @@ class AkkaUtilsSuite extends SparkFunSuite with
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48507734
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -109,6 +111,9 @@ class KryoSerializer(conf: SparkConf)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167664384
**[Test build #48374 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48374/consoleFull)**
for PR 10438 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10253
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167665605
This seems fine to me as a first step. Eventually we will probably want to
make the RBackend multi-session aware.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10479#issuecomment-167665578
Jenkins, retest this please.
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10440#issuecomment-167668145
@andrewor14 could you take a look at this pr? Thanks!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10440#discussion_r48510554
--- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/Node.scala ---
@@ -386,9 +386,9 @@ private[tree] object LearningNode {
var levelsToGo =
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/10440#issuecomment-167671951
Looks good.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10440#discussion_r48510515
--- Diff: launcher/src/main/java/org/apache/spark/launcher/Main.java ---
@@ -151,7 +151,7 @@ private static String
prepareWindowsCommand(List<String> cmd,
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167672868
@zsxwing, I've pushed a new commit which aims to preserve the old behavior
when increasing the number of items displayed per page while pageNumber > 1;
see
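The page-size behavior discussed above can be sketched as a small clamp (a hypothetical helper, not the PR's actual code): when the per-page size grows while the viewer sits on pageNumber > 1, the page number is clamped to the new last page instead of rendering an empty one.

```scala
// Editor's sketch with hypothetical names: clamp the requested page number
// into the valid range implied by the (possibly larger) page size.
def clampPage(totalItems: Int, pageSize: Int, pageNumber: Int): Int = {
  // Ceiling division; an empty table still has one (empty) page.
  val totalPages = math.max(1, (totalItems + pageSize - 1) / pageSize)
  math.min(math.max(1, pageNumber), totalPages)
}
```

For example, 100 items viewed on page 3 at 10 per page stays on page 3, but growing the page size to 50 clamps the view to page 2, the new last page.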
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167689398
**[Test build #48386 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48386/consoleFull)**
for PR 9185 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/10444#discussion_r48516709
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -47,6 +48,34 @@ trait Predicate extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10471#issuecomment-167689959
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10471#issuecomment-167689958
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10471#issuecomment-167689909
**[Test build #48382 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48382/consoleFull)**
for PR 10471 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10471
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/10471#issuecomment-167690183
Thanks - I've merged it.
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10494#discussion_r48517137
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LocalRelation.scala
---
@@ -62,6 +62,10 @@ case class
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10491#discussion_r48517225
--- Diff: docs/configuration.md ---
@@ -120,7 +120,8 @@ of the most common options to set are:
spark.driver.cores
1
-Number
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10491#issuecomment-167692412
**[Test build #48387 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48387/consoleFull)**
for PR 10491 at commit
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48517423
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2272,3 +2260,40 @@ setMethod("with",
newEnv <- assignNewEnv(data)
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48517433
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48517532
--- Diff: R/pkg/R/generics.R ---
@@ -537,6 +537,12 @@ setGeneric("write.df", function(df, path, ...) {
standardGeneric("write.df") })
#' @export
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48517634
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10491#issuecomment-167697095
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10491#issuecomment-167696817
**[Test build #48387 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48387/consoleFull)**
for PR 10491 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10491#issuecomment-167697099
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10443#issuecomment-167697369
**[Test build #48389 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48389/consoleFull)**
for PR 10443 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167697956
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167697671
**[Test build #48381 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48381/consoleFull)**
for PR 10499 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167697959
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10435#issuecomment-167698232
**[Test build #48388 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48388/consoleFull)**
for PR 10435 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48517911
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -153,6 +153,15 @@ object SetOperationPushDown
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48518043
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -153,6 +153,15 @@ object SetOperationPushDown
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48518201
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -153,6 +153,15 @@ object SetOperationPushDown extends
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48518190
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -153,6 +153,15 @@ object SetOperationPushDown extends
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48518228
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -91,6 +91,11 @@ abstract class LogicalPlan
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/10442#issuecomment-167700718
@microhello please file a JIRA and add it to the title of this PR. See how
other patches are opened.
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48518375
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -355,6 +355,13 @@ private[spark] object SerDe {
writeInt(dos,
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167700694
> add a comment and explain the current solution. In the future, if we add
such an operator, we can change the current way and fix the issue? (Already
added a comment
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48518442
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -91,6 +91,11 @@ abstract class LogicalPlan
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48518467
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -355,6 +355,13 @@ private[spark] object SerDe {
writeInt(dos,
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/10500#issuecomment-167700900
ok to test
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10499#discussion_r48518567
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -119,7 +119,7 @@ final class DataFrameWriter private[sql](df:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10418#issuecomment-167701068
**[Test build #48385 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48385/consoleFull)**
for PR 10418 at commit
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10481#discussion_r48518527
--- Diff: R/pkg/R/column.R ---
@@ -225,7 +225,7 @@ setMethod("%in%",
setMethod("otherwise",
signature(x = "Column", value = "ANY"),
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10418#issuecomment-167701117
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10418#issuecomment-167701116
Merged build finished. Test PASSed.
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/10399#issuecomment-167702564
@rxin Could you check this? Thanks.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10500#issuecomment-167702840
**[Test build #48390 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48390/consoleFull)**
for PR 10500 at commit
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/10501
[SPARK-12547][SQL] Tighten scala style checker enforcement for UDF
registration
We use scalastyle:off to turn off style checks in certain places where it
is not possible to follow the style guide.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167703422
**[Test build #48391 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48391/consoleFull)**
for PR 10499 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10416#issuecomment-167509793
Roger that, @Schadix would you mind closing this PR?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-167516759
**[Test build #48362 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48362/consoleFull)**
for PR 10402 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10418#discussion_r48469649
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -44,6 +48,11 @@ case class Size(child:
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48472149
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48472173
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10492#issuecomment-167543175
@JoshRosen it's not the scope that's an issue but the version. Not
specifying it lets the SDK version required by the Kinesis client come in at
whatever it needs to be.
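The dependency pattern described above can be sketched in pom.xml terms (an editor's illustration, not the PR's actual diff; the version shown is illustrative): pin only the Kinesis client and declare no explicit aws-java-sdk entry, so Maven pulls the SDK in transitively at whatever version the client requires.

```xml
<!-- Editor's sketch: pin only the Kinesis client. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>amazon-kinesis-client</artifactId>
  <version>1.4.0</version> <!-- illustrative version -->
</dependency>
<!-- No explicit aws-java-sdk <dependency> entry: the SDK arrives as a
     transitive dependency at the version the Kinesis client declares. -->
```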
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48475897
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression)
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/10480#issuecomment-167517583
For testing JDBC, we could add a helper function on the Scala side, which reuses
code in JDBCSuite to start an in-memory JDBC server?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-167520443
**[Test build #48363 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48363/consoleFull)**
for PR 10402 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10479#issuecomment-167539129
LGTM
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8785#issuecomment-167539588
**[Test build #2258 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2258/consoleFull)**
for PR 8785 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8785#issuecomment-167539724
**[Test build #2258 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2258/consoleFull)**
for PR 8785 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167539570
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10487#issuecomment-167540446
LGTM
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10464#issuecomment-167541966
Does this really solve the problem? The current code appears to clean up
the socket on stopping already, so I wonder why this would fix it. Did you test
it?
It
Github user XD-DENG commented on the pull request:
https://github.com/apache/spark/pull/10434#issuecomment-167541958
Thanks for clarifying. Will have a look if I can proceed as you suggested
with JIRA.
Thanks
Github user microhello commented on a diff in the pull request:
https://github.com/apache/spark/pull/10442#discussion_r48467911
--- Diff: pom.xml ---
@@ -99,14 +99,14 @@
sql/hive
unsafe
assembly
-external/twitter
-external/flume
-
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10494#discussion_r48468287
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LocalRelation.scala
---
@@ -62,6 +62,10 @@ case class
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-167539426
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-167539345
**[Test build #48362 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48362/consoleFull)**
for PR 10402 at commit
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48472393
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression) extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10402#issuecomment-167539427
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10483#issuecomment-167539239
@nssalian you need to fix the line that's too long now
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/10434#issuecomment-167539553
@XD-DENG can you address my comments or close this PR?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167539491
**[Test build #48361 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48361/consoleFull)**
for PR 10494 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8785#discussion_r48472434
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/UnserializableDriverHelper.scala
---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167539569
Merged build finished. Test PASSed.
Github user XD-DENG commented on the pull request:
https://github.com/apache/spark/pull/10434#issuecomment-167540341
@srowen Hi Owen, sure. Thanks a lot for your clarification.
My understanding was that you found this modification unnecessary, so I
didn't proceed further.
Github user XD-DENG closed the pull request at:
https://github.com/apache/spark/pull/10434
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10418#discussion_r48469610
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -335,7 +335,7 @@ object FunctionRegistry {
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48470392
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10491#discussion_r48471720
--- Diff: docs/configuration.md ---
@@ -120,7 +120,8 @@ of the most common options to set are:
spark.driver.cores
1
-Number of
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/10481#issuecomment-167540565
The fix is good, but one style nit:
if (...) { ... } else { ... }
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10381#issuecomment-167551863
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10381#issuecomment-167551695
**[Test build #48364 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48364/consoleFull)**
for PR 10381 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10381#issuecomment-167551862
Merged build finished. Test PASSed.