Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167715498
**[Test build #48391 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48391/consoleFull)**
for PR 10499 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167715541
**[Test build #48393 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48393/consoleFull)**
for PR 10501 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167716360
**[Test build #48392 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48392/consoleFull)**
for PR 10501 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167716483
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167717428
**[Test build #48394 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48394/consoleFull)**
for PR 10494 at commit
Github user saurfang commented on the pull request:
https://github.com/apache/spark/pull/10481#issuecomment-167717447
Thanks for the review @sun-rui. Hope that's better. Looks like `lintr`, as
awesome as it is, let that slip through, for which I have filed a separate
issue here:
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10501
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/10502
[SPARK-12355][SQL] Implement unhandledFilter interface for Parquet
https://issues.apache.org/jira/browse/SPARK-12355
This is similar to https://github.com/apache/spark/pull/10427.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167719205
**[Test build #48396 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48396/consoleFull)**
for PR 10502 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167720172
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167720147
**[Test build #48396 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48396/consoleFull)**
for PR 10502 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167720165
Merged build finished. Test FAILed.
---
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/10500#issuecomment-167705370
LGTM, pending test
---
Github user QiangCai commented on the pull request:
https://github.com/apache/spark/pull/10487#issuecomment-167709123
@sarutak I will try to add test cases.
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167714800
Yea that's an interesting one -- although in some files we might actually
want that and I don't know whether we can still disable it (maybe we can?).
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167714726
Can we add a Scalastyle rule to match the shotgun-approach `scalastyle:off`
directive? :smiley:
---
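JoshRosen's suggestion above could be sketched roughly as below. This is a hypothetical illustration of the check's logic in plain Scala, not an actual Scalastyle rule: it flags a blanket `scalastyle:off` directive that names no specific rule.

```scala
// Hypothetical sketch (not a real Scalastyle rule): flag a blanket
// "// scalastyle:off" that disables all checks, while allowing targeted
// directives such as "// scalastyle:off println".
object BlanketScalastyleOff {
  // "scalastyle:off" followed only by whitespace up to end of line.
  private val blanketOff = """//\s*scalastyle:off\s*$""".r

  def isBlanketOff(line: String): Boolean =
    blanketOff.findFirstIn(line).isDefined
}
```

A real rule could presumably be expressed with Scalastyle's regex-based checker pointed at the same pattern.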
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10500#issuecomment-167715207
Merged build finished. Test PASSed.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10501#issuecomment-167718080
LGTM
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10481#issuecomment-167718452
**[Test build #48395 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48395/consoleFull)**
for PR 10481 at commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/9350#issuecomment-167721041
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/9350#issuecomment-167721192
oh, it already passed the tests.
---
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167722257
The test failed due to wrong results from Parquet.
The test result was as below:
```
== Physical Plan ==
Scan ParquetRelation[_1#4] InputPaths:
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167722251
test this please
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167723226
**[Test build #48398 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48398/consoleFull)**
for PR 10438 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167725034
**[Test build #48394 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48394/consoleFull)**
for PR 10494 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167725386
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10494#issuecomment-167725381
Merged build finished. Test PASSed.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48523071
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -91,6 +91,11 @@ abstract class LogicalPlan
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10435#issuecomment-167727788
**[Test build #48399 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48399/consoleFull)**
for PR 10435 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167677723
**[Test build #48379 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48379/consoleFull)**
for PR 10451 at commit
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167678849
A few updates have been done, but I am not sure whether the changes are appropriate.
1. `limit` is already used in
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48512926
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -91,6 +91,11 @@ abstract class LogicalPlan
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167679998
**[Test build #48380 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48380/consoleFull)**
for PR 10451 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10471#issuecomment-167680687
**[Test build #48382 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48382/consoleFull)**
for PR 10471 at commit
Github user tedyu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10368#discussion_r48513944
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskResultGetterSuite.scala ---
@@ -81,6 +81,16 @@ class TaskResultGetterSuite extends SparkFunSuite
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10489#discussion_r48513931
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -279,6 +330,9 @@ object Factorial {
)
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10488#discussion_r48514250
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -57,9 +57,10 @@ case class Md5(child: Expression)
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10418#discussion_r48514281
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
---
@@ -44,6 +48,11 @@ case class Size(child:
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48515129
--- Diff: R/pkg/R/generics.R ---
@@ -537,6 +537,12 @@ setGeneric("write.df", function(df, path, ...) {
standardGeneric("write.df") })
#' @export
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48515179
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167685120
The relevant UI tests all passed and the only failures were due to a known
flaky test / build executor, so I'm going to merge this into master. Thanks!
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48515677
--- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
@@ -355,6 +355,13 @@ private[spark] object SerDe {
writeInt(dos,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167688409
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167688404
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167688501
test this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10451#discussion_r48513000
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -91,6 +91,11 @@ abstract class LogicalPlan
GitHub user xguo27 opened a pull request:
https://github.com/apache/spark/pull/10500
[SPARK-12512][SQL] support column name with dot in withColumn()
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/xguo27/spark SPARK-12512
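The dotted-column problem this PR targets comes from a name like `a.b` being parsed as field `b` of struct `a` rather than a single column. A hypothetical helper showing the quoting idea (the name and behavior are illustrative, not taken from the PR):

```scala
// Hypothetical sketch of the fix's idea: backquote a literal column name so
// that "a.b" is treated as one column instead of struct field access.
def quoteIfNeeded(name: String): String =
  if (name.contains(".") && !name.startsWith("`")) s"`$name`" else name
```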
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10471#discussion_r48513554
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -102,6 +102,7 @@ case class SetCommand(kv: Option[(String,
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10500#issuecomment-167679905
Can one of the admins verify this patch?
---
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10489#discussion_r48513725
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -218,10 +257,22 @@ case class Conv(numExpr:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10499#issuecomment-167681335
**[Test build #48381 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48381/consoleFull)**
for PR 10499 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10418#discussion_r48514314
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/FunctionRegistry.scala
---
@@ -335,7 +335,7 @@ object FunctionRegistry {
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/10464#issuecomment-167683269
I guess this might be a problem of TCP delaying the release of the port after
a disconnect; normally the kernel will retain the port for a while after a
disconnect
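The TCP behavior jerryshao describes (the kernel holding a recently closed port for a while, e.g. in TIME_WAIT) is commonly worked around as below. A minimal JVM sketch, not code from any of these PRs:

```scala
import java.net.{InetSocketAddress, ServerSocket}

// Illustrative workaround for a bind failure on a recently closed port:
// SO_REUSEADDR lets the socket rebind a port still held in TIME_WAIT.
val server = new ServerSocket()
server.setReuseAddress(true)          // must be set before bind()
server.bind(new InetSocketAddress(0)) // port 0: let the OS pick a free port
val port = server.getLocalPort
server.close()
```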
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48515246
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/10438#discussion_r48515392
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1709,6 +1709,31 @@ private[spark] object Utils extends Logging {
}
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48515382
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2272,3 +2260,40 @@ setMethod("with",
newEnv <- assignNewEnv(data)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167684909
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167684925
test this please
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167684908
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167684883
**[Test build #48378 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48378/consoleFull)**
for PR 10441 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167685127
retest this please
---
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167685179
okay :)
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10441
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10418#issuecomment-167688202
**[Test build #48385 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48385/consoleFull)**
for PR 10418 at commit
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167688400
As pointed out above, R code actually does not call `createSQLContext`
multiple times:
https://github.com/apache/spark/blob/master/R/pkg/R/sparkR.R#L243
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9185#issuecomment-167688403
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10441#issuecomment-167688408
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167688592
**[Test build #48379 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48379/consoleFull)**
for PR 10451 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167688631
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167688630
Merged build finished. Test PASSed.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10480#discussion_r48516227
--- Diff: R/pkg/R/SQLContext.R ---
@@ -556,3 +556,61 @@ createExternalTable <- function(sqlContext, tableName,
path = NULL, source = NUL
sdf <-
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/10435#discussion_r48516214
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -176,3 +178,221 @@ case class Crc32(child: Expression)
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/10435#discussion_r48516332
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/misc.scala
---
@@ -176,3 +178,221 @@ case class Crc32(child: Expression)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167689297
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167689296
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10451#issuecomment-167689240
**[Test build #48380 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48380/consoleFull)**
for PR 10451 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8760#issuecomment-167728825
**[Test build #48401 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48401/consoleFull)**
for PR 8760 at commit
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167729051
I see. `UnsafeRowParquetRecordReader` for Parquet does not support filtering
record by record, only block by block. So even the `=` operator produces the
same results
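The block-level filtering behavior described above can be modeled roughly as follows. This is an assumed simplification for illustration, not the actual `UnsafeRowParquetRecordReader` code: a row group is skipped only when its min/max statistics prove no row can match, and every row in a surviving group is returned, so an equality filter alone does not change the rows produced.

```scala
// Assumed model of block-level (row-group) filtering, for illustration only.
case class RowGroup(min: Int, max: Int, rows: Seq[Int])

def readWithBlockFilter(groups: Seq[RowGroup], target: Int): Seq[Int] =
  groups
    .filter(g => target >= g.min && target <= g.max) // skip whole groups only
    .flatMap(_.rows)                                 // no per-record filter
```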
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10481#issuecomment-167730405
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167733162
retest this please
---
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/10154#issuecomment-167733524
I think green for completed makes sense, and having it always stand out
doesn't seem like a particularly bad thing.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167733908
**[Test build #48403 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48403/consoleFull)**
for PR 10502 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167736506
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167736500
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167736372
**[Test build #48403 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48403/consoleFull)**
for PR 10502 at commit
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10468#discussion_r48524429
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -186,13 +187,19 @@ private[sql] object JDBCRDD extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10468#discussion_r48524447
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -186,8 +187,26 @@ class JDBCSuite extends SparkFunSuite
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167737190
**[Test build #48402 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48402/consoleFull)**
for PR 10502 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167737265
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10435#issuecomment-167737270
**[Test build #48404 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48404/consoleFull)**
for PR 10435 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167737261
Merged build finished. Test FAILed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/10481
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/10470#issuecomment-167738141
@maropu can you review this change?
---
Github user microhello commented on the pull request:
https://github.com/apache/spark/pull/10442#issuecomment-167738304
@andrewor14 I have created an issue:
https://issues.apache.org/jira/browse/SPARK-12548
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167739452
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167739406
**[Test build #48398 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/48398/consoleFull)**
for PR 10438 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10438#issuecomment-167739453
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/10468#discussion_r48525108
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -186,8 +187,26 @@ class JDBCSuite extends SparkFunSuite
GitHub user wilson8 opened a pull request:
https://github.com/apache/spark/pull/10503
[SPARK-12506][SQL] push down WHERE clause arithmetic operator to JDBC …
…layer
For an arithmetic operator in a WHERE clause, such as
select * from table where c1 + c2 > 10
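The kind of pushdown this PR describes, rendering an arithmetic predicate back into SQL text for the JDBC layer, can be sketched as below. All names and the expression encoding here are illustrative, not from the PR:

```scala
// Hypothetical sketch: compile a simple arithmetic predicate into a SQL
// fragment that could be appended to a JDBC query's WHERE clause.
sealed trait Expr
case class Col(name: String) extends Expr
case class Lit(value: Int) extends Expr
case class Add(l: Expr, r: Expr) extends Expr
case class Gt(l: Expr, r: Expr) extends Expr

def toSql(e: Expr): String = e match {
  case Col(n)    => n
  case Lit(v)    => v.toString
  case Add(l, r) => s"(${toSql(l)} + ${toSql(r)})"
  case Gt(l, r)  => s"${toSql(l)} > ${toSql(r)}"
}
```

For the example in the PR description, `Gt(Add(Col("c1"), Col("c2")), Lit(10))` would render as `(c1 + c2) > 10`.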
Github user mwws commented on the pull request:
https://github.com/apache/spark/pull/8760#issuecomment-167728914
@mridulm I have changed the interface a little bit and created
`AdvancedSingleTaskStrategy` to support the use case you described above. With
this new strategy, we enable