Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17732
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
GitHub user tangchun opened a pull request:
https://github.com/apache/spark/pull/17732
Branch 2.0
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how this patch
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17730
cc @cloud-fan @sameeragarwal
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17480
**[Test build #76076 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76076/testReport)**
for PR 17480 at commit
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r112825043
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,6 @@ private[spark] class ExecutorAllocationManager(
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/17556
Hi, I have checked R GBM's code and found that:
R's gbm uses the mean value $(x + y) / 2$, not the weighted mean $(c_x * x + c_y *
y) / (c_x + c_y)$ described in [JIRA
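The two estimators being compared can be sketched in a few lines of Python (illustrative only; `c_x` and `c_y` stand in for hypothetical per-child observation counts, mirroring the formulas above rather than R gbm's actual code):

```python
# Illustrative sketch of the two formulas above; c_x and c_y play the
# role of the (hypothetical) observation counts in each child node.
def simple_mean(x, y):
    return (x + y) / 2.0

def weighted_mean(x, y, c_x, c_y):
    return (c_x * x + c_y * y) / (c_x + c_y)

# With equal counts the two agree; with skewed counts they diverge.
equal_counts = (simple_mean(2.0, 4.0), weighted_mean(2.0, 4.0, 5, 5))   # (3.0, 3.0)
skewed_counts = (simple_mean(2.0, 4.0), weighted_mean(2.0, 4.0, 9, 1))  # (3.0, 2.2)
```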
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17649#discussion_r112824647
--- Diff:
sql/core/src/test/resources/sql-tests/results/describe-table-after-alter-table.sql.out
---
@@ -0,0 +1,162 @@
+-- Automatically generated
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17649
@wzhfy Could you check the behavior of Hive?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17649#discussion_r112824524
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -267,8 +271,15 @@ case class AlterTableUnsetPropertiesCommand(
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17708
It sounds like we should not simply merge two Projects, so as to avoid calling
the same UDF multiple times.
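The concern can be illustrated with a plain-Python sketch (not Catalyst itself; `expensive_udf` is a made-up stand-in): collapsing two Projects substitutes the UDF expression into every reference, so the UDF runs once per reference rather than once per row.

```python
# Hypothetical stand-in for a costly UDF; we count its invocations.
calls = {"n": 0}

def expensive_udf(x):
    calls["n"] += 1
    return x * 10

rows = [1, 2, 3]

# Two separate Projects: udf(x) is materialized once per row, then reused.
stage1 = [expensive_udf(r) for r in rows]     # Project: a = udf(x)
stage2 = [(a, a + 1) for a in stage1]         # Project: (a, a + 1)
calls_two_projects = calls["n"]               # 3 (once per row)

calls["n"] = 0
# Collapsed into one Project: the udf expression replaces each reference
# to `a`, so it now runs twice per row.
collapsed = [(expensive_udf(r), expensive_udf(r) + 1) for r in rows]
calls_collapsed = calls["n"]                  # 6 (twice per row)
```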
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17469
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17469
LGTM, thanks for your work on this @map222 & thanks for your work reviewing
this @HyukjinKwon.
Merged to master.
---
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17688
LGTM, thanks @HyukjinKwon for noticing the lack of bool in the Scala code.
---
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/17688#discussion_r112823550
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1238,7 +1238,7 @@ def fillna(self, value, subset=None):
Value to replace null values
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17688#discussion_r112823211
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -1238,7 +1238,7 @@ def fillna(self, value, subset=None):
Value to replace null values
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17731
so essentially it's still evaluating the earlier `get` when the 2nd `get` is
hit from the delay binding (as a way to prevent going into an infinite loop,
really)
what if you have this
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17731
so both `sparkSession` and `sparkRjsc` are valid even after the call to
`get` failed?
---
Github user vijoshi commented on the issue:
https://github.com/apache/spark/pull/17731
"I understand these 2 cases, can you explain how your change connects to
these two?"
Say, I do this:
```
delayedAssign(".sparkRsession", { sparkR.session(..) },
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17469
@holdenk
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17469
LGTM if committers are okay with merging with only some of the documentation
fixed (not all), considering it is his very first contribution.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17731
I understand these 2 cases; can you explain how your change connects to
these two?
If you delay bind to `".sparkRjsc", envir = .sparkREnv`, doesn't it just
work?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17469
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76075/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17469
**[Test build #76075 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76075/testReport)**
for PR 17469 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17469
Merged build finished. Test PASSed.
---
Github user vijoshi commented on the issue:
https://github.com/apache/spark/pull/17731
@felixcheung yes. We need to support these two possibilities:
```
#do not call sparkR.session() - followed by implicit reference to
sparkSession
a <- createDataFrame(iris)
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17731
also, what if a user wants to explicitly create a Spark session with
specific parameters? The delay binding model doesn't seem to support that
properly?
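For readers unfamiliar with R's `delayedAssign`, the mechanism under discussion can be sketched in Python (a loose analogy, not SparkR's implementation): a name is bound to a thunk that is forced on first access and cached afterwards.

```python
# Loose Python analogy of a delayed (promise) binding: the thunk runs on
# the first lookup only, and the result is cached for later lookups.
class DelayedEnv:
    def __init__(self):
        self._thunks = {}
        self._values = {}

    def delayed_assign(self, name, thunk):
        self._thunks[name] = thunk

    def get(self, name):
        if name not in self._values:
            # First access forces the promise exactly once.
            self._values[name] = self._thunks.pop(name)()
        return self._values[name]

log = []
env = DelayedEnv()
# Stand-in for binding ".sparkRsession" to { sparkR.session(..) }.
env.delayed_assign("session", lambda: log.append("init") or "session-obj")

first = env.get("session")    # triggers the thunk
second = env.get("session")   # cached; no second initialization
```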
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822291
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -308,6 +308,21 @@ numCyl <- summarize(groupBy(carsDF, carsDF$cyl), count
= n(carsDF$cyl))
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112821786
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112821792
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822250
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822273
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822277
--- Diff: R/pkg/R/generics.R ---
@@ -631,6 +635,11 @@ setGeneric("sample",
standardGeneric("sample")
})
+#'
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112821835
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112821831
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822261
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822286
--- Diff: R/pkg/vignettes/sparkr-vignettes.Rmd ---
@@ -308,6 +308,21 @@ numCyl <- summarize(groupBy(carsDF, carsDF$cyl), count
= n(carsDF$cyl))
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17728#discussion_r112822246
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3642,3 +3642,58 @@ setMethod("checkpoint",
df <- callJMethod(x@sdf, "checkpoint",
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17730
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17730
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76072/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17730
**[Test build #76072 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76072/testReport)**
for PR 17730 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17688
good catch - instead of duplicating it, perhaps just say `supported data
types` or `supported data types above`
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17469
**[Test build #76075 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76075/testReport)**
for PR 17469 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17469
I don't know why Jenkins doesn't pick up the changes automatically...
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17469
Jenkins, retest this please
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112822059
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -1546,6 +1546,40 @@ test_that("string operators", {
expect_equal(collect(select(df3,
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112821860
--- Diff: R/pkg/NAMESPACE ---
@@ -300,6 +300,7 @@ exportMethods("%in%",
"rank",
"regexp_extract",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112822015
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112822000
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112822029
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17729#discussion_r112822065
--- Diff: R/pkg/R/functions.R ---
@@ -3745,3 +3745,55 @@ setMethod("collect_set",
jc <- callJStatic("org.apache.spark.sql.functions",
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17731
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17731
**[Test build #76074 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76074/testReport)**
for PR 17731 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17731
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76074/
Test PASSed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17467
@brkyvz are you ok with this PR at a high level? If yes, I could help with
review and shepherd this
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17731
this **might** be reasonable, but `sparkR.sparkContext` is only called when
`sparkR.session()` is called, and so I'm not sure I follow how if someone is
doing this in a brand new R session:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17731
**[Test build #76074 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76074/testReport)**
for PR 17731 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/15125#discussion_r112821540
--- Diff: docs/graphx-programming-guide.md ---
@@ -708,9 +708,8 @@ messages remaining.
> messaging function. These constraints allow additional
Github user vijoshi commented on the issue:
https://github.com/apache/spark/pull/17731
@felixcheung
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17731
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17731
**[Test build #76073 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76073/testReport)**
for PR 17731 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17731
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76073/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17731
**[Test build #76073 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76073/testReport)**
for PR 17731 at commit
GitHub user vijoshi opened a pull request:
https://github.com/apache/spark/pull/17731
[SPARK-20440][SparkR] Allow SparkR session and context to have delayed
bindings
## What changes were proposed in this pull request?
Allow SparkR to ignore the "promise already under
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17728
cc @felixcheung
---
Github user zero323 commented on the issue:
https://github.com/apache/spark/pull/17729
cc @felixcheung
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17729
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76071/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17729
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17729
**[Test build #76071 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76071/testReport)**
for PR 17729 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17730
**[Test build #76072 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76072/testReport)**
for PR 17730 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17730
[SPARK-20439] [SQL] Fix Catalog API listTables and getTable when failed to
fetch table metadata
### What changes were proposed in this pull request?
`spark.catalog.listTables` and
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17729
**[Test build #76071 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76071/testReport)**
for PR 17729 at commit
GitHub user zero323 opened a pull request:
https://github.com/apache/spark/pull/17729
[SPARK-20438][R] SparkR wrappers for split and repeat
## What changes were proposed in this pull request?
Add wrappers for `o.a.s.sql.functions`:
- `split` as `split_string`
-
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/17713#discussion_r112819365
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -414,4 +352,269 @@ trait CheckAnalysis extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76069/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15125
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15125
**[Test build #76069 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76069/testReport)**
for PR 15125 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76070/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17728
**[Test build #76070 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76070/testReport)**
for PR 17728 at commit
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112816898
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -295,6 +306,23 @@ class
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112816922
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -295,6 +306,23 @@ class
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112817123
--- Diff: docs/streaming-kinesis-integration.md ---
@@ -216,3 +216,7 @@ de-aggregate records during consumption.
- If no Kinesis checkpoint info exists
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112816822
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -295,6 +306,23 @@ class
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/17467#discussion_r112816810
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -295,6 +306,23 @@ class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17728
**[Test build #76070 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76070/testReport)**
for PR 17728 at commit
Github user budde commented on the issue:
https://github.com/apache/spark/pull/17467
@yssharma Fair enough. I'll try to get your update reviewed later today
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17693
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17693
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76068/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17693
**[Test build #76068 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76068/testReport)**
for PR 17693 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/17712
cc @gatorsmile
This is related to the deterministic thing you want to do?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15125
**[Test build #76069 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76069/testReport)**
for PR 15125 at commit
Github user dding3 commented on the issue:
https://github.com/apache/spark/pull/15125
OK, agreed. If the user didn't set a checkpoint directory while we turn on
checkpointing in Pregel by default, there may be an exception. I will change
the default of spark.graphx.pregel.checkpointInterval to -1
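The proposed default reads as "periodic checkpointing disabled unless the user opts in"; a minimal sketch of that interval semantics (hypothetical helper, not the actual GraphX code):

```python
def should_checkpoint(iteration, interval):
    """Checkpoint every `interval` iterations; a non-positive interval
    (e.g. the proposed default of -1) disables periodic checkpointing."""
    return interval > 0 and iteration % interval == 0

enabled = [i for i in range(1, 10) if should_checkpoint(i, 3)]    # [3, 6, 9]
disabled = [i for i in range(1, 10) if should_checkpoint(i, -1)]  # []
```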
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17693
**[Test build #76068 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76068/testReport)**
for PR 17693 at commit
Github user holdenk commented on the issue:
https://github.com/apache/spark/pull/17688
@vundela L1237
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17693
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17720
Thanks! Merging to 2.1
Could you close it?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76067/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17728
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17728
**[Test build #76067 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76067/testReport)**
for PR 17728 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17719#discussion_r112813484
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -68,6 +68,18 @@ class DataFrameReader private[sql](sparkSession: