Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r146421602
--- Diff: python/pyspark/sql/session.py ---
@@ -510,6 +578,12 @@ def createDataFrame(self, data, schema=None,
samplingRatio=None, verifySchema=Tr
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/12147
@xwu0226 Maybe close this PR if you do not have time to finish it? Thanks!
---
-
To unsubscribe, e-mail:
Github user mccheah commented on a diff in the pull request:
https://github.com/apache/spark/pull/19468#discussion_r146421445
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/scheduler/cluster/k8s/KubernetesClusterSchedulerBackend.scala
---
@@ -0,0
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19557
Is there a reason we can't use the same glm trick for attach? I guess this
was explained above, but I'm wondering if there is a reason base::attach is
not compiled in the same way?
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18544
ping @stanzhai
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18607
Could we please close this PR? Thanks!
---
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/19459#discussion_r146421298
--- Diff: python/pyspark/sql/session.py ---
@@ -414,6 +415,73 @@ def _createFromLocal(self, data, schema):
data = [schema.toInternal(row)
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19169
Session-specific user management is not part of our plan yet, and this
API is not useful without it. Could you please close this PR?
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19169#discussion_r146420815
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/CurrentUser.scala
---
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146420276
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146419941
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user BryanCutler commented on the issue:
https://github.com/apache/spark/pull/18664
I cleaned up some of the timestamp conversion code and added a test for a
`pandas_udf` that returns a `DateType`, which is currently causing an error.
See
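The `DateType` conversion mentioned above ultimately rests on Arrow's date32 encoding, which stores a date as the number of days since the Unix epoch. A minimal stdlib sketch of that encoding, independent of Spark (the function names here are illustrative, not Spark's actual helpers):

```python
from datetime import date, timedelta

EPOCH = date(1970, 1, 1)

def date_to_days(d: date) -> int:
    # Arrow's date32 type stores a date as days since 1970-01-01
    return (d - EPOCH).days

def days_to_date(days: int) -> date:
    # Inverse conversion, used when reading Arrow data back into Python dates
    return EPOCH + timedelta(days=days)

date_to_days(date(2000, 1, 1))  # → 10957
```

A round trip (`days_to_date(date_to_days(d)) == d`) holds for any `date`, which is why the representation is safe to use at a serialization boundary.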
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146419210
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r146419125
--- Diff: python/pyspark/serializers.py ---
@@ -224,7 +225,13 @@ def _create_batch(series):
# If a nullable integer series has been promoted to
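The nullable-integer promotion referenced in this hunk is standard pandas behaviour: an integer column containing nulls is promoted to float64, with NaN marking the missing values, so the values must be cast back to integers when building an Arrow batch. A minimal stdlib sketch of that cast (the helper name is illustrative, not Spark's actual code):

```python
import math

def restore_int_values(values, none_as=None):
    # pandas promotes an int column containing nulls to float64 and
    # uses NaN for missing entries; cast the non-missing floats back to int
    return [none_as if (isinstance(v, float) and math.isnan(v)) else int(v)
            for v in values]

restore_int_values([1.0, float("nan"), 3.0])  # → [1, None, 3]
```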
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146419044
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146418962
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/19552
Thank you for review, @gatorsmile and @budde .
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19529
I found a very simple way to reduce the number of changed lines.
Could you put PlanTest and PlanTestBase in the same file? We can
refactor it later, if necessary. For example, in
Github user JasmineGeorge commented on the issue:
https://github.com/apache/spark/pull/7842
Removed all blank lines except the ones between different groups in the
import statements. Tests have passed. Are we ready to merge?
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146417603
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/19552#discussion_r146416338
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -388,7 +388,7 @@ object SQLConf {
.stringConf
Github user JasmineGeorge commented on a diff in the pull request:
https://github.com/apache/spark/pull/7842#discussion_r146416252
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/pmml/export/PMMLTreeModelUtils.scala
---
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/7842
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/7842
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82995/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/7842
**[Test build #82995 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82995/testReport)**
for PR 7842 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19458#discussion_r146413565
--- Diff:
core/src/main/scala/org/apache/spark/storage/DiskBlockManager.scala ---
@@ -100,7 +100,17 @@ private[spark] class DiskBlockManager(conf:
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/19498
cc @tdas
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19458#discussion_r146412838
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockId.scala ---
@@ -100,6 +100,8 @@ private[spark] case class TestBlockId(id: String)
extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18747
LGTM except for 2 minor comments. Can you benchmark some complex queries
instead of a full scan? I was expecting to see a larger speed-up via the
columnar reader.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r146412033
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,66 @@ import
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r146411821
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WholeStageCodegenExec.scala
---
@@ -490,22 +502,14 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18664
**[Test build #82997 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82997/testReport)**
for PR 18664 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19506
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19506
thanks, merging to master!
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18251
Sorry for the delay, pretty busy these days...
I'll update this PR this week and try to merge it ASAP.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19269
@steveloughran The current API already supports it.
`WriteSupport.createWriter` takes the option parameter, and
`DataSourceV2Writer` can propagate this option down to the data writer, via
Github user MrBago commented on a diff in the pull request:
https://github.com/apache/spark/pull/19527#discussion_r146402800
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/OneHotEncoderEstimator.scala
---
@@ -0,0 +1,464 @@
+/*
+ * Licensed to the Apache
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19562
**[Test build #82996 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82996/testReport)**
for PR 19562 at commit
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/19562
[SPARK-21912][SQL][FOLLOW-UP] ORC/Parquet table should not create invalid
column names
## What changes were proposed in this pull request?
During
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/7842
**[Test build #82995 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82995/testReport)**
for PR 7842 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19519
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82992/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19519
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19519
**[Test build #82992 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82992/testReport)**
for PR 19519 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17100
**[Test build #82994 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82994/testReport)**
for PR 17100 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18747
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82993/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18747
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18747
**[Test build #82993 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82993/testReport)**
for PR 18747 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19561
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82991/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19561
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19561
**[Test build #82991 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82991/testReport)**
for PR 19561 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19552#discussion_r146385329
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -388,7 +388,7 @@ object SQLConf {
.stringConf
Github user sathiyapk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146384601
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user sathiyapk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146377893
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user nkronenfeld commented on the issue:
https://github.com/apache/spark/pull/19529
@gatorsmile the code changes aren't huge: there's almost no new code here,
it's all just moving code from one file to another in order to expose a
SharedSparkSession with no dependence
Github user sathiyapk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146377175
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user sathiyapk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146376886
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114 @@
+/*
+ *
Github user sitalkedia closed the pull request at:
https://github.com/apache/spark/pull/19534
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19538
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82989/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19538
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19538
**[Test build #82989 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82989/consoleFull)**
for PR 19538 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19528
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82990/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19528
**[Test build #82990 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82990/consoleFull)**
for PR 19528 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19528
Merged build finished. Test FAILed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19529
Generally, it makes sense to me. Since the code changes are pretty large
here, it is not very straightforward for us to review. Do you mind if I
take over some of them? Or could you split
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/11205
This PR is pretty old and a lot has changed since, but it looks like this
can be fixed now by just fixing code to look at `stageIdToTaskIndices` instead
of keeping `numRunningTasks` around? (Or
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18747
**[Test build #82993 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82993/testReport)**
for PR 18747 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/19534
Let's fix up Saisai's PR then.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19519
**[Test build #82992 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82992/testReport)**
for PR 19519 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19560#discussion_r146351183
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -187,6 +187,15 @@ object SQLConf {
.booleanConf
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19560#discussion_r146351097
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -187,6 +187,15 @@ object SQLConf {
.booleanConf
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r146347107
--- Diff: python/pyspark/sql/types.py ---
@@ -1619,11 +1619,39 @@ def to_arrow_type(dt):
arrow_type = pa.decimal(dt.precision, dt.scale)
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19556#discussion_r146346452
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -91,6 +91,52 @@ private[spark] object ClosureCleaner extends Logging {
Github user falaki commented on a diff in the pull request:
https://github.com/apache/spark/pull/19551#discussion_r146345486
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -499,6 +499,12 @@ test_that("create DataFrame with different data
types", {
Github user falaki commented on the issue:
https://github.com/apache/spark/pull/19551
LGTM
---
Github user falaki commented on a diff in the pull request:
https://github.com/apache/spark/pull/19551#discussion_r146345075
--- Diff: R/pkg/tests/fulltests/test_sparkSQL.R ---
@@ -499,6 +499,12 @@ test_that("create DataFrame with different data
types", {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19561#discussion_r146342937
--- Diff: core/src/main/scala/org/apache/spark/FutureAction.scala ---
@@ -89,7 +89,11 @@ trait FutureAction[T] extends Future[T] {
*/
override
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19561#discussion_r146342877
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
---
@@ -744,7 +744,7 @@ class StreamingQuerySuite extends
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19561#discussion_r146342815
--- Diff: pom.xml ---
@@ -2692,7 +2692,7 @@
scala-2.12
-2.12.3
+2.12.4
--- End diff --
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/19561#discussion_r146342617
--- Diff: core/src/main/scala/org/apache/spark/FutureAction.scala ---
@@ -113,6 +117,42 @@ trait FutureAction[T] extends Future[T] {
}
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19222
@hvanhovell @tejasapatil would it be possible to review this?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19561
**[Test build #82991 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82991/testReport)**
for PR 19561 at commit
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r146340873
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,70 @@ import
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19559
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82988/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19559
Merged build finished. Test FAILed.
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r146340589
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,70 @@ import
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/18747#discussion_r146340533
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -23,21 +23,70 @@ import
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19559
**[Test build #82988 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82988/testReport)**
for PR 19559 at commit
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/19561
[SPARK-22322][CORE] Update FutureAction for compatibility with Scala 2.12
Future
## What changes were proposed in this pull request?
Scala 2.12's `Future` defines two new methods to
Github user squito commented on the issue:
https://github.com/apache/spark/pull/19194
@dhruve is this still active? Sorry, I was out for a while and am catching up
on everything.
---
Github user susanxhuynh commented on the issue:
https://github.com/apache/spark/pull/19437
@srowen Ping, would you like to help review?
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/19550
LGTM. Thanks @felixcheung
---
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/19519#discussion_r146332249
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkApplication.scala ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19538
**[Test build #82989 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82989/consoleFull)**
for PR 19538 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19528
**[Test build #82990 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/82990/consoleFull)**
for PR 19528 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/19528
Jenkins, retest this please
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/19538
Jenkins, retest this please
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19548
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19548
Thanks! Merged to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/19548
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19556
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/82987/
Test PASSed.
---