Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21895
retest this please
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21927
**[Test build #93880 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93880/testReport)**
for PR 21927 at commit
Github user liyinan926 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21884#discussion_r206986544
--- Diff:
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala
---
@@ -203,4 +212,12
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/21927#discussion_r206985923
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -340,6 +340,22 @@ class DAGScheduler(
}
}
+
Github user skambha commented on a diff in the pull request:
https://github.com/apache/spark/pull/17185#discussion_r206985225
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -120,22 +120,54 @@ abstract class LogicalPlan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r206985104
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FailureSafeParser.scala
---
@@ -56,9 +57,14 @@ class
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21935#discussion_r206985120
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/SchemaConverters.scala
---
@@ -103,31 +108,48 @@ object SchemaConverters {
Github user squito commented on the issue:
https://github.com/apache/spark/pull/21923
> Are there more specific use cases? I always feel it'd be impossible to
design APIs without seeing a couple of different use cases.
With this basic api, you could just do things that tie into
Github user adelbertc commented on a diff in the pull request:
https://github.com/apache/spark/pull/21884#discussion_r206984628
--- Diff:
resource-managers/kubernetes/core/src/test/scala/org/apache/spark/deploy/k8s/features/BasicDriverFeatureStepSuite.scala
---
@@ -203,4 +212,12
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21935#discussion_r206983995
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/SchemaConverters.scala
---
@@ -42,7 +43,11 @@ object SchemaConverters {
case
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21895
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93881/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21895
Merged build finished. Test FAILed.
---
Github user MaxGekk commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r206983433
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FailureSafeParser.scala
---
@@ -56,9 +57,14 @@ class FailureSafeParser[IN](
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21943
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21943
**[Test build #93893 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93893/testReport)**
for PR 21943 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21895
**[Test build #93881 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93881/testReport)**
for PR 21895 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21947
**[Test build #93892 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93892/testReport)**
for PR 21947 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21935#discussion_r206983421
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroSerializer.scala ---
@@ -93,7 +94,11 @@ class AvroSerializer(rootCatalystType:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21943
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21947
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21947
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21935#discussion_r206983054
--- Diff:
external/avro/src/main/scala/org/apache/spark/sql/avro/AvroDeserializer.scala
---
@@ -86,8 +87,14 @@ class AvroDeserializer(rootAvroType:
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/21943
@jiangxb1987 Could you add a test to the new method?
---
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21885
Chatted with @squito about this. From what I understood from that
discussion, ExternalShuffleService shouldn't be controlled by configurations
passed into a Spark application, as it is its own
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/21943#discussion_r206982253
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -252,6 +252,22 @@ private[spark] class TaskSchedulerImpl(
}
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206982199
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2217,6 +2218,100 @@ class Analyzer(
}
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/21947
[MINOR][DOCS] Add note about Spark network security
## What changes were proposed in this pull request?
In response to a recent question, this reiterates that network access to a
Spark
Github user holdensmagicalunicorn commented on the issue:
https://github.com/apache/spark/pull/21947
@srowen, thanks! I am a bot who has found some folks who might be able to
help with the review: @pwendell, @vanzin and @mcavdar
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21946
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21946
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21946
**[Test build #93891 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93891/testReport)**
for PR 21946 at commit
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/21946
Yeah, I'm fine with this, then.
---
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206979097
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/sources/v2/WriteSupport.java ---
@@ -38,15 +38,16 @@
* If this method fails (by throwing an
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206978690
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/sources/v2/WriteSupport.java ---
@@ -38,15 +38,16 @@
* If this method fails (by throwing an
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206978506
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -352,6 +351,36 @@ case class Join(
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21946
> ReadSupport and ReadSupportWithSchema -> BatchReadSupportProvider
DataSourceReader -> ReadSupport
Yea, this is what I'm doing in my local branch for the redesign. I'll push
it soon
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206978261
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2217,6 +2218,100 @@ class Analyzer(
}
}
Github user skambha commented on a diff in the pull request:
https://github.com/apache/spark/pull/17185#discussion_r206978220
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -654,16 +654,19 @@ class SessionCatalog(
*
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21946
@rdblue The plan is that I will have a big PR implementing the redesign.
However, if something makes sense even without the redesign, it should go in
a separate PR. I think merging
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206977289
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2217,6 +2218,100 @@ class Analyzer(
}
}
Github user rdblue commented on a diff in the pull request:
https://github.com/apache/spark/pull/21305#discussion_r206976856
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2217,6 +2218,100 @@ class Analyzer(
}
}
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/21909#discussion_r206976717
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FailureSafeParser.scala
---
@@ -56,9 +57,14 @@ class FailureSafeParser[IN](
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21946
> a ReadSupportProvider will supply a create method (or anonymousTable) to
return a Table that implements ReadSupport...
I'd prefer the current proposal in
Github user skambha commented on a diff in the pull request:
https://github.com/apache/spark/pull/17185#discussion_r206975896
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/unresolved.scala
---
@@ -262,17 +262,47 @@ abstract class Star extends
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/21946
@cloud-fan, from your comment around the same time as mine, it sounds like
the confusion may just be in how you're updating the current API to the
proposed one. Can you post a migration plan? It
Github user skambha commented on the issue:
https://github.com/apache/spark/pull/17185
@gatorsmile , @cloud-fan, just a quick comment, I have been working on
this and will respond soon.
---
Github user NiharS commented on the issue:
https://github.com/apache/spark/pull/21885
Thanks for the review and feedback! I made the changes, except for moving
the if clause to the same line as "yarn"; unfortunately, that makes the line
104 characters long.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21898
**[Test build #93890 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93890/testReport)**
for PR 21898 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21915
**[Test build #93889 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93889/testReport)**
for PR 21915 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21898
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21898
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21915
Merged build finished. Test PASSed.
---
Github user jiangxb1987 commented on a diff in the pull request:
https://github.com/apache/spark/pull/21898#discussion_r206972303
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskScheduler.scala ---
@@ -61,6 +61,9 @@ private[spark] trait TaskScheduler {
*/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21915
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/21946
Isn't this unnecessary after the API redesign?
For the redesign, the `DataSourceV2` or a `ReadSupportProvider` will supply
a `create` method (or `anonymousTable`) to return a `Table` that
Github user jiangxb1987 commented on the issue:
https://github.com/apache/spark/pull/21915
retest this please
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21946
In the new proposal, we just rename `ReadSupport` to
`BatchReadSupportProvider`, so this change is kind of part of the big proposal.
---
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/21669#discussion_r206970881
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/KubernetesClientApplication.scala
---
@@ -107,7 +109,14 @@
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21946
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21946
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206969720
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -366,14 +423,26 @@ private[spark] object ClosureCleaner extends Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206969409
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -218,118 +261,132 @@ private[spark] object ClosureCleaner extends
Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206969057
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -218,118 +261,132 @@ private[spark] object ClosureCleaner extends
Logging {
Github user jose-torres commented on the issue:
https://github.com/apache/spark/pull/21946
Wouldn't the redo of the API that we're discussing obsolete this?
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206968792
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -218,118 +261,132 @@ private[spark] object ClosureCleaner extends
Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206969293
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -218,118 +261,132 @@ private[spark] object ClosureCleaner extends
Logging {
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/21921
@cloud-fan, @gatorsmile, I'm fine with that if it's documented somewhere. I
wasn't aware of that convention and no one brought it up the last time I
pointed out commits without a committer +1.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206968600
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -159,6 +160,43 @@ private[spark] object ClosureCleaner extends Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206968962
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -218,118 +261,132 @@ private[spark] object ClosureCleaner extends
Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206968430
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -159,6 +160,43 @@ private[spark] object ClosureCleaner extends Logging {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/21930#discussion_r206968703
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -159,6 +160,43 @@ private[spark] object ClosureCleaner extends Logging {
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21946
cc @rxin @rdblue @jose-torres
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21946
**[Test build #93888 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93888/testReport)**
for PR 21946 at commit
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/21941#discussion_r206969310
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1451,6 +1451,15 @@ object SQLConf {
.intConf
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/21946
[SPARK-24990][SQL] merge ReadSupport and ReadSupportWithSchema
## What changes were proposed in this pull request?
Regarding user-specified schema, data sources may have 3 different
Github user holdensmagicalunicorn commented on the issue:
https://github.com/apache/spark/pull/21946
@cloud-fan, thanks! I am a bot who has found some folks who might be able
to help with the review: @gatorsmile, @zsxwing and @tdas
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21921
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21921
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21921
**[Test build #93887 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93887/testReport)**
for PR 21921 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/21921
retest this please
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/21921
@cloud-fan To be safe, let us get one more LGTM from another committer.
---
Github user maryannxue commented on the issue:
https://github.com/apache/spark/pull/21699
> Actually I am mostly worried about the pivotColumn. Specifying multiple
columns via a struct is not intuitive, I believe.
It depends on whether we'd like to add extra interfaces for multiple
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21930
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21930
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/93879/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21930
**[Test build #93879 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93879/testReport)**
for PR 21930 at commit
Github user mgaido91 commented on the issue:
https://github.com/apache/spark/pull/21910
@cloud-fan sure, will do (anyway the cherry-pick to 2.2 was clean for me)
---
Github user sujith71955 commented on the issue:
https://github.com/apache/spark/pull/20611
@gatorsmile Yes, there is a change in behavior. As I mentioned above in the
description, we will now be able to support wildcards even at the folder level
for local file systems. Previous
Github user ferdonline commented on the issue:
https://github.com/apache/spark/pull/21087
It would be great if an admin could review. If there is anything to
improve, please let me know. It is very simple though.
---
Github user sujith71955 commented on a diff in the pull request:
https://github.com/apache/spark/pull/20611#discussion_r206961528
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -303,94 +303,44 @@ case class LoadDataCommand(
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/21910
@mgaido91 Do you mind opening a PR for 2.2? I think this fixes a serious bug
which is very hard to detect. Maybe that's the reason no one reported it for
such a long time.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21883
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21883
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21884
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/21884
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21884
Kubernetes integration test status success
URL:
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/1565/
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21883
**[Test build #93886 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/93886/testReport)**
for PR 21883 at commit
Github user gengliangwang commented on the issue:
https://github.com/apache/spark/pull/21883
retest this please.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/21884
Kubernetes integration test starting
URL:
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/1565/
---
Github user kupferk commented on a diff in the pull request:
https://github.com/apache/spark/pull/21722#discussion_r206950519
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/types/MetadataSuite.scala ---
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software
Github user mccheah commented on the issue:
https://github.com/apache/spark/pull/21884
ok to test
---