Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/13481
Merged to master/2.0. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user clockfly commented on the issue:
https://github.com/apache/spark/pull/13605
How about SparkSession.range? It returns a Dataset[Long], but the encoder's
schema (value: Long) does not match the logical plan's schema (id: Long).
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13605
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60308/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13605
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13605
**[Test build #60308 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60308/consoleFull)**
for PR 13605 at commit
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/13603
Did you consider instead doing this when a task fails (on line 761 in
TaskSetManager)? Instead of just checking if the number of failures is greater
than maxTaskFailures, you could add a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/6848
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60306/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/6848
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/6848
**[Test build #60306 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60306/consoleFull)**
for PR 6848 at commit
Github user nezihyigitbasi commented on a diff in the pull request:
https://github.com/apache/spark/pull/13527#discussion_r66675499
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -444,6 +444,7 @@ object SparkSubmit {
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/13603
@squito if it's not too painful, would you mind moving the visibility stuff
to a separate PR? (I suspect that PR can be merged almost immediately!).
---
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/13603
@rxin no, this is an old (and undocumented) feature that was added a while
ago as a band-aid until we did the more complete solution (which @squito is
planning to do soon, but not targeted
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13605
@liancheng yes, but it is always used as a `DataFrame`, with untyped
operations called on it.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13601
---
Github user amitsela commented on the issue:
https://github.com/apache/spark/pull/13424
Done.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13601
Merging in master/2.0. I'm not sure whether any of these cases are actually
perf sensitive though. If they are, they probably shouldn't be using Seq anyway.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13147
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13147
Thanks, merged to master.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13591
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13591
Let me just merge this to save some time debating. Merging in master/2.0.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/13563
> I'd more readily believe that a process uses UTF-8 regardless of the
platform encoding.
This is about stdout and stderr. I think most processes won't hard code
UTF-8 for them. Otherwise,
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66673292
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR(
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13424
LGTM, can you update the description (it still says WIP).
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13600
Any reason you are not using the jdbc data source in Spark SQL? It seems
like it'd only make sense for jdbc to use the jdbc data source since it is
structured.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13597
---
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66672823
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR(
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13603
q: was blacklist merged in 2.0?
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/13597
Thanks, merging to master and 2.0.
---
Github user kayousterhout closed the pull request at:
https://github.com/apache/spark/pull/13580
---
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/13580
Merged into master and 2.0
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66671272
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR(
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13486
Thank you, @marmbrus !
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13601
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60301/
Test PASSed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13508
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13601
Merged build finished. Test PASSed.
---
Github user NarineK commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r66670797
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR(
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13486
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13601
**[Test build #60301 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60301/consoleFull)**
for PR 13601 at commit
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13486
Merging to master and 2.0
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/13508
Thanks @wangmiao1981 - LGTM. Merging this to master and branch-2.0
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/12836
Thanks @liancheng for clarification and @NarineK for implementing the
override. I just had one minor comment.
@sun-rui Can you take one final look? Since we still have not cut RC1, we
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/12836#discussion_r9908
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/object.scala
---
@@ -286,6 +290,9 @@ case class FlatMapGroupsInR(
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13597
Seems fine to me.
---
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/13543#discussion_r9128
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/MasterArguments.scala ---
@@ -20,18 +20,24 @@ package org.apache.spark.deploy.master
import
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/13563
Yes, that was the original argument for not hard-coding UTF-8 here. I'd more
readily believe that a process uses UTF-8 regardless of the platform encoding.
But making it configurable and defaulting
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13589
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60302/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13589
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13589
**[Test build #60302 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60302/consoleFull)**
for PR 13589 at commit
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13566#discussion_r6584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -226,4 +226,11 @@ abstract class Catalog {
*/
def
Github user sameeragarwal commented on a diff in the pull request:
https://github.com/apache/spark/pull/13566#discussion_r6348
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/CacheManager.scala ---
@@ -157,4 +161,49 @@ private[sql] class CacheManager extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13592
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60309/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13592
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13592
**[Test build #60309 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60309/consoleFull)**
for PR 13592 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13603
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13603
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60300/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13603
**[Test build #60300 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60300/consoleFull)**
for PR 13603 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/12313
@rdblue Thank you for updating the patch. I was out of town late last week
and was busy with Spark Summit early this week. Sorry for my late reply. Having
name-based resolution is very useful! Since
Github user tejasapatil commented on the issue:
https://github.com/apache/spark/pull/13563
@rxin @zsxwing Thanks for confirming. I will work on the change. Just to
clarify, I will use the configuration value for interactions with all the
streams (stderr, stdout and stdin).
---
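The encoding discussion in the comments above can be sketched outside Spark. Below is a minimal Scala sketch, not Spark's actual implementation: the helper name and shape are hypothetical. It decodes a child process's stdout with an explicitly configured charset instead of hard-coding UTF-8 or silently relying on the platform default.

```scala
import java.io.{BufferedReader, InputStreamReader}
import java.nio.charset.Charset

// Hypothetical helper: decode a command's stdout with an explicitly
// configured charset, rather than hard-coding UTF-8 or relying on the
// platform default encoding.
def readProcessOutput(command: Seq[String], charset: Charset): List[String] = {
  val process = new ProcessBuilder(command: _*).start()
  val reader = new BufferedReader(
    new InputStreamReader(process.getInputStream, charset))
  try {
    // Read until EOF; toList forces the read before the reader is closed.
    Iterator.continually(reader.readLine()).takeWhile(_ != null).toList
  } finally {
    reader.close()
    process.waitFor()
  }
}

// In practice the charset would come from a configuration value,
// per the suggestion in the thread above.
val lines = readProcessOutput(Seq("echo", "hello"), Charset.forName("UTF-8"))
println(lines.mkString("\n"))
```

The same reader setup would apply to stderr (`process.getErrorStream`), and a matching `OutputStreamWriter` to stdin.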
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60310 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60310/consoleFull)**
for PR 13604 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r3884
--- Diff: docs/sql-programming-guide.md ---
@@ -12,130 +12,121 @@ title: Spark SQL and DataFrames
Spark SQL is a Spark module for structured data
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13605
Do we use `SQLContext.range` in any test/example code?
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13605
LGTM
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13604
To summarize:
People coming from RDD land would assume `read.text` returns
Dataset[String], and that is easier for them to program with.
That said, there are also arguments
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13563
Got it - yea it'd make sense to make this configurable.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13592
**[Test build #60309 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60309/consoleFull)**
for PR 13592 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13592#discussion_r3638
--- Diff: docs/sql-programming-guide.md ---
@@ -184,20 +175,20 @@ showDF(df)
-## DataFrame Operations
+## Untyped Dataset
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13604
@cloud-fan change the title/description to have only text?
---
Github user marmbrus commented on the issue:
https://github.com/apache/spark/pull/13604
I'm not sure I agree with all of the reasoning here. Here are my thoughts:
- `SQLContext` should probably not break any APIs (it's only there for
compatibility anyway).
- In
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/13563
> Are there cases when we need non UTF-8 encoding?
It depends on the encoding of the external commands. Some commands may just
use the default system encoding. If we hard code UTF-8, we
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13155
**[Test build #3074 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3074/consoleFull)**
for PR 13155 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13605
cc @rxin @liancheng @yhuai
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/13591
The effect for source and readability is that of a cast; yes, it is not a
call to asInstanceOf. I do not feel strongly, though, that this would have been
the better way to write this code.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13605
**[Test build #60308 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60308/consoleFull)**
for PR 13605 at commit
GitHub user cloud-fan opened a pull request:
https://github.com/apache/spark/pull/13605
[SPARK-15856][SQL] Revert API breaking changes made in SQLContext.range
## What changes were proposed in this pull request?
It's easy for users to call `range(...).as[Long]` to get typed
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13591
@srowen it wasn't really casting here. It's just an explicit type
definition.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60307/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60307 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60307/consoleFull)**
for PR 13604 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13481
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13563
Are there cases when we need non UTF-8 encoding?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60305/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60305 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60305/consoleFull)**
for PR 13604 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60307 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60307/consoleFull)**
for PR 13604 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13604
Can you split the text change from the range change, i.e. into 2 PRs?
I think @marmbrus doesn't agree with the text change, so we should debate
more.
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/13595
Yes. we are still reviewing the final API, so hold it for a bit. Thanks for
noticing the inconsistencies.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/13595
@lw-lin thanks for the PR. However, we have not yet finalized the terms...
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60304 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60304/consoleFull)**
for PR 13604 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60304/
Test FAILed.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13496
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/6848
**[Test build #60306 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60306/consoleFull)**
for PR 6848 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60305 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60305/consoleFull)**
for PR 13604 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13604
I'd prefer changing `SparkSession.range`'s return type to `Dataset[Row]` for
better consistency; also, there's an annoying nullability issue regarding
`Dataset[java.lang.Long]` (boxed types are
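The boxed-type nullability concern mentioned above can be shown in plain Scala, with no Spark APIs involved; a minimal sketch:

```scala
// java.lang.Long is a boxed reference type, so null is a legal value:
val boxed: java.lang.Long = null

// Scala's primitive Long cannot represent null. Returning a
// java.lang.Long where a Long is expected triggers implicit unboxing
// (Predef.Long2long), which throws NullPointerException on null:
def unbox(l: java.lang.Long): Long = l

val ok = unbox(42L) // boxing and unboxing of a real value is fine
println(s"boxed = $boxed, ok = $ok")
// unbox(boxed) would throw a NullPointerException at runtime
```

This is why an API surfacing `Dataset[java.lang.Long]` has to think about what a null element means, while `Dataset[Row]` keeps nullability inside the Row.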
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13604
LGTM - I guess the main question now is whether we want to change
SparkSession.range's return type.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13594
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13604
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60303/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60303 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60303/consoleFull)**
for PR 13604 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13604
**[Test build #60304 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60304/consoleFull)**
for PR 13604 at commit