Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/13989
What do you mean by both positive and negative cases?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69075298
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
@@ -166,8 +166,8 @@ private[sql] class SessionState(sparkSession:
Github user kayousterhout commented on the issue:
https://github.com/apache/spark/pull/13603
LGTM!
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13971#discussion_r69075261
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/GeneratorExpressionSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ *
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69075247
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -85,5 +85,10 @@ case class LogicalRelation(
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69075198
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/13972
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074558
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13972
@yinxusen Do you have time to consolidate example files for
`mllib-data-types.md`?
---
Github user mengxr commented on the issue:
https://github.com/apache/spark/pull/13972
LGTM2. Merged into master and branch-2.0. Thanks!
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074411
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -85,5 +85,10 @@ case class LogicalRelation(
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074328
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/13971#discussion_r69074335
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/GeneratorExpressionSuite.scala
---
@@ -0,0 +1,71 @@
+/*
+ *
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074265
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13989
Test cases are not enough to cover the metadata refreshing. The current
metadata cache is only used for data source tables. We still could convert Hive
tables to data source tables. For example,
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074253
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2307,6 +2307,19 @@ class Dataset[T] private[sql](
def distinct():
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074131
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -85,5 +85,10 @@ case class LogicalRelation(
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69074039
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69073906
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SessionState.scala ---
@@ -166,8 +166,8 @@ private[sql] class SessionState(sparkSession:
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69073454
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -85,5 +85,10 @@ case class LogicalRelation(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69073383
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -265,6 +265,11 @@ abstract class LogicalPlan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69073191
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -139,18 +139,6 @@ private[hive] class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69072136
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2307,6 +2307,19 @@ class Dataset[T] private[sql](
def distinct():
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13988
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13988
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61523/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13988
**[Test build #61523 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61523/consoleFull)**
for PR 13988 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69071788
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/LogicalRelation.scala
---
@@ -85,5 +85,10 @@ case class LogicalRelation(
Github user ScrapCodes commented on the issue:
https://github.com/apache/spark/pull/13978
Looks good !
---
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69071622
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2307,6 +2307,19 @@ class Dataset[T] private[sql](
def distinct():
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13969
**[Test build #3152 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3152/consoleFull)**
for PR 13969 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13966
**[Test build #3153 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3153/consoleFull)**
for PR 13966 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13989#discussion_r69071525
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2307,6 +2307,19 @@ class Dataset[T] private[sql](
def distinct():
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13966#discussion_r69070865
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -162,6 +163,46 @@ case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13987
**[Test build #61528 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61528/consoleFull)**
for PR 13987 at commit
Github user petermaxlee commented on a diff in the pull request:
https://github.com/apache/spark/pull/13966#discussion_r69070679
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/stringExpressions.scala
---
@@ -162,6 +163,46 @@ case class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13989
Before, I tried to merge `invalidateTable` and `refreshTable`. @yhuai left
the following comment:
https://github.com/apache/spark/pull/13156#discussion_r63729506
I think maybe we
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13982
cc @JoshRosen and @ericl
---
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/13767
cc: @srowen
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13990
**[Test build #61525 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61525/consoleFull)**
for PR 13990 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13987
**[Test build #61526 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61526/consoleFull)**
for PR 13987 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13926
**[Test build #61527 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61527/consoleFull)**
for PR 13926 at commit
GitHub user techaddict opened a pull request:
https://github.com/apache/spark/pull/13990
[SPARK-16287][SQL][WIP] Implement str_to_map SQL function
## What changes were proposed in this pull request?
This PR adds `str_to_map` SQL function in order to remove Hive fallback.
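[Editor's note] For readers unfamiliar with the function being added here, the sketch below illustrates the semantics `str_to_map` is generally expected to have, mirroring Hive's version: split the input on a pair delimiter, then split each pair on a key/value delimiter, with `,` and `:` as the usual defaults. This is a plain-Python illustration only, not the PR's actual Scala implementation, and the exact null-handling in the merged function may differ.

```python
def str_to_map(text, pair_delim=",", kv_delim=":"):
    """Sketch of str_to_map semantics: 'a:1,b:2' -> {'a': '1', 'b': '2'}."""
    result = {}
    for pair in text.split(pair_delim):
        # partition splits on the first occurrence of the key/value delimiter
        key, _, value = pair.partition(kv_delim)
        # a pair with no delimiter maps its key to a missing (null-like) value
        result[key] = value if kv_delim in pair else None
    return result
```

Once merged, the SQL-level call would look like `SELECT str_to_map('a:1,b:2')`.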
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13926
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13989
**[Test build #61524 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61524/consoleFull)**
for PR 13989 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13926
ping @hvanhovell Could you please take a look at this again? : )
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13886
Could you please review this PR again? @cloud-fan Thanks!
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/13989
cc @cloud-fan / @liancheng
---
Github user petermaxlee commented on the issue:
https://github.com/apache/spark/pull/13989
cc @rxin
---
GitHub user petermaxlee opened a pull request:
https://github.com/apache/spark/pull/13989
[SPARK-16311][SQL] Improve metadata refresh
## What changes were proposed in this pull request?
This patch implements the 3 things specified in SPARK-16311:
(1) Append a message to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13979
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61520/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13979
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13979
**[Test build #61520 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61520/consoleFull)**
for PR 13979 at commit
Github user clockfly commented on a diff in the pull request:
https://github.com/apache/spark/pull/13987#discussion_r69067474
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -58,10 +56,16 @@ class ListingFileCatalog(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13987
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61521/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13987
**[Test build #61521 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61521/consoleFull)**
for PR 13987 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13987
Merged build finished. Test FAILed.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13906
@cloud-fan Yea, that's a good point.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13988
I still need to correct some nits and check the consistency with JSON data
source but I opened this just to check if it breaks anything. I will submit
some more commits soon.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13988
**[Test build #61523 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61523/consoleFull)**
for PR 13988 at commit
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13988
[WIP][SPARK-16101][SQL] Refactoring CSV data source to be consistent with
JSON data source
## What changes were proposed in this pull request?
This PR refactors CSV data source to be
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61517/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13829
**[Test build #61517 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61517/consoleFull)**
for PR 13829 at commit
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/13906
@liancheng , I think we still need to keep some simple rules for unary
node, which also helps the binary cases, as the empty relation is propagated up.
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r69065541
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlan.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r69065425
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlan.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13906
My feeling is that, this optimization rule is mostly useful for binary plan
nodes like inner join and intersection, where we can avoid scanning output of
the non-empty side.
On the other
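[Editor's note] The point liancheng and cloud-fan are discussing (collapsing plans that contain a known-empty relation) can be sketched in a few lines. This toy model is illustrative only and is not Spark's actual `CollapseEmptyPlan` rule: it just shows why the rule pays off most for binary nodes such as inner join, where an empty side lets the optimizer skip scanning the non-empty side entirely.

```python
class Relation:
    """A toy plan node: just a name and a known-empty flag."""
    def __init__(self, name, empty=False):
        self.name, self.empty = name, empty

def collapse_inner_join(left, right):
    # Inner join with a known-empty side produces no rows, so replace the
    # whole join with an empty relation; this result then propagates upward
    # through the plan, collapsing parent nodes in turn.
    if left.empty or right.empty:
        return Relation("empty", empty=True)
    return Relation(f"join({left.name},{right.name})")
```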
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61515/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13829
**[Test build #61515 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61515/consoleFull)**
for PR 13829 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r69065025
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlan.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r69064885
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlan.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13978
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61522/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13978
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13978
**[Test build #61522 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61522/consoleFull)**
for PR 13978 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11863
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61513/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/11863
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11863
**[Test build #61513 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61513/consoleFull)**
for PR 11863 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/13906#discussion_r69064054
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/CollapseEmptyPlan.scala
---
@@ -0,0 +1,49 @@
+/*
+ * Licensed to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13829
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61514/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13829
**[Test build #61514 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61514/consoleFull)**
for PR 13829 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13978
**[Test build #61522 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61522/consoleFull)**
for PR 13978 at commit
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13987
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13987
**[Test build #61521 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61521/consoleFull)**
for PR 13987 at commit
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/13987
[SPARK-16313][SQL] Spark should not silently drop exceptions in file listing
## What changes were proposed in this pull request?
Spark silently drops exceptions during file listing. This is a very
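[Editor's note] The general pattern the PR title describes (surfacing file-listing failures instead of swallowing them) looks roughly like the sketch below. The function name and error message here are illustrative, not taken from Spark's `ListingFileCatalog` code.

```python
import os

def list_leaf_files(paths):
    """List files under each path, re-raising listing failures with context
    instead of silently dropping them."""
    files = []
    for p in paths:
        try:
            files.extend(os.path.join(p, f) for f in sorted(os.listdir(p)))
        except OSError as e:
            # Propagate the failure rather than skipping the path quietly.
            raise RuntimeError(f"Failed to list {p}") from e
    return files
```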
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13972
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61519/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13972
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13972
**[Test build #61519 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61519/consoleFull)**
for PR 13972 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13979
**[Test build #61520 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61520/consoleFull)**
for PR 13979 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12384
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61518/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/12384
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12384
**[Test build #61518 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61518/consoleFull)**
for PR 12384 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13941
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13941
**[Test build #61516 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/61516/consoleFull)**
for PR 13941 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13941
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/61516/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/11863
**[Test build #3150 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/3150/consoleFull)**
for PR 11863 at commit
Github user yinxusen commented on the issue:
https://github.com/apache/spark/pull/13972
@mengxr With this PR merged, I think we can also fix the [SPARK-13015
(mllib-data-types.md )](https://issues.apache.org/jira/browse/SPARK-13015) with
a consolidated example file.
---
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/13972
@yinxusen Thanks!
---
Github user yinxusen commented on the issue:
https://github.com/apache/spark/pull/13972
LGTM
---