Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395565
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -142,8 +149,12 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395798
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -222,8 +265,8 @@ case class AlterViewAsCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395752
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -393,21 +459,25 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395517
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/GlobalTempViewManager.scala
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395993
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -277,7 +275,7 @@ class CatalogImpl(sparkSession: SparkSession
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -47,6 +50,8 @@ object SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395605
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -329,33 +343,77 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395754
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -393,21 +459,25 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80396066
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/GlobalTempViewSuite.scala
---
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395510
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/GlobalTempViewManager.scala
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395895
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -197,6 +201,45 @@ case class CreateViewCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80393533
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -459,7 +459,8 @@ class Analyzer(
case u
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15225#discussion_r80340526
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnVector.java
---
@@ -285,19 +285,19 @@ public void reserve(int
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15225#discussion_r80337683
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/vectorized/ColumnVector.java
---
@@ -285,19 +285,19 @@ public void reserve(int
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15220
LGTM. Just left one comment to make the wording a little clearer.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15220#discussion_r80331963
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -57,6 +57,12 @@ private[spark] class LiveListenerBus(val sparkContext
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15222
lgtm
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15222#discussion_r80328167
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -32,18 +33,24 @@ import org.apache.spark.util.Utils
* has started
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15222#discussion_r80321666
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -32,18 +33,24 @@ import org.apache.spark.util.Utils
* has started
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15220#discussion_r80319523
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -123,6 +129,23 @@ private[spark] class LiveListenerBus(val sparkContext
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15220#discussion_r80319977
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/LiveListenerBus.scala ---
@@ -123,6 +129,23 @@ private[spark] class LiveListenerBus(val sparkContext
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15189
Cool. Thanks! I may have time today or tomorrow. I will try to take a look
at it during the weekend.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
Can you try `CREATE TABLE tmp_default(id INT) as select ` and see if
the table will be converted to parquet format?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15070
Just a note. It is also in branch 2.0.
mentioned at
https://issues.apache.org/jira/browse/SPARK-17549?focusedCommentId=15505060&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15505060
Author: Yin Huai <yh...@databricks.com>
Closes #15157 from yhuai/revert-SPARK-17549.
(cherry picked from commit 9ac68dbc5720026ea92acc61d295c
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15157
@vanzin I am merging this PR to master and branch 2.0
Repository: spark
Updated Branches:
refs/heads/branch-2.0 643f161d5 -> e76f4f47f
[SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable
## What changes were proposed in this pull request?
Hive confs in hive-site.xml will be loaded in `hadoopConf`, so we should use
`hadoopConf`
Repository: spark
Updated Branches:
refs/heads/master d5ec5dbb0 -> eb004c662
[SPARK-17051][SQL] we should use hadoopConf in InsertIntoHiveTable
## What changes were proposed in this pull request?
Hive confs in hive-site.xml will be loaded in `hadoopConf`, so we should use
`hadoopConf` in
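The rationale in the commit message above — settings from hive-site.xml get merged into `hadoopConf`, so reading from a freshly built configuration would silently miss user overrides — can be illustrated with a small sketch. This is not Spark's actual `InsertIntoHiveTable` code: the confs are modeled as plain Scala `Map`s and the helper name is invented; only `hive.exec.stagingdir` is a real Hive property.

```scala
// Illustrative sketch only: configurations modeled as immutable Maps rather
// than Hadoop's Configuration class. Entries from hive-site.xml are merged
// into hadoopConf, so reads against hadoopConf see the user's overrides,
// while a conf rebuilt from defaults does not.
val defaults = Map("hive.exec.stagingdir" -> ".hive-staging")
val hiveSiteXml = Map("hive.exec.stagingdir" -> "/tmp/custom-staging")

// Conceptually what the session does: load hive-site.xml on top of defaults.
val hadoopConf = defaults ++ hiveSiteXml

def stagingDir(conf: Map[String, String]): String =
  conf.getOrElse("hive.exec.stagingdir", ".hive-staging")

println(stagingDir(hadoopConf)) // picks up the hive-site.xml override
println(stagingDir(defaults))   // a fresh conf would lose it
```

The point of the fix is simply to thread the already-merged `hadoopConf` through instead of constructing a new configuration at the point of use.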
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14634
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15157
I will merge this PR to master and branch 2.0 once it passes jenkins.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15157
Done.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15157
Oh, right. Will do that.
On Tue, Sep 20, 2016 at 8:57 AM -0700, "Marcelo Vanzin"
<notificati...@gith
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15157
cc @vanzin
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/15157
Revert "[SPARK-17549][SQL] Only collect table size stat in driver for
cached relation."
This reverts commit 39e2bad6a866d27c3ca594d15e574a1da3ee84cc because of the
problem mentioned
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14634
This change looks good. Let's add a regression test.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15145
Merged.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 ac060397c -> c4660d607
[SPARK-17589][TEST][2.0] Fix test case `create external table` in
MetastoreDataSourcesSuite
### What changes were proposed in this pull request?
This PR is to fix a test failure on the branch 2.0 builds:
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15145
Thanks @gatorsmile. I am merging this fix to branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15122
ok, got it. Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15145
LGTM. Pending jenkins.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15122
@petermaxlee I believe you will get a runtime exception saying that the
file does not exist.
Also, regarding your option 2, are you suggesting that users of structured
streaming use
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15122
also cc @cloud-fan
Repository: spark
Updated Branches:
refs/heads/master b9323fc93 -> 39e2bad6a
[SPARK-17549][SQL] Only collect table size stat in driver for cached relation.
The existing code caches all stats for all columns for each partition
in the driver; for a large relation, this causes extreme memory
Repository: spark
Updated Branches:
refs/heads/branch-2.0 5ad4395e1 -> 3fce1255a
[SPARK-17549][SQL] Only collect table size stat in driver for cached relation.
The existing code caches all stats for all columns for each partition
in the driver; for a large relation, this causes extreme memory
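The change described above replaces per-column, per-partition stats on the driver with a single accumulated size. A minimal model of the idea — the case class and field names below are invented for this sketch, not the real fields in `InMemoryRelation`:

```scala
// Illustrative model of the SPARK-17549 idea. Before the change, the driver
// accumulated every partition's per-column stats, growing with
// partitions * columns; after it, only each partition's size in bytes is
// reported and summed, so driver-side memory for stats stays constant.
case class PartitionStats(sizeInBytes: Long, perColumnStats: Map[String, (Long, Long)])

def driverSideTotal(partitions: Seq[PartitionStats]): Long =
  partitions.iterator.map(_.sizeInBytes).sum // only the size crosses to the driver

val parts = Seq(
  PartitionStats(1024L, Map("id" -> (0L, 99L))),
  PartitionStats(2048L, Map("id" -> (100L, 199L))))
println(driverSideTotal(parts)) // 3072
```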
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15112
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79221081
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -910,14 +910,19 @@ object CodeGenerator
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79208645
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -910,14 +910,19 @@ object CodeGenerator
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79062145
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -232,4 +232,18 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79061923
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -910,14 +910,19 @@ object CodeGenerator
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79020377
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -910,14 +910,19 @@ object CodeGenerator
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79019181
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryRelation.scala
---
@@ -74,21 +71,12 @@ case class InMemoryRelation
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15112#discussion_r79018651
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -910,14 +910,19 @@ object CodeGenerator
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15112
Thanks! I will take a look.
btw, if you have comparison related to memory footprint before and after
the change, it will be good to add that in the description.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15112
I haven't touched this part for a long time. I think we also use min/max to
evaluate predicates. Can you double-check? Also, what stats do we collect right
now?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78687128
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -457,6 +457,20 @@ class DataFrameReaderWriterSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78687123
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/internal/CatalogSuite.scala ---
@@ -322,6 +325,14 @@ class CatalogSuite
assert(e2.message
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78687075
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2661,4 +2661,15 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78686868
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -457,6 +457,20 @@ class DataFrameReaderWriterSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78686835
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/internal/CatalogSuite.scala ---
@@ -322,6 +325,14 @@ class CatalogSuite
assert(e2.message
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78686776
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2661,4 +2661,15 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14962
Is it possible to first have a PR to fix the bugs?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14962#discussion_r78683471
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -439,7 +439,7 @@ class Analyzer(
object
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15087
I have merged this.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 047bc3f13 -> bf3f6d2f1
[SPARK-17531][BACKPORT] Don't initialize Hive Listeners for the Execution Client
## What changes were proposed in this pull request?
If a user provides listeners inside the Hive Conf, the configuration for these
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15087
Thanks! Merging to branch 1.6.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 b17f10ced -> c1426452b
[SPARK-17531] Don't initialize Hive Listeners for the Execution Client
## What changes were proposed in this pull request?
If a user provides listeners inside the Hive Conf, the configuration for these
Repository: spark
Updated Branches:
refs/heads/master 4ba63b193 -> 72edc7e95
[SPARK-17531] Don't initialize Hive Listeners for the Execution Client
## What changes were proposed in this pull request?
If a user provides listeners inside the Hive Conf, the configuration for these
listeners
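The fix described in these commit messages amounts to blanking out user-supplied listener and hook settings when building the Hive client Spark uses purely for internal execution, so those classes are not instantiated a second time. A hedged sketch — the real change lives in Spark's Hive client setup, and the exact set of keys cleared by the patch may differ from the list below:

```scala
// Sketch only: the key list is illustrative, not necessarily what the patch
// clears. The execution client gets a conf with listener entries emptied out,
// so user-provided listener classes are only loaded by the metastore client.
val listenerKeys = Seq(
  "hive.metastore.event.listeners",
  "hive.metastore.pre.event.listeners",
  "hive.exec.post.hooks")

def execClientConf(userConf: Map[String, String]): Map[String, String] =
  listenerKeys.foldLeft(userConf)((conf, key) => conf.updated(key, ""))

val user = Map(
  "hive.metastore.event.listeners" -> "com.example.MyListener", // hypothetical class
  "hive.metastore.warehouse.dir" -> "/warehouse")
val exec = execClientConf(user)
println(exec("hive.metastore.event.listeners")) // blanked out
println(exec("hive.metastore.warehouse.dir"))   // untouched
```

All other settings pass through unchanged; only the listener/hook entries are suppressed for the execution-only client.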
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15086
Thanks. Merging to master and branch 2.0.
https://github.com/apache/spark/pull/15087 is the backport for branch 1.6.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15087
LGTM pending jenkins.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15086
Yea. That will be great!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15086
LGTM pending jenkins. Once jenkins passes, I will merge this to master,
branch 2.0, and branch 1.6.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15086#discussion_r78638868
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveUtilsSuite.scala ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15061
LGTM
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15056#discussion_r78426904
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -663,31 +663,43 @@ private[spark] class MemoryStore(
private
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15056#discussion_r78424222
--- Diff:
core/src/main/scala/org/apache/spark/storage/memory/MemoryStore.scala ---
@@ -663,31 +663,43 @@ private[spark] class MemoryStore(
private
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14992
Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/10655
test this please
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14816
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14958
```
Using `mvn` from path:
/home/jenkins/workspace/spark-branch-1.6-lint/build/apache-maven-3.3.9/bin/mvn
Spark's published dependencies DO NOT MATCH the manifest file
(dev/spark-deps
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14816
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14973
Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14968
@clockfly Can you close this? It has been merged to branch 2.0 (btw, PRs
targeting branches other than master will not be auto-closed).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14915
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14797
Thanks! Is there a jira?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14797
@gatorsmile do you want to put the regression tests here? Or do you already
have a PR?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14915
LGTM. Pending jenkins.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14712
I have created https://issues.apache.org/jira/browse/SPARK-17408. @wzhfy
Can you take a look?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14712
Can you take a look at the test at
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64956/testReport/junit/org.apache.spark.sql.hive/StatisticsSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14915
test this please
Repository: spark
Updated Branches:
refs/heads/branch-2.0 dd27530c7 -> f56b70fec
Revert "[SPARK-17369][SQL] MetastoreRelation toJSON throws AssertException due
to missing otherCopyArgs"
This reverts commit 7b1aa2153bc6c8b753dba0710fe7b5d031158a34.
Project:
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14928
I will revert it from branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14928
Seems this breaks 2.0 build.
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14964
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14964
test this please
GitHub user yhuai reopened a pull request:
https://github.com/apache/spark/pull/14964
[DO NOT MERGE] Test DefinedByConstructorParams
## What changes were proposed in this pull request?
I am testing DefinedByConstructorParams with branch 1.6. Do not merge it.
You can merge
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14964
test this please
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14964
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14964
[DO NOT MERGE] Test DefinedByConstructorParams
## What changes were proposed in this pull request?
I am testing DefinedByConstructorParams with branch 1.6. Do not merge it.
You can merge
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14915#discussion_r77441612
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -604,6 +604,7 @@ abstract class TreeNode[BaseType <: TreeN