Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15575#discussion_r84410677
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -96,13 +95,15 @@ trait BaseLimitExec extends UnaryExecNode
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15575#discussion_r84410707
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -103,6 +103,8 @@ case class WindowExec(
override
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15575#discussion_r84410586
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -96,13 +95,15 @@ trait BaseLimitExec extends UnaryExecNode
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15575#discussion_r84410162
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SortExec.scala ---
@@ -45,6 +45,8 @@ case class SortExec(
override def
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15577
lgtm
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15433
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15433
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15517
cc @mengxr and @jkbradley
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15434
Merged to master.
Repository: spark
Updated Branches:
refs/heads/master 2629cd746 -> 4329c5cea
[SPARK-17873][SQL] ALTER TABLE RENAME TO should allow users to specify the database
in the destination table name (but it has to be the same as the source table's database)
## What changes were proposed in this pull request?
Unlike Hive, in
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15476
lgtm!
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15476#discussion_r83992147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/SourceStatus.scala ---
@@ -47,8 +53,22 @@ class SourceStatus private(
val
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15434
LGTM. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15274
@DanielMe oh, I see. `get_json_object` will not parse a JSON array. You need
a UDF to do that in Spark 1.6.
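The comment above says `get_json_object` cannot handle a top-level JSON array in Spark 1.6, so a UDF has to decode the string and index into the array itself. Below is a plain-Python sketch of such a helper; the name and signature are illustrative, not Spark API, and the Spark registration step is omitted.

```python
import json

def extract_from_json_array(json_str, index, field):
    """Return `field` of the `index`-th element of a JSON array string.

    Hypothetical UDF body: decodes the whole string, then indexes into
    the array, returning None for non-arrays or out-of-range indexes.
    """
    data = json.loads(json_str)
    if not isinstance(data, list) or index >= len(data):
        return None
    element = data[index]
    return element.get(field) if isinstance(element, dict) else None

print(extract_from_json_array('[{"a": 1}, {"a": 2}]', 1, "a"))  # 2
print(extract_from_json_array('{"a": 3}', 0, "a"))              # None
```

In Spark 1.6 a function like this would then be registered as a UDF before use; the exact wiring depends on the API (Scala vs. Python) and is not shown here.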
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15518
Seems this fails the scala style check.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15495
@gatorsmile If this pr fixes the problem related to the build, I am fine to
merge it.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15471
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15515
Also cc @ericl
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15274
@DanielMe The best options for 1.6 are `get_json_object` and `json_tuple`
(their docs can be found at
https://spark.apache.org/docs/1.6.0/api/scala/index.html#org.apache.spark.sql.functions
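For readers unfamiliar with those functions, here is a rough stdlib illustration of what `get_json_object` does with a simple dotted `$.a.b` path. The real Hive/Spark path syntax also supports array subscripts and wildcards, which this toy version omits; the function name here is made up.

```python
import json

def get_json_object_like(json_str, path):
    """Toy version of get_json_object for dotted paths like '$.a.b'."""
    if not path.startswith("$."):
        return None
    obj = json.loads(json_str)
    # Walk the object one key at a time, bailing out with None on a miss.
    for key in path[2:].split("."):
        if not isinstance(obj, dict) or key not in obj:
            return None
        obj = obj[key]
    return obj

print(get_json_object_like('{"a": {"b": 42}}', "$.a.b"))  # 42
print(get_json_object_like('{"a": {"b": 42}}', "$.a.c"))  # None
```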
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d7fa3e324 -> c53b83749
[SPARK-17863][SQL] should not add column into Distinct
## What changes were proposed in this pull request?
We are trying to resolve the attribute in sort by pulling up some columns
from the grandchild into the child, but
Repository: spark
Updated Branches:
refs/heads/master 522dd0d0e -> da9aeb0fd
[SPARK-17863][SQL] should not add column into Distinct
## What changes were proposed in this pull request?
We are trying to resolve the attribute in sort by pulling up some columns
from the grandchild into the child, but
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15489
Thanks! Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15421
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
Yea. Looks like so. No worries. Let's get it tested again.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
Reverted
Repository: spark
Updated Branches:
refs/heads/master 7ab86244e -> 522dd0d0e
Revert "[SPARK-17620][SQL] Determine Serde by hive.default.fileformat when
Creating Hive Serde Tables"
This reverts commit 7ab86244e30ca81eb4fa40ea77b4c2b8881cbab2.
Project:
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
I am reverting this patch. Sorry.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
Seems this breaks the build.
https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-sbt-scala-2.10/2843/console
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15489
LGTM.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15489#discussion_r83500257
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -1106,6 +1106,30 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15489#discussion_r83486321
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -1106,6 +1106,30 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15459#discussion_r83476207
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreRelationSuite.scala
---
@@ -36,4 +38,16 @@ class MetastoreRelationSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15475
Jenkins test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15475
Jenkins test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15475
Jenkins test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
@dilipbiswal That makes sense. Thank you for testing that. I do not have
any other questions.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15458#discussion_r83467245
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -915,4 +915,9 @@ object StaticSQLConf {
.internal
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15460
`Caused by: sbt.ForkMain$ForkError:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException: Table or view
'default' not found in database 'srcpart';` Seems we swapped the db and table
name
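The error message quoted above reads as if the database and table names were passed to a lookup helper in the wrong order: `default` is the database and `srcpart` the table, yet the message reports the opposite. A minimal sketch of that kind of bug, with a hypothetical catalog and helper (not the actual Spark code):

```python
# Hypothetical in-memory catalog: database name -> set of table names.
catalog = {"default": {"srcpart"}}

def require_table(db, table):
    """Raise if `table` does not exist in database `db`."""
    if table not in catalog.get(db, set()):
        raise LookupError(
            f"Table or view '{table}' not found in database '{db}';")

require_table("default", "srcpart")      # correct argument order: passes
try:
    require_table("srcpart", "default")  # swapped order, as in the report
except LookupError as err:
    print(err)  # Table or view 'default' not found in database 'srcpart';
```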
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15048
Also, can we add a test for hive tables?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15048
Also, another good test for this is
```
val df = sql("select 0 as id")
df.registerTempTable("foo")
val df2 = sql("""select * from foo group by i
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15048
Thanks! btw, does this patch cover hive tables?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15048
@gatorsmile Also, does it affect `CTAS` for creating a hive serde table?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15048
@gatorsmile We should also backport this to branch 2.0, right?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15190
If we have `spark.sql.hive.convertCTAS=true` and
`hive.default.fileformat=orc`, what format will we use when we create a table
through a CTAS statement?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15398
I think we should consider other databases behavior and see if our behavior
makes sense.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15398
@jodersky Thank you for the patch! How about we add a summary related to
the behavior of escape in pr description and jira?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15434
Can you update the description to explain if `ALTER TABLE db1.tbl RENAME TO
db2.tbl2` is allowed (I guess it is not allowed)?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14365
@cloud-fan actually, this conversion was disabled because of this bug.
btw, the pr that @cloud-fan mentioned is
https://github.com/apache/spark/pull/14690.
I think it is better to hold
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14897
LGTM. Let's make a small change according to
https://github.com/apache/spark/pull/14897#discussion_r82536096 and we can
merge this pr. Thanks!
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82538273
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/GlobalTempViewSuite.scala
---
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82536096
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -94,6 +69,47 @@ private[sql] class SharedState(val sparkContext
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82536038
--- Diff: python/pyspark/sql/catalog.py ---
@@ -181,6 +181,22 @@ def dropTempView(self, viewName):
"""
self._jcatal
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15329
Yea. That's a good point. If we do not allow non-nullable fields, we should
also let users easily convert a field's nullability. Let me also check with
@marmbrus.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15329
Actually, when will a user want to specify non-nullable for any json field?
I am not sure if we are actually addressing the right problem. I am wondering
if we should just not allow non-nullable
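The two comments above debate forcing JSON schema fields to be nullable; if that happens, users would want an easy way to relax nullability on an existing schema. A hypothetical sketch over a simple dict-shaped schema (not Spark's `StructType` API):

```python
# Hypothetical schema shape: {field_name: {"type": ..., "nullable": ...}}.
def as_nullable(schema):
    """Return a copy of the schema with every field marked nullable.

    The input schema is left untouched; each field dict is shallow-copied
    with its "nullable" flag overridden.
    """
    return {name: {**field, "nullable": True}
            for name, field in schema.items()}

schema = {"id": {"type": "long", "nullable": False},
          "name": {"type": "string", "nullable": True}}
print(as_nullable(schema)["id"]["nullable"])  # True
```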
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82492217
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -453,7 +534,11 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -37,39 +37,14 @@ import org.apache.spark.util.{MutableURLClassLoader
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493266
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2433,31 +2433,65 @@ class Dataset[T] private[sql
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493366
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -183,17 +183,19 @@ case class DropTableCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82279784
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/GlobalTempViewManager.scala
---
@@ -0,0 +1,121 @@
+/*
+ * Licensed
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82299965
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -188,6 +196,11 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493491
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -37,39 +37,14 @@ import org.apache.spark.util.{MutableURLClassLoader
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493318
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -380,6 +380,7 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493591
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -94,6 +69,47 @@ private[sql] class SharedState(val sparkContext
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493274
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -125,6 +124,9 @@ class QueryExecution(val sparkSession
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493549
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/GlobalTempViewSuite.scala
---
@@ -0,0 +1,168 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82272178
--- Diff: python/pyspark/sql/catalog.py ---
@@ -181,6 +181,22 @@ def dropTempView(self, viewName):
"""
self._jcatal
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493562
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/GlobalTempViewSuite.scala
---
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r82493357
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -183,17 +183,19 @@ case class DropTableCommand
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15263
Seems it breaks scala 2.10 compilation. Can you take a look? Thanks!
```
[error]
/home/jenkins/workspace/spark-master-compile-sbt-scala-2.10/sql/core/src/main/scala/org/apache/spark/sql
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15329#discussion_r82425780
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -114,6 +120,18 @@ class JacksonParser
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15329#discussion_r82425563
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonParser.scala
---
@@ -34,6 +34,8 @@ import org.apache.spark.util.Utils
Repository: spark
Updated Branches:
refs/heads/master 221b418b1 -> 5fd54b994
[SPARK-17758][SQL] Last returns wrong result in case of empty partition
## What changes were proposed in this pull request?
The result of the `Last` function can be wrong when the last partition
processed is empty.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 b8df2e53c -> 3b6463a79
[SPARK-17758][SQL] Last returns wrong result in case of empty partition
## What changes were proposed in this pull request?
The result of the `Last` function can be wrong when the last partition
processed is
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15348
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15348
Thank you for fixing this! It is great to have unit tests to test
individual aggregate functions. We can start to add more tests for other
functions.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15304
Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15304
Changes look good. How about we change the title back to `[SPARK-17549]
[SQL] Only collect table size stat in driver for cached relation`? Thanks!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13930
oh, how did Hive's analyzer get involved here? I am thinking that when we
create the hive function's expression, we will know the expected input type of
the function. Then, Spark's analyzer will add
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r81268420
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -163,6 +164,19 @@ private[sql] class HiveSessionCatalog
Repository: spark
Updated Branches:
refs/heads/master 027dea8f2 -> fe33121a5
[SPARK-17699] Support for parsing JSON string columns
Spark SQL has great support for reading text files that contain JSON data.
However, in many cases the JSON data is just one column amongst others. This
is
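The commit above (SPARK-17699) targets the case where the JSON text is just one column among others in a row. Here is a stdlib sketch of the transformation it enables, with made-up row data; Spark's actual mechanism is the `from_json` expression, which is not shown here.

```python
import json

# Rows where the second column holds a JSON payload alongside other data.
rows = [(1, '{"device": "a", "temp": 20.5}'),
        (2, '{"device": "b", "temp": 21.0}')]
fields = ["device", "temp"]  # the schema we want to project out

parsed = []
for row_id, payload in rows:
    obj = json.loads(payload)
    # Flatten the selected JSON fields into ordinary row columns.
    parsed.append((row_id,) + tuple(obj.get(f) for f in fields))

print(parsed)  # [(1, 'a', 20.5), (2, 'b', 21.0)]
```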
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15274
LGTM. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15285
@tdas will also take a look
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81200688
--- Diff: core/src/test/scala/org/apache/spark/util/FileAppenderSuite.scala
---
@@ -292,8 +332,20 @@ class FileAppenderSuite extends SparkFunSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81200650
--- Diff: core/src/test/scala/org/apache/spark/util/FileAppenderSuite.scala
---
@@ -292,8 +332,20 @@ class FileAppenderSuite extends SparkFunSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81200584
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1455,7 +1456,11 @@ private[spark] object Utils extends Logging {
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81199615
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
@@ -97,11 +125,11 @@ private[spark] class RollingFileAppender
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81199036
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
@@ -76,15 +79,40 @@ private[spark] class RollingFileAppender
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15285#discussion_r81198417
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
@@ -45,6 +47,7 @@ private[spark] class RollingFileAppender
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15189#discussion_r80834862
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -232,4 +232,29 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15189#discussion_r80833202
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -232,4 +232,29 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15189#discussion_r80833138
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -232,4 +232,29 @@ class
Repository: spark
Updated Branches:
refs/heads/branch-2.0 cf5324127 -> 8a58f2e8e
[SPARK-17652] Fix confusing exception message while reserving capacity
## What changes were proposed in this pull request?
This minor patch fixes a confusing exception message while reserving additional
Repository: spark
Updated Branches:
refs/heads/master 8135e0e5e -> 7c7586aef
[SPARK-17652] Fix confusing exception message while reserving capacity
## What changes were proposed in this pull request?
This minor patch fixes a confusing exception message while reserving additional
capacity in
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15225
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14897
test this please
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15189#discussion_r80406877
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/columnar/InMemoryColumnarQuerySuite.scala
---
@@ -232,4 +232,29 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15189#discussion_r80406261
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryRelation.scala
---
@@ -44,6 +44,70 @@ object InMemoryRelation
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80396050
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -37,6 +37,20 @@ import org.apache.spark.util.{MutableURLClassLoader
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80395764
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -453,7 +532,11 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r80396017
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -277,7 +275,7 @@ class CatalogImpl(sparkSession: SparkSession