Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14506
Oh, these checks are used to make sure that users do not mess up Spark SQL's internal settings. Let's have a discussion about these checks first.
---
If your project is set up for it, you can reply
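The guard being discussed can be illustrated with a minimal, hypothetical sketch (not Spark's actual code): attempts to set a protected internal key fail fast. The key name and the plain mutable map below are invented for illustration only.

```scala
// Hypothetical sketch of a guard against overriding internal settings.
// "spark.sql.internal.exampleKey" is an invented key, and a plain mutable
// Map stands in for the real conf object.
object ConfGuard {
  private val protectedKeys = Set("spark.sql.internal.exampleKey") // assumed

  def set(conf: scala.collection.mutable.Map[String, String],
          key: String, value: String): Unit = {
    require(!protectedKeys.contains(key),
      s"Cannot modify Spark SQL internal setting: $key")
    conf(key) = value
  }
}

val conf = scala.collection.mutable.Map.empty[String, String]
ConfGuard.set(conf, "spark.sql.shuffle.partitions", "10") // allowed
// ConfGuard.set(conf, "spark.sql.internal.exampleKey", "x") // would throw
```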
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14497#discussion_r73723916
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSparkSubmitSuite.scala ---
@@ -253,6 +253,47 @@ class HiveSparkSubmitSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73633166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -275,238 +269,21 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623682
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -301,9 +298,6 @@ case class AlterTableSerDePropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -229,10 +230,8 @@ case class AlterTableSetPropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73623449
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -229,10 +230,8 @@ case class AlterTableSetPropertiesCommand
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r73622711
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/HiveSerDe.scala ---
@@ -42,8 +41,7 @@ object HiveSerDe {
HiveSerDe
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14500
We do not generate golden files anymore. Let's port those tests. Thanks.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14497
[SPARK-16901] Hive settings in hive-site.xml may be overridden by Hive's
default values
## What changes were proposed in this pull request?
When we create the HiveConf for metastore client, we
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14492
Sure. This change is for putting Spark jars in a different dir than the
default dir in `spark/assembly` or `spark/jars`. So, in this case, the main
class is not in `SPARK_JARS_DIR`.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14476#discussion_r73458295
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalog.scala
---
@@ -82,7 +82,7 @@ abstract class ExternalCatalog
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14492
[SPARK-16887] Add SPARK_DIST_CLASSPATH to LAUNCH_CLASSPATH
## What changes were proposed in this pull request?
To deploy Spark, it can be pretty convenient to put all jars (spark jars,
hadoop
Repository: spark
Updated Branches:
refs/heads/branch-2.0 969313bb2 -> 2daab33c4
[SPARK-16714][SPARK-16735][SPARK-16646] array, map, greatest, least's type
coercion should handle decimal type
## What changes were proposed in this pull request?
Here is a table about the behaviours of
Repository: spark
Updated Branches:
refs/heads/master 639df046a -> b55f34370
[SPARK-16714][SPARK-16735][SPARK-16646] array, map, greatest, least's type
coercion should handle decimal type
## What changes were proposed in this pull request?
Here is a table about the behaviours of
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
OK. I am merging this PR to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14482
SaveMode is a public API. We cannot move it to catalyst.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
@cloud-fan Thanks for the fix. The new logic looks good. I will merge it
once jenkins pass.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73365299
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73363487
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73361360
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -344,6 +384,15 @@ class TypeCoercionSuite extends
Repository: spark
Updated Branches:
refs/heads/master 03d46aafe -> 2eedc00b0
[SPARK-16828][SQL] remove MaxOf and MinOf
## What changes were proposed in this pull request?
These 2 expressions are not needed anymore after we have `Greatest` and
`Least`. This PR removes them and related tests.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14434
Thanks. Merging to master.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14439#discussion_r73064350
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -157,6 +145,26 @@ object TypeCoercion
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
It would be good to summarize the behaviors of other systems in the description. Let's also explain the behavioral change of this PR in the description, so others can understand its implications.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14439
Let's be careful here. I am not sure we can just use
`DecimalPrecision.widerDecimalType`, which produces `Decimal(38, 38)` when we
have one decimal with the type of `Decimal(38, 0)` and another
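The concern above can be shown with a self-contained sketch (not Spark's actual implementation) of the widening rule: keep the larger scale and the larger number of integral digits, then cap precision at 38. Capping while keeping the scale sacrifices integral digits.

```scala
// Simplified decimal-widening rule, for illustration only.
val MaxPrecision = 38

case class DecimalType(precision: Int, scale: Int)

def widerDecimalType(d1: DecimalType, d2: DecimalType): DecimalType = {
  val scale = math.max(d1.scale, d2.scale)
  val intDigits = math.max(d1.precision - d1.scale, d2.precision - d2.scale)
  // Capping precision while keeping the scale drops integral digits.
  DecimalType(math.min(intDigits + scale, MaxPrecision), scale)
}

// Widening Decimal(38, 0) with Decimal(38, 38) yields Decimal(38, 38):
// a type with no room for any integral digit.
println(widerDecimalType(DecimalType(38, 0), DecimalType(38, 38)))
```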
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14401
Seems it is mainly removing the field of `warehousePath` from
`TestHiveSessionState` and `TestHiveSharedState`. Probably it will help us
remove `TestHiveSessionState` and `TestHiveSharedState
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14434#discussion_r73023513
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -662,10 +662,6 @@ object NullPropagation extends Rule
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14368
@liancheng on second thought, I think it makes sense to also merge it to
branch 2.0 to avoid potential conflicts on doc fixes.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
Thanks. But what specific cases are not supported? If there is any such
case, we should make a change to support it, right?
Repository: spark
Updated Branches:
refs/heads/branch-2.0 d357ca302 -> c651ff53a
[SPARK-16805][SQL] Log timezone when query result does not match
## What changes were proposed in this pull request?
It is useful to log the timezone when query result does not match, especially
on build
Repository: spark
Updated Branches:
refs/heads/master 301fb0d72 -> 579fbcf3b
[SPARK-16805][SQL] Log timezone when query result does not match
## What changes were proposed in this pull request?
It is useful to log the timezone when query result does not match, especially
on build machines
Repository: spark
Updated Branches:
refs/heads/master 064d91ff7 -> 301fb0d72
[SPARK-16731][SQL] use StructType in CatalogTable and remove CatalogColumn
## What changes were proposed in this pull request?
`StructField` has very similar semantic with `CatalogColumn`, except that
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14413
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
LGTM. Thanks. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14363
Do we know which hive type strings cannot be parsed by spark?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14363#discussion_r72910844
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -78,28 +78,6 @@ object CatalogStorageFormat
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a32531a72 -> 7d87fc964
[SPARK-16748][SQL] SparkExceptions during planning should not be wrapped in
TreeNodeException
## What changes were proposed in this pull request?
We do not want SparkExceptions from job failures in the planning
Repository: spark
Updated Branches:
refs/heads/master 2182e4322 -> bbc247548
[SPARK-16748][SQL] SparkExceptions during planning should not be wrapped in
TreeNodeException
## What changes were proposed in this pull request?
We do not want SparkExceptions from job failures in the planning phase
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
seems jenkins is down?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14395
test this please
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72834549
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72834461
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72825428
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72823630
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12645#discussion_r72822407
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -83,27 +83,4 @@ private[hive] trait HiveStrategies {
Nil
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14395#discussion_r72820228
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -20,7 +20,7 @@ package org.apache.spark.sql
import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13830#discussion_r72515446
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -73,21 +73,67 @@ class ListingFileCatalog
Yin Huai <yh...@databricks.com>
Closes #14284 from yhuai/lead-lag.
(cherry picked from commit 815f3eece5f095919a329af8cbd762b9ed71c7a8)
Signed-off-by: Yin Huai <yh...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/
Yin Huai <yh...@databricks.com>
Closes #14284 from yhuai/lead-lag.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/815f3eec
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/815f3eec
Diff: http://git-wip-us.apache.org/
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
Thanks for review. I am merging this to master and branch 2.0.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14353#discussion_r72182390
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -33,13 +33,24 @@ case class CreateArray
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14353#discussion_r72182316
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/complexTypeCreator.scala
---
@@ -33,13 +33,24 @@ case class CreateArray
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r72181762
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1774,6 +1775,49 @@ class Analyzer
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
yea. that's a good point.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14350
LGTM
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13585
@chenghao-intel Will you have time to update this PR?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14297#discussion_r72104591
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/views.scala ---
@@ -44,7 +50,11 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r72083334
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -625,10 +643,12 @@ private[execution] final class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r72083182
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
In my test, the column does not exist.
On Sun, Jul 24, 2016 at 6:41 PM -0700, "Tao Lin" <notificati...@githu
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14204#discussion_r71997820
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/ExecutorData.scala ---
@@ -34,5 +34,6 @@ private[cluster] class ExecutorData
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14204
@nblintao I tried `./bin/spark-shell --master=local-cluster[2,1,1024]`.
Seems those worker links do not show up? Maybe something has changed and the
links no longer appear?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997586
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -369,3 +375,246 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13620
@nblintao Can you comment on your PR to explain which parts are new code
and which parts are based on existing code?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997489
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -210,64 +214,69 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71997400
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -210,64 +214,69 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14331#discussion_r71996213
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -365,9 +365,6 @@ private[hive] class HiveClientImpl
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14302#discussion_r71995943
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -520,7 +522,7 @@ case class DescribeTableCommand(table
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14283
LGTM pending Jenkins (I triggered the tests again in case some changes merged
in the past 4 days cause issues with this one).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14283
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14295
@liancheng Can you also change `First`? I think that one is also broken
for this case.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14318
Let's create a jira :)
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14036
Having a query just to test this expression is good.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71782889
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71781548
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
more stable.
Author: Yin Huai <yh...@databricks.com>
Closes #14289 from yhuai/SPARK-16656.
(cherry picked from commit 9abd99b3c318d0ec8b91124d40f3ab9e9d835dcf)
Signed-off-by: Yin Huai <yh...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wi
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
I am merging this PR to master and branch 2.0
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
Hopefully this can make the test more stable by using different temp dirs
for different tests.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 81004f13f -> a804c9260
[SPARK-16644][SQL] Aggregate should not propagate constraints containing
aggregate expressions
aggregate expressions can only be executed inside `Aggregate`, if we propagate
it up with constraints, the parent
Repository: spark
Updated Branches:
refs/heads/master 75a06aa25 -> cfa5ae84e
[SPARK-16644][SQL] Aggregate should not propagate constraints containing
aggregate expressions
## What changes were proposed in this pull request?
aggregate expressions can only be executed inside `Aggregate`, if
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14281
Thanks. I am merging this to master and branch 2.0.
Repository: spark
Updated Branches:
refs/heads/master e3cd5b305 -> e651900bd
[SPARK-16344][SQL] Decoding Parquet array of struct with a single field named
"element"
## What changes were proposed in this pull request?
Due to backward-compatibility reasons, the following Parquet schema is
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14014
Thank you! I am going to merge this to master.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14278#discussion_r71626483
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -136,7 +137,9 @@ public void
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14289
test this please
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14289
[SPARK-16656] [SQL] Try to make CreateTableAsSelectSuite more stable
## What changes were proposed in this pull request?
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62593
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14281#discussion_r71614119
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/plans/ConstraintPropagationSuite.scala
---
@@ -79,13 +79,15 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598723
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71598678
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala
---
@@ -357,14 +356,59 @@ class SQLWindowFunctionSuite extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71588935
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLWindowFunctionSuite.scala
---
@@ -367,4 +367,50 @@ class SQLWindowFunctionSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14284
Without a good reason and a way to make lead and lag respect
nulls, we should not change the behavior.
On Wed, Jul 20, 2016 at 2:04 AM -0700, "A
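The lead/lag semantics being defended can be sketched on a plain in-memory sequence (this is an illustration, not Spark's `WindowExec` code): the offset is purely positional, so a null (`None`) at the offset row is returned as-is rather than skipped.

```scala
// Simplified lag() over a sequence of Option values; None models SQL NULL.
def lag[T](rows: Seq[Option[T]], offset: Int,
           default: Option[T] = None): Seq[Option[T]] =
  rows.indices.map(i => if (i - offset >= 0) rows(i - offset) else default)

val values = Seq(Some(1), None, Some(3))
// Row 1 sees the null from row 0's successor position as-is:
println(lag(values, 1))  // Vector(None, Some(1), None)
```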
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71489063
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/WindowExec.scala ---
@@ -582,25 +582,43 @@ private[execution] final class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14284#discussion_r71488537
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/windowExpressions.scala
---
@@ -382,7 +382,7 @@ abstract class
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14284
[SPARK-16633] [SPARK-16642] Fixes three issues related to window functions
## What changes were proposed in this pull request?
This PR contains three changes.
First, this PR changes
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14272
yea. I think the fix is pretty safe. After discussion with @liancheng,
seems the more general fix is just to use the requested Catalyst schema to
initialize the vectorized reader.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14267
[SPARK-15705] [SQL] Change the default value of
spark.sql.hive.convertMetastoreOrc to false.
## What changes were proposed in this pull request?
In 2.0, we add a new logic to convert