Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/10689#issuecomment-170422086
@hvanhovell @rxin Could you take a look? Thank you!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/10678#issuecomment-170421900
@davies Could you check whether the fix covers the analysis resolution issue in TPC-DS? Thank you!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10646#discussion_r49290212
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/trees/TreeNode.scala
---
@@ -398,6 +395,7 @@ abstract class TreeNode[BaseType
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10689#discussion_r49292180
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/CatalystQlSuite.scala
---
@@ -49,4 +50,16 @@ class CatalystQlSuite extends PlanTest
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/10678#issuecomment-170451240
Let me do more investigation tomorrow. @davies : )
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/10577#discussion_r49629751
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -123,6 +115,39 @@ case class Except(left
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
Sure, will do it. Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65618427
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -880,6 +880,23 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13070
CC @clockfly @cloud-fan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13415#discussion_r65618453
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -880,6 +880,23 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
@cloud-fan Yeah, agreed. I knew you would say that. : )
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13447
I think your concern is valid. I will add an `assert` in `InsertIntoTable`.
So far, dynamic partitioning is used by the `insertInto` API. However,
there is no way to specify `IF NOT EXISTS
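The `assert` under discussion might look like the following; this is a minimal sketch, and the field names are assumptions modeled loosely on the Spark 2.0-era `InsertIntoTable` node rather than the actual definition:
```scala
// Sketch only: a simplified stand-in for Catalyst's InsertIntoTable node.
// The guard encodes the invariant from the comment above: `IF NOT EXISTS`
// is only meaningful together with INSERT OVERWRITE, so fail fast otherwise.
case class InsertIntoTableSketch(
    tableName: String,
    partition: Map[String, Option[String]],
    overwrite: Boolean,
    ifNotExists: Boolean) {
  assert(overwrite || !ifNotExists,
    "IF NOT EXISTS is only supported with INSERT OVERWRITE")
}
```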
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13483
@viirya Based on my understanding, we want to avoid having duplicate output
columns generated by our APIs. If users want to explicitly have two duplicate
output columns, they can specify them
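As a hedged illustration of what "specify them explicitly" could look like (the DataFrame and column names here are hypothetical):
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().master("local").appName("dup-cols").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("a", 2)).toDF("key", "value")
// The API attaches the grouping column once by default ...
val grouped = df.groupBy($"key").agg(sum($"value"))
// ... and a user who really wants a duplicate can project it explicitly.
val duplicated = grouped.select($"key", $"key")
```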
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65647111
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -224,6 +224,21 @@ class DatasetAggregatorSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13483
Please update the PR description and post the analyzed plan. I also added
another example to show that the root node is `Window`. Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65647177
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/RelationalGroupedDataset.scala ---
@@ -46,7 +46,18 @@ class RelationalGroupedDataset protected[sql
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13070#discussion_r65654836
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -48,7 +49,26 @@ case class CatalogStorageFormat
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13070#discussion_r65654827
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -65,8 +85,18 @@ case class CatalogColumn
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
@cloud-fan @andrewor14 In this scenario, we do not have case
sensitivity issues. All the catalog columns are converted to lower case by
https://github.com/apache/spark/blob
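A minimal sketch of the normalization being described, assuming only that column names are lower-cased on their way into the catalog:
```scala
// Because every name is normalized before it is stored, later catalog
// lookups do not need any case-sensitivity handling.
val userSpecified = Seq("Col1", "COL2", "col3")
val storedInCatalog = userSpecified.map(_.toLowerCase)  // col1, col2, col3
```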
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13070
@cloud-fan Thank you for your review! Just quickly added the space.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13400
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13447
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13483
Thank you @marmbrus @dilipbiswal and @viirya !
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13546
[SPARK-15808] [SQL] File Format Checking When Appending Data
What changes were proposed in this pull request?
**Issue:** Got wrong results or strange errors when appending data to a table
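A hedged reconstruction of the failure mode (the table name and formats are hypothetical): appending in a file format different from the one the table was created with should fail fast, rather than produce a table whose files cannot be read back consistently.
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local").appName("append-check").getOrCreate()
import spark.implicits._

// Create the table as Parquet ...
Seq((1, "a")).toDF("i", "s").write.format("parquet").saveAsTable("t")
// ... then append ORC data. Without a format check, this can yield wrong
// results or strange errors at read time; with the check, it errors up front.
Seq((2, "b")).toDF("i", "s").write.format("orc").mode("append").saveAsTable("t")
```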
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13496#discussion_r66724862
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -452,6 +452,17 @@ class Analyzer(
def
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13496#discussion_r66725456
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -452,6 +452,17 @@ class Analyzer(
def
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13628
[SPARK-15907] [SQL] Issue Exceptions when Not Enough Input Columns for
Dynamic Partitioning
What changes were proposed in this pull request?
```SQL
CREATE TABLE
```
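The quoted snippet is truncated; a hedged reconstruction of the scenario (table and column names are guesses, and a running `SparkSession` named `spark` with Hive support is assumed) could be:
```scala
// The table has one data column plus two partition columns, but the SELECT
// list supplies only two values in total, so dynamic partitioning is left
// with too few input columns and should raise an exception.
spark.sql("CREATE TABLE tab (c1 INT) PARTITIONED BY (p1 STRING, p2 STRING)")
spark.sql("INSERT INTO TABLE tab PARTITION (p1, p2) SELECT 1, 'x'")
```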
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/12313
If this is too large to merge into 2.0, could @rdblue deliver a small fix
for catching the illegal user inputs? Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13570
@hvanhovell @jkbradley Could you add @ioana-delaney to the whitelist?
Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13496#discussion_r66729419
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -452,6 +452,17 @@ class Analyzer(
def
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13496#discussion_r66729456
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -452,6 +452,17 @@ class Analyzer(
def
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13496#discussion_r66729871
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -452,6 +452,17 @@ class Analyzer(
def
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/13628
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13628
There is a better fix in https://github.com/apache/spark/pull/12313. Let me
close this one.
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13633
[SPARK-15912] [SQL] Replace getPartitionsByFilter by getPartitions in
inputFiles of MetastoreRelation
What changes were proposed in this pull request?
Always returns the files of all
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13572#discussion_r66701399
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/cache.scala ---
@@ -17,30 +17,30 @@
package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
@rxin @liancheng I see. The existing Dataset API
`sparkSession.catalog.uncacheTable("non-cachedTable")` issues an error when
uncaching non-cached tables. Thus, to ensure both SQL
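For reference, a sketch of the two uncache paths being aligned, with the behavior taken from the comment above rather than verified independently:
```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local").appName("uncache").getOrCreate()

// Dataset API: per the comment above, errors when the table is not cached.
spark.catalog.uncacheTable("nonCachedTable")
// SQL interface: the change aligns this path with the Dataset API's behavior.
spark.sql("UNCACHE TABLE nonCachedTable")
```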
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13572#discussion_r66697219
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/cache.scala ---
@@ -17,30 +17,30 @@
package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
Thank you! @andrewor14
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13593#discussion_r66702359
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -323,7 +323,7 @@ class CatalogImpl(sparkSession: SparkSession
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
@liancheng `tryUncacheQuery` and `uncacheQuery` are different.
`tryUncacheQuery` does not unregister the accumulators, but `uncacheQuery`
does. This difference is also confusing
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13572
Yeah, in Spark 1.6, we also silently drop the temporary table if the names
are the same. Let me remove the related changes and update the title and JIRA
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
retest this please
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13593
[SPARK-15864] [SQL] Fix Inconsistent Behaviors when Uncaching Non-cached
Tables
What changes were proposed in this pull request?
To uncache a table, we have two different APIs
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13546
Agreed! This introduces an externally visible behavior change. Just let me know if we can do it.
Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/9162#discussion_r65785335
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/DB2IntegrationSuite.scala
---
@@ -47,19 +49,20 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13070#discussion_r65655636
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -140,6 +170,32 @@ case class CatalogTable
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13483
LGTM except for a few comments on the test cases. : )
@cloud-fan @yhuai Could you please review this PR? Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65656657
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -224,6 +224,26 @@ class DatasetAggregatorSuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13483#discussion_r65656488
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DatasetAggregatorSuite.scala ---
@@ -224,6 +224,26 @@ class DatasetAggregatorSuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r64626689
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -132,12 +131,11 @@ case class DataSource
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r64652370
--- Diff: python/pyspark/sql/utils.py ---
@@ -77,6 +83,8 @@ def deco(*a, **kw):
raise QueryExecutionException(s.split(': ', 1)[1
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r64648263
--- Diff: python/pyspark/sql/utils.py ---
@@ -77,6 +83,8 @@ def deco(*a, **kw):
raise QueryExecutionException(s.split(': ', 1)[1
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13283#discussion_r64647927
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -108,21 +108,20 @@ case class DataSource
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12772#issuecomment-221470057
Sorry, I am unable to reproduce it. Without the fix, the following test
case works well.
```scala
val data = Seq(("A\tB\tC\tD\t\t"), ("
```
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13283#issuecomment-221717353
It sounds like we need to verify all the possible source types we can
support. Let me add them. Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13283#issuecomment-221882121
@zsxwing Now, code is ready for review. Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13283#issuecomment-221770911
**Update**: The latest code changes contain:
- For the JDBC format, we added an extra check in the `ResolveRelations` rule
of the `Analyzer`. Without the PR, Spark
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
cc @rxin @hvanhovell @liancheng This is another issue related to `Cache`
and `Uncache`. Actually, I am not sure if we should provide a SQL interface
`UNCACHE TABLE IF EXISTS`.
Thanks
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13447
@yhuai You are right. This is not a good test case for verifying this. I
will add a case like
```scala
sql(
  """
    |INSERT
```
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/12313
Really like this PR! It removes one of the Hive-specific rules! : )
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13622
[SPARK-15901] [SQL] [TEST] Verification of CONVERT_METASTORE_ORC and
CONVERT_METASTORE_PARQUET
What changes were proposed in this pull request?
So far, we do not have test cases
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13400
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12669#discussion_r66798040
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionState.scala ---
@@ -223,6 +223,7 @@ private[hive] class HiveSessionState(ctx
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12669#discussion_r66804238
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionState.scala ---
@@ -223,6 +223,7 @@ private[hive] class HiveSessionState(ctx
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13622#discussion_r67021276
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/parquetSuites.scala ---
@@ -676,6 +676,46 @@ class ParquetSourceSuite extends
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13400
@cloud-fan @clockfly Thank you for your review! Let me know if the latest
code changes look fine. Thanks!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13593#discussion_r66912774
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala ---
@@ -323,7 +323,7 @@ class CatalogImpl(sparkSession: SparkSession
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
@cloud-fan @rxin @liancheng Thank you for your reviews!
The PR description is updated. Let me know if any change is needed. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
@cloud-fan Sorry, please refresh the browser. I have just finished the changes.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13679
I have a very general question. Based on my understanding, we should
introduce our own `SQLConf` parameter whenever a `HiveConf` parameter can
control Spark-internal behavior. Now, this PR is a very
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13679#discussion_r67109509
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -66,6 +67,30 @@ private[sql] class SharedState(val sparkContext
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13679
For example, `hive.exec.stagingdir`, `hive.exec.dynamic.partition`,
`hive.exec.dynamic.partition.mode` and so on.
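A hedged sketch of the mirroring idea; the Spark-side key is hypothetical, only `hive.exec.stagingdir` from the list above is shown, and a running `SparkSession` named `spark` is assumed:
```scala
// Read the Hive-controlled value once and surface it through a Spark-owned
// configuration key, instead of letting HiveConf silently steer
// Spark-internal behavior.
val stagingDir = spark.conf.get("hive.exec.stagingdir", ".hive-staging")
spark.conf.set("spark.sql.hive.stagingDir", stagingDir)  // hypothetical key
```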
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13593
@cloud-fan Thank you for letting me know. I just updated the PR. Please
let me know if anything needs a change. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13380
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13546
cc @cloud-fan
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13415
Thank you! @cloud-fan
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13572
[SPARK-15838] CACHE TABLE AS SELECT should not replace the existing Temp
Table
What changes were proposed in this pull request?
If the temp table already exists, we should not silently
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13572
@hvanhovell @liancheng Sorry, I realized we already fully support caching
tables referenced by fully qualified names. In this PR, I just added a test
case to improve the test coverage. Thanks
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13447
@hvanhovell Do the latest code changes resolve your comment? Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/13400
@rxin @andrewor14 @cloud-fan This returns a wrong result. Do you think we
should fix it in Spark 2.0? Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13122#issuecomment-222330922
@hvanhovell @yhuai This is also ready to review. Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13380#issuecomment-222337714
@dongjoon-hyun Thank you!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13111#issuecomment-222330637
cc @yhuai The code is ready for review. Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13352#issuecomment-222335566
Sorry, I did not notice this PR. I submitted another PR
(https://github.com/apache/spark/pull/13380) that removes `SQLContext` from
`MLlib`. Any reason why you
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13380#issuecomment-222335661
@dongjoon-hyun @andrewor14 This PR removes `SQLContext` from `MLlib`. Let
me know if we should keep it. Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13380#issuecomment-222328340
@rxin `SQLContext` is not being used in the `MLlib` test suites. Thus, this
PR simply uses the latest `SparkSession` to replace the existing
`SQLContext
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13380
[SPARK-15644] [MLlib] [SQL] Replace SQLContext with SparkSession in MLlib
What changes were proposed in this pull request?
This PR is to use the latest `SparkSession` to replace
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13380#discussion_r64996988
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/joins/BroadcastJoinSuite.scala
---
@@ -48,7 +48,7 @@ class BroadcastJoinSuite extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13380#discussion_r64997106
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/joins/BroadcastJoinSuite.scala
---
@@ -48,7 +48,7 @@ class BroadcastJoinSuite extends
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13392#issuecomment-222346373
cc @cloud-fan @rxin Could you verify if my understanding is right? Thanks!
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13070#issuecomment-222345018
@rxin Code is ready for review. Thanks!
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13392
[SPARK-15647] [SQL] Fix Boundary Cases in OptimizeCodegen Rule
What changes were proposed in this pull request?
The following condition in the Optimizer rule `OptimizeCodegen
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13392#issuecomment-222346333
@dongjoon-hyun FYI, this PR just fixes the boundary cases. I know this
issue was not introduced by your PR:
https://github.com/apache/spark/pull/12353. Thanks
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12728#discussion_r65092509
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -114,13 +110,15 @@ case class InsertIntoHiveTable
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/13392#issuecomment-222565978
Sorry, I pushed to the wrong branch. : )
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/13400
[SPARK-15655] [SQL] Fix Wrong Partition Column Order when Fetching
Partitioned Tables
What changes were proposed in this pull request?
When fetching the partitioned table, the output
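A hedged illustration of the symptom (the schema and values are hypothetical, and a running `SparkSession` named `spark` with Hive support is assumed): if the fetched partition columns come back in an order different from the declared schema, values can be bound to the wrong columns.
```scala
spark.sql("CREATE TABLE pt (v INT) PARTITIONED BY (b STRING, a STRING)")
spark.sql("INSERT INTO TABLE pt PARTITION (b = 'b1', a = 'a1') SELECT 1")
// Expected: a = 'a1', b = 'b1'; with the bug, partition values may be swapped.
spark.sql("SELECT v, a, b FROM pt").show()
```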
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/13392#discussion_r65121902
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala ---
@@ -17,13 +17,36 @@
package