Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898801
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/ExternalCatalogSuite.scala
---
@@ -860,6 +864,24 @@ abstract class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898653
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -551,17 +551,26 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898562
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -658,6 +720,17 @@ class Analyzer(
Generate
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898349
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,94 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898300
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,94 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898210
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,94 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898130
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,94 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93898018
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,94 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16388#discussion_r93897706
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -408,8 +408,15 @@ private[hive] class HiveClientImpl
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16233
> We can make ResolveRelations View aware, and make it keep track of the
default databases (plural - in case of nested views). The default database will
be the one of the last seen parent v
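The idea quoted above (a view-aware ResolveRelations that tracks the default database per view, including nested views) can be illustrated with a minimal, hypothetical model. None of these case classes or names are Spark's actual Analyzer API; this is only a sketch of the resolution rule being discussed:

```scala
// Illustrative stand-ins, not Spark's actual logical plan classes.
sealed trait Plan
case class UnresolvedRelation(table: String, database: Option[String]) extends Plan
case class ResolvedRelation(database: String, table: String) extends Plan
case class View(defaultDatabase: String, body: Plan) extends Plan

// Resolve relations while tracking the current default database. Entering a
// (possibly nested) view switches the default database to the one recorded
// when that view was defined, so its unqualified table names keep resolving
// the way they did at definition time.
def resolveRelations(plan: Plan, defaultDb: String): Plan = plan match {
  case UnresolvedRelation(table, db) => ResolvedRelation(db.getOrElse(defaultDb), table)
  case View(viewDb, body)            => resolveRelations(body, viewDb)
  case resolved                      => resolved
}
```

For example, resolving `View("db1", UnresolvedRelation("t", None))` under a session default of `"default"` yields `ResolvedRelation("db1", "t")`: the unqualified name inside the view binds to the view's own default database, i.e. the one of the innermost enclosing view.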
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15996
ah
https://github.com/apache/spark/commit/9a1ad71db44558bb6eb380dc23a1a1abbc2f3e98
failed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15996
LGTM. Can you update the comment to address my last comment
(https://github.com/apache/spark/pull/15996#discussion_r93730700)?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93730700
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -643,6 +644,14 @@ class DataFrameReaderWriterSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720714
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -195,12 +195,25 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720687
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -195,12 +195,25 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720630
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -195,12 +195,25 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720613
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -195,12 +195,25 @@ class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720521
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -140,153 +140,55 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720313
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -363,48 +365,125 @@ final class DataFrameWriter[T] private[sql](ds
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93720195
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -364,48 +366,162 @@ final class DataFrameWriter[T] private[sql](ds
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93719938
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -364,48 +366,162 @@ final class DataFrameWriter[T] private[sql](ds
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93719624
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -635,4 +638,13 @@ class DataFrameReaderWriterSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15996#discussion_r93718921
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -635,4 +638,13 @@ class DataFrameReaderWriterSuite
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16233
One general comment, let's explain how this patch maintains the
compatibility with views defined by previous versions of Spark. It is also good
to explain it in the corresponding part in the code
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93551189
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -549,17 +549,26 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93551078
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,62 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93549589
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -396,6 +396,20 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93549309
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,62 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93549165
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,62 @@ class Analyzer(
* Replaces
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16233#discussion_r93548847
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -510,32 +510,62 @@ class Analyzer(
* Replaces
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16357
@mridulm ok. I merged this because it is a backport (the original patch has
already been merged to 2.1 and master) and I believe Josh has already addressed
your concerns. If you want us hold
mer/paranamer 2.6, I suggest that we upgrade paranamer
to 2.6.
Author: Yin Huai <yh...@databricks.com>
Closes #16359 from yhuai/SPARK-18951.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1a643889
Tree: http://git-wip-us.a
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16359
Thanks! I will get this in master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16359
test this please
Repository: spark
Updated Branches:
refs/heads/branch-2.0 678d91c1d -> 2aae220b5
[SPARK-18928][BRANCH-2.0] Check TaskContext.isInterrupted() in FileScanRDD,
JDBCRDD & UnsafeSorter
This is a branch-2.0 backport of #16340; the original description follows:
## What changes were proposed in
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16357
Merging to branch 2.0.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 1f0c5fa75 -> 678d91c1d
[SPARK-18761][BRANCH-2.0] Introduce "task reaper" to oversee task killing in
executors
Branch-2.0 backport of #16189; original description follows:
## What changes were proposed in this pull request?
Spark's
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16358
LGTM. Merging to branch-2.0
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16359
@srowen @JoshRosen for review
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16359#discussion_r93334655
--- Diff: pom.xml ---
@@ -179,7 +179,7 @@
4.5.3
1.1
2.52.0
-2.8
+2.6
--- End diff --
Although we
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16359
[SPARK-18951] Upgrade com.thoughtworks.paranamer/paranamer to 2.6
## What changes were proposed in this pull request?
I recently hit a bug of com.thoughtworks.paranamer/paranamer, which causes
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16357
LGTM
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16330#discussion_r93176308
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -104,6 +104,12 @@ class SparkHadoopUtil extends Logging
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
@mridulm Sure. Also, please feel free to leave more comments :)
Repository: spark
Updated Branches:
refs/heads/master 5857b9ac2 -> fa829ce21
[SPARK-18761][CORE] Introduce "task reaper" to oversee task killing in executors
## What changes were proposed in this pull request?
Spark's current task cancellation / task killing mechanism is "best effort"
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
LGTM!
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
Thank you for those comments. I am merging this to master.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r93162832
--- Diff: core/src/test/scala/org/apache/spark/JobCancellationSuite.scala
---
@@ -209,6 +209,83 @@ class JobCancellationSuite extends SparkFunSuite
Repository: spark
Updated Branches:
refs/heads/branch-2.1 fc1b25660 -> c1a26b458
[SPARK-18921][SQL] check database existence with Hive.databaseExists instead of
getDatabase
## What changes were proposed in this pull request?
It's weird that we use `Hive.getDatabase` to check the existence
Repository: spark
Updated Branches:
refs/heads/master 24482858e -> 7a75ee1c9
[SPARK-18921][SQL] check database existence with Hive.databaseExists instead of
getDatabase
## What changes were proposed in this pull request?
It's weird that we use `Hive.getDatabase` to check the existence of a
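The SPARK-18921 change above replaces exception-driven probing (`Hive.getDatabase`) with a direct existence check (`Hive.databaseExists`). The contrast can be sketched with a hypothetical interface; the real Hive client has different signatures, so this only illustrates the two styles:

```scala
// Hypothetical interface, not Spark's or Hive's actual client API.
trait HiveClient {
  def getDatabase(name: String): String      // assume it throws NoSuchElementException if absent
  def databaseExists(name: String): Boolean  // direct existence check
}

// Exception-driven probing: functional, but it turns a routine "no" answer
// into control flow and can mask unrelated failures raised by getDatabase.
def existsViaGet(client: HiveClient, db: String): Boolean =
  try { client.getDatabase(db); true }
  catch { case _: NoSuchElementException => false }

// Preferred: ask the question the API already answers directly.
def exists(client: HiveClient, db: String): Boolean = client.databaseExists(db)
```

The second form is both clearer and cheaper: no exception needs to be constructed and caught on the common "database missing" path.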
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16332
LGTM. Merging to master and branch 2.1.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r92905791
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,6 +465,93 @@ private[spark] class Executor
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r92905622
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,6 +465,93 @@ private[spark] class Executor
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r92905603
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -432,6 +465,93 @@ private[spark] class Executor
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/16189#discussion_r92904925
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -229,9 +259,12 @@ private[spark] class Executor
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
I am reviewing it now
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
let's trigger more tests and see if the test is flaky.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16189
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16288
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16277
LGTM
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16268
LGTM
Repository: spark
Updated Branches:
refs/heads/master d53f18cae -> fb3081d3b
[SPARK-13747][CORE] Fix potential ThreadLocal leaks in RPC when using
ForkJoinPool
## What changes were proposed in this pull request?
Some places in SQL may call `RpcEndpointRef.askWithRetry` (e.g.,
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16230
Merging to master
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16230
LGTM
Repository: spark
Updated Branches:
refs/heads/master 096f868b7 -> d53f18cae
[SPARK-18675][SQL] CTAS for hive serde table should work for all hive versions
## What changes were proposed in this pull request?
Before hive 1.1, when inserting into a table, hive will create the staging
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16104
LGTM. Thanks. Merging to master
Repository: spark
Updated Branches:
refs/heads/master d57a594b8 -> f8878a4c6
[SPARK-18631][SQL] Changed ExchangeCoordinator re-partitioning to avoid more
data skew
## What changes were proposed in this pull request?
Re-partitioning logic in ExchangeCoordinator changed so that adding another
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16065
Thanks @markhamstra Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16065
lgtm
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90098793
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -122,10 +126,20 @@ class HadoopTableReader(
val attrsWithIndex
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14638#discussion_r90098551
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -122,10 +126,20 @@ class HadoopTableReader(
val attrsWithIndex
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15979
looks good. @liancheng want to double check?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15979#discussion_r89936859
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -47,6 +47,14 @@ object ExpressionEncoder
tps://github.com/apache/spark/blob/branch-2.1/pom.xml#L1759). So, this PR
upgrades org.codehaus.janino:commons-compiler to 3.0.0.
## How was this patch tested?
jenkins
Author: Yin Huai <yh...@databricks.com>
Closes #16025 from yhuai/janino-commons-compile.
(cherry picked fr
tps://github.com/apache/spark/blob/branch-2.1/pom.xml#L1759). So, this PR
upgrades org.codehaus.janino:commons-compiler to 3.0.0.
## How was this patch tested?
jenkins
Author: Yin Huai <yh...@databricks.com>
Closes #16025 from yhuai/janino-commons-compile.
Project: http://git-wip-us.apache.org/
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16025
Thanks. Since we started to use janino 3.0.0 in Spark 2.1, I am merging this
pr to both master and branch 2.1.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/16025
[SPARK-18602] Set the version of org.codehaus.janino:commons-compiler to
3.0.0 to match the version of org.codehaus.janino:janino
## What changes were proposed in this pull request
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/16025
@kiszk want to take a look?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15062
any chance that is the same issue as
https://issues.apache.org/jira/browse/SPARK-17109?
@rdblue When you were debugging this issue, which version of scala did you
use? Scala 2.10 or Scala 2.11
Repository: spark
Updated Branches:
refs/heads/branch-2.1 978798880 -> fc466be4f
[SPARK-18360][SQL] default table path of tables in default database should
depend on the location of default database
## What changes were proposed in this pull request?
The current semantic of the warehouse
Repository: spark
Updated Branches:
refs/heads/master b0aa1aa1a -> ce13c2672
[SPARK-18360][SQL] default table path of tables in default database should
depend on the location of default database
## What changes were proposed in this pull request?
The current semantic of the warehouse
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15812
LGTM. Merging to master and branch 2.1.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15922
lgtm pending jenkins
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15703
LGTM. Merging to master.
Repository: spark
Updated Branches:
refs/heads/master a36a76ac4 -> 2ca8ae9aa
[SPARK-18186] Migrate HiveUDAFFunction to TypedImperativeAggregate for partial
aggregation support
## What changes were proposed in this pull request?
While being evaluated in Spark SQL, Hive UDAFs don't support
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/15703
Code changes look good to me. Let's also do a benchmark to sanity-check
our implementation.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88326725
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15900#discussion_r88289527
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -1371,4 +1371,23 @@ class MetastoreDataSourcesSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15900#discussion_r88289294
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -1023,6 +1023,11 @@ object HiveExternalCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15900#discussion_r88289046
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/MetastoreDataSourcesSuite.scala
---
@@ -1371,4 +1371,23 @@ class MetastoreDataSourcesSuite
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88172060
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88171437
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88171373
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88171285
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88171097
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -289,73 +302,75 @@ private[hive] case class HiveUDAFFunction
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88170694
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88170550
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88168751
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -263,8 +265,19 @@ private[hive] case class HiveGenericUDTF
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140760
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/15703#discussion_r88140970
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/hiveUDFs.scala
---
@@ -365,4 +380,66 @@ private[hive] case class HiveUDAFFunction(
val