Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17405#discussion_r107829978
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -441,7 +443,7 @@ private[hive] class HiveClientImpl
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17405
@gatorsmile . It looks good to me. Thank you for refactoring.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Could you review this `stack` PR again, @cloud-fan ?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17432#discussion_r108065494
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -597,12 +597,14 @@ object CollapseRepartition
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17432
LGTM!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17432
Thank you for fixing this! It seems that this should be included in RC2.
When will RC2 be cut?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Hi, @gatorsmile .
Could you review this PR when you have some time?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Thank you so much! :)
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r108854595
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -156,9 +157,21 @@ case class Stack(children
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r108854853
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/GeneratorFunctionSuite.scala ---
@@ -39,9 +39,9 @@ class GeneratorFunctionSuite extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r108854836
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercionSuite.scala
---
@@ -707,6 +707,36 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r108855184
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -590,6 +591,21 @@ object TypeCoercion
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Thank you so much for review, @gatorsmile .
I updated the PR.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Hi, @cloud-fan and @gatorsmile .
If there is something to do more, please let me know.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17106
[SPARK-19775][SQL] Remove an obsolete `partitionBy().insertInto()` test case
## What changes were proposed in this pull request?
This issue removes [a test
case](https://github.com
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17106
Right, thank you, @srowen .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17106
Thank you for the review, @rdblue !
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17106
Thank you for merging, @srowen .
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17143
[SPARK-19801][BUILD] Remove JDK7 from Travis CI
## What changes were proposed in this pull request?
Since Spark 2.1.0, Travis CI was supported by SPARK-15207 for automated PR
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17143
Thank you, @srowen .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17143
The only failure is unrelated to this PR.
```
[info] KafkaSourceStressForDontFailOnDataLossSuite:
[info] - stress test for failOnDataLoss=false *** FAILED *** (1 minute, 2
seconds
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17143
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17143
Thank you, @felixcheung .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17143
Thank you for merging, @srowen .
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17182
[SPARK-19840][SQL] Disallow creating permanent functions with invalid class
names
## What changes were proposed in this pull request?
Currently, Spark raises exceptions on creating
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16896#discussion_r104601283
--- Diff: python/pyspark/sql/types.py ---
@@ -189,7 +189,7 @@ def toInternal(self, dt):
if dt is not None:
seconds
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16896
+1 LGTM.
Could you review and merge this please, @davies ?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17182
Retest this please
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17182#discussion_r104718115
--- Diff: python/pyspark/sql/tests.py ---
@@ -1937,19 +1937,6 @@ def test_list_functions(self):
className
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17182#discussion_r104719151
--- Diff: python/pyspark/sql/tests.py ---
@@ -1937,19 +1937,6 @@ def test_list_functions(self):
className
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17182
Could you review this, @gatorsmile ?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17182#discussion_r104771499
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/functions.scala
---
@@ -63,7 +63,10 @@ case class CreateFunctionCommand
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16896
Sorry. I didn't realize that either.
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17223
[SPARK-19881][SQL] Support Dynamic Partition Inserts params with SET command
## What changes were proposed in this pull request?
Since Spark 2.0.0, `SET` commands do not pass the
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17223#discussion_r105102457
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -793,6 +794,20 @@ private[spark] class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17223
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16909
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Hi, @budde and @cloud-fan .
I met the following situation with Apache master after this commit. Could
you check the following case? Previously, Apache Spark shows the correct result
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Maybe, `t1` is already corrupted. Let me try a new one with that option.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
After the `hive` alteration, I ran `spark-shell` and set the following
immediately.
But it's already broken. When do you save the newly inferred schema?
```
scala>
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Should the option be set before `CREATE TABLE`? If so, it seems that we
cannot prevent the corruption of existing Parquet tables.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Ur, I've got the following.
```
scala> sql("set spark.sql.hive.caseSensitiveInferenceMode=INFER_ONLY").show
++--+
|
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
I found that `NEVER_INFER` works.
```
scala> sql("set spark.sql.hive.caseSensitiveInferenceMode=NEVER_INFER").show
++---+
|
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Yes. It's a Mac environment with `spark-shell` and `hive 1.2.1`, fully
local. You can try that on your Mac.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Oh, whenever I start `spark-shell`, I have to do the following, or keep
it in the Spark configuration. Otherwise, it shows a wrong result.
```
sql("
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Yep, right. I should have described that more clearly in the initial
report...
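[Editor's note] The workaround above (re-running the `SET` command in every session) can instead be pinned in the Spark configuration. A sketch, assuming the standard `conf/spark-defaults.conf` mechanism:

```
spark.sql.hive.caseSensitiveInferenceMode  NEVER_INFER
```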
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14033
Oh, I'll look at that, @cloud-fan .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14033
Do you mean the column-wise type coercion to support the following?
```
hive> select stack(2,1,'a',3,2);
FAILED: UDFArgumentException Argument 2's type (string)
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17232#discussion_r105448253
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -95,6 +95,7 @@ private[hive] object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17232#discussion_r105449419
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/package.scala ---
@@ -67,7 +67,11 @@ package object client {
exclusions
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14033
Ah, I see the case. Hive allows NULL. I'll try to support the NULL case.
Thanks, @cloud-fan .
```sql
hive> select stack(3, 1, 'a', 2, 'b', 3, 'c')
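[Editor's note] The layout `stack` produces can be illustrated with a small pure-Python sketch of its semantics (a hypothetical helper, not Spark's Scala implementation in `generators.scala`): `stack(n, v1, ..., vk)` distributes the k values into n rows of ceil(k/n) columns, padding with NULL, which is why rejecting NULL values outright is a problem.

```python
def stack(n, *values):
    """Emulate SQL stack(n, v1, ..., vk): distribute the values into n rows
    of ceil(k / n) columns, padding missing cells with None (SQL NULL)."""
    ncols = -(-len(values) // n)  # ceil division
    rows = []
    for r in range(n):
        row = values[r * ncols:(r + 1) * ncols]
        rows.append(tuple(row) + (None,) * (ncols - len(row)))
    return rows

# Three rows of two columns:
print(stack(3, 1, 'a', 2, 'b', 3, 'c'))  # [(1, 'a'), (2, 'b'), (3, 'c')]
# A NULL in the data should be fine as long as the column types line up:
print(stack(2, 1, None, 2, 'b'))         # [(1, None), (2, 'b')]
```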
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16944
Thank you for the quick investigation. Yep, please go ahead!
BTW, can we hold off on the backport (#17229) until all the issues are
resolved?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17229
Hi, @budde and @cloud-fan .
If possible, please hold off on this backport until the new issue is
resolved.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14033
I created the JIRA issue. I'll make a PR soon.
https://issues.apache.org/jira/browse/SPARK-19910
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17251
[SPARK-19910][SQL] `stack` should not reject NULL values due to type
mismatch
## What changes were proposed in this pull request?
Since `stack` function generates a table with
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17249
Sure, I'll test locally, too.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17249#discussion_r105499333
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -356,13 +356,10 @@ private[hive] object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17249#discussion_r105499863
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -356,13 +356,10 @@ private[hive] object
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17249
Great. I also verified the patch.
```scala
scala> sql("SELECT a, b FROM t1").show
+---+---+
| a| b|
+---+---+
|100|200|
+---+---+
scala
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17223
Could you review this when you have some time, @gatorsmile?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r105514460
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -156,9 +157,21 @@ case class Stack(children
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Hi, @cloud-fan .
Could you review this PR about `stack` function?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r105515079
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/generators.scala
---
@@ -146,7 +146,8 @@ case class Stack(children
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
I see. I'll check that.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
It seems that we already have that rule
[here](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala#L673-L674
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
For the `stack` function, we cannot use `ExpectsInputTypes` or
`ImplicitCastInputTypes`, can we?
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
@cloud-fan .
For the following case, the values of `stack` consist of multiple columns.
Then, by adding a *StackCoercion* rule, we need to *insert Cast() for all
NullType
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Basically, we can do that by bringing the logic from
`Stack.checkInputDataTypes`.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r105528166
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -590,6 +591,22 @@ object TypeCoercion
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
I added `StackCoercion`
[here](https://github.com/apache/spark/pull/17251/commits/36d90811a77889b19c47347fc591a8e1a6a482f3),
but reverted that.
For `StackCoercion`, we need
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17251
Retest this please
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17266
[SPARK-19912][SQL] String literals should be escaped for Hive metastore
partition pruning
## What changes were proposed in this pull request?
Currently, HiveShim's `convertFi
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17266#discussion_r105553664
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -566,13 +566,24 @@ private[client] class Shim_v0_13 extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17264#discussion_r105554355
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala ---
@@ -208,10 +215,8 @@ private[orc] class OrcSerializer
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17266#discussion_r105572627
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -566,13 +566,24 @@ private[client] class Shim_v0_13 extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
Yep. I tried escaping first, but it doesn't work on the Hive side.
Also, I tried `'"'"'"` for `"'` because it works in Hive. But, for fil
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17266#discussion_r105572873
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -566,13 +566,24 @@ private[client] class Shim_v0_13 extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
BTW, I forgot to thank you. :) Thank you for review.
For the non-mixed cases, I think we don't need to escape.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
Yes. What I meant was that it is not supported correctly, according to the
Hive documentation.
For the mixed case, every combination of escaping fails.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
It's not a bug in the Hive CLI; it seems to be a limitation of that API.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
If you want, I will add some other failing test cases in this PR.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
Yep. I see. Thanks for the guidance, @gatorsmile!
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17271
[SPARK-19912][SQL] String literals should be escaped for Hive metastore
partition pruning
## What changes were proposed in this pull request?
This is not for merging.
This shows
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17271
@gatorsmile . This is the PR for showing failure.
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17271#discussion_r105591776
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -566,13 +566,17 @@ private[client] class Shim_v0_13 extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17271
This fails as expected.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
#17271 failed as expected. The Hive API does not handle filters with
escaped strings, e.g. two escaped chars like `\"\"`.
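[Editor's note] The approach that eventually worked in SPARK-19912 is to quote the string literal rather than escape it. A minimal Python sketch of that idea (the function name and error message are illustrative; the real change lives in `HiveShim.scala`):

```python
def quote_string_literal(s):
    """Quote a string value for a Hive metastore partition filter.
    The filter API has no escape syntax, so pick whichever quote
    character does not occur in the value."""
    if '"' not in s:
        return '"%s"' % s
    if "'" not in s:
        return "'%s'" % s
    # A value containing both quote kinds cannot be pushed down safely.
    raise ValueError("cannot quote a value containing both quote characters")

print(quote_string_literal("p1"))   # "p1"
print(quote_string_literal('a"b'))  # 'a"b'
```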
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r105598528
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -590,6 +591,22 @@ object TypeCoercion
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17251#discussion_r105599753
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -590,6 +591,22 @@ object TypeCoercion
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/17273
[MINOR][CORE] No need to call `prunePartitions` in case of empty partition
## What changes were proposed in this pull request?
`PrunedInMemoryFileIndex.prunePartitions` shows `pruned
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17273
Otherwise, we can modify the [logInfo
code](https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileIndex.scala#L185
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17273
Thank you for review, @srowen .
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17271
Retest this please
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17223
Hi, @cloud-fan .
Could you review this when you have some time?
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17271#discussion_r105739210
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -566,13 +566,17 @@ private[client] class Shim_v0_13 extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
Sure!
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17266
HIVE-11723 resolved this in
[SemanticAnalyzer](https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L1002-L1003).
I think