Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20795#discussion_r175022409
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1192,11 +1195,23 @@ class Analyzer(
* @see
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r175019613
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -542,6 +542,11 @@ case class DataSource
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20579
@cloud-fan @rdblue Thank you for the clarification. I am sorry, I hadn't seen
your comments before I pushed the last change, which targets only Parquet. I
will adjust the fix to target all formats.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@cloud-fan @jiangxb1987 Thank you very much !!
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r173086959
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -202,7 +211,7 @@ object FileFormatWriter
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
retest this please
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@cloud-fan Can we resume on this now, Wenchen?
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20548
@cloud-fan It's SPARK-21759.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@cloud-fan Got it, Wenchen. Thanks for your reply. I will hold off on 20579
for a while till we get
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20548
@gatorsmile Yeah, it's due to the changes made for SPARK-21759. The fix
looks okay to me.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@gatorsmile @cloud-fan Hello, is there anything pending for this? The
reason I ask is, for the other
[PR](https://github.com/apache/spark/pull/20579), I just realised that the
write code
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r167664375
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala ---
@@ -72,6 +72,29 @@ class FileBasedDataSourceSuite extends
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r167660095
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -68,6 +68,16 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r167659522
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala ---
@@ -72,6 +72,29 @@ class FileBasedDataSourceSuite extends
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r167465371
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala ---
@@ -72,6 +72,29 @@ class FileBasedDataSourceSuite extends
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20579#discussion_r167463979
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -68,6 +68,16 @@ class
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20579
[SPARK-23372][SQL] Writing empty struct in parquet fails during execution.
It should fail earlier in the processing.
## What changes were proposed in this pull request?
Running
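The failure mode this PR targets can be sketched as follows — a hypothetical minimal reproduction, not code from the PR itself; the session setup and output path are assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: a DataFrame with an empty schema. Before this
// fix, the Parquet write failed late, during execution; the PR is
// about surfacing the failure earlier in query processing.
val spark = SparkSession.builder().master("local[1]").getOrCreate()
val df = spark.emptyDataFrame              // zero columns, zero rows
df.write.parquet("/tmp/empty_struct_out")  // expected to be rejected up front
```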
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167177337
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -301,7 +301,6 @@ class DataFrameReaderWriterSuite
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167164589
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -301,7 +301,6 @@ class DataFrameReaderWriterSuite
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167163936
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -301,7 +301,6 @@ class DataFrameReaderWriterSuite
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167163193
--- Diff: docs/sql-programming-guide.md ---
@@ -1930,6 +1930,8 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167160198
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -301,7 +301,6 @@ class DataFrameReaderWriterSuite
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167160086
--- Diff: docs/sql-programming-guide.md ---
@@ -1930,6 +1930,9 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r167159534
--- Diff: docs/sql-programming-guide.md ---
@@ -1930,6 +1930,9 @@ working with timestamps in `pandas_udf`s to get the
best performance, see
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/20551
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@cloud-fan @gatorsmile Done.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@cloud-fan Actually, I had already created the doc PR in the morning using
the same JIRA number. Wenchen, if we want to have both the changes in the same
commit, will we be able to do it when
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20551
cc @gatorsmile Hi Sean, please let me know your thoughts.
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20551
[SPARK-23271][DOC] Document the empty dataframe write semantics
## What changes were proposed in this pull request?
Document the change in semantics while writing/saving empty dataframes
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
@gatorsmile Thanks. I will create a doc PR and address it.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
retest this please
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20525
retest this please
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166849342
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatWriterSuite.scala
---
@@ -19,6 +19,7 @@ package
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166849297
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -190,9 +190,18 @@ object FileFormatWriter
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166834243
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -190,9 +190,13 @@ object FileFormatWriter
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166782197
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatWriterSuite.scala
---
@@ -32,6 +33,24 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166779585
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatWriterSuite.scala
---
@@ -32,6 +33,24 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166778140
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -190,9 +190,13 @@ object FileFormatWriter
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166776296
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatWriterSuite.scala
---
@@ -32,6 +33,24 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166774153
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -190,9 +190,13 @@ object FileFormatWriter
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166679420
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -190,9 +190,13 @@ object FileFormatWriter
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166540812
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileFormatWriterSuite.scala
---
@@ -32,6 +33,24 @@ class
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20525#discussion_r166540285
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/DataFrameReaderWriterSuite.scala
---
@@ -301,7 +301,6 @@ class DataFrameReaderWriterSuite
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20525
SPARK-23271 Parquet output contains only _SUCCESS file after writing an
empty dataframe
## What changes were proposed in this pull request?
Below are the two cases.
``` SQL
case 1
```
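The reported scenario can be sketched as follows — a hypothetical reproduction; the session setup and path are assumptions, not taken from the PR:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical sketch: a DataFrame with a real schema but zero rows.
// The reported bug was that the Parquet output directory contained
// only a _SUCCESS marker and no data files, so the schema could not
// be recovered on read.
val spark = SparkSession.builder().master("local[1]").getOrCreate()
val empty = spark.range(10).filter("id < 0")   // schema: id BIGINT; no rows
empty.write.mode("overwrite").parquet("/tmp/empty_rows_out")
spark.read.parquet("/tmp/empty_rows_out").printSchema()
```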
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20453#discussion_r165203182
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -18,7 +18,6 @@
package org.apache.spark.sql
import
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20453
Thanks a LOT @gatorsmile
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/20453#discussion_r165198141
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1493,7 +1493,7 @@ class Analyzer
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20453
cc @gatorsmile @cloud-fan
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20453
[SPARK-23281][SQL] Query produces results in incorrect order when a
composite order by clause refers to both original columns and aliases
## What changes were proposed in this pull request
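The query shape affected can be illustrated with a hypothetical table and column names (assumptions, not from the PR):

```scala
// Hypothetical illustration: a composite ORDER BY mixing an original
// column (a) with an alias defined in the SELECT list (c). SPARK-23281
// reports that such queries could return rows in the wrong order.
spark.sql("""
  SELECT a, b AS c
  FROM t
  ORDER BY a DESC, c ASC
""")
```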
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20441
@srowen Hello, yeah, I saw the same error. Quite a few errors like
```
java.lang.RuntimeException: Unable to instantiate
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
```
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20441
Many thanks @gatorsmile .
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20441
[SPARK-23275] hive/tests have been failing when run locally on the laptop
(Mac) with OOM
## What changes were proposed in this pull request?
hive tests have been failing when they are run
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/20283
Thank you very much @jiangxb1987 @gatorsmile
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/20283
[SPARK-23095][SQL] Decorrelation of scalar subquery fails with
java.util.NoSuchElementException
## What changes were proposed in this pull request?
The following SQL involving scalar
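A hypothetical example of the query class involved — table names are assumptions, not from the PR:

```scala
// Hypothetical sketch: a correlated scalar subquery. Decorrelation
// rewrites the inner aggregate into a join; this PR fixes a
// java.util.NoSuchElementException hit during that rewrite for some
// queries of this class.
spark.sql("""
  SELECT t1.a,
         (SELECT max(t2.b) FROM t2 WHERE t2.a = t1.a) AS max_b
  FROM t1
""")
```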
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15332
@cloud-fan fyi -
https://github.com/apache/parquet-mr/tree/master/parquet-tools
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15332
@cloud-fan Hi Wenchen, it's been a while, so I am trying my best to
recollect. I think once I had the write code implemented in Spark, I used it to
produce files. Depending on the data, parquet
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146650909
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146467360
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146466342
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146419941
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146417603
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,114
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19286
@viirya No problem. The newer version you have looks clean as well.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19286
@viirya Hey Simon, thanks for catching this. Will it be a little easier to
follow if we write it like this?
```
override def isCascadingTruncateTable(): Option[Boolean] = {
def
```
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19215
many thanks @gatorsmile
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19215
cc @gatorsmile @cloud-fan
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19068
@cloud-fan it didn't trigger the test?
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19068
@cloud-fan Yeah, I have tried my script against this PR and it works fine.
I am not familiar with the changes and don't know if it can have any side
effects. One thing that I haven't had
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/19215
[MINOR][SQL] Only populate type metadata for required types such as
CHAR/VARCHAR.
## What changes were proposed in this pull request?
When reading column descriptions from hive catalog, we
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/19068
@gatorsmile Hi Sean,
I am hitting this issue. Actually, this seems like a regression, as my script
which was working before is no longer working. Here is my scenario.
1) spark-sql
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15334
@gatorsmile Hi Sean, I tried Apache Drill after looking through their
documentation, and they are able to encode interval data into Parquet.
```
0: jdbc:drill:zk=local> CREATE TA
```
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/15334
@gatorsmile Hi Sean, I am sorry, I didn't see your ping on this PR. I will
get back to you on your question as I need to re-create my env. I do remember
one thing though. I do remember looking
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135394212
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -98,6 +99,11 @@ object RewritePredicateSubquery
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/19050#discussion_r135307335
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -98,6 +99,11 @@ object RewritePredicateSubquery
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
Thanks, Simon. The changes look good to me. cc @cloud-fan @gatorsmile for any
additional comments.
---
If your project is set up for it, you can reply to this email and have your
reply appear
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18968#discussion_r134933318
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1286,8 +1286,16 @@ class Analyzer
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
Thanks @viirya @cloud-fan. This looks much better. Can we not preserve the
user-facing error we raise today? I think the error we raise today is better
for the user.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@viirya Thank you !!
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@gatorsmile @cloud-fan @viirya I was thinking about cloud-fan's question.
Actually, we may not be representing the data type of the ListQuery expression
correctly. Would this represent
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18968#discussion_r134126635
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -138,46 +138,80 @@ case class Not(child
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@viirya ok.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@gatorsmile @viirya There was a
[PR](https://github.com/apache/spark/pull/17520) from Natt. Is it possible to
get some feedback on the idea? If we do this, the next step was to combine
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@viirya I agree that we should report violations. The only question I had
is whether we should tie this particular check to the expression being resolved
or not. In the old version of the code
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@viirya Isn't checkAnalysis supposed to catch such semantic errors? In my
thinking, this particular error is to make sure the number of args on the left-hand
side matches the right-hand side
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18968
@viirya Hi Simon, many thanks for finding this. Instead of adding the
compensation code in the resolve logic for the in-subquery expression, can we
consider moving the semantic checking
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18649
@debugger87 Hello,
There have been a few posts about Hive where this setting causes a "too many
open files" issue.
https://community.hortonworks.com/questions/48351/hiveserver2-hive-users
Github user dilipbiswal closed the pull request at:
https://github.com/apache/spark/pull/18847
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18847
@gatorsmile Thanks !! To the best of my knowledge, we don't have the
problem of the ANALYZE TABLE command failing with java.util.NoSuchElementException
in 2.2. In 2.2, we used to add the column
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18847
@gatorsmile I just created this PR for you to take a look and decide if we
need to backport the above 3 PRs. The problem for SPARK-21599 does not exist
on 2.2, as it was introduced as part
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/18847
[SPARK-12717][SPARK-21031][SPARK-21599][SQL][BRANCH-2.2] Collecting column
statistics for datasource tables may fail with java.util.NoSuchElementException
## What changes were proposed
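The command involved can be sketched with a hypothetical table and column names (assumptions, not from the PR):

```scala
// Hypothetical sketch: collecting column statistics on a datasource
// table -- the operation that could fail with
// java.util.NoSuchElementException per the JIRAs above.
spark.sql("ANALYZE TABLE t COMPUTE STATISTICS FOR COLUMNS a, b")
```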
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18804
@gatorsmile Thank you very much !! Sure, I will submit a backport to 2.2.
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18804
retest this please
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r131003021
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -117,6 +117,40 @@ class StatisticsSuite extends
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r131001045
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -117,6 +117,40 @@ class StatisticsSuite extends
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130992707
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,13 @@ private[spark] class HiveExternalCatalog
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130804246
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,13 @@ private[spark] class HiveExternalCatalog
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18804
retest this please
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130801694
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,13 @@ private[spark] class HiveExternalCatalog
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130785255
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class HiveExternalCatalog
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130784019
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class HiveExternalCatalog
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/18804#discussion_r130783643
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -642,8 +642,15 @@ private[spark] class HiveExternalCatalog