Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16549
LGTM
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16549
Thanks! Merging to master.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16583#discussion_r96122416
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -247,6 +247,16 @@ class HiveDDLSuite
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16565
I think it is fine to do it together. Basically, your PR fixes the bug from
https://github.com/apache/spark/pull/15111
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96128666
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -69,34 +69,31 @@ import
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96130080
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -99,7 +99,7 @@ class HadoopMapReduceCommitProtocol
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96130310
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,42 @@ class DetermineHiveSerde(conf: SQLConf) extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96130318
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveStrategies.scala ---
@@ -86,6 +86,42 @@ class DetermineHiveSerde(conf: SQLConf) extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16553#discussion_r96131308
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/UDFRegistration.scala ---
@@ -109,9 +109,10 @@ class UDFRegistration private[sql
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16553
LGTM. cc @marmbrus for final sign off
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16517#discussion_r96131580
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/HadoopMapReduceCommitProtocol.scala
---
@@ -99,7 +99,7 @@ class HadoopMapReduceCommitProtocol
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16517
Left a few comments. I am not 100% sure whether `HiveFileFormat` can
completely replace the existing writer containers, but the other changes look
good to me.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16565
Thanks! Merging to 2.0
Could you please close it?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16586#discussion_r96142347
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -221,8 +221,8 @@ class HiveDDLSuite
sql
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16587
[SPARK-19229] [SQL] Disallow Creating Hive Source Tables when Hive Support
is Not Enabled
### What changes were proposed in this pull request?
It is weird to create Hive source tables when
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16587#discussion_r96145363
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -370,22 +370,6 @@ trait CheckAnalysis extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16587#discussion_r96145383
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -758,15 +758,17 @@ class DDLSuite extends QueryTest with
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16588
[SPARK-19092] [SQL] [Backport-2.1] Save() API of DataFrameWriter should not
scan all the saved files #16481
### What changes were proposed in this pull request?
This PR is to
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16528
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14971
Recently, CBO has introduced many changes in this area. Let me revisit it.
Thank you for the reviews! @cloud-fan
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16588
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16588
cc @cloud-fan
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16373#discussion_r96149748
--- Diff: sql/core/src/test/resources/sql-tests/results/show-tables.sql.out
---
@@ -128,62 +128,108 @@ SHOW TABLE EXTENDED
-- !query 13
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16587
The following test cases failed:
- change-column.sql
- describe.sql
- show-tables.sql
- show_columns.sql
All these test cases are creating hive serde tables. We might need
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16587
cc @cloud-fan @yhuai @hvanhovell Any comment?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16373#discussion_r96150183
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -619,18 +621,34 @@ case class ShowTablesCommand
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16528
ok to test
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/16588
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16588
Thanks!
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/16592
[SPARK-19235] [SQL] [TESTS] Enable Test Cases in DDLSuite with Hive
Metastore
### What changes were proposed in this pull request?
So far, the test cases in DDLSuites only verify the
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16592#discussion_r96166347
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -102,6 +76,198 @@ class DDLSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16592#discussion_r96166442
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -102,6 +76,198 @@ class DDLSuite extends QueryTest with
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16592
cc @cloud-fan I think we really need to do this ASAP to improve the test
coverage of the DDL commands; I noticed this while working on the PR:
https://github.com/apache/spark/pull/16587
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16517
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16583#discussion_r96276482
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalogSuite.scala
---
@@ -937,10 +985,22 @@ class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16592
Sure, no problem.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16586#discussion_r96278101
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -221,8 +221,8 @@ class HiveDDLSuite
sql
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16599
ok to test
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16583#discussion_r96349730
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -568,7 +569,9 @@ private[hive] class HiveClientImpl
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16599
Have you manually tested your code changes?
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16599#discussion_r96355936
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -431,6 +432,8 @@ def jdbc(self, url, table, column=None,
lowerBound=None, upperBound=None, numPar
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14559
This is a pretty general issue for JDBC users. Could we backport it to
Spark 2.0?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16597
Just FYI, this only tests the behavior of InMemoryCatalog. I will port it
to `HiveDDLSuite` in https://github.com/apache/spark/pull/16592
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16599#discussion_r96357852
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -431,6 +432,8 @@ def jdbc(self, url, table, column=None,
lowerBound=None, upperBound=None, numPar
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16606#discussion_r96358416
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionProviderCompatibilitySuite.scala
---
@@ -481,4 +481,27 @@ class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16339#discussion_r96360640
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -445,21 +445,28 @@ case class DataSource
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16597
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16598#discussion_r96361477
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2603,6 +2603,21 @@ class Dataset[T] private[sql](
def
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16599#discussion_r96362408
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -429,8 +430,10 @@ def jdbc(self, url, table, column=None,
lowerBound=None, upperBound=None, numPar
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16587
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/16606
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15318#discussion_r81646164
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/ExpressionSQLBuilderSuite.scala
---
@@ -119,4 +121,18 @@ class ExpressionSQLBuilderSuite
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81655398
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -46,17 +45,18 @@ object JDBCRDD extends Logging
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81655584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,13 +17,28 @@
package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
@HyukjinKwon One comment about the documentation: could you please emphasize
that these JDBC properties are case sensitive? Thanks!
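The case-sensitivity point above can be sketched with a plain Python dict standing in for the JDBC options map (the property names here are illustrative, and this is not Spark's actual JDBCOptions implementation):

```python
# JDBC-style options held in an ordinary, case-sensitive map (illustrative only)
options = {"partitionColumn": "id", "numPartitions": "4"}

# An exact-case lookup succeeds...
print("partitionColumn" in options)   # True
# ...while a differently cased key is simply a different key and misses
print("partitioncolumn" in options)   # False
```

This is why documenting the exact casing matters: in a case-sensitive map, a key written in the wrong case is silently treated as absent rather than rejected with an error.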
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81685853
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -46,17 +45,18 @@ object JDBCRDD extends Logging
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14828
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81698298
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -46,17 +45,18 @@ object JDBCRDD extends Logging
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81802372
--- Diff: docs/sql-programming-guide.md ---
@@ -1048,28 +1049,42 @@ the Data Sources API. The following options are
supported:
partitionColumn
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81802909
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -35,7 +50,12 @@ class JDBCOptions(
val
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81804213
--- Diff: docs/sql-programming-guide.md ---
@@ -1024,6 +1024,7 @@ the Data Sources API. The following options are
supported:
The JDBC URL to
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r81804799
--- Diff: docs/sql-programming-guide.md ---
@@ -1463,7 +1478,7 @@ Prior to Spark 1.3 there were separate Java
compatible classes (`JavaSQLContext
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14531
@sitalkedia So far, you can set table properties on the new table by using a
DDL command.
@rxin @cloud-fan @yhuai Let me know if you need me to submit a PR to make
such a change. I
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/15358
Hide Credentials in CREATE and DESC FORMATTED/EXTENDED a PERSISTENT/TEMP
Table for JDBC
### What changes were proposed in this pull request?
(Please fill in changes proposed in this
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15358
@rxin Sorry, I did not finish the PR description last night; the connection
was broken on the train. Will fix it soon.
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15358#discussion_r82001717
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -52,9 +52,15 @@ case class CatalogStorageFormat
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14531
@cloud-fan Hive does not copy the table properties in CREATE TABLE LIKE
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15318
Will make a try. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15318
retest this please
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14828
I tried it in DB2. It sounds like the original behavior is correct. Let me
do more research
```
db2 => SELECT 3.14, -3.14, 3.14e8, 3.14e-8, -3.14e8, -3.14e-8, 3.14e+8,
3.14E8, 3.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14828
`10.00` is an exact numeric value. `10.0e10` is an approximate numeric
value, or a floating point number. That is the major reason why they should
have different data types.
For
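The exact-versus-approximate distinction drawn above can be sketched in Python, using the standard decimal module as a stand-in for SQL's exact DECIMAL type (this illustrates the general type-system rule, not Spark's literal parser specifically):

```python
from decimal import Decimal

# "10.00" as an exact numeric: precision and scale are preserved
exact = Decimal("10.00")

# "10.0e10" in scientific notation is a binary floating-point value: approximate
approx = 10.0e10

print(type(exact).__name__)        # Decimal
print(type(approx).__name__)       # float

# The exact value keeps its two fractional digits significant (exponent -2)
print(exact.as_tuple().exponent)   # -2
```

The same split is what gatorsmile argues for here: a plain decimal literal maps to an exact type, while a literal written with an exponent maps to an approximate floating-point type.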
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14828
@hvanhovell Sure, I am fine as long as you are aware of it. The data types
of literals and constants are not well documented. This
[link](http://www.ibm.com/support/knowledgecenter/SSEPGG_9.7.0
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15363
ok to test
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15363
CC @rxin @hvanhovell @cloud-fan @srinathshankar @davies @marmbrus : )
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15363
The design doc can be downloaded from the link:
https://issues.apache.org/jira/secure/attachment/12831827/StarJoinReordering1005.doc
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15360
Thank you! Will review it tonight or tomorrow morning.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15318
Merging to master. Thanks!
I think we also need to merge it to the 2.0 branch, right?
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15318
Thank you!
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82279052
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -405,6 +405,78 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82293578
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -62,7 +62,7 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82297526
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -405,6 +405,78 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82297698
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/StatisticsSuite.scala ---
@@ -405,6 +405,78 @@ class StatisticsSuite extends QueryTest with
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15351
@hvanhovell If you are busy, I can take a look at this.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15383
LGTM
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15383
Merged to 2.0.
@dongjoon-hyun Could you please close it? Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15190
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82306624
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
Will continue the review tonight. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14531
@sitalkedia Yeah, I saw it. Thank you for the investigation. Normally, we do
not want to add many configuration flags; it hurts usability. Let @rxin
decide whether we should add another
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329341
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329571
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82329644
--- Diff: docs/sql-programming-guide.md ---
@@ -1014,16 +1014,31 @@ bin/spark-shell --driver-class-path
postgresql-9.4.1207.jar --jars postgresql-9
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82330973
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,47 +17,130 @@
package
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15292#discussion_r82332285
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCOptions.scala
---
@@ -17,47 +17,130 @@
package
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15292
BTW, I finished the review. LGTM except the above comments. Let @cloud-fan
double check it. Thanks!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15263
retest this please
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15360#discussion_r82431001
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -62,7 +62,7 @@ case class
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15360
We need a test case for Hive serde tables; so far, I have not found any test
case that covers them.
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15263
Thanks! Merging to master!
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15393
LGTM pending tests