Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94280463
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -103,6 +103,10
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94280478
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -111,9 +115,15
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Thank you for the review, @gatorsmile.
Happy new year!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16426
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16443
Hi, @hustfxj .
If you add `[SPARK-19042]` to the title of this PR as a prefix, the
Apache JIRA issue will be changed to `IN PROGRESS`.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94286661
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -111,9 +115,15
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16320#discussion_r94358365
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -85,7 +85,9 @@ private[csv] object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16320#discussion_r94358461
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -85,7 +85,9 @@ private[csv] object
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Thank you again, @cloud-fan and @HyukjinKwon . I updated the fallback
datatype.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
I assumed this one. Right?
```scala
val path = "/tmp/test1"
Seq(s"${Long.MaxValue}1", "2015-12-01 00:00:00",
"1").toDF().coalesce(1
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Yep, I added the test case too, @gatorsmile.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
I see, @gatorsmile . I will try to make a PR to improve the coverage.
For this issue, the failure on the last commit (adding a test case) was an
R failure, so it's irrelevant.
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Thank you, @cloud-fan !
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Sure. I'll create a backport PR for 2.1.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16463
[SPARK-18877][SQL][BACKPORT-2.1] `CSVInferSchema.inferField` on DecimalType
should find a common type with `typeSoFar`
## What changes were proposed in this pull request?
CSV type
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94495291
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -111,9 +115,15
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16463
Hi, @gatorsmile .
This is a backport of https://github.com/apache/spark/pull/16320 .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16463
Thank you for the review, @jaceklaskowski.
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/16463
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16472
[SPARK-18877][SQL][BACKPORT-2.0] `CSVInferSchema.inferField` on DecimalType
should find a common type with `typeSoFar`
## What changes were proposed in this pull request?
CSV type
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16472
Hi, @gatorsmile .
This is a backport for `branch-2.0` of #16320 .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12583
Hi, @bomeng .
I hit the same issue. Could you update the PR?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16472
Thank you for merging!
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/16472
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Thank you again, @srowen .
Hi, @alicegugu, @ericl, @rxin. Could you give me your opinions on this?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94838239
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -50,8 +50,8
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94839678
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -103,6 +103,10
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
The PR is updated as follows.
- Add `SQLConf.THRIFTSERVER_INCREMENTAL_COLLECT`.
- Remove `private def useIncrementalCollect`.
- Add description for `resultList` variable
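For illustration, such a flag follows the SQLConf builder pattern. Below is a minimal self-contained sketch of that shape; the builder class here is a simplified stand-in, not Spark's actual implementation, and the key string is an assumption based on the PR's config name.

```scala
// Simplified stand-in for Spark's SQLConf entry builder (illustrative only).
final case class ConfigEntry[T](key: String, defaultValue: T, doc: String)

final class ConfigBuilder(key: String) {
  private var docString: String = ""
  def doc(s: String): ConfigBuilder = { docString = s; this }
  def createWithDefault(default: Boolean): ConfigEntry[Boolean] =
    ConfigEntry(key, default, docString)
}

// A boolean flag in the style of SQLConf.THRIFTSERVER_INCREMENTAL_COLLECT;
// the exact key and doc text are assumptions for this sketch.
val THRIFTSERVER_INCREMENTAL_COLLECT: ConfigEntry[Boolean] =
  new ConfigBuilder("spark.sql.thriftServer.incrementalCollect")
    .doc("When true, collect query results incrementally, partition by partition.")
    .createWithDefault(false)
```

The builder style keeps each entry's key, default, and documentation in one place, which is why removing the ad-hoc `useIncrementalCollect` helper in favor of a config entry simplifies the code.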
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Hi, @ericl and @srowen .
If there is something to do more, please let me know.
Thank you always.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94989356
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -309,6 +309,12 @@ object SQLConf {
.stringConf
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16400
Hi @gatorsmile .
Could you review this when you have some time?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Thank you for approving, @srowen .
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16400#discussion_r95061882
--- Diff: docs/sql-programming-guide.md ---
@@ -1362,6 +1362,13 @@ options.
- Dataset and DataFrame API `explode` has been deprecated
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16400#discussion_r95061877
--- Diff: docs/sql-programming-guide.md ---
@@ -1362,6 +1362,13 @@ options.
- Dataset and DataFrame API `explode` has been deprecated
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16400
Thank you, @gatorsmile . I updated the commit and description of this PR.
You can see the new image from generated doc.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16400
Thank you for merging, @gatorsmile !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16515#discussion_r95222673
--- Diff:
examples/src/main/python/mllib/binary_classification_metrics_example.py ---
@@ -18,25 +18,20 @@
Binary Classification Metrics Example
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16522
[SPARK-19137][SQL][SS] Garbage left in source tree after SQL tests ran
## What changes were proposed in this pull request?
`DataStreamReaderWriterSuite` makes test files in source
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Oh, I see. Then I'll look into the `temp folder` generation code and fix
that.
Thank you for the review, @vanzin and @zsxwing.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
I found the root cause.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Hi, @vanzin and @zsxwing .
It was a bug in `withSQLConf`.
I think this is the correct fix, but we need to see the result of the whole
test run because this is a test-utility issue.
---
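A `withSQLConf`-style test helper generally follows a save-and-restore pattern: override the given keys for the duration of a block, then put the old values back, removing keys that were unset before. The sketch below is not Spark's actual code; `conf` is a plain mutable map standing in for SQLConf.

```scala
import scala.collection.mutable

// Stand-in for SQLConf: a plain mutable key/value map.
val conf = mutable.Map[String, String]()

// Set the given pairs while `body` runs, then restore the previous state.
def withSQLConf[T](pairs: (String, String)*)(body: => T): T = {
  val keys = pairs.map(_._1)
  val previous = keys.map(conf.get)          // snapshot before overriding
  pairs.foreach { case (k, v) => conf(k) = v }
  try body
  finally keys.zip(previous).foreach {
    case (k, Some(old)) => conf(k) = old     // restore the prior value
    case (k, None)      => conf.remove(k)    // the key was unset before
  }
}
```

The essential point is the `finally` block: restoration must happen even when the body throws, otherwise one test's settings leak into the next, which is the kind of test-utility bug described above.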
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16522#discussion_r95281973
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -94,7 +94,13 @@ private[sql] trait SQLTestUtils
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Thank you. I updated it.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Hi, @srowen .
Could you merge this PR?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Hi, @vanzin and @zsxwing .
The PR passes the tests. Could you review this PR again?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Thank you for merging, @srowen !
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16522#discussion_r95417670
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -94,7 +94,13 @@ private[sql] trait SQLTestUtils
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16522#discussion_r95418435
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala ---
@@ -94,7 +94,13 @@ private[sql] trait SQLTestUtils
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Hi, @srowen. May I create a backport for 2.0 and 2.1?
https://github.com/apache/spark/pull/14218 was merged into branch-2.0 and
branch-2.1.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16522
Thank you, @zsxwing and @vanzin !
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16440
Thank you so much, @srowen !
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12583
Hmm. This PR seems to be stale. Actually, what I wanted was a **sorted**
result of `SET` and `SET -v`.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16579
[SPARK-19218][SQL] SET command should show a result sorted by key
## What changes were proposed in this pull request?
Currently, `SET` command shows unsorted result. We had better
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/12583
I made another #16579 for `SET` for a sorted result.
---
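The change proposed in #16579 boils down to sorting the `(key, value)` rows by key before returning them. A minimal sketch of that transformation, with made-up config keys:

```scala
// Example (key, value) rows as SET might return them; the keys and values
// here are made up for illustration.
val rows = Seq(
  "spark.sql.shuffle.partitions" -> "200",
  "spark.app.name" -> "demo",
  "spark.master" -> "local[*]"
)

// The fix: return the rows sorted by key instead of in arbitrary order.
val sortedRows = rows.sortBy(_._1)
```

A deterministic, sorted listing makes the output of `SET` and `SET -v` easier to scan and to diff across runs.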
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16579#discussion_r96065519
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala
---
@@ -79,7 +79,7 @@ case class SetCommand(kv: Option
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
Thank you for the approval, @srowen!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
@gatorsmile Thank you for the review and approval, too!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
Interesting. The only failure was a new test case.
```
[info] - SET commands should return a list sorted by key *** FAILED *** (18
milliseconds)
[info] java.lang.RuntimeException
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16579
The failure does not happen on my local PC, but it always happens in Jenkins.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16605
Sure, @maropu . I'll do that tomorrow morning (PST).
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15302
Thank you for the review, @hvanhovell! Do you mean the SQL grammar or
`listPartition`?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15302
Recently, I've watched you improve those related functions greatly.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15302
I see. Then, how can I evaluate the generic expression? Is it okay to use
`eval(null)`?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15302
Thank you for the direction. I'll proceed to improve it in that way.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15336
[SPARK-17767][SQL] Support user-provided ExternalCatalog
## What changes were proposed in this pull request?
Currently, Spark supports ExternalCatalog, but the usage is limited to
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15318#discussion_r81647109
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/catalyst/ExpressionSQLBuilderSuite.scala
---
@@ -119,4 +121,18 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15302
The only failure looks irrelevant. Anyway, I'm revising the PR.
```scala
[info] *** 1 SUITE ABORTED ***
[error] Error: Total 2604, Failed 0, Errors 1, Passed 2603, Ignor
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/15336
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15336
The issue is closed as 'Later'.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15339
Hi, @yashbopardikar .
Could you close this PR? It seems wrong. :)
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15325
Thank you so much for reviewing and merging, @rxin.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15351
[SPARK-17612][SQL] Support `DESCRIBE table PARTITION` SQL syntax
## What changes were proposed in this pull request?
This is a backport of SPARK-17612. This implements `DESCRIBE
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15351
Hi, @hvanhovell .
This is the backport of https://github.com/apache/spark/pull/15168 . Is
there any chance to be merged into branch-2.0?
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15357
[SPARK-17328][SQL] Fix NPE with EXPLAIN DESCRIBE TABLE
## What changes were proposed in this pull request?
This PR fixes the following NPE scenario.
**Reported Error
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15357#discussion_r81900796
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -265,7 +265,9 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15357#discussion_r81901737
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -265,7 +265,9 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15318
Hi, @gatorsmile .
Could you merge this PR? :)
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15318
Thank you, @gatorsmile .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15357
Thank you, @hvanhovell !
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15351
Hi, @hvanhovell .
Could you give your opinion on this backport when you have some time?
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15376
[SPARK-17796][SQL] Support wildcard character in filename for LOAD DATA
LOCAL INPATH
## What changes were proposed in this pull request?
Currently, Spark 2.0 raises a `input path
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15318
Thank you, @rxin and @gatorsmile .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15318
Sure. I'll make a backport for this.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15376
Retest this please.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15376
The only test failure is irrelevant. Locally, it passed.
```
[info] - MAP append/extract *** FAILED *** (2 milliseconds)
[info] java.lang.IllegalArgumentException
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/15383
[SPARK-17750][SQL][BACKPORT-2.0] Fix CREATE VIEW with INTERVAL arithmetic
## What changes were proposed in this pull request?
Currently, Spark raises `RuntimeException` when creating
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15383
Hi, @gatorsmile .
This is a backport of
https://github.com/apache/spark/commit/92b7e5728025b1bb6ed3aab5f1753c946a73568c
.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15383
Thank you so much, @gatorsmile !
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/15383
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15351
Could you review this backport, @gatorsmile ?
---
Github user dongjoon-hyun closed the pull request at:
https://github.com/apache/spark/pull/15351
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15351
Thank you so much, @hvanhovell .
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14426
Hi, @gatorsmile .
Could you review this PR when you have some time?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15393#discussion_r82449361
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -70,7 +70,7 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15393#discussion_r82449290
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -70,7 +70,7 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15393#discussion_r82450501
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -70,7 +70,7 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15393
BTW, could you update the PR description, @HyukjinKwon? Maybe remove the
`exists` part?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15393
Yep, I knew. What I'm saying is that we don't need to explain `contains`
usage here in this PR.
I prefer not to advertise `contains` for `Option[Boolean]`.
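For reference, the two spellings under discussion are equivalent; a minimal sketch (the variable names are made up for illustration):

```scala
// An Option[Boolean] such as a nullable flag parsed from options.
val flag: Option[Boolean] = Some(true)

// Terse form: true only when the option holds exactly `true`.
val viaContains = flag.contains(true)

// Explicit form saying the same thing plainly.
val viaEquality = flag == Some(true)
```

The objection above is about readability, not correctness: `Option[Boolean].contains(true)` is easy to misread at a glance, while the explicit comparison leaves no doubt.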