Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16858
Thank you for review and merging, @srowen !
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16795
At least, `spark-master-test-maven-hadoop-2.6` goes green.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16859
Retest this please
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16865#discussion_r100422867
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -1004,7 +1016,8 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16865#discussion_r100423521
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -1004,7 +1016,8 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16865#discussion_r100431612
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -1004,7 +1016,8 @@ object
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16871#discussion_r100601823
--- Diff: build/sbt-launch-lib.bash ---
@@ -112,12 +112,9 @@ addDebugger () {
# so they need not be dicked around with individually
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16896
Retest this please.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16896#discussion_r100680623
--- Diff: python/pyspark/sql/tests.py ---
@@ -1435,6 +1435,12 @@ def test_time_with_timezone(self):
self.assertEqual(now, now1
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16896#discussion_r100680863
--- Diff: python/pyspark/sql/tests.py ---
@@ -1435,6 +1435,12 @@ def test_time_with_timezone(self):
self.assertEqual(now, now1
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16882#discussion_r100681737
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala
---
@@ -116,7 +116,7 @@ object TypeCoercion
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16909#discussion_r100864950
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArraySuite.scala
---
@@ -0,0 +1,300
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16909#discussion_r100868235
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
---
@@ -0,0 +1,179
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16909#discussion_r100870912
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
---
@@ -0,0 +1,179
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16909#discussion_r100878983
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/window/WindowExec.scala
---
@@ -285,6 +283,9 @@ case class WindowExec(
val
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16909#discussion_r100880685
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExternalAppendOnlyUnsafeRowArray.scala
---
@@ -0,0 +1,179
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16909
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14426
Oh.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16925
@rxin . May I update that further, or do you want to finish here?
Either way is okay with me. Anyway, thank you for everything!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16926
Hi, @HyukjinKwon .
Could you fix java linter errors together here?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16925
Except for the doc you mentioned, it looks great to me, @rxin !
It's great to have this in Spark SQL finally.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16926
nit. In the PR description, a typo: `jeykill` -> `jekyll`
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16896
Hi, @davies .
Could you review this PR?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16926
I removed my previous comment about `java linter` errors here.
Never mind about that.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101122783
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -1002,8 +1012,12 @@ SIMPLE_COMMENT
: '--&
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101122924
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -1002,8 +1012,12 @@ SIMPLE_COMMENT
: '--&
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101129302
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala
---
@@ -493,4 +493,46 @@ class PlanParserSuite
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16925#discussion_r101129830
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -374,6 +374,16 @@ querySpecification
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16933#discussion_r101163037
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -563,23 +563,27 @@ object CollapseProject
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16941#discussion_r101328119
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/PlanParserSuite.scala
---
@@ -524,7 +530,7 @@ class PlanParserSuite
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16943
[SPARK-19607][HOTFIX] Finding QueryExecution that matches provided
executionId
## What changes were proposed in this pull request?
#16940 adds a test case which does not stop the
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16940
Hi, @rxin and @ala .
This seems to cause test failures.
Could you review the hotfix #16943 ?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16943#discussion_r101382836
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLExecutionSuite.scala
---
@@ -129,6 +129,8 @@ class SQLExecutionSuite extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16943#discussion_r101383754
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/SQLExecutionSuite.scala
---
@@ -129,6 +129,8 @@ class SQLExecutionSuite extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16739
Hi, @felixcheung .
While backporting,
https://github.com/apache/spark/commit/6c35399068f1035fec6d5f909a83a5b1683702e0#diff-3d2a6b9d2b7d84ae179d7ea0f9eca696R1232
seems to break the build
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16739
Thank YOU, always! :)
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16961
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16961
+1 LGTM!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16964
+1, LGTM, too. Very tidy!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16366
Thank you, @viirya and @srowen !
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16337
Retest this please.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16337#discussion_r93820846
--- Diff:
sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-group-by.sql
---
@@ -0,0 +1,117 @@
+-- A test suite for GROUP BY
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16400
[SPARK-18941][SQL][DOC] Add a new behavior document on `CREATE/DROP TABLE`
with `LOCATION`
## What changes were proposed in this pull request?
This PR adds a new behavior change
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r93859518
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -112,7 +112,25 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r93864198
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -112,7 +112,25 @@ object JdbcUtils extends
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16409
[SPARK-19004][SQL] Fix `testH2Dialect` by removing `getCatalystType`
## What changes were proposed in this pull request?
`JdbcDialect` subclasses should return `None` by default
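The return-`None`-by-default idea can be modeled in a few lines of plain Scala. The trait and names below are illustrative only, not Spark's actual `JdbcDialect` interface: a dialect that does not override the type mapping returns `None`, which defers to the framework's built-in JDBC-to-Catalyst conversion.

```scala
// Illustrative model only -- not Spark's real JdbcDialect API.
// A dialect may override the type mapping; the None default defers
// to the framework's built-in conversion.
trait Dialect {
  def getCatalystType(sqlType: Int): Option[String] = None
}

// A test dialect that (correctly) inherits the None default.
object TestH2Dialect extends Dialect

def mapType(dialect: Dialect, sqlType: Int): String =
  dialect.getCatalystType(sqlType).getOrElse("builtin-mapping")

assert(mapType(TestH2Dialect, java.sql.Types.VARCHAR) == "builtin-mapping")
```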
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16409
Hi, @gatorsmile .
Could you review this PR?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16405#discussion_r93883412
--- Diff: dev/lint-python ---
@@ -23,6 +23,7 @@ PATHS_TO_CHECK="./python/pyspark/
./examples/src/main/python/ ./dev/sparktestsup
# TODO: fix
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Hi, @gatorsmile .
Could you review this PR when you have some time?
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16409#discussion_r93901001
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -44,9 +44,6 @@ class JDBCWriteSuite extends SharedSQLContext
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16409#discussion_r93901957
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -44,9 +44,6 @@ class JDBCWriteSuite extends SharedSQLContext
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16409#discussion_r93902254
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -44,9 +44,6 @@ class JDBCWriteSuite extends SharedSQLContext
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16409#discussion_r93902654
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -44,9 +44,6 @@ class JDBCWriteSuite extends SharedSQLContext
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16409#discussion_r93903615
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCWriteSuite.scala ---
@@ -44,9 +44,6 @@ class JDBCWriteSuite extends SharedSQLContext
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16409
Thank you for review and merging!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
The PR is updated to
- get table schema once in `createRelation`
- respect `spark.sql.caseSensitive`
For `insertStatement`, I thought it seemed better to keep the current
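The `spark.sql.caseSensitive` point above can be sketched in plain Scala. The helper name and shape here are invented for illustration, not the PR's actual code: resolving a DataFrame column against the table schema fetched once in `createRelation`, either by exact match or case-insensitively.

```scala
// Sketch with invented names -- not the PR's actual implementation.
// Resolve a column name against the JDBC table's schema, honoring a
// caseSensitive flag analogous to spark.sql.caseSensitive.
def resolveColumn(
    tableColumns: Seq[String],
    name: String,
    caseSensitive: Boolean): Option[String] =
  if (caseSensitive) tableColumns.find(_ == name)
  else tableColumns.find(_.equalsIgnoreCase(name))

// With case-insensitive resolution, "id" matches the table's "ID";
// with case-sensitive resolution it does not.
assert(resolveColumn(Seq("ID", "NAME"), "id", caseSensitive = false) == Some("ID"))
assert(resolveColumn(Seq("ID", "NAME"), "id", caseSensitive = true).isEmpty)
```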
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94004271
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,52 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94004299
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,52 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94004348
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,52 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94004565
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,52 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94007008
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -60,23 +60,27 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
The failure is not related to this PR.
```
[info] - fatal errors from a source should be sent to the user *** FAILED
*** (84 milliseconds)
[info
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Wow. Thank you for the code.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94079276
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94079568
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16320#discussion_r94081192
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchemaSuite.scala
---
@@ -114,4 +114,11 @@ class
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16426
+1
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16426
Retest this please
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16426
Hmm. Last time,
```
[error] java.util.concurrent.ExecutionException:
java.lang.OutOfMemoryError: GC overhead limit exceeded
```
And, this time,
```
[info
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16426
Retest this please
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16427
[SPARK-19012][SQL] Fix `createTempViewCommand` to throw AnalysisException
instead of ParseException
## What changes were proposed in this pull request?
Currently, `createTempView
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16427#discussion_r94093403
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1518,14 +1518,16 @@ class DataFrameSuite extends QueryTest with
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16427#discussion_r94093354
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2585,9 +2586,12 @@ class Dataset[T] private[sql](
* Creates a
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16427
cc @hvanhovell and @gatorsmile .
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16427#discussion_r94093910
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2585,9 +2586,12 @@ class Dataset[T] private[sql](
* Creates a
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16427#discussion_r94094190
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2585,9 +2586,12 @@ class Dataset[T] private[sql](
* Creates a
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094524
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,10 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094789
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94094859
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94095014
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16320
Could you review this `CSVInferSchema` issue again, @gatorsmile ?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16427
Hi, @hvanhovell .
Could you review this `createTempViewCommand` PR again?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16400
Hi, @gatorsmile .
Could you review this `CREATE TABLE ... LOCATION` document issue when you
have some time?
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Thank you for deep review again, @viirya and @gatorsmile .
I'll update the PR soon.
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94175664
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -568,10 +617,9 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/16427
Thank you always, @hvanhovell !
Happy new year! :)
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94177293
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94177466
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -211,6 +211,55 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Hi, @gatorsmile .
I have no objections to your patch! So, I merged it into this PR.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
The only failure is irrelevant to this PR.
```
[info] StreamSuite:
[info] - fatal errors from a source should be sent to the user *** FAILED
*** (101 milliseconds)
[info
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Retest this please
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94210039
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -108,14 +108,32 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94211157
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -57,26 +57,28 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94216756
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -67,16 +68,18 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/15664#discussion_r94216803
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -108,14 +108,36 @@ object JdbcUtils extends
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
The only failure is irrelevant to this PR.
```
[info] StreamSuite:
[info] - fatal errors from a source should be sent to the user *** FAILED
*** (84 milliseconds)
[info
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Retest this please.
---
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/16440
[SPARK-18857][SQL] Don't use `Iterator.duplicate` for `incrementalCollect`
in Thrift Server
## What changes were proposed in this pull request?
To support `FETCH_FIRST`, SPARK-
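For context on why `Iterator.duplicate` is problematic here: the Scala standard library's `duplicate` buffers every element consumed by one of the two returned iterators until the other catches up, so fully draining one side (as a result replay for `FETCH_FIRST` would) effectively holds the entire result set in memory. A minimal illustration:

```scala
// Iterator.duplicate returns two iterators over the same elements.
// Internally it queues each element until both sides have passed it,
// so draining one side first buffers the whole sequence in memory.
val (first, second) = Iterator.range(0, 5).duplicate
val drained = first.toList // forces all elements into the internal buffer
assert(drained == List(0, 1, 2, 3, 4))
assert(second.toList == List(0, 1, 2, 3, 4)) // replayed from the buffer
```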
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/15664
Thank you, @gatorsmile .
Happy New Year! :)
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16440#discussion_r94280218
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -50,8 +50,8