Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19613
Hi @ganeshchand, could you also fix the typo in `JdbcUtils.scala`? Thanks!
#L459 underling => underlying
---
-
Github user jmchung closed the pull request at:
https://github.com/apache/spark/pull/19604
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19604
Sure, I'll close it and thank you and @viirya.
---
-
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19604
I found that on this branch the Docker-based integration tests fail because
the image `wnameless/oracle-xe-11g:14.04.4` cannot be pulled. Should we move
to `wnameless/oracle-xe-11g`?
```
Error
```
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19604#discussion_r147605119
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -440,8 +440,9 @@ object JdbcUtils extends Logging
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19604
cc @cloud-fan, the follow-up PR for 2.2, thanks!
---
-
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/19604
[SPARK-22291][SQL][FOLLOWUP] Conversion error when transforming array types
of uuid, inet and cidr to StringType in PostgreSQL
… types of uuid, inet and cidr to StringType in PostgreSQL
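For context, a minimal sketch of the conversion issue: PostgreSQL's JDBC driver hands back `uuid` values as `java.util.UUID` objects, so an array of them cannot be cast directly to `Array[String]`; each element has to go through `toString`. The helper below is illustrative only, not the actual Spark code.

``` scala
import java.util.UUID

// Hypothetical helper: convert a JDBC-returned object array to strings
// element by element, preserving nulls.
def elementsToStrings(array: Array[AnyRef]): Array[String] =
  array.map(e => if (e == null) null else e.toString)

val uuids: Array[AnyRef] =
  Array(UUID.fromString("123e4567-e89b-12d3-a456-426655440000"), null)
val converted = elementsToStrings(uuids)
```

A direct `array.asInstanceOf[Array[String]]` would throw a `ClassCastException` on such data, which is the kind of failure this change addresses.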
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147581349
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
---
@@ -134,11 +149,28 @@ class
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147579149
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -456,8 +456,10 @@ object JdbcUtils extends
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
gentle ping @cloud-fan and @viirya, there is some feedback about the
behavior of converting objects to strings.
``` scala
case StringType =>
  (array: Object) =>
    array.asInstanceOf
```
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
Thanks @viirya, the title has been changed. Please correct me if the
modified title is still inappropriate.
---
-
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147542014
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
---
@@ -134,11 +149,28 @@ class
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147316748
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -456,8 +456,17 @@ object JdbcUtils extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147314628
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -456,8 +456,17 @@ object JdbcUtils extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147312311
--- Diff:
external/docker-integration-tests/src/test/scala/org/apache/spark/sql/jdbc/PostgresIntegrationSuite.scala
---
@@ -18,7 +18,7 @@
package
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19567#discussion_r147309952
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -456,8 +456,17 @@ object JdbcUtils extends
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
Thanks @HyukjinKwon :)
---
-
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
gentle ping @wangyum and @viirya, the test case has been added to
`PostgresIntegrationSuite`. As @viirya mentioned earlier, the data types
`inet[]` and `cidr[]` did not work as instances of String
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
Thanks @wangyum and @viirya, I'll add the corresponding tests in
`PostgresIntegrationSuite`.
To @viirya, I'm not sure if the other data types will work; I'll consider
including them in the tests.
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
cc @viirya Can you help review this? Thanks.
---
-
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/19567
[SPARK-22291] Postgresql UUID[] to Cassandra: Conversion Error
## What changes were proposed in this pull request?
This PR fixes the conversion error when reading data from a PostgreSQL
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19199
Thanks @HyukjinKwon and @viirya :)
---
-
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19199#discussion_r138287636
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -109,6 +109,20 @@ class CSVFileFormat extends
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19199
cc @gatorsmile, @HyukjinKwon and @viirya. Could you help review this?
Thanks.
---
-
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/19199
[SPARK-21610][SQL][FOLLOWUP] Corrupt records are not handled properly when
creating a dataframe from a file
## What changes were proposed in this pull request?
When the `requiredSchema
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
@gatorsmile Sure, I'll make a follow-up PR for CSV.
Many thanks for everyone's feedback on this patch; I really learned a lot
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
cc @gatorsmile Please take another look when you have time. I've already
updated. Thanks!
---
-
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
@gatorsmile those negative cases are already added in `JsonSuite`.
---
-
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18865#discussion_r137928316
--- Diff: docs/sql-programming-guide.md ---
@@ -1542,6 +1542,10 @@ options.
# Migration Guide
+## Upgrading From Spark SQL 2.2 to 2.3
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18865#discussion_r137924841
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
Thank you for review, @HyukjinKwon.
cc @gatorsmile
Could you review this again when you have some time? Thanks
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
@viirya, thank you so much for taking a look and your time.
---
-
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
Could @gatorsmile and @HyukjinKwon please share some guidance on revising
the exception message? The current message explains the reason for the
disallowance when users just select the
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18865#discussion_r136715367
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala
---
@@ -113,6 +113,18 @@ class JsonFileFormat
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
To @viirya and @gatorsmile, I made some modifications as follows:
1. Moved the check of `_corrupt_record` out of the function block to fail
fast on the driver side instead of the executor side.
2
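A rough sketch of what a driver-side fast-fail check like the one in point 1 could look like. The simplified schema type, method name, and message below are assumptions for illustration, not the actual Spark implementation.

``` scala
// Simplified stand-in for Spark's StructField, for illustration only.
case class FieldRef(name: String)

// Hypothetical check: reject a required schema that consists solely of the
// corrupt-record column, so the query fails fast on the driver rather than
// at task execution time on an executor.
def checkRequiredSchema(
    requiredSchema: Seq[FieldRef],
    corruptRecordCol: String = "_corrupt_record"): Unit = {
  if (requiredSchema.map(_.name) == Seq(corruptRecordCol)) {
    throw new IllegalArgumentException(
      s"Referencing only '$corruptRecordCol' is not allowed; select other " +
        "columns as well, or cache the parsed result first.")
  }
}
```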
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
@gatorsmile Thanks for your feedback.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
@viirya Thanks, the description of PR has been updated.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
Thanks for @viirya's suggestion; the redundant comment has been removed and
`withTempPath` is applied in the test case.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18865
gentle ping @viirya, @gatorsmile, made a minor change to throw a message
with a reasonable workaround.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
Thanks @viirya, @HyukjinKwon and @gatorsmile.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@gatorsmile ok and really thanks for all the nice comments.
---
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19017#discussion_r134925669
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -447,7 +448,18 @@ case class JsonTuple
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@viirya PR title fixed, thanks.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
retest this please
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@HyukjinKwon @viirya I replaced the functional transformations with a while
loop.
What do you think about this? Thanks.
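As a general illustration of that kind of rewrite (this is not the PR's actual code; all names below are made up), an index-based while loop can fill every requested column matching a field name in one pass, without the intermediate collections a functional chain would allocate:

``` scala
// Hypothetical example: write `value` into every slot of `row` whose
// requested field name equals `name`, using an index-based while loop.
def fillMatching(fieldNames: Array[String], name: String, value: String,
                 row: Array[String]): Unit = {
  var i = 0
  while (i < fieldNames.length) {
    if (fieldNames(i) == name) row(i) = value
    i += 1
  }
}

val names = Array("a", "b", "a")
val row = new Array[String](names.length)
fillMatching(names, "a", "1", row)
```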
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
@HyukjinKwon That's a good point, thanks.
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19017
cc @viirya
---
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/19017
SPARK-21804: json_tuple returns null values within repeated columns except
the first one
## What changes were proposed in this pull request?
When json_tuple is extracting values from JSON
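A toy model of the reported behavior (not Spark code; the map-based lookup is only for illustration): when the same field is requested more than once, every occurrence should receive the extracted value, not only the first.

``` scala
// Toy extraction: look up each requested field independently, so repeated
// field names all receive the value rather than null after the first.
def extractAll(json: Map[String, String], fields: Seq[String]): Seq[String] =
  fields.map(f => json.getOrElse(f, null))

val row = extractAll(Map("a" -> "1"), Seq("a", "a"))
```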
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18930
Thanks @viirya @HyukjinKwon @gatorsmile , I learned a lot from this journey.
---
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133723664
--- Diff: sql/core/src/test/resources/sql-tests/inputs/json-functions.sql
---
@@ -20,3 +20,9 @@ select from_json('{"a":1}', 'a I
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133723113
--- Diff: sql/core/src/test/resources/sql-tests/inputs/json-functions.sql
---
@@ -20,3 +20,9 @@ select from_json('{"a":1}', 'a I
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133614456
--- Diff: sql/core/src/test/resources/sql-tests/inputs/json-functions.sql
---
@@ -20,3 +20,9 @@ select from_json('{"a":1}', 'a I
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133479566
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133480372
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133479509
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,25 @@ class JsonSuite extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133200977
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -362,12 +362,12 @@ case class JsonTuple
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133135302
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -359,14 +359,14 @@ case class JsonTuple
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r133116488
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/json/JsonSuite.scala
---
@@ -2034,4 +2034,13 @@ class JsonSuite extends
Github user jmchung commented on a diff in the pull request:
https://github.com/apache/spark/pull/18930#discussion_r132984129
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -361,10 +361,18 @@ case class JsonTuple
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/18930
cc @viirya
---
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/18930
Spark 21677
## What changes were proposed in this pull request?
``` scala
scala> Seq(("""{"Hyukjin": 224, "John": 1225}""")).
```
GitHub user jmchung reopened a pull request:
https://github.com/apache/spark/pull/18865
[SPARK-21610][SQL] Corrupt records are not handled properly when creating a
dataframe from a file
## What changes were proposed in this pull request?
```
echo '{"field":
Github user jmchung closed the pull request at:
https://github.com/apache/spark/pull/18865
---
GitHub user jmchung opened a pull request:
https://github.com/apache/spark/pull/18865
[SPARK-21610][SQL] Corrupt records are not handled properly when creating a
dataframe from a file
## What changes were proposed in this pull request?
```
echo '{"field": 1}