Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10218#discussion_r47201429
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -1271,10 +1271,11 @@ class DataFrame private[sql](
* @since 1.6.0
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10218#discussion_r47197511
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrame.scala ---
@@ -1271,10 +1271,11 @@ class DataFrame private[sql](
* @since 1.6.0
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/10341
[SPARK-11677][SQL][FOLLOW-UP] Add tests for checking the ORC filter
creation against pushed down filters.
https://issues.apache.org/jira/browse/SPARK-11677
Although it checks correctly
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-165304971
As I talked with @liancheng, this PR is not covering `NOT`, `AND` and `OR`.
This is because `ExpressionTree` is not accessible at Hive 1.2.x.
But as I see
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-165309276
retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/10287
[SPARK-12315][SQL] isnotnull operator not pushed down for JDBC datasource.
`IsNotNull` filter is not being pushed down for the JDBC datasource.
It looks like it is SQL standard according
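The pushdown the two PRs above describe can be sketched roughly like this — a minimal Python illustration of compiling data source filters into a JDBC `WHERE` fragment, not Spark's actual JDBC code; all names here are hypothetical:

```python
# Hypothetical sketch: compiling simple source filters (represented as
# tuples) into a JDBC WHERE clause, including IsNull / IsNotNull.
def compile_filter(f):
    kind = f[0]
    if kind == "EqualTo":
        _, col, value = f
        return f"{col} = {value!r}"
    if kind == "IsNull":
        return f"{f[1]} IS NULL"
    if kind == "IsNotNull":
        return f"{f[1]} IS NOT NULL"
    return None  # unhandled: evaluated on the Spark side instead

def where_clause(filters):
    parts = [c for c in map(compile_filter, filters) if c is not None]
    return ("WHERE " + " AND ".join(parts)) if parts else ""

print(where_clause([("IsNotNull", "name"), ("EqualTo", "age", 30)]))
# -> WHERE name IS NOT NULL AND age = 30
```

Filters that compile to `None` are simply not pushed down, which is what the two PRs fix for `IsNull` and `IsNotNull`.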
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/10286
[SPARK-12315][SQL] isnull operator not pushed down for JDBC datasource.
`IsNull` filter is not being pushed down for the JDBC datasource.
It looks like it is SQL standard according to
[SQL-92
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10287#issuecomment-164331892
Actually, the test for this PR does not correctly check the result,
but I just added it like the other predicate tests. I will correct the
test
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10286#issuecomment-164331911
Actually, the test for this PR does not correctly check the result,
but I just added it like the other predicate tests. I will correct the
test
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164927780
@marmbrus I saw that JIRA ticket. I thought it looked like the internal
datasource guys agree with using the Spark-side filter as it is.
I found this problem from
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164942750
Actually, we might still need such a function even after adding
`unhandledFilters`, although the logic is a bit modified, because it would
still pass all tests even
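The `unhandledFilters` idea discussed here can be sketched as follows — an illustrative Python model of the contract, with simplified, hypothetical names rather than Spark's actual API: the relation reports which pushed filters it cannot fully evaluate, and only those need a Spark-side `Filter` above the scan.

```python
# Illustrative sketch of the `unhandledFilters` contract.
class JdbcLikeRelation:
    # filter kinds this (hypothetical) source can evaluate itself
    HANDLED = {"IsNull", "IsNotNull", "EqualTo"}

    def unhandled_filters(self, filters):
        # whatever comes back here must be re-applied on the Spark side
        return [f for f in filters if f[0] not in self.HANDLED]

relation = JdbcLikeRelation()
pushed = [("IsNotNull", "name"), ("StringContains", "name", "a")]
print(relation.unhandled_filters(pushed))
# -> [('StringContains', 'name', 'a')]
```

The point raised in the comment is that tests which only check query *results* pass either way, since Spark's own filter corrects any rows the source lets through; testing the plan is what catches a missing pushdown.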
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/9687#issuecomment-164963096
In the commits above, I removed the `stripSparkFilter` function to share
this.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10287#issuecomment-164964071
I updated the test for this since
https://github.com/apache/spark/pull/10221 is merged.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10287#issuecomment-164969771
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10286#issuecomment-164969727
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10286#issuecomment-164995821
cc @rxin @marmbrus
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10233#issuecomment-164996315
cc @rxin @marmbrus
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10287#issuecomment-164996332
cc @rxin @marmbrus
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/9687#issuecomment-164996396
cc @marmbrus
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164996803
Thanks!
I filed the issue about the implementation of `unhandledFilter` here
https://issues.apache.org/jira/browse/SPARK-12354.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10218#issuecomment-163797608
I see. I added this because `sort`, `cube`, `select`, etc. do the same thing,
supporting it both by name and by expression. Although I am not greatly insightful
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/10218
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10218#issuecomment-163816325
Thanks for the detailed explanation! Then does this mean I should close this?
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-163861353
@liancheng Would you like to look through this? It is related to the
filter tests.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164585973
Oh yes. That is the eventual plan. I will share that function. I opened
some PRs before, but they were closed. So, I ended up adding the same function
to another
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10221#issuecomment-164586872
@holdenk Actually, would you merge this PR if it looks good?
Four other PRs are having a bit of trouble, as you just said, and I would
like to correct them
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10233#issuecomment-164966748
cc @rxin @marmbrus
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/9687#issuecomment-164957946
The function I mentioned was moved to `SQLTestUtils` in another PR. So I
will add a commit for this soon.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10233#issuecomment-164965418
I updated the test for this since
https://github.com/apache/spark/pull/10221 is merged.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/9687#issuecomment-164975689
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10286#issuecomment-164964555
I updated the test for this since
https://github.com/apache/spark/pull/10221 is merged.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168367522
test this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168346039
test this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r48932885
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -0,0 +1,231 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r48932909
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -0,0 +1,231 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r48933011
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -0,0 +1,305 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r49147677
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -0,0 +1,341 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r49147704
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -0,0 +1,341 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10615#discussion_r49147610
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala
---
@@ -0,0 +1,243 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10615#issuecomment-169855191
Cool!
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167895015
@yhuai Sure. I will try!
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168306690
Could anybody please type "test this please" for this PR? I can't trigger a
test for this PR.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168306374
@rxin Nothing is easy, and happy new year. I just resolved conflicts.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168306376
test this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10510#issuecomment-168100789
@davies @squito the purpose of `stripSparkFilter` is not to copy but to
strip the wrapped Spark-side filter. I am not too sure if it is right to modify
the results
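The role of a helper like `stripSparkFilter` can be sketched as follows — a toy Python model (not Spark's actual plan classes; node shapes here are made up) of removing the Spark-side `Filter` node above a scan so a test observes only the rows the data source itself returned:

```python
# Toy sketch: a plan is a nested tuple, either
#   ("Filter", condition, child)  -- Spark-side filter wrapping a scan
#   ("Scan", rows)                -- what the data source returned
def strip_spark_filter(plan):
    if plan[0] == "Filter":
        return plan[2]  # drop the Spark-side filter, keep the scan below it
    return plan         # nothing to strip

plan = ("Filter", "a > 1", ("Scan", [1, 2, 3]))
print(strip_spark_filter(plan))
# -> ('Scan', [1, 2, 3])
```

With the filter stripped, a test can assert on the scan's raw output and detect whether the source actually applied a pushed-down predicate, instead of having Spark's own filter mask the difference.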
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10510#issuecomment-169164600
@squito Ah, sorry I misunderstood.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-168102081
@zsxwing I will anyway resolve the conflicts first
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-166445749
Ah. I will correct them soon!
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48313071
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -180,14 +181,23 @@ class JDBCSuite extends SparkFunSuite
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/10502
[SPARK-12355][SQL] Implement unhandledFilter interface for Parquet
https://issues.apache.org/jira/browse/SPARK-12355
This is similar with https://github.com/apache/spark/pull/10427
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167722257
The test failed due to wrong results from Parquet.
The test result was as below:
```
== Physical Plan ==
Scan ParquetRelation[_1#4] InputPaths
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-166232455
In this commit, I used the string representation from `SearchArgument.toString`
to check filter creation.
I am not too sure if generalising them with a string template
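The testing approach described here — comparing a predicate's string form against an expected string — can be illustrated with a small Python stand-in. The rendering below is hypothetical and does not reproduce ORC's actual `SearchArgument.toString` format:

```python
# Stand-in renderer: predicates are tuples like ("eq", col, value),
# ("not", child), ("and", ...), ("or", ...).
def render(pred):
    op, *args = pred
    if op in ("and", "or"):
        return "(" + f" {op} ".join(render(a) for a in args) + ")"
    if op == "not":
        return f"(not {render(args[0])})"
    col, value = args
    return f"({op} {col} {value})"

expected = "((eq a 1) and (not (eq b 2)))"
actual = render(("and", ("eq", "a", 1), ("not", ("eq", "b", 2))))
assert actual == expected
```

The test then only checks that the created filter's rendered form matches, without executing it — which is exactly why the comment above worries about how far string templates generalise.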
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-166508623
Hm.. I will update `checkFilterPredicate` so that this function returns
nothing.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-166517389
In the commits above, I added tests for logical operators separately.
Although it does not check all the combinations across types with logical
operators, I think
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-166516260
In the commits above, I added tests for logical operators separately.
Although it does not check all the combinations across types with logical
operators, I think
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-166588013
@liancheng would you tell me what you think on
[this](https://github.com/apache/spark/pull/10427#discussion_r48231701)? I made
some commits locally and want
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48231102
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,19 @@ private[sql] case class
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-166568408
cc @liancheng @yhuai
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48243182
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,21 @@ private[sql] case class
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-166570264
cc @liancheng @yhuai
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48237984
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,19 @@ private[sql] case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48242805
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -180,14 +181,23 @@ class JDBCSuite extends SparkFunSuite
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-166587134
@liancheng Would you tell me what you think on
[this](https://github.com/apache/spark/pull/10427#discussion_r48231701)? I made
some commits locally but want
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48236730
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -176,14 +178,23 @@ class JDBCSuite extends SparkFunSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48243987
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,19 @@ private[sql] case class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48229951
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -176,14 +178,23 @@ class JDBCSuite extends SparkFunSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48230062
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/jdbc/JDBCSuite.scala
---
@@ -176,14 +178,23 @@ class JDBCSuite extends SparkFunSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48231701
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,19 @@ private[sql] case class
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-166553543
cc @liancheng @yhuai
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10427#discussion_r48246841
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRelation.scala
---
@@ -90,6 +90,21 @@ private[sql] case class
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-167017518
Let me leave a comment. I tested some cases with this PR and it looks like
it generally works fine. But I would like to mention one thing that I am pretty
sure you guys
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-167033198
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-167040023
And you are right; I think the comments I made are not related to this PR.
Let's wait for their comments!
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-167039644
@maropu I believe it is a Parquet stuff. AFAIK, the columns in filters
should be set to `requestedSchema` for Parquet. But
[this](https://github.com/apache/spark
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10427#issuecomment-167044200
@maropu I agree that can be another way! But I just think an interface
should be inclusive, not exclusive. Handling it in `ParquetRelation` might mean
other datasources
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10341#issuecomment-165980756
@liancheng Thanks! I will try to apply that.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10502#discussion_r48590599
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala
---
@@ -208,11 +210,30 @@ private[sql] object
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167933518
# Benchmark (Removed Spark-side Filter)
## Motivation
This PR simplifies the query plans for Parquet files by stripping
duplicated Spark-side
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167933544
@yhuai Would you look through this please?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10502#discussion_r48593890
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala
---
@@ -288,20 +293,28 @@ private[sql] class
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-167968091
Hm.. actually I think adding more tests in Docker is a bit of over-testing. It
looks like the comparison operators I used are all already being used in
`compileFilter` and I
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10470#discussion_r48586845
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -182,18 +183,40 @@ private[sql] object JDBCRDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10502#discussion_r48590538
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala
---
@@ -208,11 +210,30 @@ private[sql] object
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167750403
cc @yhuai @liancheng
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167729051
I see. `UnsafeRowParquetRecordReader` for Parquet does not support filtering
record by record, but only by block. So, even the `=` operator produces the same
results
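The block-level filtering mentioned here can be illustrated with a toy Python sketch (not Parquet's actual reader; the block/statistics model is simplified): a block-level filter can only skip whole blocks whose min/max statistics rule out a match, so surviving blocks still contain non-matching rows that must be filtered again record by record.

```python
# Toy sketch: evaluate an equality predicate at block (row-group) granularity.
def scan_blocks_eq(blocks, value):
    out = []
    for block in blocks:
        # keep the whole block if its [min, max] range may contain `value`
        if min(block) <= value <= max(block):
            out.extend(block)  # non-matching rows inside still come through
    return out

print(scan_blocks_eq([[1, 2, 3], [10, 20]], 2))
# -> [1, 2, 3]
```

Only the second block is skipped; rows `1` and `3` survive even though they fail the predicate, which is why a Spark-side filter on top still changes nothing about correctness — it just re-filters what block skipping let through.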
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10502#issuecomment-167733162
retest this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10470#discussion_r48525943
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -184,16 +185,38 @@ private[sql] object JDBCRDD
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/10502#discussion_r48603240
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilters.scala
---
@@ -208,11 +210,30 @@ private[sql] object
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-158330364
It looks like Jenkins does not run the tests for the past commits I made, as I
am a user not added to the whitelist. Would anybody please run the test for this
please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-158316387
I added a simple test for this. I wanted to add a test including `null`, but
when the given value is `null`, Spark converts it to `IsNull`. To make things
worse, more
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/8743#issuecomment-158316439
test this please
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/9763
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/9763#issuecomment-159095287
Oh. Right. Thanks!
Closing this.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13517
**[Test build #60004 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60004/consoleFull)**
for PR 13517 at commit
[`905bdc5`](https://github.com/apache/spark
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13517
**[Test build #60004 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/60004/consoleFull)**
for PR 13517 at commit
[`905bdc5`](https://github.com/apache/spark
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13517
Merged build finished. Test PASSed.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13517
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/60004/
Test PASSed
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13517
[SPARK-14839][SQL] Support for other types as option in OPTIONS clause
## What changes were proposed in this pull request?
Currently, the Scala API supports taking options with the types
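The idea behind supporting more types in the `OPTIONS` clause can be sketched as follows — a hedged Python illustration, not Spark's parser: option values may arrive as booleans or numbers as well as strings, and get normalized to the string form data sources expect. The function name is hypothetical.

```python
# Illustrative sketch: normalize typed OPTIONS values to strings.
def normalize_options(options):
    out = {}
    for key, value in options.items():
        if isinstance(value, bool):
            # booleans become lowercase literals, as in SQL text
            out[key] = "true" if value else "false"
        else:
            out[key] = str(value)
    return out

print(normalize_options({"header": True, "maxColumns": 1000, "sep": ","}))
# -> {'header': 'true', 'maxColumns': '1000', 'sep': ','}
```

This mirrors the PR's goal of letting `OPTIONS (header true, maxColumns 1000)` work without quoting every value as a string.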
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13517
Please let me cc @davies. Thanks!
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13576
[SPARK-15840][SQL] Add missing options in documentation, inferSchema for
CSV and mergeSchema for Parquet
## What changes were proposed in this pull request?
This PR
1. Adds