Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12817#issuecomment-216014355
Hm.. this passes the Python style test locally.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12817#discussion_r61679234
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -393,6 +393,45 @@ class DataFrameReader private[sql](sparkSession
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12817#issuecomment-216014925
cc @rxin
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/11947#issuecomment-216003501
LGTM. (Maybe we should not forget, for documentation, that `nullValue` takes
higher priority than other options such as `nanValue` if the same value is
given.)
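The precedence described above can be sketched in plain Python. This is an illustrative model only, not Spark's actual CSV parser; the function name, signature, and defaults here are hypothetical:

```python
# Hypothetical sketch of the option precedence described above: when the
# same token is configured as both nullValue and nanValue, the nullValue
# interpretation wins because it is checked first.
def parse_field(token, null_value="NA", nan_value="NA"):
    if token == null_value:   # nullValue has the highest priority
        return None
    if token == nan_value:
        return float("nan")
    return float(token)

print(parse_field("NA"))   # None: treated as null, not NaN
print(parse_field("1.5"))  # 1.5
```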
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12817#issuecomment-216015221
@rxin BTW, I found two TODOs, `TODO: Remove this one in Spark 2.0.`, at
`DataFrameReader` and `DataFrameWriter`, added in
https://github.com/apache/spark/pull/9945
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12817#discussion_r61679408
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -663,6 +700,18 @@ def csv(self, path, mode=None, compression=None
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/12818
[MINOR][SQL] Remove not affected settings for writing in CSV.
## What changes were proposed in this pull request?
This PR removes not affected settings for writing CSV files
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12818#issuecomment-216019892
test this please
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12817#discussion_r61682274
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -393,6 +393,45 @@ class DataFrameReader private[sql](sparkSession
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12768#discussion_r61523748
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/TypeUtils.scala
---
@@ -42,13 +42,14 @@ object TypeUtils
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12768#issuecomment-215601060
@dosoft I think it would be great if the PR description were filled in.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12768#discussion_r61523655
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/TypeUtils.scala
---
@@ -42,13 +42,14 @@ object TypeUtils
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12768#discussion_r61523626
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/TypeUtils.scala
---
@@ -42,13 +42,14 @@ object TypeUtils
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12772#issuecomment-215624817
I think the title is incomplete. It would be nicer if the title
included where the change is made.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12774#discussion_r61534248
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -376,14 +376,10 @@ class HDFSFileCatalog
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12774#discussion_r61534612
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -376,14 +376,10 @@ class HDFSFileCatalog
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12774#issuecomment-215627856
(I think "(If this patch involves UI changes, please attach a screenshot;
otherwise, remove this)" can be removed in the PR description)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12680#discussion_r61025886
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/KMeansWrapper.scala ---
@@ -17,14 +17,21 @@
package org.apache.spark.ml.r
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12680#discussion_r61026788
--- Diff: mllib/src/main/scala/org/apache/spark/ml/r/KMeansWrapper.scala ---
@@ -17,14 +17,21 @@
package org.apache.spark.ml.r
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12268#issuecomment-214614770
@rxin It looks like this is still failing,
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/56962
https://amplab.cs.berkeley.edu/jenkins/job
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12268#issuecomment-214614924
Fixed in
https://github.com/apache/spark/commit/f8709218115f6c7aa4fb321865cdef8ceb443bd1
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12268#issuecomment-214608678
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12268#issuecomment-214609981
This was due to
https://github.com/apache/spark/commit/d2614eaadb93a48fba27fe7de64aff942e345f8e
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12696#discussion_r61045052
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -47,7 +46,6 @@ import org.apache.log4j.PropertyConfigurator
import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12695#discussion_r61045917
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1313,11 +1313,8 @@ class SparkContext(config: SparkConf) extends
Logging
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12695#discussion_r61046074
--- Diff: core/src/test/java/org/apache/spark/JavaAPISuite.java ---
@@ -35,6 +35,7 @@
import java.util.Set;
import java.util.concurrent
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12695#discussion_r61046496
--- Diff:
core/src/test/scala/org/apache/spark/metrics/InputOutputMetricsSuite.scala ---
@@ -61,8 +61,7 @@ class InputOutputMetricsSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12695#discussion_r61046782
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -416,9 +416,9 @@ class UtilsSuite extends SparkFunSuite
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12695#discussion_r61046667
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackendSuite.scala
---
@@ -25,20 +26,18 @@ import
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12694#discussion_r61047133
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1090,6 +1091,50 @@ private[spark] object Utils extends Logging
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12694#discussion_r61047082
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1090,6 +1091,50 @@ private[spark] object Utils extends Logging
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12693#discussion_r61047763
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/MapWithStateSuite.scala ---
@@ -39,18 +39,15 @@ class MapWithStateSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12694#discussion_r61047942
--- Diff:
core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java ---
@@ -17,10 +17,13 @@
package org.apache.spark.launcher
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12693#discussion_r61047794
--- Diff:
streaming/src/test/scala/org/apache/spark/streaming/MapWithStateSuite.scala ---
@@ -39,18 +39,15 @@ class MapWithStateSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12697#discussion_r61048906
--- Diff: core/src/main/scala/org/apache/spark/metrics/sink/Slf4jSink.scala
---
@@ -25,6 +25,9 @@ import com.codahale.metrics.{MetricRegistry
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10761#issuecomment-213151310
@kevincox right.
- Usually, I believe such changes need a JIRA, which manages issues in
Spark. There should be a JIRA to discuss a proper way to fix
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12569#issuecomment-213166316
@jaceklaskowski I think the title does not clearly describe the changes in this PR.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12494#issuecomment-213183696
@mathieulongtin
I think it might be more sensible for SQL's OPTIONS clause to support
`null` and some other types such as long, double and boolean. It
[looks
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/10761#issuecomment-213164527
@kevincox Could you please close this for now? You can easily reopen this
once you start to work.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12494#issuecomment-213188258
@viirya @davies Could I ask your thoughts on this? I can make a JIRA for
this.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/12598#discussion_r60687833
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/HashedRelation.scala
---
@@ -324,8 +324,8 @@ private[joins] object
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12598#issuecomment-213233028
(I think `CC: @rajeshbalamohan` should be removed, because the PR
description should describe the PR itself, and the names of reviewers might not be
related to the PR
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/11724#issuecomment-213242625
@rxin I am willing to close this one if you are not sure about it.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/11457#issuecomment-213243006
I am closing this but it would be great to let me know if you have a new PR
for this.
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/11457
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12601#issuecomment-213568166
A possible problem was noticed (is `Properties` guaranteed to be converted
to `String`?) in the JIRA before this PR, and no evidence was provided or
said
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13135#discussion_r63640828
--- Diff: examples/src/main/python/ml/simple_params_example.py ---
@@ -36,18 +35,20 @@
if len(sys.argv) > 1:
--- End diff --
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13135#discussion_r63640687
--- Diff: examples/src/main/python/ml/simple_params_example.py ---
@@ -36,18 +35,20 @@
if len(sys.argv) > 1:
--- End diff --
I
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13135#discussion_r63644923
--- Diff: examples/src/main/python/ml/simple_params_example.py ---
@@ -36,18 +35,20 @@
if len(sys.argv) > 1:
--- End diff --
Th
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13161#issuecomment-219910188
(@ericl I see you set a component in JIRA. It would be nicer if the
component is specified in the PR title as described in
[Contributing+to+Spark
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13135#discussion_r63638465
--- Diff: examples/src/main/python/ml/simple_params_example.py ---
@@ -36,18 +35,20 @@
if len(sys.argv) > 1:
--- End diff --
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/7025#discussion_r63638336
--- Diff: R/pkg/R/client.R ---
@@ -42,6 +42,19 @@ determineSparkSubmitBin <- function() {
}
sparkSubmitBinName
}
+# R supports b
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13187#issuecomment-220491134
@srowen Ah, I will keep in mind that I should read comments more carefully.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-220515182
@sun-rui @felixcheung Right. It seems I finally made it. I made gists and
uploaded a PDF file for the Spark UI.
Let me tell you the test results first
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-220515342
This raises some questions for me.
1. It seems several tests failed. Could you please share your
thoughts?
2. Now, I think I can add some
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13217#issuecomment-220532414
cc @sun-rui
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13181#issuecomment-220522607
@marmbrus I tested and could reproduce the exceptions for reading in
https://issues.apache.org/jira/browse/SPARK-15393 but it seems this might not
be the reason
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-220533282
@sun-rui Thank you so much. I will try to add a test.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13181#issuecomment-220527118
@jurriaan Maybe I am doing this wrong. I will let you know after testing.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13217
[MI
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13217#discussion_r64025198
--- Diff: R/WINDOWS.md ---
@@ -11,3 +11,19 @@ include Rtools and R in `PATH`.
directory in Maven in `PATH`.
4. Set `MAVEN_OPTS` as described
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13217#discussion_r64025769
--- Diff: R/WINDOWS.md ---
@@ -11,3 +11,19 @@ include Rtools and R in `PATH`.
directory in Maven in `PATH`.
4. Set `MAVEN_OPTS` as described
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13217#discussion_r64026341
--- Diff: R/WINDOWS.md ---
@@ -11,3 +11,19 @@ include Rtools and R in `PATH`.
directory in Maven in `PATH`.
4. Set `MAVEN_OPTS` as described
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13217#issuecomment-220751922
@sun-rui @steveloughran While it seems obviously better for someone to
follow and test this, I wonder who is going to test this and leave some
comments here
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13217#issuecomment-220751065
@sun-rui @steveloughran While it seems obviously better for someone to
follow this and test, I wonder who is going to test this and leave some
comments here
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13267#discussion_r64312653
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -364,6 +364,33 @@ class CSVSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13267#discussion_r64312722
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -364,6 +364,33 @@ class CSVSuite extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13267#discussion_r64313317
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -364,6 +364,33 @@ class CSVSuite extends
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13267#issuecomment-221137806
@jurriaan Just to double check.. It does not escape `quote`s if `quote`
and/or `escape` are not set?
I think this might be better documented..
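As a point of comparison (Python's stdlib `csv` module, not Spark's CSV writer), the kind of quote handling being asked about looks like this: under the default settings, an embedded quote character is escaped by doubling it, and the whole field gets quoted.

```python
import csv
import io

# Illustrative only -- Python's stdlib csv writer, not Spark's CSV data
# source. With the default doublequote=True, an embedded quote character
# is escaped by doubling it, and the containing field is quoted.
buf = io.StringIO()
writer = csv.writer(buf, quotechar='"', quoting=csv.QUOTE_MINIMAL)
writer.writerow(['say "hi"', 'plain'])

print(buf.getvalue())  # "say ""hi""",plain
```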
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-221132286
Ah, I think he meant this below:
- Parquet
```scala
val emptyDf = spark.range(10).limit(0).toDF()
emptyDf.write
  .format("parquet")
```
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-221133227
@shivaram Thank you so much. Let me try to add a test meanwhile.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13217#issuecomment-221133254
@shivaram Thank you so much.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13267#discussion_r64312893
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVSuite.scala
---
@@ -364,6 +364,33 @@ class CSVSuite extends
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-221455213
@sbcd90 I currently can't think of other alternatives and it seems that's
why it has not been enabled again.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13253#issuecomment-221448131
@rxin Thank you!
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13187#issuecomment-220281464
@srowen Actually I asked @cloud-fan about this in
https://github.com/apache/spark/pull/12858#issuecomment-220207574. It seems (If
I understood correctly) we should
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12855#issuecomment-220193132
@marmbrus Sorry for making you revert this; I should have thought about
this further before opening this PR. I will try to think more and be more
careful
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13181#issuecomment-220200736
@marmbrus Sure I will
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12855#issuecomment-220188177
@jurriaan Oh, thank you. @marmbrus Yes please. You mean reopening the JIRA (it
seems I can't reopen a merged PR).
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13177#discussion_r63809772
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveShim.scala ---
@@ -480,7 +480,7 @@ private[client] class Shim_v0_13 extends
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13187
[SPARK-15322][SQL][FOLLOW-UP] Update deprecated accumulator usage into
accumulatorV2
## What changes were proposed in this pull request?
This PR corrects another case that uses
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13165#issuecomment-220224055
@sun-rui @felixcheung Let me try to build and run all tests for R on
Windows first, and then I will try to correct and add each test one by one. This
will take a bit
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12858#issuecomment-220207574
Hi @cloud-fan, could I please ask a question? It seems the old accumulator is
deprecated and the new accumulator should be used everywhere. I am trying to change
some
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13175#discussion_r63809651
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -45,7 +45,9 @@ object SimpleAnalyzer extends
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13176#issuecomment-220202679
(@GayathriMurali It seems the title is incomplete, ending with "...". Maybe it
would be nicer if the title were completed and the PR rebased to resolve the conflict)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13169#discussion_r63809918
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -58,6 +58,7 @@ object DateTimeUtils {
final
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12858#issuecomment-220208963
@cloud-fan Ah, thank you so much for the detailed explanation. I will look
into this myself.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13175#discussion_r63813324
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -45,7 +45,9 @@ object SimpleAnalyzer extends
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13181#issuecomment-220222603
Hi @marmbrus , it seems okay!
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/12921#issuecomment-220827427
Hi @cloud-fan, Could you please take a look?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13116#discussion_r64160441
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -51,6 +49,7 @@ import
org.apache.spark.sql.execution.python.EvaluatePython
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-220886604
Hi @sbcd90, I am not a committer, but I just left a comment because I like
your PR. Let me add some more comments about things I think might need changes.
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13254#issuecomment-220866532
@andrewor14 Could you please take a look maybe?
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-220860596
@sbcd90 I think we will need a test to make sure this fixes the issue and
other changes in the future do not break this change.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13253
[SPARK-15475][SQL] Add tests for writing and reading back empty data for
Parquet, Json and Text data sources
## What changes were proposed in this pull request?
This PR adds the tests
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13254
[SPARK-15475][SQL] Support for reading text data source without specifying
schema
## What changes were proposed in this pull request?
Currently, Text data source requires a schema
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/13254#discussion_r64149589
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -515,20 +515,20 @@ abstract class HadoopFsRelationTest
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/13252
[SPARK-15473][SQL] CSV data source fails to write and read back empty data
## What changes were proposed in this pull request?
This PR adds the support for writing and reading back
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13253#issuecomment-220831607
retest this please
Github user HyukjinKwon commented on the pull request:
https://github.com/apache/spark/pull/13257#issuecomment-220898876
@sbcd90 I just read this JIRA
[SPARK-8501](https://issues.apache.org/jira/browse/SPARK-8501) to figure out
why it was disabled and tried to manually test