Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16611
Ah, sure. Let me give it a shot.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16611#discussion_r96482934
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/SimpleTextHadoopFsRelationSuite.scala
---
@@ -69,18 +69,19 @@ class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16586#discussion_r96554333
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -461,7 +461,8 @@ class HiveQuerySuite extends
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
Ah, thank you @shivaram and @felixcheung
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16618
Could you add `[WIP]` in the title if it is WIP?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
(Just FYI, it is now fixed for both my account and the ASF one.)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16611#discussion_r96565772
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/SimpleTextHadoopFsRelationSuite.scala
---
@@ -69,18 +69,19 @@ class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16611
I just added DDL support with some more tests, and fixed the PR description.
Could you please take another look and see if it makes sense?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16611
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16611
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/12739
@bomeng, do you mind pointing out the PR that fixes this issue, if possible?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16496
@cloud-fan, could you take a look please?
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
Current status of this PR:
It seems the tests below have been constantly failing across six builds (please check the logs in https://ci.appveyor.com/project/spark-test/spark/history
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
Build started: [TESTS] `org.apache.spark.scheduler.SparkListenerSuite`
[![PR-16586](https://ci.appveyor.com/api/projects/status/github/spark-test/spark?branch=68031366-45EE-45B4-867A
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
They all pass in individual tests with `test-only` (please check the logs
above).
```
org.apache.spark.scheduler.SparkListenerSuite:
- local metrics (8 seconds, 656
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16586#discussion_r97022356
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/SparkListenerSuite.scala ---
@@ -229,7 +229,7 @@ class SparkListenerSuite extends SparkFunSuite
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16586
Hi @srowen, I think it is ready for a second look. In short, the current
status is,
- there are some test failures
(https://github.com/apache/spark/pull/16586#issuecomment-273437565
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16496#discussion_r97051937
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityGenerator.scala
---
@@ -0,0 +1,91 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16496
Thank you!
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16669
[SPARK-16101][SQL] Refactoring CSV read path to be consistent with JSON
data source
## What changes were proposed in this pull request?
This PR refactors CSV read path to be
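The refactor described above centers on parsing each CSV line's tokens directly into a typed row, the way the JSON path does. The shape of the idea can be illustrated with a small, Spark-free sketch (all names here are hypothetical; the actual PR introduces a `UnivocityParser` built on the univocity library):

```scala
// A minimal sketch, assuming a schema of simple field types: converter
// functions are precomputed once per schema, then applied to each line.
object CsvRowSketch {
  sealed trait FieldType
  case object IntType extends FieldType
  case object StrType extends FieldType

  // Build one converter function per field, ahead of parsing.
  def converters(schema: List[FieldType]): List[String => Any] =
    schema.map {
      case IntType => (token: String) => token.trim.toInt
      case StrType => (token: String) => token
    }

  // Convert one line's tokens into a typed row.
  def parse(tokens: List[String], convs: List[String => Any]): List[Any] =
    tokens.zip(convs).map { case (t, c) => c(t) }

  def main(args: Array[String]): Unit = {
    val convs = converters(List(IntType, StrType))
    println(parse("42,spark".split(",").toList, convs)) // List(42, spark)
  }
}
```

Precomputing the converters keeps the per-row hot path to a simple function application, which is the consistency with the JSON read path the description aims at.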
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16669#discussion_r97200922
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -56,91 +49,6 @@ object CSVRelation extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16669#discussion_r97201031
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -0,0 +1,234 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16669#discussion_r97200937
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -0,0 +1,234 @@
+/*
+ * Licensed
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16669#discussion_r97200905
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -217,124 +217,6 @@ private[csv] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16669
cc @cloud-fan, I tried to change only the parsing path (not schema inference or de-duplication of other filtering logic). Could you take a look and see if it makes sense?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16668#discussion_r97201250
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3406,3 +3406,28 @@ setMethod("randomSplit",
}
sapply(sdfs,
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16553
gentle ping..
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16669#discussion_r97201994
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -172,21 +172,12 @@ class CSVFileFormat
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16611
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16669
retest this please
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16671
FWIW, I am negative on this approach too. Requiring full table scans to resolve skew between partitions does not look like a good solution.
As said, it is not good for a large table
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16680
[SPARK-16101][SQL] Refactoring CSV schema inference path to be consistent
with JSON
## What changes were proposed in this pull request?
This PR refactors CSV schema inference path to
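At a high level, the schema-inference path being refactored infers a candidate type per token and folds the per-row results together with a type-merging function. A rough, Spark-free sketch of that shape (hypothetical names, not the PR's actual code):

```scala
object InferSketch {
  sealed trait DataType
  case object IntegerType extends DataType
  case object DoubleType extends DataType
  case object StringType extends DataType

  // Infer the narrowest type a single token fits.
  def inferField(token: String): DataType =
    if (token.matches("""-?\d+""")) IntegerType
    else if (token.matches("""-?\d+\.\d+""")) DoubleType
    else StringType

  // Merge two candidate types, widening when they differ.
  def merge(a: DataType, b: DataType): DataType = (a, b) match {
    case (x, y) if x == y => x
    case (IntegerType, DoubleType) | (DoubleType, IntegerType) => DoubleType
    case _ => StringType
  }

  // Fold row-by-row inference over a dataset: one merged type per column.
  def inferSchema(rows: Seq[Seq[String]]): Seq[DataType] =
    rows
      .map(_.map(inferField))
      .reduce((r1, r2) => r1.zip(r2).map { case (a, b) => merge(a, b) })
}
```

For example, a column seeing `"1"` and then `"2.5"` widens from integer to double, while mixed text falls back to string: `inferSchema(Seq(Seq("1", "a"), Seq("2.5", "b")))` yields `Seq(DoubleType, StringType)`.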
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97321103
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVUtils.scala
---
@@ -0,0 +1,116 @@
+/*
--- End diff
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97321283
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -60,64 +57,9 @@ class CSVFileFormat extends
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97321455
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -170,32 +111,21 @@ class CSVFileFormat
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97321590
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -215,32 +267,3 @@ private[csv] object
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97321669
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVRelation.scala
---
@@ -1,69 +0,0 @@
-/*
--- End diff
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16680
Let me double-check tomorrow just in case, and then cc someone.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16669
Oh, sure. I think I should set the default Scala version to 2.10 in my local environment..
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16669
Sure, I will send a PR as soon as I have built it with 2.10.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16684
[HOTFIX] Fix the build with Scala 2.10 by explicit typed argument
## What changes were proposed in this pull request?
This fixes
```bash
[error]
/home/jenkins/workspace
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16684
cc @tdas
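The hotfix above works around Scala 2.10's weaker local type inference by spelling out a type argument explicitly. A generic, hypothetical illustration of the pattern (not the actual Spark code that broke):

```scala
object Hotfix210Sketch {
  // A generic helper where T is determined only by the function argument.
  def transform[T](xs: Seq[Int])(f: Int => T): Seq[T] = xs.map(f)

  def main(args: Array[String]): Unit = {
    // Scala 2.10's inferencer sometimes failed to infer T in positions like
    // this; writing the type argument explicitly compiles on all versions.
    val labels = transform[String](Seq(1, 2, 3))(n => s"item-$n")
    println(labels) // List(item-1, item-2, item-3)
  }
}
```

Explicit type arguments like `transform[String]` are harmless on newer Scala versions, which is why such a hotfix can land on a branch built against both 2.10 and 2.11.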
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16684
Sure!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16553
@gatorsmile Thanks !
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16684
I thought you meant adding `SPARK-16101`/#16669 in the PR description/title.
Maybe the change showed up later in your browser or mine, if I understood correctly?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16685#discussion_r97455127
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -17,20 +17,22 @@
package
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16685#discussion_r97457118
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -17,20 +17,22 @@
package
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16684
(I also tested this via)
```bash
./build/sbt -Dscala-2.10 clean "test-only
org.apache.spark.sql.execution.datasources.csv.CSVSuite"
```
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16686#discussion_r97462264
--- Diff:
external/kafka-0-10-sql/src/main/resources/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
---
@@ -1 +1
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16553
(FWIW, I checked `./build/mvn -Pyarn -Phadoop-2.4 -Dscala-2.10 -DskipTests clean package`)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16553
Thank you @gatorsmile
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16689
Oh @felixcheung, I was writing a comment but I just saw yours. I was looking into this out of curiosity.
Isn't this due to the R type coercion rules with POSIXlt?
```r
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16689
(I might be wrong, but I was suspecting that it returns `NA` first as `logical` when we collect via `SerDe.scala`, and then it ends up `numeric` due to the type coercion when `NA` is located first
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16689
Oh, it was all written in the PR description... I removed my useless comments.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14918
I do not agree with this change either, for the same reason given in
https://github.com/apache/spark/pull/14918#issuecomment-250882422.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16680#discussion_r97706016
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -39,22 +37,76 @@ private[csv] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16680
@cloud-fan, could you please take a look? I tried my best not to change the current behaviour and logic, but just relocated them here.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16703
[SPARK-12970][DOCS] Fix the example in StructType APIs for Scala and Java
## What changes were proposed in this pull request?
This PR fixes both,
javadoc8 break
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16703
cc @srowen and @joshrosen who are in the JIRA.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16703
(I just added some links for `StructType` and `StructField` around these examples, just for consistency while I was here)
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16707#discussion_r97937744
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/UDFRegistration.scala ---
@@ -125,7 +125,7 @@ class UDFRegistration private[sql
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13300
Actually, this feature might not be urgent, as said above, but IMO I like this feature, to be honest. I guess the reason it was put on hold is that IMHO it does not look like a clean fix.
I recently
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13300
Oh, I remember the answer to my previous similar question, which was that we should not add APIs just for consistency.
I have some references to the requests for this feature
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16680
Sure!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13808
@davies, the removed `StringIteratorReader` concatenates the lines in each iterator into a reader in each partition, IIRC.
Newlines within a column were not supported correctly up to my
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/13808
FWIW, I remember I had a hard time figuring out
https://issues.apache.org/jira/browse/SPARK-14103, where the issue itself was about quotes but it ended up reading the whole partition as a value
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16723#discussion_r98329844
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LinearSVC.scala ---
@@ -47,7 +47,7 @@ private[classification] trait LinearSVCParams
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Sure, let me try to pick the commits here and open another PR soon, cc'ing you and everyone here.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Thank you very much for your update.
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16741
[WIP][SPARK-19402][DOCS] Support LaTex inline annotation correctly and fix
warnings in Scala/Java APIs generation
## What changes were proposed in this pull request?
This PR proposes
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98441292
--- Diff: core/src/main/scala/org/apache/spark/FutureAction.scala ---
@@ -58,7 +58,7 @@ trait FutureAction[T] extends Future[T
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16741
@srowen, let me just leave this PR as a minor one without a JIRA, for the actual errors here.
Let's consider the warning changes later when I happen to be pretty sure of them, if someone raises
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16741
Actually, I am still neutral on this and confused about which way is more correct. It warns either way, but IMHO changing `[[` to back-ticks is, strictly speaking, a little bit worse because `[[` at
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16741
I mean.. it'd surely be better to change it to back-ticks if this broke the build, but it just warns.. (It is funny that I am saying my own proposal should be fixed, though). Let me follow your lead p
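The two styles being weighed above are scaladoc's `[[...]]` entity links versus back-ticked monospace text. A self-contained illustration of the trade-off (toy classes, not Spark's actual code):

```scala
/** Builds a greeting via [[Greeter]] — a scaladoc `[[...]]` entity link.
  * genjavadoc-style tools can warn when such a link does not resolve in
  * the generated Java docs; that is the warning discussed above.
  */
class UsesLink { def hello: String = new Greeter().greet }

/** Builds a greeting via `Greeter` — back-ticked monospace text: it never
  * warns, but it is plain formatting rather than a hyperlink.
  */
class UsesBackticks { def hello: String = new Greeter().greet }

class Greeter { def greet: String = "hello" }
```

The `[[...]]` form keeps navigable cross-references in the Scala API docs at the cost of warnings in the Java doc generation, which is the "case-by-case" tension mentioned in the comments.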
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16741
Oh, wait it seems case-by-case. Let me leave some before-after image
with some explanation soon.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98463985
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -262,7 +262,7 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98464912
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/Gradient.scala ---
@@ -93,9 +93,9 @@ abstract class Gradient extends Serializable
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98463736
--- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
@@ -262,7 +262,7 @@ class SparkConf(loadDefaults: Boolean) extends
Cloneable with
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98464968
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/Gradient.scala ---
@@ -110,18 +110,19 @@ abstract class Gradient extends Serializable
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16741#discussion_r98464851
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/Gradient.scala ---
@@ -78,7 +78,7 @@ abstract class Gradient extends Serializable
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/15053
Thank you @holdenk I definitely will. +1 for closing.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16745
LGTM too.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16752
Hi @kishorbp, it seems this was mistakenly opened. Would you please close it?
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r98623217
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -297,7 +300,7 @@ def text(self, paths):
def csv(self, path, schema=None, sep=None, encoding
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r98629735
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -329,7 +332,17 @@ class DataFrameReader private[sql](sparkSession
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r98625766
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
---
@@ -161,12 +163,3 @@ private[csv] class
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16750#discussion_r98624418
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -329,7 +332,17 @@ class DataFrameReader private[sql](sparkSession
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/16753
[SPARK-19296][SQL] Deduplicate arguments in JdbcUtils.saveTable
## What changes were proposed in this pull request?
This PR deduplicates arguments, `url` and `table` in `JdbcUtils
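Deduplicating repeated parameters like this usually means threading one options object through the helpers instead of passing `url` and `table` separately to each. A hypothetical sketch of the shape (not the actual `JdbcUtils` code):

```scala
object DedupSketch {
  // Bundle the values that previously travelled as separate parameters
  // through every helper method.
  final case class JdbcOptions(url: String, table: String)

  // Before: helpers took (url: String, table: String, ...) repeatedly.
  // After: a single options value carries both.
  def insertStatement(opts: JdbcOptions, columns: Seq[String]): String =
    s"INSERT INTO ${opts.table} (${columns.mkString(", ")}) " +
      s"VALUES (${columns.map(_ => "?").mkString(", ")})"
}
```

Collapsing the argument lists this way keeps the helpers' signatures stable when a new connection property is added later.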
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16753
Hi @gatorsmile, could you take a look at this one please? (It might not have needed a JIRA, but one happened to be opened by someone.)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16747
I am OK with it, but I remember there were some discussions about whether this type should be exposed or not, and I could not track down the conclusion.
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16753
@srowen, I see. Let me give it a shot at making them consistent, to show whether it looks good.
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16735#discussion_r98661043
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -140,12 +137,21 @@ private[csv] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16043
(Ugh, that -9 again. Its cause is unknown, to my knowledge. I talked about this before.)
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16043
I am just interested in it :). Yes, this one looks unrelated again..
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16735#discussion_r98795601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVInferSchema.scala
---
@@ -140,12 +137,21 @@ private[csv] object
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16735
BTW, @sergey-rubtsov, could you check if we should add a type-widening rule
in `findTightestCommonType` between `DateType` and `TimestampType`?
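The question above — adding a widening rule between `DateType` and `TimestampType` to `findTightestCommonType` — would look roughly like the following standalone sketch (a toy model of the function, not Spark's actual implementation):

```scala
object WideningSketch {
  sealed trait DataType
  case object DateType extends DataType
  case object TimestampType extends DataType
  case object StringType extends DataType

  // A findTightestCommonType-style function: equal types stay as-is, a
  // Date/Timestamp pair widens to Timestamp (every date is representable
  // as a timestamp), and anything else has no tightest common type here.
  def findTightestCommonType(a: DataType, b: DataType): Option[DataType] =
    (a, b) match {
      case (x, y) if x == y => Some(x)
      case (DateType, TimestampType) | (TimestampType, DateType) =>
        Some(TimestampType)
      case _ => None
    }
}
```

With such a rule, a CSV column that contains both date-only and full timestamp values would infer as timestamp instead of falling back to string.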
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16753
@gatorsmile Thank you!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16753
Thank you @srowen and @gatorsmile!
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/8066
Do you mind if I ask why it was asked to be closed? (I am just purely
curious while looking through PRs and JIRAs).
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16756
@viirya, just per the discussion in
https://github.com/apache/spark/pull/16751, should we maybe add this change,
https://github.com/apache/spark/compare/master...HyukjinKwon:spark-PARQUET-363