Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r100365381
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -102,6 +102,27 @@ object functions {
Column(literalExpr
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r100366614
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -153,6 +154,12 @@ object Literal {
Literal
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r100395948
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -153,6 +154,12 @@ object Literal {
Literal
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r100398880
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -153,6 +154,12 @@ object Literal {
Literal
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16886
Looks great to me because Hive actually supports these types for `UDF`.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16886#discussion_r100540845
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala ---
@@ -218,22 +220,33 @@ private[hive] trait HiveInspectors
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16886#discussion_r100539896
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala ---
@@ -218,22 +220,33 @@ private[hive] trait HiveInspectors
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16886#discussion_r100547590
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala ---
@@ -218,22 +220,33 @@ private[hive] trait HiveInspectors
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/15928
@hvanhovell Could you find time to review this?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16886#discussion_r100549783
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala ---
@@ -218,22 +220,33 @@ private[hive] trait HiveInspectors
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15928#discussion_r100576613
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/execution/benchmark/HiveUDFsBenchmark.scala
---
@@ -0,0 +1,66 @@
+/*
+ * Licensed to the
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15945#discussion_r100663544
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -82,32 +81,14 @@ class QueryExecution(val sparkSession
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15945#discussion_r100663565
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/AggregateExec.scala
---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15945#discussion_r10053
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/MergePartialAggregate.scala
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15945#discussion_r10060
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/aggregate/AggregateExec.scala
---
@@ -0,0 +1,68 @@
+/*
+ * Licensed to the Apache
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15945#discussion_r100667006
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala ---
@@ -82,32 +81,14 @@ class QueryExecution(val sparkSession
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16733
yea, I think we do not need to handle this.
Either way, it'd be better to just add a check for the exception in tests?
```
intercept[NoSuchElementException] {
  a
```
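For context, a minimal ScalaTest-style sketch of the kind of exception check being suggested; the suite, the map, and the asserted message are illustrative assumptions, not code from this pr:
```scala
import org.scalatest.funsuite.AnyFunSuite

class NoSuchElementCheckSuite extends AnyFunSuite {
  test("accessing a missing key throws NoSuchElementException") {
    val m = Map("a" -> 1)
    // intercept returns the thrown exception so its message can be inspected.
    val e = intercept[NoSuchElementException] {
      m("missing")
    }
    assert(e.getMessage.contains("missing"))
  }
}
```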
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/16733
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16733
okay, I'll close this and the JIRA, thanks!
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/14038
@liancheng ping
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16605
@cloud-fan Could you give me more insights on this?
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/16928
[SPARK-18699][SQL] Fill NULL in a field when detecting a malformed token
## What changes were proposed in this pull request?
This pr adds logic to fill NULL when detecting malformed tokens in
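A rough sketch of the intended behaviour, not the pr's actual code: it assumes Spark 2.2+'s `csv(Dataset[String])` overload, permissive parse mode, and made-up data where `oops` cannot be read as an INT.
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[*]").appName("malformed-csv").getOrCreate()
import spark.implicits._

// The second field of the second line is a malformed token for IntegerType.
val csvLines = Seq("a,1", "b,oops").toDS()
val schema = new StructType().add("name", StringType).add("value", IntegerType)

val df = spark.read
  .schema(schema)
  .option("mode", "PERMISSIVE")
  .csv(csvLines)

// With the behaviour described above, the malformed field is expected to come
// back as NULL instead of the read failing or the whole row being dropped.
df.show()
```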
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@HyukjinKwon Could you check this and share any feedback before committers do?
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
Aha, looks good to me. Just a sec, and I'll modify the code.
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
I'm also still considering other ways to fix this...
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
Jenkins, retest this please.
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/16981
[SPARK-19637][SQL] Add from_json/to_json in FunctionRegistry
## What changes were proposed in this pull request?
This pr adds entries in `FunctionRegistry` and supports
`from_json`/`to_json
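Once registered, the functions become callable from SQL as well as from the DataFrame API. A minimal usage sketch, assuming a running `SparkSession` named `spark` and the DDL-style schema string accepted by later Spark versions:
```scala
// from_json: parse a JSON string into a struct column.
spark.sql("""SELECT from_json('{"a": 1}', 'a INT') AS parsed""").show()

// to_json: serialize a struct back into a JSON string.
spark.sql("""SELECT to_json(named_struct('a', 1)) AS json""").show()
```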
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16981
Jenkins, retest this please.
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16981
@HyukjinKwon Thanks! I'll check.
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@HyukjinKwon okay, thanks! I'll check soon
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r101895589
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -222,12 +250,6 @@ private[csv] class
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
I'll update in a day, thanks!
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16981
I'll update in a day, thanks!
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101946265
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonUtils.scala
---
@@ -55,4 +60,26 @@ object JacksonUtils
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16995
Could you add tests for this pr?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101947175
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonUtils.scala
---
@@ -55,4 +60,26 @@ object JacksonUtils
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16981
@gatorsmile okay, I'll do it soon
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101948229
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/json/JacksonUtils.scala
---
@@ -55,4 +60,24 @@ object JacksonUtils
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101950039
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -482,6 +482,15 @@ case class JsonTuple
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101950073
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/JsonFunctionsSuite.scala ---
@@ -174,4 +174,44 @@ class JsonFunctionsSuite extends QueryTest with
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16981#discussion_r101971506
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/jsonExpressions.scala
---
@@ -482,6 +482,15 @@ case class JsonTuple
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16610
ping
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r101977589
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -102,6 +102,27 @@ object functions {
Column(literalExpr
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16610#discussion_r101985694
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
@@ -102,6 +102,27 @@ object functions {
Column(literalExpr
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r101988141
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +214,30 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102008366
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -173,25 +188,22 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102016459
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -173,25 +188,22 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102016391
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -101,6 +101,11 @@ class CSVFileFormat extends
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102017077
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +214,30 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102029272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
---
@@ -95,6 +104,9 @@ private[csv] class CSVOptions
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@HyukjinKwon The current patch has slightly different behaviour between the csv
and json cases when `_corrupt_record` has a type other than `StringType`; in
the json case, it hits `requirement failed` and, in
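To make the JSON-side symptom concrete, a hypothetical repro; the schema, the sample data, and the use of the `json(Dataset[String])` overload are assumptions for illustration:
```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types._

val spark = SparkSession.builder().master("local[*]").appName("corrupt-record-type").getOrCreate()
import spark.implicits._

// Declare the corrupt-record column with a non-string type.
val badSchema = new StructType()
  .add("a", IntegerType)
  .add("_corrupt_record", IntegerType)  // the JSON reader expects StringType here

val jsonLines = Seq("""{"a": "not-an-int"}""").toDS()

// On the JSON path this is expected to trip an internal check and fail with
// "requirement failed", while the CSV path behaved differently at the time.
spark.read.schema(badSchema).json(jsonLines).collect()
```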
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102175210
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102175873
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102362652
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -303,8 +303,9 @@ def text(self, paths):
def csv(self, path, schema=None, sep=None, encoding=None
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@cloud-fan okay, so I'll put this pr on hold for now. Then I'll open a new pr
to fix the json behaviour.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102367846
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -147,8 +165,6 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102369500
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -96,31 +96,44 @@ class CSVFileFormat extends
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102372757
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17023
[SPARK-19695][SQL] Throw an exception if a `columnNameOfCorruptRecord`
field violates requirements
## What changes were proposed in this pull request?
This pr comes from #16928 and fixes a json
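The requirement being enforced is, roughly, that the corrupt-record field must be a nullable string. A sketch of that kind of validation; the helper name and the error message are hypothetical, not the pr's actual code:
```scala
import org.apache.spark.sql.types._

// Hypothetical helper: reject a user schema whose corrupt-record column is not
// a nullable StringType, instead of failing later with an obscure error.
def verifyCorruptRecordColumn(schema: StructType, columnNameOfCorruptRecord: String): Unit = {
  schema.getFieldIndex(columnNameOfCorruptRecord).foreach { idx =>
    val field = schema(idx)
    require(field.dataType == StringType && field.nullable,
      s"The field for corrupt records must be a nullable string type: $field")
  }
}
```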
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
@HyukjinKwon @cloud-fan okay, all tests passed. Also, I made a pr to fix
the json behaviour #17023.
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17023
@HyukjinKwon @cloud-fan could you check this? Thanks.
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16981
ping
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17028
[SPARK-19691][SQL] Fix ClassCastException when calculating percentile of
decimal column
## What changes were proposed in this pull request?
This pr fixes the class-cast exception below
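A hypothetical reproduction of the symptom, assuming a running `SparkSession` named `spark`; the view and column names are made up:
```scala
// Percentile over a DECIMAL column: before a fix like this, the aggregate
// could throw a ClassCastException when it assumed a non-decimal numeric type.
spark.sql("SELECT CAST(id AS DECIMAL(10, 2)) AS d FROM range(0, 100)")
  .createOrReplaceTempView("t")

spark.sql("SELECT percentile(d, 0.5) AS median FROM t").show()
```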
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102492855
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -138,7 +138,8 @@ case class Percentile
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102517336
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -138,7 +138,8 @@ case class Percentile
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
Just a sec, I'll apply @hvanhovell's suggestion...
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17023#discussion_r102624321
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JsonFileFormat.scala
---
@@ -102,6 +102,15 @@ class JsonFileFormat
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102638266
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -304,7 +304,8 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102643961
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -367,10 +368,18 @@ def csv(self, path, schema=None, sep=None,
encoding=None, quote=None, escape=Non
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102648114
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +45,41 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102648290
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +221,25 @@ private[csv] class
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
@HyukjinKwon @hvanhovell How about the latest fix?
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102653357
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102654066
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +221,25 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102656966
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -193,8 +193,9 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102657558
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16928
Jenkins, retest this please.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102661203
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -193,8 +193,9 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102665176
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -193,8 +193,9 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102666977
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,6 +45,14 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102667789
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102673907
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -274,7 +283,8 @@ case class Percentile
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102674655
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Percentile.scala
---
@@ -130,20 +130,30 @@ case class Percentile
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
Thanks for your review! I'll fix it now.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102676907
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/PercentileSuite.scala
---
@@ -39,44 +38,44 @@ class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17028#discussion_r102679493
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/aggregate/PercentileSuite.scala
---
@@ -39,44 +38,44 @@ class
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
Done. I'll wait for the tests to finish.
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102688956
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102701569
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,6 +45,14 @@ private[csv] class
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102734091
--- Diff: python/pyspark/sql/readwriter.py ---
@@ -193,8 +193,9 @@ def json(self, path, schema=None,
primitivesAsString=None, prefersDecimal=None
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102734548
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -202,21 +212,41 @@ private[csv] class
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
Thanks!
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17028
@hvanhovell okay, I'll open it soon.
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17046
[SPARK-19691][SQL][BRANCH-2.1] Fix ClassCastException when calculating
percentile of decimal column
## What changes were proposed in this pull request?
This is a backport of the two following
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17046
@hvanhovell okay, this is ready for review
Github user maropu closed the pull request at:
https://github.com/apache/spark/pull/17046
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17046
Thanks!
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/16410
[SPARK-19005][SQL] Keep column ordering when a schema is explicitly
specified
## What changes were proposed in this pull request?
This pr is to keep column ordering when a schema is explicitly
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16410
I'm looking into the failure...
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/16410
This fix changes some existing behaviour in the datasource.
For instance,
```
scala> sql("""CREATE TABLE testTable(a INT, b INT, c INT, d INT) USING
PARQUET PARTITIONED BY (b
```
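A hypothetical, completed version of that statement and the ordering effect it can have; the partition column list `(b, c)` is an assumption, since the original snippet is truncated:
```scala
// (b, c) is an assumed partition list; the snippet above is cut off.
spark.sql("""
  CREATE TABLE testTable(a INT, b INT, c INT, d INT)
  USING PARQUET
  PARTITIONED BY (b, c)
""")

// Partition columns of a datasource table are placed at the end of the
// resolved schema, so the visible column order can differ from the declared
// one (e.g. a, d, b, c instead of a, b, c, d).
spark.table("testTable").printSchema()
```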