Repository: spark
Updated Branches:
  refs/heads/master 57dc326bd -> 25a020be9


[SPARK-17583][SQL] Remove useless rowSeparator variable and set auto-expanding
buffer as default for maxCharsPerColumn option in CSV

## What changes were proposed in this pull request?

This PR includes the changes below:

1. Upgrade the Univocity library from 2.1.1 to 2.2.1

  This includes some performance improvements and enables support for an
auto-expanding buffer in the `maxCharsPerColumn` option in CSV. Please refer
to the [release notes](https://github.com/uniVocity/univocity-parsers/releases).
A minimal sketch of this behavior is shown after this list.

2. Remove the useless `rowSeparator` variable in `CSVOptions`

  We have this unused variable in
[CSVOptions.scala#L127](https://github.com/apache/spark/blob/29952ed096fd2a0a19079933ff691671d6f00835/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala#L127),
and it can cause confusion because it suggests `\r\n` is handled when it
actually is not. For example, there is an open issue about this,
[SPARK-17227](https://issues.apache.org/jira/browse/SPARK-17227), that
describes this variable.

  This variable is effectively unused because we rely on Hadoop's
`LineRecordReader`, which already handles both `\n` and `\r\n`.

3. Set the default value of `maxCharsPerColumn` to auto-expanding.

  We currently set 1000000 as the maximum length of each column. It'd be more
sensible to allow an auto-expanding buffer by default rather than a fixed
length.

  For reference, the use of `-1` is described in the release notes for
[2.2.0](https://github.com/uniVocity/univocity-parsers/releases/tag/v2.2.0). A
short usage sketch is shown after this list.
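
Below is a minimal, illustrative sketch (not part of this patch) of the
Univocity 2.2.x setting that Spark passes through; `-1` switches the parser
from a fixed per-column limit to an auto-expanding internal buffer:

```scala
import com.univocity.parsers.csv.{CsvParser, CsvParserSettings}

// A minimal sketch, assuming univocity-parsers 2.2.x on the classpath.
val settings = new CsvParserSettings()
settings.setMaxCharsPerColumn(-1) // -1: grow the buffer as needed per column
val parser = new CsvParser(settings)

// A value longer than the old fixed 1000000-character limit now parses fine.
val row: Array[String] = parser.parseLine("a," + ("b" * 2000000))
```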
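
Correspondingly, a hypothetical user-facing example (the input path is made
up) of overriding the new default from the DataFrame reader:

```scala
// A minimal sketch, assuming an existing SparkSession named `spark`.
val df = spark.read
  .option("header", "true")
  // Omit this option to get the new default of -1 (auto-expanding buffer,
  // i.e. effectively unlimited length); set it to restore the old cap.
  .option("maxCharsPerColumn", "1000000")
  .csv("/path/to/data.csv")
```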

## How was this patch tested?

N/A

Author: hyukjinkwon <gurwls...@gmail.com>

Closes #15138 from HyukjinKwon/SPARK-17583.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/25a020be
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/25a020be
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/25a020be

Branch: refs/heads/master
Commit: 25a020be99b6a540e4001e59e40d5d1c8aa53812
Parents: 57dc326
Author: hyukjinkwon <gurwls...@gmail.com>
Authored: Wed Sep 21 10:35:29 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Wed Sep 21 10:35:29 2016 +0100

----------------------------------------------------------------------
 dev/deps/spark-deps-hadoop-2.2                                   | 2 +-
 dev/deps/spark-deps-hadoop-2.3                                   | 2 +-
 dev/deps/spark-deps-hadoop-2.4                                   | 2 +-
 dev/deps/spark-deps-hadoop-2.6                                   | 2 +-
 dev/deps/spark-deps-hadoop-2.7                                   | 2 +-
 python/pyspark/sql/readwriter.py                                 | 2 +-
 python/pyspark/sql/streaming.py                                  | 2 +-
 sql/core/pom.xml                                                 | 2 +-
 .../src/main/scala/org/apache/spark/sql/DataFrameReader.scala    | 4 ++--
 .../apache/spark/sql/execution/datasources/csv/CSVOptions.scala  | 4 +---
 .../apache/spark/sql/execution/datasources/csv/CSVParser.scala   | 2 --
 .../scala/org/apache/spark/sql/streaming/DataStreamReader.scala  | 4 ++--
 12 files changed, 13 insertions(+), 17 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/dev/deps/spark-deps-hadoop-2.2
----------------------------------------------------------------------
diff --git a/dev/deps/spark-deps-hadoop-2.2 b/dev/deps/spark-deps-hadoop-2.2
index a7259e2..f4f92c6 100644
--- a/dev/deps/spark-deps-hadoop-2.2
+++ b/dev/deps/spark-deps-hadoop-2.2
@@ -159,7 +159,7 @@ stax-api-1.0.1.jar
 stream-2.7.0.jar
 stringtemplate-3.2.1.jar
 super-csv-2.2.0.jar
-univocity-parsers-2.1.1.jar
+univocity-parsers-2.2.1.jar
 validation-api-1.1.0.Final.jar
 xbean-asm5-shaded-4.4.jar
 xmlenc-0.52.jar

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/dev/deps/spark-deps-hadoop-2.3
----------------------------------------------------------------------
diff --git a/dev/deps/spark-deps-hadoop-2.3 b/dev/deps/spark-deps-hadoop-2.3
index 6986ab5..3db013f 100644
--- a/dev/deps/spark-deps-hadoop-2.3
+++ b/dev/deps/spark-deps-hadoop-2.3
@@ -167,7 +167,7 @@ stax-api-1.0.1.jar
 stream-2.7.0.jar
 stringtemplate-3.2.1.jar
 super-csv-2.2.0.jar
-univocity-parsers-2.1.1.jar
+univocity-parsers-2.2.1.jar
 validation-api-1.1.0.Final.jar
 xbean-asm5-shaded-4.4.jar
 xmlenc-0.52.jar

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/dev/deps/spark-deps-hadoop-2.4
----------------------------------------------------------------------
diff --git a/dev/deps/spark-deps-hadoop-2.4 b/dev/deps/spark-deps-hadoop-2.4
index 75cccb3..7171010 100644
--- a/dev/deps/spark-deps-hadoop-2.4
+++ b/dev/deps/spark-deps-hadoop-2.4
@@ -167,7 +167,7 @@ stax-api-1.0.1.jar
 stream-2.7.0.jar
 stringtemplate-3.2.1.jar
 super-csv-2.2.0.jar
-univocity-parsers-2.1.1.jar
+univocity-parsers-2.2.1.jar
 validation-api-1.1.0.Final.jar
 xbean-asm5-shaded-4.4.jar
 xmlenc-0.52.jar

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/dev/deps/spark-deps-hadoop-2.6
----------------------------------------------------------------------
diff --git a/dev/deps/spark-deps-hadoop-2.6 b/dev/deps/spark-deps-hadoop-2.6
index ef7b8a7..cb30fda 100644
--- a/dev/deps/spark-deps-hadoop-2.6
+++ b/dev/deps/spark-deps-hadoop-2.6
@@ -175,7 +175,7 @@ stax-api-1.0.1.jar
 stream-2.7.0.jar
 stringtemplate-3.2.1.jar
 super-csv-2.2.0.jar
-univocity-parsers-2.1.1.jar
+univocity-parsers-2.2.1.jar
 validation-api-1.1.0.Final.jar
 xbean-asm5-shaded-4.4.jar
 xercesImpl-2.9.1.jar

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/dev/deps/spark-deps-hadoop-2.7
----------------------------------------------------------------------
diff --git a/dev/deps/spark-deps-hadoop-2.7 b/dev/deps/spark-deps-hadoop-2.7
index 6356612..9008aa8 100644
--- a/dev/deps/spark-deps-hadoop-2.7
+++ b/dev/deps/spark-deps-hadoop-2.7
@@ -176,7 +176,7 @@ stax-api-1.0.1.jar
 stream-2.7.0.jar
 stringtemplate-3.2.1.jar
 super-csv-2.2.0.jar
-univocity-parsers-2.1.1.jar
+univocity-parsers-2.2.1.jar
 validation-api-1.1.0.Final.jar
 xbean-asm5-shaded-4.4.jar
 xercesImpl-2.9.1.jar

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/python/pyspark/sql/readwriter.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/readwriter.py b/python/pyspark/sql/readwriter.py
index a6860ef..3ad6f80 100644
--- a/python/pyspark/sql/readwriter.py
+++ b/python/pyspark/sql/readwriter.py
@@ -349,7 +349,7 @@ class DataFrameReader(OptionUtils):
                            set, it uses the default value, ``20480``.
         :param maxCharsPerColumn: defines the maximum number of characters allowed for any given
                                   value being read. If None is set, it uses the default value,
-                                  ``1000000``.
+                                  ``-1`` meaning unlimited length.
         :param maxMalformedLogPerPartition: sets the maximum number of malformed rows Spark will
                                             log for each partition. Malformed records beyond this
                                             number will be ignored. If None is set, it

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/python/pyspark/sql/streaming.py
----------------------------------------------------------------------
diff --git a/python/pyspark/sql/streaming.py b/python/pyspark/sql/streaming.py
index 0136451..cbd8279 100644
--- a/python/pyspark/sql/streaming.py
+++ b/python/pyspark/sql/streaming.py
@@ -517,7 +517,7 @@ class DataStreamReader(OptionUtils):
                            set, it uses the default value, ``20480``.
         :param maxCharsPerColumn: defines the maximum number of characters allowed for any given
                                   value being read. If None is set, it uses the default value,
-                                  ``1000000``.
+                                  ``-1`` meaning unlimited length.
         :param mode: allows a mode for dealing with corrupt records during parsing. If None is
                      set, it uses the default value, ``PERMISSIVE``.
 

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/sql/core/pom.xml
----------------------------------------------------------------------
diff --git a/sql/core/pom.xml b/sql/core/pom.xml
index b275263..84de1d4 100644
--- a/sql/core/pom.xml
+++ b/sql/core/pom.xml
@@ -38,7 +38,7 @@
     <dependency>
       <groupId>com.univocity</groupId>
       <artifactId>univocity-parsers</artifactId>
-      <version>2.1.1</version>
+      <version>2.2.1</version>
       <type>jar</type>
     </dependency>
     <dependency>

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
index 30f39c7..b10d2c8 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
@@ -392,8 +392,8 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    * `java.sql.Timestamp.valueOf()` and `java.sql.Date.valueOf()` or ISO 8601 format.</li>
    * <li>`maxColumns` (default `20480`): defines a hard limit of how many columns
    * a record can have.</li>
-   * <li>`maxCharsPerColumn` (default `1000000`): defines the maximum number of characters allowed
-   * for any given value being read.</li>
+   * <li>`maxCharsPerColumn` (default `-1`): defines the maximum number of characters allowed
+   * for any given value being read. By default, it is -1 meaning unlimited length</li>
    * <li>`maxMalformedLogPerPartition` (default `10`): sets the maximum number of malformed rows
    * Spark will log for each partition. Malformed records beyond this number will be ignored.</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
index 364d7c8..e7dcc22 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVOptions.scala
@@ -112,7 +112,7 @@ private[csv] class CSVOptions(@transient private val parameters: Map[String, Str
 
   val maxColumns = getInt("maxColumns", 20480)
 
-  val maxCharsPerColumn = getInt("maxCharsPerColumn", 1000000)
+  val maxCharsPerColumn = getInt("maxCharsPerColumn", -1)
 
   val escapeQuotes = getBool("escapeQuotes", true)
 
@@ -123,8 +123,6 @@ private[csv] class CSVOptions(@transient private val parameters: Map[String, Str
   val inputBufferSize = 128
 
   val isCommentSet = this.comment != '\u0000'
-
-  val rowSeparator = "\n"
 }
 
 object CSVOptions {

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala
index 64bdd6f..332f5c8 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVParser.scala
@@ -34,7 +34,6 @@ private[csv] class CsvReader(params: CSVOptions) {
     val settings = new CsvParserSettings()
     val format = settings.getFormat
     format.setDelimiter(params.delimiter)
-    format.setLineSeparator(params.rowSeparator)
     format.setQuote(params.quote)
     format.setQuoteEscape(params.escape)
     format.setComment(params.comment)
@@ -70,7 +69,6 @@ private[csv] class LineCsvWriter(params: CSVOptions, headers: Seq[String]) exten
   private val format = writerSettings.getFormat
 
   format.setDelimiter(params.delimiter)
-  format.setLineSeparator(params.rowSeparator)
   format.setQuote(params.quote)
   format.setQuoteEscape(params.escape)
   format.setComment(params.comment)

http://git-wip-us.apache.org/repos/asf/spark/blob/25a020be/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
index 9d17405..d437c16 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
@@ -247,8 +247,8 @@ final class DataStreamReader private[sql](sparkSession: SparkSession) extends Lo
    * `java.text.SimpleDateFormat`. This applies to timestamp type.</li>
    * <li>`maxColumns` (default `20480`): defines a hard limit of how many columns
    * a record can have.</li>
-   * <li>`maxCharsPerColumn` (default `1000000`): defines the maximum number of characters allowed
-   * for any given value being read.</li>
+   * <li>`maxCharsPerColumn` (default `-1`): defines the maximum number of characters allowed
+   * for any given value being read. By default, it is -1 meaning unlimited length</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records
    *    during parsing.
    *   <ul>

