spark git commit: [SPARK-17561][DOCS] DataFrameWriter documentation formatting problems

2016-09-16 Thread rxin
Repository: spark
Updated Branches:
  refs/heads/master dca771bec -> b9323fc93


[SPARK-17561][DOCS] DataFrameWriter documentation formatting problems

## What changes were proposed in this pull request?

Fix `<ul> / <li>` problems in SQL scaladoc.

## How was this patch tested?

Scaladoc build and manual verification of generated HTML.
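
For reference, one way to rebuild the API docs locally and inspect the rendered HTML (a sketch assuming a Spark source checkout; the exact command used for this patch isn't stated here):

  # from the root of the Spark repository
  ./build/sbt unidoc
  # then open the generated scaladoc, e.g.
  #   target/scala-2.11/unidoc/org/apache/spark/sql/DataFrameReader.html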

Author: Sean Owen <sowen@cloudera.com>

Closes #15117 from srowen/SPARK-17561.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b9323fc9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b9323fc9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b9323fc9

Branch: refs/heads/master
Commit: b9323fc9381a09af510f542fd5c86473e029caf6
Parents: dca771b
Author: Sean Owen <sowen@cloudera.com>
Authored: Fri Sep 16 13:43:05 2016 -0700
Committer: Reynold Xin <rxin@databricks.com>
Committed: Fri Sep 16 13:43:05 2016 -0700

----------------------------------------------------------------------
 .../org/apache/spark/sql/DataFrameReader.scala  | 32 +
 .../org/apache/spark/sql/DataFrameWriter.scala  | 12 +++
 .../spark/sql/streaming/DataStreamReader.scala  | 38 
 3 files changed, 53 insertions(+), 29 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/b9323fc9/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
index 93bf74d..d29d90c 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
@@ -269,14 +269,15 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    * <li>`allowBackslashEscapingAnyCharacter` (default `false`): allows accepting quoting of all
    * character using backslash quoting mechanism</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records
-   * during parsing.</li>
-   * <ul>
-   *   <li> - `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts
-   *  the malformed string into a new field configured by `columnNameOfCorruptRecord`. When
-   *  a schema is set by user, it sets `null` for extra fields.</li>
-   *   <li> - `DROPMALFORMED` : ignores the whole corrupted records.</li>
-   *   <li> - `FAILFAST` : throws an exception when it meets corrupted records.</li>
-   * </ul>
+   * during parsing.
+   *   <ul>
+   *     <li>`PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts
+   *     the malformed string into a new field configured by `columnNameOfCorruptRecord`. When
+   *     a schema is set by user, it sets `null` for extra fields.</li>
+   *     <li>`DROPMALFORMED` : ignores the whole corrupted records.</li>
+   *     <li>`FAILFAST` : throws an exception when it meets corrupted records.</li>
+   *   </ul>
+   * </li>
    * <li>`columnNameOfCorruptRecord` (default is the value specified in
    * `spark.sql.columnNameOfCorruptRecord`): allows renaming the new field having malformed string
    * created by `PERMISSIVE` mode. This overrides `spark.sql.columnNameOfCorruptRecord`.</li>
@@ -395,13 +396,14 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    * <li>`maxMalformedLogPerPartition` (default `10`): sets the maximum number of malformed rows
    * Spark will log for each partition. Malformed records beyond this number will be ignored.</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records
-   *    during parsing.</li>
-   * <ul>
-   *   <li> - `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record. When
-   *     a schema is set by user, it sets `null` for extra fields.</li>
-   *   <li> - `DROPMALFORMED` : ignores the whole corrupted records.</li>
-   *   <li> - `FAILFAST` : throws an exception when it meets corrupted records.</li>
-   * </ul>
+   *    during parsing.
+   *   <ul>
+   *     <li>`PERMISSIVE` : sets other fields to `null` when it meets a corrupted record. When
+   *       a schema is set by user, it sets `null` for extra fields.</li>
+   *     <li>`DROPMALFORMED` : ignores the whole corrupted records.</li>
+   *     <li>`FAILFAST` : throws an exception when it meets corrupted records.</li>
+   *   </ul>
+   * </li>
    * </ul>
    * @since 2.0.0
    */
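
The net effect of these hunks is that each parse-mode list is now nested inside its enclosing `<li>` (closed by the new `</li>` after `</ul>`), so scaladoc renders a proper sub-list instead of stray " - " bullets. For context, a minimal spark-shell sketch of the options being documented (the schema, corrupt-column name, and input path below are illustrative assumptions, not from this commit):

import org.apache.spark.sql.types.{StringType, StructField, StructType}

// PERMISSIVE (the default): a malformed JSON line yields null data columns,
// and the raw line is kept in the column named by columnNameOfCorruptRecord.
val schema = StructType(Seq(
  StructField("name", StringType),
  StructField("_corrupt", StringType)))

val df = spark.read
  .schema(schema)
  .option("mode", "PERMISSIVE")
  .option("columnNameOfCorruptRecord", "_corrupt")
  .json("/tmp/people.json") // hypothetical input path

df.show(truncate = false)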

http://git-wip-us.apache.org/repos/asf/spark/blob/b9323fc9/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
index c05c7a6..e137f07 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
@@ -397,7 +397,9 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {

spark git commit: [SPARK-17561][DOCS] DataFrameWriter documentation formatting problems

2016-09-17 Thread srowen
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 3ca0dc007 -> c9bd67e94


[SPARK-17561][DOCS] DataFrameWriter documentation formatting problems

Fix `<ul> / <li>` problems in SQL scaladoc.

Scaladoc build and manual verification of generated HTML.

Author: Sean Owen <sowen@cloudera.com>

Closes #15117 from srowen/SPARK-17561.

(cherry picked from commit b9323fc9381a09af510f542fd5c86473e029caf6)
Signed-off-by: Sean Owen <sowen@cloudera.com>


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c9bd67e9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c9bd67e9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c9bd67e9

Branch: refs/heads/branch-2.0
Commit: c9bd67e94d9d9d2e1f2cb1e5c4bb71a69b1e1d4e
Parents: 3ca0dc0
Author: Sean Owen <sowen@cloudera.com>
Authored: Fri Sep 16 13:43:05 2016 -0700
Committer: Sean Owen <sowen@cloudera.com>
Committed: Sat Sep 17 12:43:30 2016 +0100

----------------------------------------------------------------------
 .../org/apache/spark/sql/DataFrameReader.scala  | 32 +
 .../org/apache/spark/sql/DataFrameWriter.scala  | 10 ++
 .../spark/sql/streaming/DataStreamReader.scala  | 38 
 3 files changed, 51 insertions(+), 29 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/c9bd67e9/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
index 083c2e2..410cb20 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala
@@ -269,14 +269,15 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    * <li>`allowBackslashEscapingAnyCharacter` (default `false`): allows accepting quoting of all
    * character using backslash quoting mechanism</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records
-   * during parsing.</li>
-   * <ul>
-   *   <li> - `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts
-   *  the malformed string into a new field configured by `columnNameOfCorruptRecord`. When
-   *  a schema is set by user, it sets `null` for extra fields.</li>
-   *   <li> - `DROPMALFORMED` : ignores the whole corrupted records.</li>
-   *   <li> - `FAILFAST` : throws an exception when it meets corrupted records.</li>
-   * </ul>
+   * during parsing.
+   *   <ul>
+   *     <li>`PERMISSIVE` : sets other fields to `null` when it meets a corrupted record, and puts
+   *     the malformed string into a new field configured by `columnNameOfCorruptRecord`. When
+   *     a schema is set by user, it sets `null` for extra fields.</li>
+   *     <li>`DROPMALFORMED` : ignores the whole corrupted records.</li>
+   *     <li>`FAILFAST` : throws an exception when it meets corrupted records.</li>
+   *   </ul>
+   * </li>
    * <li>`columnNameOfCorruptRecord` (default is the value specified in
    * `spark.sql.columnNameOfCorruptRecord`): allows renaming the new field having malformed string
    * created by `PERMISSIVE` mode. This overrides `spark.sql.columnNameOfCorruptRecord`.</li>
@@ -396,13 +397,14 @@ class DataFrameReader private[sql](sparkSession: SparkSession) extends Logging {
    * <li>`maxMalformedLogPerPartition` (default `10`): sets the maximum number of malformed rows
    * Spark will log for each partition. Malformed records beyond this number will be ignored.</li>
    * <li>`mode` (default `PERMISSIVE`): allows a mode for dealing with corrupt records
-   *    during parsing.</li>
-   * <ul>
-   *   <li> - `PERMISSIVE` : sets other fields to `null` when it meets a corrupted record. When
-   *     a schema is set by user, it sets `null` for extra fields.</li>
-   *   <li> - `DROPMALFORMED` : ignores the whole corrupted records.</li>
-   *   <li> - `FAILFAST` : throws an exception when it meets corrupted records.</li>
-   * </ul>
+   *    during parsing.
+   *   <ul>
+   *     <li>`PERMISSIVE` : sets other fields to `null` when it meets a corrupted record. When
+   *       a schema is set by user, it sets `null` for extra fields.</li>
+   *     <li>`DROPMALFORMED` : ignores the whole corrupted records.</li>
+   *     <li>`FAILFAST` : throws an exception when it meets corrupted records.</li>
+   *   </ul>
+   * </li>
    * </ul>
    * @since 2.0.0
    */
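
Likewise for the csv modes in the hunk above, which differ in how they surface bad input. A hedged spark-shell sketch (assuming the predefined `spark` session; the input path is hypothetical):

// DROPMALFORMED: rows that fail to parse against the schema are skipped entirely.
val dropped = spark.read
  .option("header", "true")
  .option("mode", "DROPMALFORMED")
  .option("maxMalformedLogPerPartition", "10") // cap logged malformed rows per partition
  .csv("/tmp/data.csv") // hypothetical input path

// FAILFAST: the first malformed row raises an exception instead of being dropped.
val strict = spark.read.option("mode", "FAILFAST").csv("/tmp/data.csv")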

http://git-wip-us.apache.org/repos/asf/spark/blob/c9bd67e9/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
index 767af99..a4c4a5d 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala
@@ -449,6 +449,7 @@ final class DataFrameWriter[T] private[sql](ds: Dataset[T]) {