Repository: spark
Updated Branches:
  refs/heads/master 0e2405490 -> 3b4376876


[MINOR][BUILD] Fix javadoc8 break

## What changes were proposed in this pull request?

The errors below seem to be caused by unidoc, which does not understand a comment
block nested inside another comment block (a C-style `/* ... */` comment inside a
doc comment's code example).

```
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:69: error: class, interface, or enum expected
[error]  * MapGroupsWithStateFunction<String, Integer, Integer, String> mappingFunction =
[error]                                  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:69: error: class, interface, or enum expected
[error]  * MapGroupsWithStateFunction<String, Integer, Integer, String> mappingFunction =
[error]                                                                       ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:70: error: class, interface, or enum expected
[error]  *    new MapGroupsWithStateFunction<String, Integer, Integer, String>() {
[error]                                         ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:70: error: class, interface, or enum expected
[error]  *    new MapGroupsWithStateFunction<String, Integer, Integer, String>() {
[error]                                                                             ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:72: error: illegal character: '#'
[error]  *      @Override
[error]          ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:72: error: class, interface, or enum expected
[error]  *      @Override
[error]              ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:73: error: class, interface, or enum expected
[error]  *      public String call(String key, Iterator<Integer> value, KeyedState<Integer> state) {
[error]                ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:73: error: class, interface, or enum expected
[error]  *      public String call(String key, Iterator<Integer> value, KeyedState<Integer> state) {
[error]                                                    ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:73: error: class, interface, or enum expected
[error]  *      public String call(String key, Iterator<Integer> value, KeyedState<Integer> state) {
[error]                                                                ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:73: error: class, interface, or enum expected
[error]  *      public String call(String key, Iterator<Integer> value, KeyedState<Integer> state) {
[error]                                                                                     ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:73: error: class, interface, or enum expected
[error]  *      public String call(String key, Iterator<Integer> value, KeyedState<Integer> state) {
[error]                                                                                                 ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:76: error: class, interface, or enum expected
[error]  *          boolean shouldRemove = ...; // Decide whether to remove the state
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:77: error: class, interface, or enum expected
[error]  *          if (shouldRemove) {
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:79: error: class, interface, or enum expected
[error]  *          } else {
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:81: error: class, interface, or enum expected
[error]  *            state.update(newState); // Set the new state
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:82: error: class, interface, or enum expected
[error]  *          }
[error]  ^
[error] .../forked/spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:85: error: class, interface, or enum expected
[error]  *          state.update(initialState);
[error]  ^
[error] .../forked/spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:86: error: class, interface, or enum expected
[error]  *        }
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:90: error: class, interface, or enum expected
[error]  * </code></pre>
[error]  ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:92: error: class, interface, or enum expected
[error]  * tparam S User-defined type of the state to be stored for each key. Must be encodable into
[error]            ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:93: error: class, interface, or enum expected
[error]  *           Spark SQL types (see {link Encoder} for more details).
[error]                                          ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:94: error: class, interface, or enum expected
[error]  * since 2.1.1
[error]           ^
```
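As a concrete sketch of the pattern that trips unidoc, here is an abbreviated,
hypothetical version of the offending Scaladoc (loosely modelled on the real
`KeyedState` docs), first with the nested C-style comment that breaks javadoc8,
then with the line comment this commit switches to:

```scala
/**
 * Java example of using `KeyedState`:
 * {{{
 * /* A mapping function that maintains an integer state and returns a string. */
 * MapGroupsWithStateFunction<String, Integer, Integer, String> mappingFunction = ...
 * }}}
 */

// Scala allows the nested /* ... */ above, so scalac is happy, but the nested
// comment survives into the generated .java file, where javadoc8 mis-parses it.
// A line comment keeps the generated Javadoc valid:

/**
 * Java example of using `KeyedState`:
 * {{{
 * // A mapping function that maintains an integer state and returns a string.
 * MapGroupsWithStateFunction<String, Integer, Integer, String> mappingFunction = ...
 * }}}
 */
```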

In addition, another link reference does not seem to be resolved:

```
.../spark/sql/core/target/java/org/apache/spark/sql/KeyedState.java:16: error: reference not found
[error]  * That is, in every batch of the {link streaming.StreamingQuery StreamingQuery},
[error]
```
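The fix for this one is simply to drop the Scaladoc `[[...]]` wiki-link in favour
of plain monospace, so that no cross-reference has to resolve in the generated
Javadoc. Roughly (shown as doc-comment fragments, not compilable code):

```scala
// Before: unidoc turns the wiki link into a Javadoc link it cannot resolve
 * That is, in every batch of the [[streaming.StreamingQuery StreamingQuery]],

// After: plain monospace, nothing to resolve
 * That is, in every batch of the `streaming.StreamingQuery`,
```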

Note that this PR does not fix the two breaks below:

```
[error] .../spark/sql/core/target/java/org/apache/spark/sql/DataFrameStatFunctions.java:43: error: unexpected content
[error]    * see {link DataFrameStatsFunctions.approxQuantile(col:Str* approxQuantile} for
[error]      ^
[error] .../spark/sql/core/target/java/org/apache/spark/sql/DataFrameStatFunctions.java:52: error: bad use of '>'
[error]    * param relativeError The relative target precision to achieve (>= 0).
[error]                                                                     ^
[error]
```

because these will probably be fixed soon in
https://github.com/apache/spark/pull/16776, and I wanted to avoid potential
conflicts.
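For completeness, the usual Javadoc-side remedy for the `bad use of '>'` break
(left here to the PR above) is to escape the bare `>`, since javadoc parses doc
text as HTML. A hypothetical sketch, not the actual change made in that PR:

```java
/**
 * {@literal ...} makes javadoc render the enclosed text verbatim instead of
 * parsing it as HTML, so the bare '>' no longer looks like a stray tag.
 *
 * @param relativeError The relative target precision to achieve ({@literal >=} 0).
 */
```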

## How was this patch tested?

Manually via `jekyll build`

Author: hyukjinkwon <gurwls...@gmail.com>

Closes #16926 from HyukjinKwon/javadoc-break.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3b437687
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3b437687
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3b437687

Branch: refs/heads/master
Commit: 3b4376876fabf7df4bd245dcf755222f4fe5f190
Parents: 0e24054
Author: hyukjinkwon <gurwls...@gmail.com>
Authored: Thu Feb 16 12:35:43 2017 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Thu Feb 16 12:35:43 2017 +0000

----------------------------------------------------------------------
 .../src/main/scala/org/apache/spark/sql/KeyedState.scala     | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/3b437687/sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala
----------------------------------------------------------------------
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala b/sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala
index 6864b6f..71efa43 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/KeyedState.scala
@@ -17,8 +17,6 @@
 
 package org.apache.spark.sql
 
-import java.lang.IllegalArgumentException
-
 import org.apache.spark.annotation.{Experimental, InterfaceStability}
 import org.apache.spark.sql.catalyst.plans.logical.LogicalKeyedState
 
@@ -36,7 +34,7 @@ import org.apache.spark.sql.catalyst.plans.logical.LogicalKeyedState
  * `Dataset.groupByKey()`) while maintaining user-defined per-group state between invocations.
  * For a static batch Dataset, the function will be invoked once per group. For a streaming
  * Dataset, the function will be invoked for each group repeatedly in every trigger.
- * That is, in every batch of the [[streaming.StreamingQuery StreamingQuery]],
+ * That is, in every batch of the `streaming.StreamingQuery`,
  * the function will be invoked once for each group that has data in the batch.
  *
  * The function is invoked with following parameters.
@@ -65,7 +63,7 @@ import org.apache.spark.sql.catalyst.plans.logical.LogicalKeyedState
  *
  * Scala example of using KeyedState in `mapGroupsWithState`:
  * {{{
- * /* A mapping function that maintains an integer state for string keys and returns a string. */
+ * // A mapping function that maintains an integer state for string keys and returns a string.
  * def mappingFunction(key: String, value: Iterator[Int], state: KeyedState[Int]): String = {
  *   // Check if state exists
  *   if (state.exists) {
@@ -88,7 +86,7 @@ import org.apache.spark.sql.catalyst.plans.logical.LogicalKeyedState
  *
  * Java example of using `KeyedState`:
  * {{{
- * /* A mapping function that maintains an integer state for string keys and returns a string. */
+ * // A mapping function that maintains an integer state for string keys and returns a string.
  * MapGroupsWithStateFunction<String, Integer, Integer, String> mappingFunction =
  *    new MapGroupsWithStateFunction<String, Integer, Integer, String>() {
  *

