spark git commit: [SPARK-16485][DOC][ML] Fixed several inline formatting in ml features doc
Repository: spark
Updated Branches:
  refs/heads/branch-2.0 d9bd066b9 -> f0d05f669

[SPARK-16485][DOC][ML] Fixed several inline formatting in ml features doc

## What changes were proposed in this pull request?

Fixed several instances of inline formatting in the ml features doc.

Before: https://cloud.githubusercontent.com/assets/717363/16827974/1e1b6e04-49be-11e6-8aa9-4a0cb6cd3b4e.png
After: https://cloud.githubusercontent.com/assets/717363/16827976/2576510a-49be-11e6-96dd-92a1fa464d36.png

## How was this patch tested?

Generated the docs locally with `SKIP_API=1 jekyll build` and viewed them in the browser.

Author: Shuai Lin

Closes #14194 from lins05/fix-docs-formatting.

(cherry picked from commit 3b6e1d094e153599e158331b10d33d74a667be5a)
Signed-off-by: Sean Owen

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f0d05f66
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f0d05f66
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f0d05f66

Branch: refs/heads/branch-2.0
Commit: f0d05f669b4e7be017d8d0cfba33c3a61a1eef8f
Parents: d9bd066
Author: Shuai Lin
Authored: Mon Jul 25 20:26:55 2016 +0100
Committer: Sean Owen
Committed: Mon Jul 25 20:27:04 2016 +0100

----------------------------------------------------------------------
 docs/ml-features.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/f0d05f66/docs/ml-features.md

diff --git a/docs/ml-features.md b/docs/ml-features.md
index e7d7ddf..6020114 100644
--- a/docs/ml-features.md
+++ b/docs/ml-features.md
@@ -216,7 +216,7 @@ for more details on the API.
 [RegexTokenizer](api/scala/index.html#org.apache.spark.ml.feature.RegexTokenizer) allows more
  advanced tokenization based on regular expression (regex) matching.
- By default, the parameter "pattern" (regex, default: \\s+) is used as delimiters to split the input text.
+ By default, the parameter "pattern" (regex, default: `"\\s+"`) is used as delimiters to split the input text.
 Alternatively, users can set parameter "gaps" to false indicating the regex "pattern" denotes
 "tokens" rather than splitting gaps, and find all matching occurrences as the tokenization result.
@@ -815,7 +815,7 @@ The rescaled value for a feature E is calculated as,
 `\begin{equation}
   Rescaled(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}} * (max - min) + min
 \end{equation}`
-For the case `E_{max} == E_{min}`, `Rescaled(e_i) = 0.5 * (max + min)`
+For the case `$E_{max} == E_{min}$`, `$Rescaled(e_i) = 0.5 * (max + min)$`
 Note that since zero values will probably be transformed to non-zero values, output of the
 transformer will be `DenseVector` even for sparse input.

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
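The two RegexTokenizer modes described in the first diff hunk (splitting on the pattern as a delimiter by default, versus matching the pattern as the tokens themselves when "gaps" is false) can be sketched outside Spark with Python's `re` module. This is an illustrative analogue of the documented behavior, not Spark's implementation:

```python
import re

def tokenize(text, pattern=r"\s+", gaps=True):
    """Mimic RegexTokenizer's two modes: with gaps=True the pattern is a
    delimiter (the default "\\s+" splits on whitespace); with gaps=False
    the pattern matches the tokens and all occurrences are returned."""
    if gaps:
        # Split on the pattern, dropping empty strings at the edges.
        return [t for t in re.split(pattern, text) if t]
    # gaps=False: every occurrence of the pattern is a token.
    return re.findall(pattern, text)

# Default mode: whitespace acts as the delimiter.
print(tokenize("Hi I heard about Spark"))
# Token-matching mode: the pattern describes the tokens themselves.
print(tokenize("Logistic,regression,models", pattern=r"\w+", gaps=False))
```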
spark git commit: [SPARK-16485][DOC][ML] Fixed several inline formatting in ml features doc
Repository: spark
Updated Branches:
  refs/heads/master 978cd5f12 -> 3b6e1d094

[SPARK-16485][DOC][ML] Fixed several inline formatting in ml features doc

## What changes were proposed in this pull request?

Fixed several instances of inline formatting in the ml features doc.

Before: https://cloud.githubusercontent.com/assets/717363/16827974/1e1b6e04-49be-11e6-8aa9-4a0cb6cd3b4e.png
After: https://cloud.githubusercontent.com/assets/717363/16827976/2576510a-49be-11e6-96dd-92a1fa464d36.png

## How was this patch tested?

Generated the docs locally with `SKIP_API=1 jekyll build` and viewed them in the browser.

Author: Shuai Lin

Closes #14194 from lins05/fix-docs-formatting.

Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/3b6e1d09
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/3b6e1d09
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/3b6e1d09

Branch: refs/heads/master
Commit: 3b6e1d094e153599e158331b10d33d74a667be5a
Parents: 978cd5f
Author: Shuai Lin
Authored: Mon Jul 25 20:26:55 2016 +0100
Committer: Sean Owen
Committed: Mon Jul 25 20:26:55 2016 +0100

----------------------------------------------------------------------
 docs/ml-features.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/spark/blob/3b6e1d09/docs/ml-features.md

diff --git a/docs/ml-features.md b/docs/ml-features.md
index e7d7ddf..6020114 100644
--- a/docs/ml-features.md
+++ b/docs/ml-features.md
@@ -216,7 +216,7 @@ for more details on the API.
 [RegexTokenizer](api/scala/index.html#org.apache.spark.ml.feature.RegexTokenizer) allows more
  advanced tokenization based on regular expression (regex) matching.
- By default, the parameter "pattern" (regex, default: \\s+) is used as delimiters to split the input text.
+ By default, the parameter "pattern" (regex, default: `"\\s+"`) is used as delimiters to split the input text.
 Alternatively, users can set parameter "gaps" to false indicating the regex "pattern" denotes
 "tokens" rather than splitting gaps, and find all matching occurrences as the tokenization result.
@@ -815,7 +815,7 @@ The rescaled value for a feature E is calculated as,
 `\begin{equation}
   Rescaled(e_i) = \frac{e_i - E_{min}}{E_{max} - E_{min}} * (max - min) + min
 \end{equation}`
-For the case `E_{max} == E_{min}`, `Rescaled(e_i) = 0.5 * (max + min)`
+For the case `$E_{max} == E_{min}$`, `$Rescaled(e_i) = 0.5 * (max + min)$`
 Note that since zero values will probably be transformed to non-zero values, output of the
 transformer will be `DenseVector` even for sparse input.
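The rescaling formula the second diff hunk wraps in `$...$` (MinMaxScaler's equation, including the degenerate `E_max == E_min` case) can be checked numerically. A minimal sketch in plain Python, not Spark's MinMaxScaler; `lo`/`hi` stand in for the target-range `min`/`max` in the formula:

```python
def rescale(e_i, e_min, e_max, lo=0.0, hi=1.0):
    """Rescaled(e_i) = (e_i - E_min) / (E_max - E_min) * (max - min) + min,
    with the constant-feature case E_max == E_min mapping every value
    to the midpoint 0.5 * (max + min)."""
    if e_max == e_min:
        return 0.5 * (hi + lo)
    return (e_i - e_min) / (e_max - e_min) * (hi - lo) + lo

# A value in the middle of an observed [2, 10] range lands in the
# middle of the default [0, 1] target range.
print(rescale(6.0, 2.0, 10.0))
```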