This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
     new 1fc9833  [MINOR][DOCS] Fix [[...]] to `...` and <code>...</code> in documentation
1fc9833 is described below

commit 1fc98336cfc8390139f2548a1f496d40a6a7f784
Author: HyukjinKwon <gurwls...@apache.org>
AuthorDate: Fri Mar 13 16:44:23 2020 -0700

    [MINOR][DOCS] Fix [[...]] to `...` and <code>...</code> in documentation
    
    ### What changes were proposed in this pull request?
    
    Before:
    
    - ![Screen Shot 2020-03-13 at 1 19 12 PM](https://user-images.githubusercontent.com/6477701/76589452-7c34f300-652d-11ea-9da7-3754f8575796.png)
    - ![Screen Shot 2020-03-13 at 1 19 24 PM](https://user-images.githubusercontent.com/6477701/76589455-7d662000-652d-11ea-9dbe-f5fe10d1e7ad.png)
    - ![Screen Shot 2020-03-13 at 1 19 03 PM](https://user-images.githubusercontent.com/6477701/76589449-7b03c600-652d-11ea-8e99-dbe47f561f9c.png)
    
    After:
    
    - ![Screen Shot 2020-03-13 at 1 17 37 PM](https://user-images.githubusercontent.com/6477701/76589437-74754e80-652d-11ea-99f5-14fb4761f915.png)
    - ![Screen Shot 2020-03-13 at 1 17 46 PM](https://user-images.githubusercontent.com/6477701/76589442-76d7a880-652d-11ea-8c10-53e595421081.png)
    - ![Screen Shot 2020-03-13 at 1 18 15 PM](https://user-images.githubusercontent.com/6477701/76589443-7808d580-652d-11ea-9b1b-e5d11d638335.png)
    
    ### Why are the changes needed?
    To render the code blocks properly in the documentation.
    
    ### Does this PR introduce any user-facing change?
    Yes, the affected code references now render properly in the documentation.
    
    ### How was this patch tested?
    
    Manually built the doc via `SKIP_API=1 jekyll build`.
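    For reference, a typical invocation (assuming the standard Spark docs layout, where Jekyll runs from the `docs/` directory):

    ```
    cd docs
    SKIP_API=1 jekyll build   # SKIP_API=1 skips API doc generation; only the Markdown pages are built
    ```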
    
    Closes #27899 from HyukjinKwon/minor-docss.
    
    Authored-by: HyukjinKwon <gurwls...@apache.org>
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
    (cherry picked from commit 9628aca68ba0821b8f3fa934ed4872cabb2a5d7d)
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
---
 docs/monitoring.md  | 6 +++---
 docs/quick-start.md | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/monitoring.md b/docs/monitoring.md
index 4cba15b..ba3f1dc 100644
--- a/docs/monitoring.md
+++ b/docs/monitoring.md
@@ -595,7 +595,7 @@ A list of the available metrics, with a short description:
   </tr>
   <tr>
     <td>inputMetrics.*</td>
-    <td>Metrics related to reading data from [[org.apache.spark.rdd.HadoopRDD]]
+    <td>Metrics related to reading data from <code>org.apache.spark.rdd.HadoopRDD</code>
     or from persisted data.</td>
   </tr>
   <tr>
@@ -779,11 +779,11 @@ A list of the available metrics, with a short description:
   </tr>
   <tr>
     <td>&nbsp;&nbsp;&nbsp;&nbsp;.DirectPoolMemory</td>
-    <td>Peak memory that the JVM is using for direct buffer pool ([[java.lang.management.BufferPoolMXBean]])</td>
+    <td>Peak memory that the JVM is using for direct buffer pool (<code>java.lang.management.BufferPoolMXBean</code>)</td>
   </tr>
   <tr>
     <td>&nbsp;&nbsp;&nbsp;&nbsp;.MappedPoolMemory</td>
-    <td>Peak memory that the JVM is using for mapped buffer pool ([[java.lang.management.BufferPoolMXBean]])</td>
+    <td>Peak memory that the JVM is using for mapped buffer pool (<code>java.lang.management.BufferPoolMXBean</code>)</td>
   </tr>
   <tr>
     <td>&nbsp;&nbsp;&nbsp;&nbsp;.ProcessTreeJVMVMemory</td>
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 86ba2c4..e7a16a3 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -264,7 +264,7 @@ Spark README. Note that you'll need to replace YOUR_SPARK_HOME with the location
 installed. Unlike the earlier examples with the Spark shell, which initializes its own SparkSession,
 we initialize a SparkSession as part of the program.
 
-We call `SparkSession.builder` to construct a [[SparkSession]], then set the application name, and finally call `getOrCreate` to get the [[SparkSession]] instance.
+We call `SparkSession.builder` to construct a `SparkSession`, then set the application name, and finally call `getOrCreate` to get the `SparkSession` instance.
 
 Our application depends on the Spark API, so we'll also include an sbt configuration file,
 `build.sbt`, which explains that Spark is a dependency. This file also adds a repository that
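
For context on the `.DirectPoolMemory` and `.MappedPoolMemory` rows touched above, here is a minimal sketch (the object name `BufferPoolPeek` is illustrative, not part of Spark) of how the direct and mapped buffer pools backing those metrics can be read through the standard JDK `java.lang.management.BufferPoolMXBean` API:

```scala
import java.lang.management.{BufferPoolMXBean, ManagementFactory}
import scala.collection.JavaConverters._

object BufferPoolPeek {
  def main(args: Array[String]): Unit = {
    // The JVM exposes one BufferPoolMXBean per pool; "direct" and "mapped"
    // are the pools behind the .DirectPoolMemory and .MappedPoolMemory metrics.
    val pools = ManagementFactory.getPlatformMXBeans(classOf[BufferPoolMXBean]).asScala
    for (pool <- pools) {
      println(s"${pool.getName}: used=${pool.getMemoryUsed} bytes, capacity=${pool.getTotalCapacity} bytes")
    }
  }
}
```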

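Likewise, for the quick-start paragraph rewritten above, a minimal, self-contained sketch of the builder pattern it describes; `SimpleApp` and the README path are placeholders, mirroring the quick-start guide itself:

```scala
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // Construct the session via the builder, set the application name, and
    // call getOrCreate, which returns an existing session or creates one.
    val spark = SparkSession.builder
      .appName("Simple Application")
      .getOrCreate()
    try {
      // YOUR_SPARK_HOME is a placeholder path, as in the quick-start guide.
      val logData = spark.read.textFile("YOUR_SPARK_HOME/README.md")
      println(s"README has ${logData.count()} lines")
    } finally {
      spark.stop()
    }
  }
}
```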

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
