[ https://issues.apache.org/jira/browse/SPARK-18437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-18437:
---------------------------------
    Description: 
It seems in Scala/Java,

- {{Note:}}

- {{NOTE:}}

- {{Note that}}

- {{'''Note:'''}}

are used in a mixed way. The last one seems correct, as Scaladoc renders it 
properly as markup; for example, it shows up nicely formatted here[1].

Also, it seems some {{'''Note:'''}}s are misplaced[2]: the note reads as if it 
were a {{Note:}} for the last argument, while I believe it was meant to apply 
to the API as a whole.
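
To make the markup difference concrete, here is a minimal, hypothetical 
Scaladoc sketch (not actual Spark code; the object and method names are made 
up) contrasting the plain prefix with the bold-markup form:

{code:scala}
// Hypothetical example for illustration only; not taken from the Spark source.
object NoteMarkupExample {

  /**
   * Squares a number.
   *
   * Note: written this way, the prefix is plain text and does not stand out
   * in the generated Scaladoc page.
   */
  def squarePlain(x: Int): Int = x * x

  /**
   * Squares a number.
   *
   * '''Note:''' the triple quotes are Scaladoc bold (wiki) markup, so the
   * prefix is rendered in bold and reads as a proper note in the generated
   * page.
   */
  def squareBold(x: Int): Int = x * x
}
{code}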

For Python, 

- {{Note:}}

- {{NOTE:}}

- {{Note that}}

- {{.. note:}}

Here too, I believe the last one renders properly[3], unlike the 
others[4][5][6].

For R, it seems there are also:

- {{Note:}}

- {{NOTE:}}

- {{Note that}}

- {{@note}}

In the case of R, usage seems pretty consistent: {{@note}} only records when 
the function was introduced, such as {{locate since 1.5.0}}, without any other 
information[7]. So I am not too sure about this one.

It would be nicer if they were consistent, at least for Scala/Python/Java.

[1]http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext@hadoopFile[K,V,F<:org.apache.hadoop.mapred.InputFormat[K,V]](path:String)(implicitkm:scala.reflect.ClassTag[K],implicitvm:scala.reflect.ClassTag[V],implicitfm:scala.reflect.ClassTag[F]):org.apache.spark.rdd.RDD[(K,V)]
[2]http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext@hadoopRDD[K,V](conf:org.apache.hadoop.mapred.JobConf,inputFormatClass:Class[_<:org.apache.hadoop.mapred.InputFormat[K,V]],keyClass:Class[K],valueClass:Class[V],minPartitions:Int):org.apache.spark.rdd.RDD[(K,V)]
[3]http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.describe
[4]http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.date_format
[5]http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.grouping_id
[6]http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrame.head
[7]http://spark.apache.org/docs/latest/api/R/index.html


> Inconsistent mark-down for `Note:` across Scala/Java/R/Python in API 
> documentations
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-18437
>                 URL: https://issues.apache.org/jira/browse/SPARK-18437
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation
>    Affects Versions: 2.0.1
>            Reporter: Hyukjin Kwon
>


