GitHub user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/931#discussion_r13268434
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala ---
    @@ -17,54 +17,123 @@
     
     package org.apache.spark.rdd
     
    +import java.util.Comparator
    +
    +import scala.collection.mutable.ArrayBuffer
     import scala.reflect.ClassTag
     
    -import org.apache.spark.{Logging, RangePartitioner}
    +import org.apache.spark.{Logging, RangePartitioner, SparkEnv}
    +import org.apache.spark.util.collection.{ExternalAppendOnlyMap, AppendOnlyMap}
     
     /**
    - * Extra functions available on RDDs of (key, value) pairs where the key is sortable through
    - * an implicit conversion. Import `org.apache.spark.SparkContext._` at the top of your program to
    - * use these functions. They will work with any key type `K` that has an implicit `Ordering[K]` in
    - * scope.  Ordering objects already exist for all of the standard primitive types.  Users can also
    - * define their own orderings for custom types, or to override the default ordering.  The implicit
    - * ordering that is in the closest scope will be used.
    - *
    - * {{{
    - *   import org.apache.spark.SparkContext._
    - *
    - *   val rdd: RDD[(String, Int)] = ...
    - *   implicit val caseInsensitiveOrdering = new Ordering[String] {
    - *     override def compare(a: String, b: String) = a.toLowerCase.compare(b.toLowerCase)
    - *   }
    - *
    - *   // Sort by key, using the above case insensitive ordering.
    - *   rdd.sortByKey()
    - * }}}
    - */
    +  * Extra functions available on RDDs of (key, value) pairs where the key is sortable through
    +  * an implicit conversion. Import `org.apache.spark.SparkContext._` at the top of your program to
    +  * use these functions. They will work with any key type `K` that has an implicit `Ordering[K]` in
    +  * scope.  Ordering objects already exist for all of the standard primitive types.  Users can also
    +  * define their own orderings for custom types, or to override the default ordering.  The implicit
    +  * ordering that is in the closest scope will be used.
    +  *
    +  * {{{
    +  *   import org.apache.spark.SparkContext._
    +  *
    +  *   val rdd: RDD[(String, Int)] = ...
    +  *   implicit val caseInsensitiveOrdering = new Ordering[String] {
    +  *     override def compare(a: String, b: String) = a.toLowerCase.compare(b.toLowerCase)
    +  *   }
    +  *
    +  *   // Sort by key, using the above case insensitive ordering.
    +  *   rdd.sortByKey()
    +  * }}}
    +  */
    --- End diff --
    
    It looks like your IDE changed the style of the comments here. Please leave them as they were originally. Our style in Spark is not the default Scala one; it's this:
    ```
    /**
     * aaa
     * bbb
     */
    ```
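    For contrast, the style the IDE introduced in the diff above is Scala's default Scaladoc convention, with the asterisks indented two spaces:
    ```
    /**
      * aaa
      * bbb
      */
    ```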


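    On the Scaladoc example itself: a minimal, self-contained version of the case-insensitive `sortByKey()` snippet might look like the sketch below. The `local[2]` master, app name, and sample data are illustrative assumptions, not part of this PR:
    ```
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // brings sortByKey onto pair RDDs via implicits

    object CaseInsensitiveSort {
      def main(args: Array[String]): Unit = {
        // Local context for illustration only (assumed configuration).
        val sc = new SparkContext(new SparkConf().setAppName("sort-example").setMaster("local[2]"))

        val rdd = sc.parallelize(Seq(("Banana", 1), ("apple", 2), ("Cherry", 3)))

        // sortByKey() resolves the implicit Ordering in the closest scope,
        // so this ordering overrides the default String ordering.
        implicit val caseInsensitiveOrdering: Ordering[String] = new Ordering[String] {
          override def compare(a: String, b: String): Int =
            a.toLowerCase.compareTo(b.toLowerCase)
        }

        // Prints (apple,2), (Banana,1), (Cherry,3) rather than uppercase-first order.
        rdd.sortByKey().collect().foreach(println)

        sc.stop()
      }
    }
    ```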