Github user erikerlandson commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2455#discussion_r19636555
  
    --- Diff: core/src/main/scala/org/apache/spark/util/random/RandomSampler.scala ---
    @@ -38,13 +41,45 @@ trait RandomSampler[T, U] extends Pseudorandom with Cloneable with Serializable
       /** take a random sample */
       def sample(items: Iterator[T]): Iterator[U]
     
    +  /** return a copy of the RandomSampler object */
       override def clone: RandomSampler[T, U] =
         throw new NotImplementedError("clone() is not implemented.")
     }
     
    +private [spark]
    +object RandomSampler {
    +  /** Default random number generator used by random samplers. */
    +  def newDefaultRNG: Random = new XORShiftRandom
    +
    +  /**
    +   * Default gap sampling maximum.
    +   * For sampling fractions <= this value, the gap sampling optimization will be applied.
    +   * Above this value, it is assumed that "traditional" Bernoulli sampling is faster.  The
    +   * optimal value for this will depend on the RNG.  More expensive RNGs will tend to make
    +   * the optimal value higher.  The most reliable way to determine this value for a given RNG
    +   * is to experiment.  I would expect a value of 0.5 to be close in most cases.
    --- End diff --
    
    0.5 is what I recommend as an initial guess when using a new RNG. (0.4 is the value I arrived at by experimenting with the current RNG.)
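    
    For anyone following along, here is a minimal, self-contained sketch of the strategy switch this threshold governs. It is in the spirit of the PR but not its actual code; the object name `GapSamplingSketch`, the constant `maxGapSamplingFraction`, and the method names are illustrative placeholders, and `scala.util.Random` stands in for Spark's `XORShiftRandom`:
    
        import scala.util.Random
    
        object GapSamplingSketch {
          // Illustrative cutoff only: ~0.4 was measured for the current RNG,
          // 0.5 is a reasonable starting guess for an untested one.
          val maxGapSamplingFraction = 0.4
    
          /** "Traditional" Bernoulli sampling: one uniform draw per input element. */
          def bernoulliSample[T](items: Iterator[T], f: Double, rng: Random): Iterator[T] =
            items.filter(_ => rng.nextDouble() < f)
    
          /**
           * Gap sampling: draw the number of elements to skip before the next
           * accepted one from a geometric distribution, so only about f * n
           * uniform draws are needed instead of n.
           */
          def gapSample[T](items: Iterator[T], f: Double, rng: Random): Iterator[T] = {
            require(f > 0.0 && f < 1.0, "sampling fraction must be in (0, 1)")
            val lnQ = math.log(1.0 - f)
            new Iterator[T] {
              private var nextItem: Option[T] = None
    
              private def advance(): Unit = {
                // geometric gap: P(skip = k) = (1 - f)^k * f, for k = 0, 1, 2, ...
                var skip = (math.log(1.0 - rng.nextDouble()) / lnQ).toInt
                while (skip > 0 && items.hasNext) { items.next(); skip -= 1 }
                nextItem = if (items.hasNext) Some(items.next()) else None
              }
    
              advance()  // position on the first accepted element
    
              override def hasNext: Boolean = nextItem.isDefined
              override def next(): T = {
                val out = nextItem.getOrElse(throw new NoSuchElementException("empty iterator"))
                advance()
                out
              }
            }
          }
    
          /** Pick a strategy from the sampling fraction, as the threshold suggests. */
          def sample[T](items: Iterator[T], f: Double, rng: Random = new Random): Iterator[T] =
            if (f <= maxGapSamplingFraction) gapSample(items, f, rng)
            else bernoulliSample(items, f, rng)
        }
    
        // Example: sample roughly 10% of a million integers.
        // GapSamplingSketch.sample(Iterator.range(0, 1000000), 0.1).length
    
    Both strategies produce a Bernoulli sample with acceptance probability f; the gap-sampling path just spends fewer RNG draws, which is why the crossover point depends on how expensive the RNG is.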

