Github user andrewor14 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8764#discussion_r39485102
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/local/SampleNode.scala ---
    @@ -51,18 +50,15 @@ case class SampleNode(
     
       override def open(): Unit = {
         child.open()
    -    val (sampler, _seed) = if (withReplacement) {
    -        val random = new Random(seed)
    +    val sampler =
    +      if (withReplacement) {
             // Disable gap sampling since the gap sampling method buffers two rows internally,
             // requiring us to copy the row, which is more expensive than the random number generator.
    -        (new PoissonSampler[InternalRow](upperBound - lowerBound, useGapSamplingIfPossible = false),
    -          // Use the seed for partition 0 like PartitionwiseSampledRDD to generate the same result
    -          // of DataFrame
    -          random.nextLong())
    --- End diff --
    
    @zsxwing I had to remove this to make the tests deterministic. Looking at this further, I still don't see the point of introducing another layer of randomness here. What change in behavior does this entail?

