Github user lucio-yz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20472#discussion_r171160692
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/tree/impl/RandomForest.scala ---
    @@ -1001,11 +996,18 @@ private[spark] object RandomForest extends Logging {
         } else {
           val numSplits = metadata.numSplits(featureIndex)
     
    -      // get count for each distinct value
    -      val (valueCountMap, numSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
    +      // get count for each distinct value except zero value
    +      val (partValueCountMap, partNumSamples) = featureSamples.foldLeft((Map.empty[Double, Int], 0)) {
             case ((m, cnt), x) =>
               (m + ((x, m.getOrElse(x, 0) + 1)), cnt + 1)
           }
    +
    +      // Calculate the number of samples for finding splits
    +      val numSamples: Int = (samplesFractionForFindSplits(metadata) * metadata.numExamples).toInt
    --- End diff ---
    
    I have read the note on the _sample_ function: _sample_ does not guarantee
    to return exactly the requested fraction of the given RDD's count. Requiring
    _numSamples - partNumSamples_ to be non-negative seems a more efficient
    choice than triggering a _count_. The degree of approximation here depends
    on the degree of approximation of _sample_ itself, so the splits are certain
    to be somewhat inaccurate.

