Github user squito commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11105#discussion_r86484995
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/ShuffledRDD.scala ---
    @@ -104,10 +105,26 @@ class ShuffledRDD[K: ClassTag, V: ClassTag, C: ClassTag](
       }
     
       override def compute(split: Partition, context: TaskContext): Iterator[(K, C)] = {
    +    // Use -1 for our Shuffle ID since we are on the read side of the shuffle.
    +    val shuffleWriteId = -1
    +    // If our task has data property accumulators we need to keep track of which partitions
    +    // we are processing.
    +    if (context.taskMetrics.hasDataPropertyAccumulators()) {
    +      context.setRDDPartitionInfo(id, shuffleWriteId, split.index)
    +    }
         val dep = dependencies.head.asInstanceOf[ShuffleDependency[K, V, C]]
    -    SparkEnv.get.shuffleManager.getReader(dep.shuffleHandle, split.index, split.index + 1, context)
    +    val itr = SparkEnv.get.shuffleManager.getReader(dep.shuffleHandle, split.index, split.index + 1,
    +      context)
    --- End diff --
    
    man, github formatting for these comments is super weird.  It doesn't show up in the comments view; I only saw it in an email notification.
    
    Shuffle-read operations (aka reducers) fetch a bunch of blocks from the mappers and then merge those blocks together.  But the mappers have already sorted the data, so really this only needs to be a merge of a bunch of sorted streams.  It happens to do this now by just sorting the whole thing.  But there isn't any reason the implementation couldn't change -- it could do the merge on the fly instead, pulling just the next key from each stream.  It would run the combiner for all the records with the same key, and then push that one record out through the iterator.
    
    If it changed to do that, this would break.  I think you are right that it's covered by a test case (another comment on that below), so that's fine.  But I think it's worth expanding your comment to say that this is based on the *assumption* that the shuffle reader always processes the entire input, running all combiners, before returning an iterator.
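    To make the concern concrete, here is a rough sketch (not Spark's actual code; `mergeStreams` and `combine` are illustrative names) of what an on-the-fly merge of already-sorted mapper streams could look like -- note how the combiner for one key finishes before the record is emitted, so any per-partition bookkeeping done only at the start would still be live mid-iteration:
    
    ```scala
    // Hypothetical sketch: merge already-sorted per-mapper streams and run the
    // combiner one key at a time, instead of sorting the whole input first.
    object MergeSketch {
      def mergeStreams[K: Ordering, V](
          streams: Seq[Iterator[(K, V)]],
          combine: (V, V) => V): Iterator[(K, V)] = {
        val ord = implicitly[Ordering[K]]
        // Keep a buffered view of each stream so we can peek at the next key.
        val heads = streams.map(_.buffered)
        new Iterator[(K, V)] {
          def hasNext: Boolean = heads.exists(_.hasNext)
          def next(): (K, V) = {
            // Smallest key currently at the head of any stream.
            val minKey = heads.filter(_.hasNext).map(_.head._1).min
            var acc: Option[V] = None
            // Drain every record with that key from every stream, combining as
            // we go; only then is the single combined record pushed out.
            for (s <- heads) {
              while (s.hasNext && ord.equiv(s.head._1, minKey)) {
                val v = s.next()._2
                acc = Some(acc.fold(v)(combine(_, v)))
              }
            }
            (minKey, acc.get)
          }
        }
      }
    }
    ```
    
    With a lazy merge like this, records are produced incrementally as the caller pulls on the iterator, which is exactly why the "entire input is processed before the iterator is returned" assumption deserves an explicit comment.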


