GitHub user mateiz commented on the pull request:

    https://github.com/apache/spark/pull/1499#issuecomment-50297730
  
    Are map tasks spilling by any chance? There is one issue here right now: if your map task spills to disk, the sort-based shuffle may have to spill multiple times, whereas the old code would directly open one file per reduce task and write data straight into it. I think we can add the same behavior here when the number of reduce tasks is small enough (the problem before was that you'd open 10,000 files for 10,000 reduce tasks, allocate lots of space for buffers, and do lots of random I/O). I would do that in a separate pull request, though. For other operations, like reduceByKey, this should have similar performance to the hash-based shuffle.
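    To make the trade-off concrete, here is a minimal, self-contained Scala sketch, not Spark's actual shuffle code: names like `numReducers`, `maxBuffered`, and the file layout are illustrative assumptions. It contrasts the two write paths described above: the hash-based path keeps one open file and buffer per reduce task, while the sort-based path buffers records in memory and produces one spill file per overflow, which later have to be merged.

```scala
import java.io.{BufferedOutputStream, DataOutputStream, File, FileOutputStream}
import scala.collection.mutable.ArrayBuffer

// Illustrative sketch only -- not Spark's implementation.
object ShuffleWriteSketch {
  type Record = (Int, Array[Byte]) // (key, serialized value)

  // Hash partitioning by key, in the spirit of Spark's HashPartitioner.
  def partition(key: Int, numReducers: Int): Int =
    Math.floorMod(key, numReducers)

  // Hash-based write path: one open file (and one write buffer) per reduce
  // task. Each record goes straight to its partition's file, so nothing is
  // ever re-written, but a map task holds R open streams and R buffers.
  def hashShuffleWrite(records: Iterator[Record], numReducers: Int, dir: File): Seq[File] = {
    val files = (0 until numReducers).map(r => new File(dir, s"output-r$r"))
    val writers = files.map(f =>
      new DataOutputStream(new BufferedOutputStream(new FileOutputStream(f))))
    try {
      for ((key, value) <- records) {
        val w = writers(partition(key, numReducers))
        w.writeInt(key); w.writeInt(value.length); w.write(value)
      }
    } finally writers.foreach(_.close())
    files
  }

  // Sort-based write path: buffer records in memory and, whenever the
  // buffer fills, sort by partition id and spill to a new file. A map task
  // that overflows memory k times produces k spill files, which must later
  // be merge-sorted into the single final partitioned output file.
  def sortShuffleWrite(records: Iterator[Record], numReducers: Int,
                       maxBuffered: Int, dir: File): Seq[File] = {
    val buffer = new ArrayBuffer[Record]()
    val spills = new ArrayBuffer[File]()

    def spill(): Unit = {
      val f = new File(dir, s"spill-${spills.size}")
      val sorted = buffer.sortBy { case (key, _) => partition(key, numReducers) }
      val out = new DataOutputStream(new BufferedOutputStream(new FileOutputStream(f)))
      try sorted.foreach { case (k, v) => out.writeInt(k); out.writeInt(v.length); out.write(v) }
      finally out.close()
      spills += f
      buffer.clear()
    }

    for (rec <- records) {
      buffer += rec
      if (buffer.size >= maxBuffered) spill() // each memory overflow = one more spill file
    }
    if (buffer.nonEmpty) spill()
    spills.toSeq // real code would merge these into one output file
  }
}
```

    With 10,000 reducers, the hash path holds 10,000 open streams and their buffers at once, which is exactly the cost described above; the sort path caps memory at one buffer but pays for it with extra spill files and a merge pass when the map task overflows.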

