Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18482
Sure, I will update the document soon.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18482
On second thought, I think we don't need this PR. We can disable
`spark.reducer.maxReqSizeShuffleToMem` by default. Let's just document that this
configuration will break the old shuffle service and the us
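The suggestion above (disabling the threshold by default) might look like the following `spark-defaults.conf` fragment. This is only a sketch based on this thread: the config key is taken from the comment above, and the assumption is that a very large threshold means shuffle blocks are never spilled to disk during fetch, which keeps the client compatible with old shuffle services.

```
# spark-defaults.conf (sketch): set the threshold to Long.MaxValue so that
# fetch-to-disk is effectively disabled by default.
spark.reducer.maxReqSizeShuffleToMem   9223372036854775807
```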
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18482
Very gentle ping @zsxwing, what do you think about this idea?
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/18482
In the current change, it is fetching the big chunk into memory, then writing it to
disk, and then releasing the memory. I made this change for the reasons below:
1. The client shouldn't break old shuffle servic
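The flow described above (fetch into memory, write to disk, release the buffer) can be sketched as follows. This is a hypothetical illustration, not Spark's actual code: `fetch_block` stands in for a network fetch, and `threshold` plays the role of the size limit discussed in this thread.

```python
import os
import tempfile


def fetch_block(size):
    # Hypothetical stand-in for fetching one shuffle block over the network.
    return b"x" * size


def fetch_then_spill(size, threshold):
    """Sketch of the described flow: the block is always fetched into
    memory first; if it exceeds the threshold it is written to disk and
    the in-memory buffer is released."""
    buf = fetch_block(size)
    if size <= threshold:
        # Small block: keep it in memory.
        return ("memory", buf)
    # Big block: spill to a temp file, then drop the in-memory copy.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(buf)
    buf = None  # release the memory after spilling
    return ("disk", path)
```

Note the trade-off the reviewers are raising: even when the block ends up on disk, it transiently occupies memory, since it is fetched in full before being spilled.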
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/18482
does this mean we have to fetch big chunks into memory and then write them to
disk?