I have observed the same issue as well.

On Fri, Feb 6, 2015 at 12:19 AM, Praveen Garg <praveen.g...@guavus.com>
wrote:

>  Hi,
>
>  While moving from Spark 1.1 to Spark 1.2, we are facing an issue where
> shuffle read/write has increased significantly. We also tried running
> the job after rolling back to the Spark 1.1 configuration, setting
> spark.shuffle.manager to hash and spark.shuffle.blockTransferService to
> nio. That improved performance a bit, but it was still much worse than
> Spark 1.1. The scenario seems similar to a bug raised some time back:
> https://issues.apache.org/jira/browse/SPARK-5081.
> Has anyone come across a similar issue? Please let us know if any
> configuration change can help.
>
>  Regards, Praveen
>
>
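
For reference, here is a minimal sketch of the rollback described above, assuming
the settings are applied programmatically on a SparkConf (the same keys can also
be placed in spark-defaults.conf or passed with --conf on spark-submit). The
object and application names below are hypothetical, for illustration only:

    import org.apache.spark.{SparkConf, SparkContext}

    object ShuffleRollbackExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("shuffle-rollback-test")                // hypothetical app name
          .set("spark.shuffle.manager", "hash")               // Spark 1.1 default; 1.2 switched to "sort"
          .set("spark.shuffle.blockTransferService", "nio")   // Spark 1.1 default; 1.2 switched to "netty"
        val sc = new SparkContext(conf)
        // ... run the shuffle-heavy job here ...
        sc.stop()
      }
    }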
