Github user MJFND commented on the issue:
https://github.com/apache/spark/pull/14658
Okay, but even if not, increasing the number of shuffle partitions should fix it, yet it does not.
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/14658
Spark 2.2 has fixed this issue.
Github user MJFND commented on the issue:
https://github.com/apache/spark/pull/14658
If "Remote Shuffle Blocks cannot be more than 2 GB" then setting up
spark.sql.shuffle.partitions=value, where value should be such that it has 2gb
per executor, like for 200GB of data, we can have
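For illustration, a minimal sketch of that sizing rule. The 200 GB figure and the 4x safety margin are assumptions for this example, and `spark` is assumed to be an existing SparkSession (as in spark-shell):

```scala
// Minimal sketch: pick spark.sql.shuffle.partitions so that
// totalShuffleBytes / numPartitions stays under the 2 GB block limit.
// The 200 GB input size and 4x margin below are assumed values.
val totalShuffleBytes = 200L * 1024 * 1024 * 1024 // ~200 GB of shuffle data (assumed)
val maxBlockBytes     = 2L * 1024 * 1024 * 1024   // 2 GB shuffle block limit
val minPartitions     = (totalShuffleBytes / maxBlockBytes).toInt // = 100
val partitions        = minPartitions * 4         // leave headroom for skew

// `spark` is provided by spark-shell; in an application, build a SparkSession first.
spark.conf.set("spark.sql.shuffle.partitions", partitions.toString)
```

Note that this is only a lower bound per partition on average; skewed keys can still push a single shuffle block past 2 GB, which is why a margin above the bare minimum is used.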
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14658
Merged build finished. Test PASSed.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14658
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63823/
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14658
**[Test build #63823 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63823/consoleFull)** for PR 14658 at commit