GitHub user liyuance opened a pull request: https://github.com/apache/spark/pull/12835
[SPARK-3190][GraphX] Fix VertexRDD.count overflow on large graphs

As [SPARK-3190] and https://github.com/apache/spark/pull/2106 describe, VertexRDDs with more than 4 billion elements are counted incorrectly due to integer overflow when summing partition sizes. That earlier PR addressed the issue by converting each partition size to a Long before summing. However, the bug can still be reproduced when the number of vertices in a single partition exceeds Integer.MAX_VALUE. The fundamental cause is that the variable `size` is declared as type Int in class VertexPartitionBase; a sketch of both failure modes follows the commit log below.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/liyuance/spark graphx-VertexRDD-count-exceed

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/12835.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #12835

----

commit eba89be2633cfc7b0b3104c71ffd544c0f2db38b
Author: liyuance <liyua...@gmail.com>
Date:   2016-05-02T06:03:13Z

    fix VertexRDD.count exceed on large graph

----
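For context, here is a minimal, self-contained Scala sketch of the two failure modes described above. It is not the PR's actual diff; the object and value names are made up for illustration, and the partition sizes are hypothetical.

    // Minimal sketch (not Spark's code): two ways an Int-based count overflows.
    object VertexCountOverflow {
      def main(args: Array[String]): Unit = {
        // Failure mode 1: each partition size fits in an Int, but the
        // Int sum wraps past Integer.MAX_VALUE (fixed by PR #2106).
        val partitionSizes: Seq[Int] = Seq.fill(3)(1500000000)  // 4.5 billion total
        val intSum: Int   = partitionSizes.sum                  // wraps to 205032704
        val longSum: Long = partitionSizes.map(_.toLong).sum    // 4500000000, correct

        // Failure mode 2: a single partition holds more than
        // Integer.MAX_VALUE vertices, so an Int `size` field cannot even
        // represent it -- hence this PR widens `size` to Long in
        // VertexPartitionBase.
        val oneHugePartition: Long = 3000000000L  // not representable as Int

        println(s"Int sum (overflowed): $intSum")
        println(s"Long sum (correct):   $longSum")
        println(s"Single-partition size needing Long: $oneHugePartition")
      }
    }

Widening to Long at the summation site only fixes the first mode; the second requires the per-partition size itself to be a Long, which is what this PR changes.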