[
https://issues.apache.org/jira/browse/BEAM-7413?focusedWorklogId=255706&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255706
]
ASF GitHub Bot logged work on BEAM-7413:
----------------------------------------
Author: ASF GitHub Bot
Created on: 07/Jun/19 07:43
Start Date: 07/Jun/19 07:43
Worklog Time Spent: 10m
Work Description: pbackx commented on pull request #8785: [BEAM-7413]
Fixes bug in partitioning of GroupNonMergingWindowsFunctions
URL: https://github.com/apache/beam/pull/8785
This is the same PR as #8743, but now applied to master.
---
We run our Beam pipelines on Spark. Since upgrading from Beam 2.8.0 to
2.12.0, we have seen a very high number of tasks per stage (in the millions).
The two relevant tickets that have introduced this behavior are:
https://issues.apache.org/jira/browse/BEAM-5392
https://issues.apache.org/jira/browse/BEAM-4783
Both are good changes that should remain, no question there.
The underlying issue is the HashPartitioner used in the new
GroupNonMergingWindowsFunctions#groupByKeyAndWindow method. The partitioner
that used to be there reduced the number of partitions to the parallelism
that was set when the pipeline was started. The partitioner that is there now
no longer does this.
The fix: I have added the partitioner as an extra argument, so we can supply
the same partitioner throughout the pipeline. Because the partitioner can be
null, I have added a fallback to the partitioner that was used before:
```
private static <K, V> Partitioner getPartitioner(
    Partitioner partitioner, JavaRDD<WindowedValue<KV<K, V>>> rdd) {
  // Fall back to the pre-2.12.0 behavior when no partitioner is supplied.
  return partitioner == null
      ? new HashPartitioner(rdd.getNumPartitions())
      : partitioner;
}
```
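For context, here is a standalone sketch in plain Spark (not Beam runner code;
the data, master setting, and partition counts are arbitrary assumptions) that
illustrates the mechanism: combining RDDs grows the partition count, and
partitioning by a fixed-size HashPartitioner brings it back to the configured
parallelism.

```
import java.util.Arrays;

import org.apache.spark.HashPartitioner;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class PartitionGrowthDemo {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setMaster("local[4]").setAppName("partition-growth");
    try (JavaSparkContext sc = new JavaSparkContext(conf)) {
      JavaPairRDD<String, Integer> a =
          sc.parallelizePairs(Arrays.asList(new Tuple2<>("k1", 1), new Tuple2<>("k2", 2)), 4);
      JavaPairRDD<String, Integer> b =
          sc.parallelizePairs(Arrays.asList(new Tuple2<>("k1", 3), new Tuple2<>("k3", 4)), 4);

      // Union of RDDs without a common partitioner concatenates their
      // partitions: 4 + 4 = 8. Repeated joins/unions keep compounding this.
      JavaPairRDD<String, Integer> unioned = a.union(b);
      System.out.println("after union: " + unioned.getNumPartitions()); // 8

      // Partitioning with a fixed-size HashPartitioner restores the
      // configured parallelism, which is what the fallback above preserves.
      JavaPairRDD<String, Integer> repartitioned = unioned.partitionBy(new HashPartitioner(4));
      System.out.println("after partitionBy: " + repartitioned.getNumPartitions()); // 4
    }
  }
}
```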
------------------------
Thank you for your contribution! Follow this checklist to help us
incorporate your contribution quickly and easily:
- [x] [**Choose
reviewer(s)**](https://beam.apache.org/contribute/#make-your-change) and
mention them in a comment (`R: @username`).
- [x] Format the pull request title like `[BEAM-XXX] Fixes bug in
ApproximateQuantiles`, where you replace `BEAM-XXX` with the appropriate JIRA
issue, if applicable. This will automatically link the pull request to the
issue.
- [ ] If this contribution is large, please file an Apache [Individual
Contributor License Agreement](https://www.apache.org/licenses/icla.pdf).
Issue Time Tracking
-------------------
Worklog Id: (was: 255706)
Time Spent: 1h 10m (was: 1h)
> Huge amount of tasks per stage in Spark runner after upgrade to Beam 2.12.0
> ---------------------------------------------------------------------------
>
> Key: BEAM-7413
> URL: https://issues.apache.org/jira/browse/BEAM-7413
> Project: Beam
> Issue Type: Bug
> Components: runner-spark
> Affects Versions: 2.12.0
> Reporter: Peter Backx
> Assignee: Peter Backx
> Priority: Major
> Fix For: 2.14.0
>
> Time Spent: 1h 10m
> Remaining Estimate: 0h
>
> After upgrading from Beam 2.8.0 to 2.12.0 we see a huge number of tasks per
> stage in our pipelines. Where we used to see a few thousand tasks per stage
> at most, it is now in the millions. This makes the pipeline unable to
> complete successfully (the driver and network are overloaded).
> It looks like after each (Co)GroupByKey operation, the number of tasks (per
> stage) at least doubles, sometimes more.
> I did notice a fix to GroupByKey (BEAM-5392) that may or may not be related.
> I cannot post the full pipeline, but we have created a small test to showcase
> the effect:
> [https://github.com/pbackx/beam-groupbykey-test]
>
> [https://github.com/pbackx/beam-groupbykey-test/blob/master/src/test/java/NumTaskTest.java]
> contains two tests:
> * One shows how we would usually join PCollections together; if you run it,
> you'll see the number of tasks gradually increase.
> * The other uses a GroupIntoBatches operation after each join. The effect is
> that the number of tasks no longer increases. (Reshuffle has a similar
> effect, but it's deprecated...)
> We've now sprinkled GroupIntoBatches throughout our pipeline and this seems
> to avoid the issue, but at a performance cost (to be honest, this effect is
> much worse in the toy example than in our "real" pipeline).
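> A minimal sketch of this workaround (the class name, key/value types, and
> batch size are illustrative assumptions, not from our actual pipeline):
> ```
> import org.apache.beam.sdk.transforms.GroupIntoBatches;
> import org.apache.beam.sdk.values.KV;
> import org.apache.beam.sdk.values.PCollection;
>
> class StabilizePartitions {
>   // Hypothetical helper: re-group each join's output into batches so the
>   // runner inserts a shuffle and the task count stops compounding.
>   static PCollection<KV<String, Iterable<Long>>> stabilize(
>       PCollection<KV<String, Long>> joined) {
>     // The batch size (1000) is an arbitrary assumption; tune per workload.
>     return joined.apply("StabilizePartitions", GroupIntoBatches.ofSize(1000));
>   }
> }
> ```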
> My questions:
> * Is this a bug or is this expected behavior?
> * Is GroupIntoBatches the best workaround, or are there better options?
>