[
https://issues.apache.org/jira/browse/BEAM-10475?focusedWorklogId=503817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-503817
]
ASF GitHub Bot logged work on BEAM-10475:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 22/Oct/20 17:23
Start Date: 22/Oct/20 17:23
Worklog Time Spent: 10m
Work Description: boyuanzz commented on a change in pull request #13144:
URL: https://github.com/apache/beam/pull/13144#discussion_r509687809
##########
File path: sdks/python/apache_beam/transforms/util.py
##########
@@ -751,24 +751,42 @@ class GroupIntoBatches(PTransform):
GroupIntoBatches is experimental. Whether it can be used depends on
whether the runner supports States and Timers.
"""
- def __init__(self, batch_size):
+ def __init__(
+ self, batch_size, max_buffering_duration_secs=None, clock=time.time):
"""Create a new GroupIntoBatches with batch size.
Arguments:
batch_size: (required) How many elements should be in a batch
+ max_buffering_duration_secs: (optional) The maximum number of seconds an
+ incomplete batch of elements is allowed to be buffered in state. Must be
+ a positive value, given as an int or float.
+ clock: (optional) an alternative to time.time (mostly for testing)
"""
self.batch_size = batch_size
+ if max_buffering_duration_secs is not None:
+ assert max_buffering_duration_secs > 0, \
Review comment:
You can use parentheses to avoid using `\` as a line continuation.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 503817)
Time Spent: 14h (was: 13h 50m)
> GroupIntoBatches with Runner-determined Sharding
> ------------------------------------------------
>
> Key: BEAM-10475
> URL: https://issues.apache.org/jira/browse/BEAM-10475
> Project: Beam
> Issue Type: Improvement
> Components: runner-dataflow
> Reporter: Siyuan Chen
> Assignee: Siyuan Chen
> Priority: P2
> Labels: GCP, performance
> Time Spent: 14h
> Remaining Estimate: 0h
>
> [https://s.apache.org/sharded-group-into-batches]
> Improve the existing Beam transform, GroupIntoBatches, to allow runners to
> choose different sharding strategies depending on how the data needs to be
> grouped. The goal is to help with the situation where the elements to process
> need to be co-located to reduce the overhead that would otherwise be incurred
> per element, while not losing the ability to scale the parallelism. The
> essential idea is to build a stateful DoFn with shardable states.
>
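To make the semantics in the description concrete, here is a minimal, runner-free sketch of the buffering behavior the diff adds: a batch is emitted when it reaches `batch_size`, or when the oldest buffered element has waited longer than `max_buffering_duration_secs`. This is a toy model for illustration only, not the Beam stateful-DoFn implementation; the injectable `clock` mirrors the `clock=time.time` parameter in the diff.

```python
import time


class BatchBuffer:
    """Toy model of GroupIntoBatches flushing rules (not the Beam code)."""

    def __init__(self, batch_size, max_buffering_duration_secs=None,
                 clock=time.time):
        self.batch_size = batch_size
        self.max_buffering_duration_secs = max_buffering_duration_secs
        self.clock = clock
        self._buffer = []
        self._first_element_time = None

    def add(self, element):
        """Buffer an element; return a flushed batch, or None."""
        if not self._buffer:
            # Remember when the current (incomplete) batch started.
            self._first_element_time = self.clock()
        self._buffer.append(element)
        if len(self._buffer) >= self.batch_size:
            return self._flush()
        if (self.max_buffering_duration_secs is not None and
            self.clock() - self._first_element_time >=
                self.max_buffering_duration_secs):
            # The oldest element has been buffered too long: flush early.
            return self._flush()
        return None

    def _flush(self):
        batch, self._buffer = self._buffer, []
        return batch
```

Passing a fake clock (as the diff's `clock` argument allows) makes the time-based flush deterministic to test, which is presumably why the parameter exists "mostly for testing".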
--
This message was sent by Atlassian Jira
(v8.3.4#803005)