[
https://issues.apache.org/jira/browse/BEAM-10475?focusedWorklogId=501694&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-501694
]
ASF GitHub Bot logged work on BEAM-10475:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 16/Oct/20 22:01
Start Date: 16/Oct/20 22:01
Worklog Time Spent: 10m
Work Description: robertwb commented on a change in pull request #13069:
URL: https://github.com/apache/beam/pull/13069#discussion_r506739408
##########
File path: sdks/python/apache_beam/utils/sharded_key.py
##########
@@ -0,0 +1,62 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# pytype: skip-file
+
+from __future__ import absolute_import
+
+
+class ShardedKey(object):
+ """
+ A sharded key consisting of a user key and an opaque shard id represented by
+ bytes.
+
+ Attributes:
+ key: The user key.
+ shard_id: An opaque byte string that uniquely represents a shard of the
+ key.
+ """
+ def __init__(
+ self,
+ key,
+ shard_id=b'', # type: bytes
Review comment:
Any reason not to make this required?
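For context, a variant with `shard_id` required (a sketch of the reviewer's suggestion, not the merged code) could look like:

```python
class ShardedKey(object):
    """Sketch: ShardedKey with a required shard_id, per the review suggestion."""
    def __init__(self, key, shard_id):
        # type: (object, bytes) -> None
        # Requiring the argument removes the b'' default; an explicit type
        # check (illustrative, not in the original) also guards against
        # accidentally passing a str instead of bytes.
        if not isinstance(shard_id, bytes):
            raise TypeError('shard_id must be bytes, got %r' % type(shard_id))
        self._key = key
        self._shard_id = shard_id

    @property
    def key(self):
        return self._key
```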
##########
File path: sdks/python/apache_beam/utils/sharded_key.py
##########
@@ -0,0 +1,62 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+# pytype: skip-file
+
+from __future__ import absolute_import
+
+
+class ShardedKey(object):
+ """
+ A sharded key consisting of a user key and an opaque shard id represented by
+ bytes.
+
+ Attributes:
+ key: The user key.
+ shard_id: An opaque byte string that uniquely represents a shard of the
+ key.
+ """
+ def __init__(
+ self,
+ key,
+ shard_id=b'', # type: bytes
+ ):
+ # type: (...) -> None
+ assert shard_id is not None
+ self._key = key
+ self._shard_id = shard_id
+
+ @property
+ def key(self):
+ return self._key
+
+ def __repr__(self):
+ return '(%s, %s)' % (repr(self.key), self._shard_id)
+
+ def __eq__(self, other):
+ return (
+ type(self) == type(other) and self.key == other.key and
+ self._shard_id == other._shard_id)
+
+ def __ne__(self, other):
+ # TODO(BEAM-5949): Needed for Python 2 compatibility.
Review comment:
We don't need Python 2 compatibility anymore.
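In Python 3, defining `__eq__` is enough: `__ne__` defaults to the negation of `__eq__`, so the explicit override can be deleted outright. A minimal illustration (a hypothetical class mirroring the pattern, not the Beam source):

```python
class Key3(object):
    # Python 3 derives __ne__ from __eq__ automatically, so the explicit
    # __ne__ override that Python 2 required can simply be removed.
    def __init__(self, key, shard_id=b''):
        self._key = key
        self._shard_id = shard_id

    def __eq__(self, other):
        return (
            type(self) == type(other) and self._key == other._key and
            self._shard_id == other._shard_id)

    def __hash__(self):
        # Defining __eq__ suppresses the inherited __hash__, so restore it
        # explicitly to keep instances usable as dict keys.
        return hash((self._key, self._shard_id))
```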
##########
File path: sdks/python/apache_beam/coders/coder_impl.py
##########
@@ -1365,3 +1366,38 @@ def estimate_size(self, value, nested=False):
# type: (Any, bool) -> int
value_size = self._value_coder.estimate_size(value)
return get_varint_size(value_size) + value_size
+
+
+class ShardedKeyCoderImpl(StreamCoderImpl):
+ """For internal use only; no backwards-compatibility guarantees.
+
+ A coder for sharded user keys.
+
+ The encoding and decoding should follow the order:
+ length of shard id byte string
+ shard id byte string
+ encoded user key
+ """
+ def __init__(self, key_coder_impl):
+ self._shard_id_coder_impl = LengthPrefixCoderImpl(BytesCoderImpl())
+ self._key_coder_impl = key_coder_impl
+
+ def encode_to_stream(self, value, out, nested):
+ # type: (ShardedKey, create_OutputStream, bool) -> None
+ self._shard_id_coder_impl.encode_to_stream(value.shard_id, out, True)
+ self._key_coder_impl.encode_to_stream(value.key, out, True)
Review comment:
True is correct and matches the description and Java. Let's not create
more coders that have different nested/unnested encodings.
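The byte layout described in the `ShardedKeyCoderImpl` docstring (varint length of the shard id, the shard id bytes, then the encoded user key) can be sketched without Beam's stream classes. The helpers below are illustrative stand-ins, not Beam APIs; the varint scheme matches the base-128 encoding Beam's length prefixes use:

```python
def encode_varint(n):
    # Base-128 varint: 7 data bits per byte, high bit set on continuation.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def encode_sharded_key(shard_id, encoded_key):
    # Length of shard id, shard id byte string, then the encoded user key,
    # in the order given by the docstring above.
    return encode_varint(len(shard_id)) + shard_id + encoded_key

def decode_sharded_key(data):
    # Read the varint length, split off the shard id; the rest is the key.
    shift = 0
    length = 0
    i = 0
    while True:
        b = data[i]
        length |= (b & 0x7F) << shift
        i += 1
        if not (b & 0x80):
            break
        shift += 7
    shard_id = data[i:i + length]
    return shard_id, data[i + length:]
```

Because the inner components are always encoded as nested (`True`), the same byte layout is produced whether or not the sharded key itself is the outermost value, which is the property the reviewer is defending.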
##########
File path: sdks/python/apache_beam/coders/coders.py
##########
@@ -312,11 +312,12 @@ def register_urn(
@classmethod
@overload
- def register_urn(cls,
- urn, # type: str
- parameter_type, # type: Optional[Type[T]]
- fn # type: Callable[[T, List[Coder], PipelineContext], Any]
- ):
+ def register_urn(
Review comment:
Is this yapf or some other auto-formatter?
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 501694)
Time Spent: 12h 20m (was: 12h 10m)
> GroupIntoBatches with Runner-determined Sharding
> ------------------------------------------------
>
> Key: BEAM-10475
> URL: https://issues.apache.org/jira/browse/BEAM-10475
> Project: Beam
> Issue Type: Improvement
> Components: runner-dataflow
> Reporter: Siyuan Chen
> Assignee: Siyuan Chen
> Priority: P2
> Labels: GCP, performance
> Time Spent: 12h 20m
> Remaining Estimate: 0h
>
> https://s.apache.org/sharded-group-into-batches
> Improve the existing Beam transform, GroupIntoBatches, to allow runners to
> choose different sharding strategies depending on how the data needs to be
> grouped. The goal is to help with the situation where the elements to process
> need to be co-located to reduce the overhead that would otherwise be incurred
> per element, while not losing the ability to scale the parallelism. The
> essential idea is to build a stateful DoFn with shardable states.
>
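The shardable-state idea can be illustrated with a plain-Python sketch. The function name `batch_elements` and the round-robin shard assignment are invented here for illustration; in the real transform the shard id is opaque and runner-determined, and the buffers live in a stateful DoFn's bag state:

```python
from collections import defaultdict

def batch_elements(kvs, batch_size, num_shards):
    # Each user key is split across num_shards sharded keys, so a hot key
    # can be processed by several workers in parallel while elements within
    # a shard stay co-located.
    buffers = defaultdict(list)  # stands in for per-(key, shard) bag state
    batches = []
    for i, (key, value) in enumerate(kvs):
        shard_id = i % num_shards  # runner-determined in the real transform
        buffers[(key, shard_id)].append(value)
        if len(buffers[(key, shard_id)]) == batch_size:
            batches.append((key, buffers.pop((key, shard_id))))
    # Flush partial buffers (the timer-based flush in the real DoFn).
    for (key, _), vals in buffers.items():
        batches.append((key, vals))
    return batches
```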
--
This message was sent by Atlassian Jira
(v8.3.4#803005)