[ 
https://issues.apache.org/jira/browse/BEAM-8335?focusedWorklogId=343700&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-343700
 ]

ASF GitHub Bot logged work on BEAM-8335:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Nov/19 19:39
            Start Date: 14/Nov/19 19:39
    Worklog Time Spent: 10m 
      Work Description: robertwb commented on pull request #9720: [BEAM-8335] Add initial modules for interactive streaming support
URL: https://github.com/apache/beam/pull/9720#discussion_r346500890
 
 

 ##########
 File path: sdks/python/apache_beam/runners/interactive/caching/streaming_cache.py
 ##########
 @@ -0,0 +1,171 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+from __future__ import absolute_import
+
+from apache_beam.portability.api.beam_runner_api_pb2 import TestStreamPayload
+from apache_beam.utils import timestamp
+from apache_beam.utils.timestamp import Timestamp
+
+
+class StreamingCache(object):
+  """Abstraction that holds the logic for reading and writing to cache.
+  """
+  def __init__(self, readers):
+    self._readers = readers
+
+  class Reader(object):
+    """Abstraction that reads from PCollection readers.
+
+    This class is an abstraction layer over multiple PCollection readers,
+    used to supply a TestStream service with events.
+
+    This class is also responsible for holding the state of the clock and for
+    injecting clock advancement and watermark advancement events.
+    """
+    def __init__(self, readers):
+      # This timestamp is used as the monotonic clock to order events in the
+      # replay.
+      self._monotonic_clock = timestamp.Timestamp.of(0)
+
+      # The file headers that are metadata for that particular PCollection.
+      # The header allows metadata about an entire stream to be stored once
+      # rather than copied per record.
+      self._headers = {r.header().tag: r.header() for r in readers}
+
+      # The PCollection cache readers, keyed by tag.
+      self._readers = {r.header().tag: r.read() for r in readers}
+
+      # The watermarks per tag. Useful for introspection in the stream.
+      self._watermarks = {tag: timestamp.MIN_TIMESTAMP
+                          for tag in self._headers}
+
+      # The most recently read timestamp per tag.
+      self._stream_times = {tag: timestamp.MIN_TIMESTAMP
+                            for tag in self._headers}
+
+    def _test_stream_events_before_target(self, target_timestamp):
+      """Reads the next iteration of elements from each stream.
+
+      Retrieves an element from each stream iff the most recently read 
timestamp
+      from that stream is less than the target_timestamp. Since the amount of
+      events may not fit into memory, this StreamingCache needs to read each
+      element one at a time.
+      """
+      records = []
+      for tag, r in self._readers.items():
+        # The target_timestamp is the maximum timestamp that was read from the
+        # stream. Some readers may have elements that are less than this. Thus,
+        # we skip all readers that already have elements that are at this
+        # timestamp so that we don't read everything into memory.
+        if self._stream_times[tag] >= target_timestamp:
+          continue
+        try:
+          record = next(r)
+          records.append((tag, record))
+          self._stream_times[tag] = Timestamp.from_proto(
+              record.processing_time)
+        except StopIteration:
+          pass
+      return records
+
+    def _merge_sort(self, previous_events, new_events):
 
 Review comment:
   By merge sort, I meant basically
   https://guava.dev/releases/20.0/api/docs/com/google/common/collect/Iterables.html#mergeSorted-java.lang.Iterable-java.util.Comparator-
   
   In short, one maintains a heap of readers sorted by the timestamp of their
   (peeked-at) next element. Then
   
   ```
   while not reader_heap.empty():
     first_reader = reader_heap.pop()
     try:
       element = first_reader.read()
     except StopIteration:
       pass  # Reader exhausted; leave it out of the heap.
     else:
       yield element
       reader_heap.push(first_reader)  # Re-insert, keyed by its next element.
   ```
   
   This should be equivalent, so you can keep things as is if you want. 
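   For reference, the sketch above can be made runnable with Python's `heapq`
   and plain iterators standing in for the readers (here `read()` becomes
   `next()`; `merge_sorted_readers` and its `(timestamp, value)` element shape
   are illustrative assumptions, not Beam APIs):
   
   ```python
   import heapq
   
   def merge_sorted_readers(readers):
       """Yield elements from several sorted iterators in global order.
   
       Each iterator in `readers` yields (timestamp, value) tuples already
       sorted by timestamp. A heap keyed on each reader's peeked-at next
       element keeps the merge at O(n log k) for k readers.
       """
       heap = []
       for index, reader in enumerate(readers):
           try:
               head = next(reader)
           except StopIteration:
               continue  # Empty reader: nothing to merge.
           # The index breaks ties so readers themselves are never compared.
           heapq.heappush(heap, (head, index, reader))
       while heap:
           head, index, reader = heapq.heappop(heap)
           yield head
           try:
               head = next(reader)
           except StopIteration:
               pass  # Reader exhausted; drop it from the heap.
           else:
               heapq.heappush(heap, (head, index, reader))
   ```
   
   This is essentially what the standard library's `heapq.merge` already does,
   so that could be used directly when the readers are plain iterators.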
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 343700)
    Time Spent: 28h 10m  (was: 28h)

> Add streaming support to Interactive Beam
> -----------------------------------------
>
>                 Key: BEAM-8335
>                 URL: https://issues.apache.org/jira/browse/BEAM-8335
>             Project: Beam
>          Issue Type: Improvement
>          Components: runner-py-interactive
>            Reporter: Sam Rohde
>            Assignee: Sam Rohde
>            Priority: Major
>          Time Spent: 28h 10m
>  Remaining Estimate: 0h
>
> This issue tracks the work items to introduce streaming support to the
> Interactive Beam experience. This will allow users to:
>  * Write and run a streaming job in IPython
>  * Automatically cache records from unbounded sources
>  * Add a replay experience that replays all cached records to simulate the
>    original pipeline execution
>  * Add controls to play/pause/stop/step individual elements from the cached
>    records
>  * Add ability to inspect/visualize unbounded PCollections



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
