While I do agree the symptoms are similar, I don't believe these are
related. The ESIO bug was centered around both "bundle" operations
(StartBundle/EndBundle) and watermark updates during bundle processing. In
my case I'm not using anything related to bundle operations (no
Start/EndBundle), and
I believe that this thread is entirely related to another thread[1]
where there is discussion that the correct fix for this issue could be to
enforce that watermark updates happen only at bundle boundaries. There's
another related thread[2] citing the same error with ElasticsearchIO.
This
> but this only impacts non-portable runners.
heh like Dataflow? :(
I'm not sure what the solution is here, then. I managed to hit this bug
within 2 hours of running my first pipeline on 2.37. I can't just live
with pipelines breaking randomly, and it seems like everything worked fine
before that.
I'm unclear how the timer could ever have been set to output at that
timestamp, though. The output timestamp falls into the next window, and if
unset (as in this case) the output timestamp is derived from the element
timestamp [1]. This means we somehow had an element in the wrong window?
T
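For anyone puzzling over the 2022-04-01T19:19:59.999Z value: that is exactly what a fixed window's maximum timestamp looks like (window end minus 1 ms), so an element carrying it sits at the very edge of its window. A plain-Java sketch of that derivation, outside the Beam SDK (the helper name and window size are mine, for illustration):

```java
import java.time.Duration;
import java.time.Instant;

public class WindowMax {
    // Assign an element timestamp to a fixed window and return the
    // window's maximum timestamp (window end minus 1 ms, as Beam does).
    static Instant windowMaxTimestamp(Instant elementTs, Duration size) {
        long sizeMs = size.toMillis();
        long startMs = elementTs.toEpochMilli()
            - Math.floorMod(elementTs.toEpochMilli(), sizeMs);
        return Instant.ofEpochMilli(startMs + sizeMs - 1);
    }

    public static void main(String[] args) {
        Instant elem = Instant.parse("2022-04-01T19:19:59.999Z");
        // With 5-minute fixed windows, the element's timestamp IS the
        // window's max timestamp, so anything "later" lands in the next window.
        System.out.println(windowMaxTimestamp(elem, Duration.ofMinutes(5)));
        // 2022-04-01T19:19:59.999Z
    }
}
```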
We have a job that uses processing time timers, and just upgraded from 2.33
to 2.37. Sporadically we've started seeing jobs fail with this error:
java.lang.IllegalArgumentException: Cannot output with timestamp
2022-04-01T19:19:59.999Z. Output timestamps must be no earlier than the
output timesta
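A rough model of the check behind that exception, for readers following along (this is a hypothetical sketch, not Beam's actual internals): an output timestamp is rejected when it precedes the current hold, e.g. a timer's output timestamp.

```java
import java.time.Instant;

public class OutputTimestampCheck {
    // Illustrative: reject an output whose timestamp precedes the
    // current hold. Method and parameter names are mine, not Beam's.
    static Instant checkedOutput(Instant proposed, Instant hold) {
        if (proposed.isBefore(hold)) {
            throw new IllegalArgumentException(
                "Cannot output with timestamp " + proposed
                    + ". Output timestamps must be no earlier than " + hold + ".");
        }
        return proposed;
    }

    public static void main(String[] args) {
        Instant hold = Instant.parse("2022-04-01T19:20:00.000Z");
        try {
            // 1 ms earlier than the hold, so this throws, matching the
            // shape of the error reported above.
            checkedOutput(Instant.parse("2022-04-01T19:19:59.999Z"), hold);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```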
Output element timestamps. But I think your comment on the doc resolved my
misunderstanding. Thanks!
For the benefit of the thread:
I forgot that the default timestamp is the parent element's timestamp, and
if there are no user timestamp changes, the root default is "whatever the
Runner sets the
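To illustrate that default with a toy model (not Beam's API; the types and helper here are mine): when no explicit timestamp is given, the output inherits the input element's timestamp, and at the root the timestamp is whatever the runner/source assigned.

```java
import java.time.Instant;

public class DefaultTimestamps {
    // Toy stand-in for a timestamped element.
    record TimestampedValue<T>(T value, Instant timestamp) {}

    // Emitting without an explicit timestamp: the child inherits the
    // parent element's timestamp.
    static <T> TimestampedValue<T> output(TimestampedValue<?> input, T value) {
        return new TimestampedValue<>(value, input.timestamp());
    }

    public static void main(String[] args) {
        // At the root, the timestamp is whatever the runner/source set.
        Instant rootTs = Instant.parse("2022-04-01T19:19:59.999Z");
        TimestampedValue<String> root = new TimestampedValue<>("in", rootTs);
        TimestampedValue<String> child = output(root, "out");
        System.out.println(child.timestamp()); // same as the parent's
    }
}
```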
This is your daily summary of Beam's current P1 issues, not including flaky
tests
(https://issues.apache.org/jira/issues/?jql=project%20%3D%20BEAM%20AND%20statusCategory%20!%3D%20Done%20AND%20priority%20%3D%20P1%20AND%20(labels%20is%20EMPTY%20OR%20labels%20!%3D%20flake).
See https://beam.apache.
This is your daily summary of Beam's current flaky tests
(https://issues.apache.org/jira/issues/?jql=project%20%3D%20BEAM%20AND%20statusCategory%20!%3D%20Done%20AND%20labels%20%3D%20flake)
These are P1 issues because they have a major negative impact on the community
and make it hard to determine
I just started looking into the Spark runner code a bit to hopefully help
support it.
Besides having to maintain (test!) twice the number of artifacts, there’s also
a significant negative impact on developer ergonomics / productivity supporting
multiple major versions (separate modules to dea