Yes, this all makes sense. The option untilWallClockTimeLimit would solve the
problem for us. I can see the difficulty with delivering the "final" result.
The problem here is assuming that something will be *complete* at some point in
time and a final result can be delivered. The time for a
Ok, good, that's what I was hoping. I think that's a good strategy: at
the end of the "real" data, just write a dummy record with the same
keys and a high timestamp to flush everything else through.
For the most part, I'd expect a production program to get a steady
stream of traffic with
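The flush-through strategy John describes can be sketched outside Kafka. The following is a toy Python model (my own names, not a Kafka API; real suppress buffers per key, while this just counts records per window): stream time is the maximum event timestamp seen, and a suppressed window's final result is emitted only once stream time passes the window end plus grace. A dummy record with a high timestamp is what finally pushes stream time forward.

```python
WINDOW_SIZE = 60  # window size in seconds (illustrative)
GRACE = 0         # grace period of zero, as discussed later in the thread

def window_start(ts):
    return ts - (ts % WINDOW_SIZE)

def process(records):
    """Toy suppress model: a window's final result is emitted only after
    stream time (max event timestamp seen) reaches window end + grace."""
    stream_time = 0
    open_windows = {}   # window start -> record count
    emitted = []
    for _key, ts in records:
        stream_time = max(stream_time, ts)
        ws = window_start(ts)
        open_windows[ws] = open_windows.get(ws, 0) + 1
        for w in sorted(list(open_windows)):
            if w + WINDOW_SIZE + GRACE <= stream_time:
                emitted.append((w, open_windows.pop(w)))
    return emitted, open_windows

# The "real" traffic alone never closes its window...
real = [("a", 10), ("a", 20), ("b", 30)]
emitted, pending = process(real)          # emitted == [], window 0 still open
# ...but appending a dummy record with a high timestamp flushes it through.
emitted2, pending2 = process(real + [("dummy", 10_000)])
```

Note that the dummy record opens a window of its own, which stays pending until something even later arrives; that matches the behavior discussed below.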
Initially it started in the testing. QA reported problems where "events" were
not detected after they finished their testing. After this discussion, my
proposal was to send a few more records to cause the windows to flush so that
the suppressed event would show up. Now it looks to me, these few
Hi Mohan,
I see where you're going with this, and it might indeed be a
challenge. Even if you send a "dummy" message on all input topics, you
won't have a guarantee that after the repartition, the dummy message
is propagated to all partitions of the repartition topics. So it might
be difficult to
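Bruno's concern can be illustrated with plain hash partitioning. This sketch uses CRC32 as a deterministic stand-in for Kafka's default partitioner (the real one applies murmur2 to the serialized key); the point is only that a single dummy key lands in exactly one partition, so stream time on the other partitions never advances.

```python
from zlib import crc32

NUM_PARTITIONS = 4  # illustrative repartition-topic partition count

def partition_for(key):
    # Deterministic stand-in for Kafka's default partitioner
    # (the real one hashes the serialized key with murmur2).
    return crc32(key.encode()) % NUM_PARTITIONS

# A single dummy record reaches exactly one repartition-topic partition;
# stream time on the remaining partitions does not move.
reached = {partition_for("dummy")}
unreached = set(range(NUM_PARTITIONS)) - reached
```

To flush every partition you would need dummy records whose keys together hash to all partitions, which is hard to guarantee once a key-changing operation sits between your input and the repartition topic.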
John,
Thanks for the nice explanation. When the repartitioning happens, does the
window get associated with the new partition, i.e., does a message with a new
timestamp have to appear on the repartition topic for the window to expire? It
is possible that there is a new stream of messages coming
Hey, this is a very apt question.
GroupByKey isn't a great example because it doesn't actually change
the key, so all the aggregation results are actually on records from
the same partition. But let's say you do a groupBy or a map (or any
operation that can change the key), followed by an
I can see the issue, but it raises other questions. Pardon my ignorance. Even
though partitions are processed independently, windows can be aggregating state
from records read from many partitions. Let us say there is a groupByKey
followed by aggregate. In this case, how is the state reconciled
No problem. It's definitely a subtlety. It occurs because each
partition is processed completely independently of the others, so
"stream time" is tracked per partition, and there's no way to look
across at the other partitions to find out what stream time they have.
In general, it's not a problem
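"Stream time is tracked per partition" can be captured in a tiny model (pure Python, not Kafka code; the predicate below is a simplification of the real close rule): each partition keeps its own high-water mark of event timestamps, and a slow partition holds its windows open no matter how far ahead the others are.

```python
# Per-partition high-water mark of event timestamps; one partition
# cannot see another partition's stream time.
stream_time = {}  # partition -> max timestamp observed on that partition

def observe(partition, ts):
    stream_time[partition] = max(stream_time.get(partition, 0), ts)

def window_closed(partition, window_end, grace=0):
    # A window on a partition closes only when THAT partition's
    # stream time passes window_end + grace.
    return stream_time.get(partition, 0) >= window_end + grace

observe(0, 100)   # partition 0 has advanced well past the window end
observe(1, 5)     # partition 1 has barely started

# The same window (ending at t=50) is closed on partition 0 but
# still open on partition 1:
closed_p0 = window_closed(0, 50)
closed_p1 = window_closed(1, 50)
```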
That "in the same partition" behavior must explain what we are seeing. Unless you
see one message per partition, not all windows will expire. That is an
interesting twist. Thanks for the correction. (I will go back and confirm this.)
-mohan
On 6/21/19, 12:40 PM, "John Roesler" wrote:
Sure, the record cache attempts to save downstream operators from
unnecessary updates by also buffering for a short amount of time
before forwarding. It forwards results whenever the cache fills up or
whenever there is a commit. If you're happy to wait at least "commit
interval" amount of time for
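John's description of the record cache can be modeled as a small buffer that holds the latest value per key and forwards either when it overflows or when a commit fires. This is a deliberately simplified Python sketch (the real cache is sized in bytes and kept per state store), but it shows why downstream operators see batched, delayed updates.

```python
class RecordCache:
    """Toy model of a Streams record cache: buffers the latest value
    per key and forwards downstream on overflow or on commit."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self.buffer = {}        # key -> latest value
        self.forwarded = []     # what downstream operators have seen

    def put(self, key, value):
        self.buffer[key] = value            # later updates overwrite earlier ones
        if len(self.buffer) > self.max_entries:
            self.flush()                    # cache full -> forward everything

    def flush(self):                        # also triggered by a commit
        self.forwarded.extend(self.buffer.items())
        self.buffer.clear()

cache = RecordCache(max_entries=2)
cache.put("k1", 1)
cache.put("k1", 2)      # overwrites; only the latest update will be forwarded
cache.put("k2", 1)
pending_before_commit = list(cache.forwarded)   # nothing forwarded yet
cache.flush()                                   # commit interval elapses
after_commit = list(cache.forwarded)
```

In a real application the analogous knob is the `cache.max.bytes.buffering` config: setting it to 0 disables the record caches entirely, so every update is forwarded immediately.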
Could you tell me a little more about the delays caused by the record caches and
how I can disable them?
If I could summarize my problem:
- When a new record arrives with a timestamp greater than all records sent before, I expect *all* of the old windows to close.
- Expiry of the windows depends only on the event time.
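The second expectation, that expiry depends only on event time, can be made concrete with a small model (again a sketch of the semantics, not Kafka code; the exact boundary condition is a detail of the model): wall-clock time passing changes nothing, while a record carrying a later event timestamp is what closes windows.

```python
import time

stream_time = 0  # advances only when records carry later event timestamps

def on_record(event_ts):
    global stream_time
    stream_time = max(stream_time, event_ts)

def expired(window_end, grace=0):
    return stream_time >= window_end + grace

on_record(100)
before_wait = expired(window_end=150)   # window still open
time.sleep(0.05)                        # wall-clock time passes: changes nothing
after_wait = expired(window_end=150)    # still open
on_record(200)                          # a later *event* timestamp is what matters
after_new_event = expired(window_end=150)
```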
Hi!
In addition to setting the grace period to zero (or some small
number), you should also consider the delays introduced by record
caches upstream of the suppression. If you're closely watching the
timing of records going into and coming out of the topology, this
might also spoil your
We do explicitly set the grace period to zero. I am going to try the new version
-mohan
On 6/19/19, 12:50 PM, "Parthasarathy, Mohan" wrote:
Thanks. We will give it a shot.
On 6/19/19, 12:42 PM, "Bruno Cadonna" wrote:
Hi Mohan,
I realized that my previous statement was not clear. With a grace
period of 12 hours, suppress would wait for late events until stream
time has advanced 12 hours before a result would be
Hi Mohan,
I realized that my previous statement was not clear. With a grace
period of 12 hours, suppress would wait for late events until stream
time has advanced 12 hours before a result would be emitted.
Best,
Bruno
On Wed, Jun 19, 2019 at 9:21 PM Bruno Cadonna wrote:
>
> Hi Mohan,
>
> if you
Hi Mohan,
if you do not set a grace period, the grace period defaults to 12
hours. Hence, suppress would wait for an event that occurs 12 hours
later before it outputs a result. Try to explicitly set the grace
period to 0 and let us know if it worked.
If it still does not work, upgrade to version
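The effect Bruno describes can be checked against the same close rule used above (a model of the semantics; the real default lives in the `TimeWindows` API): with the default 12-hour grace, a window stays open until stream time advances a further 12 hours past its end, whereas a grace of zero lets it close as soon as stream time passes the window end.

```python
HOUR_MS = 3_600_000            # one hour in milliseconds
WINDOW_END = 10 * HOUR_MS      # illustrative window end timestamp

def closed(stream_time_ms, grace_ms):
    # Sketch of the close rule: the window closes once stream time
    # reaches window end + grace.
    return stream_time_ms >= WINDOW_END + grace_ms

# Stream time has moved one hour past the window end:
st = WINDOW_END + 1 * HOUR_MS
with_default_grace = closed(st, grace_ms=12 * HOUR_MS)   # still suppressed
with_zero_grace = closed(st, grace_ms=0)                 # result emitted
```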
No, I have not set any grace period. Is that mandatory? Have you seen problems
with suppress and windows expiring?
Thanks
Mohan
On 6/19/19, 12:41 AM, "Bruno Cadonna" wrote:
Hi Mohan,
Did you set a grace period on the window?
Best,
Bruno
On Tue, Jun 18, 2019 at 2:04 AM Parthasarathy, Mohan wrote:
>
> On further debugging, what we are seeing is that windows are expiring rather
> randomly as new messages are being processed. We tested with a new key for
> every
On further debugging, what we are seeing is that windows are expiring rather
randomly as new messages are being processed. We tested with a new key for
every new message. We waited for the window time before replaying new messages.
Sometimes a new message would come in and create state. It
Hi,
We are using suppress in the application. We see some state being created at
some point in time. Then there is no new data for a day or two. We send new data,
but the old window of data (where we saw the state being created) is not
closing, i.e., we are not seeing it go through suppress and on to the