Hi,

You cannot suppress those records, because both are required for
correctness. Note that each update might go to a different instance in
the downstream aggregation -- that's why both records are required.

I'm not sure what problem this poses for your business logic. Note that
Kafka Streams provides eventual consistency guarantees. What guarantee
do you need?
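For context, here is a minimal sketch (plain Java, no kafka-streams dependency; all names are hypothetical) of why a KGroupedTable aggregation emits two updates when a row's grouping key changes: the old group gets a "subtract" update and the new group an "add" update, and because the two updates carry different keys, they may be routed to different partitions and thus different downstream instances.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simulates the adder/subtractor semantics of KGroupedTable.aggregate():
// a key-change on the upstream KTable row is processed as subtract(oldGroup)
// followed by add(newGroup), and each call forwards its result downstream.
public class AggregateSimulation {
    static final Map<String, Integer> counts = new HashMap<>();
    static final List<String> emitted = new ArrayList<>();

    static void subtract(String group) {
        counts.merge(group, -1, Integer::sum);
        emitted.add(group + "=" + counts.get(group)); // intermediate update
    }

    static void add(String group) {
        counts.merge(group, 1, Integer::sum);
        emitted.add(group + "=" + counts.get(group)); // final update
    }

    public static void main(String[] args) {
        add("A");      // initial insert: the row belongs to group A
        subtract("A"); // row moves from A to B: old group is decremented...
        add("B");      // ...then the new group is incremented
        System.out.println(emitted); // three updates, keyed by group
    }
}
```

The "A=0" record in the output is exactly the intermediate state Thilo asked about: it is a correct update for group A's key and cannot be dropped, because the instance holding group A would otherwise never learn the row left.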


-Matthias



On 6/29/18 12:22 PM, Thilo-Alexander Ginkel wrote:
> Hello everyone,
> 
> I have implemented a Kafka Streams service using the Streams DSL.
> Within the topology I am using a KGroupedTable, on which I perform an
> aggregate using an adder and subtractor. AFAICS (at least when using
> TopologyTestDriver) the intermediate state created by the subtractor
> is pushed downstream as an update followed by another update after the
> adder has been called.
> 
> Is there a way to reliably suppress publishing of this intermediate
> state (which is inconsistent from a business point of view in my
> case)?
> 
> The docs indicate this, but this does not sound like a guarantee ;-):
> 
> -- 8< --
> Not all updates might get sent downstream, as an internal cache is
> used to deduplicate consecutive updates to the same key. The rate of
> propagated updates depends on your input data rate, the number of
> distinct keys, the number of parallel running Kafka Streams instances,
> and the configuration parameters for cache size and commit interval.
> -- 8< --
> 
> Thanks & kind regards
> Thilo
> 
