Thanks a lot Bruno, I'll check that!
--
Alessandro Tagliapietra
On Wed, Sep 18, 2019 at 4:20 PM Bruno Cadonna wrote:
> Hi Alessandro,
>
> If you want to get each update to an aggregate, you need to disable
> the cache. Otherwise, an update will only be emitted when the
> aggregate is evicted or
Following up on this. It turned out to be 100% user error on my part. I was
still sending the v0 OffsetFetch request after committing v1+.
On Tue, Sep 17, 2019 at 9:14 PM Dan Swain wrote:
> Hi!
>
> I'm a maintainer of an open source Kafka client and working on adding
> support for kafka-stored o
Hi Alessandro,
If you want to get each update to an aggregate, you need to disable
the cache. Otherwise, an update will only be emitted when the
aggregate is evicted or flushed from the cache.
To disable the cache, you can:
- disable it with the `Materialized` object
- set cache.max.bytes.bufferi
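As a toy sketch of the emission semantics Bruno describes (this is not real Streams code, just an illustration): with the cache enabled, only the latest aggregate per key survives until a flush; with it disabled, every update is forwarded downstream.

```python
# Toy model of Kafka Streams aggregate emission (not actual Streams internals):
# cache disabled -> every update is emitted; cache enabled -> only the
# latest value per key is emitted when the cache is flushed.
def aggregate(records, cache_enabled):
    emitted = []
    cache = {}
    agg = {}
    for key, value in records:
        agg[key] = agg.get(key, 0) + value
        if cache_enabled:
            cache[key] = agg[key]      # later updates overwrite earlier ones
        else:
            emitted.append((key, agg[key]))
    if cache_enabled:
        emitted.extend(cache.items())  # flush: one record per key
    return emitted

print(aggregate([("a", 1), ("a", 2), ("b", 5)], cache_enabled=False))
# → [('a', 1), ('a', 3), ('b', 5)]   every intermediate update
print(aggregate([("a", 1), ("a", 2), ("b", 5)], cache_enabled=True))
# → [('a', 3), ('b', 5)]             only the final value per key
```

In actual Kafka Streams code the corresponding switches are `Materialized...withCachingDisabled()` on the store, or setting the Streams config `cache.max.bytes.buffering` to 0.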
Thanks Ismael and Bill.
It seems that nobody objects to the proposal. Hence, I prepared the
following PRs to update the upgrade notes.
- https://github.com/apache/kafka/pull/7363 (trunk and 2.3)
- https://github.com/apache/kafka/pull/7364 (2.2)
- https://github.com/apache/kafka-site/pull/229
To reliably delete the logs, you need to do the following.
In Kafka's server.properties, set the following properties (adjust per your needs):
log.segment.bytes=10485760
log.retention.check.interval.ms=12
log.retention.ms=60
I believe the documentation should be clear enough to explain the priority
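The same limits can also be applied as topic-level overrides, which take precedence over the broker-level `log.*` defaults; a sketch of such overrides (the values are placeholders, mirroring the broker settings above):

```properties
# topic-level overrides; these take precedence over broker-level log.* defaults
cleanup.policy=delete
retention.ms=60000
retention.bytes=10485760
segment.bytes=10485760
```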
Yes, log.cleanup.policy is set to delete.
On Wed, Sep 18, 2019, 15:12 M. Manna wrote:
> And is your log.cleanup.policy set to delete ?
>
> On Wed, 18 Sep 2019 at 06:19, Vinay Kumar wrote:
>
> > I see that log.retention.bytes alone is not working. Even after the
> > specified size in log.retenti
And is your log.cleanup.policy set to delete ?
On Wed, 18 Sep 2019 at 06:19, Vinay Kumar wrote:
> I see that log.retention.bytes alone is not working. Even after the
> specified size in log.retention.bytes is reached, the topic partition's
> segments grow much beyond it.
>
> On Wed, Sep 18, 2019,
Hi,
Currently I have a 3-node Kafka cluster, and I want to add 2 more nodes to
make it a 5-node cluster.
-After adding the nodes to the cluster, I need all the topic partitions to
be evenly distributed across all 5 nodes.
-In the past, when I ran kafka-reassign-partitions.sh &
kafka-preferred-replica-
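For context on the even-distribution goal: `kafka-reassign-partitions.sh` consumes a JSON reassignment plan (normally produced with its `--generate` option and applied with `--execute`). As a toy sketch of what an even, round-robin distribution over 5 brokers looks like, here is a hypothetical plan generator; the topic name, partition count, and broker IDs are made-up placeholders:

```python
import json

def build_reassignment(topic, num_partitions, brokers, replication_factor):
    """Round-robin replica assignment across the broker list (toy sketch)."""
    partitions = []
    n = len(brokers)
    for p in range(num_partitions):
        # rotate the starting broker per partition so leaders spread evenly
        replicas = [brokers[(p + i) % n] for i in range(replication_factor)]
        partitions.append({"topic": topic, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": partitions}

# 6 partitions, replication factor 2, spread over brokers 0-4
plan = build_reassignment("my-topic", 6, [0, 1, 2, 3, 4], 2)
print(json.dumps(plan, indent=2))
```

The printed JSON has the same shape as the file the tool takes via `--reassignment-json-file`; in practice you would let `--generate` produce the plan rather than hand-rolling it.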