Hi Guozhang,
Thanks for the input. Yes, I confirmed that enabling an override of the
RocksDB config setter class (even with its default parameters), alongside the
Kafka Streams cache, leads to unbounded memory usage. After removing the
override, the application's memory usage stays consistently within 24 GB. It
may be easier to look at the program on GitHub:
https://github.com/angelfox123/kperf
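For anyone reproducing this, the override in question is the one registered via
StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG. A minimal sketch of such a
setter is below; the class name and the cache/buffer sizes are made-up
illustration values, not the ones from my test:

import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Hypothetical setter that caps RocksDB memory per store; register it with
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedRocksDBConfig.class);
public class BoundedRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L); // 16 MB block cache (illustration value)
        tableConfig.setBlockSize(16 * 1024L);              // 16 KB blocks (illustration value)
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2);                // limit in-memory memtables per store
    }
}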
On Sat, Aug 25, 2018 at 8:43 PM Nan Xu wrote:
So I did upgrade to 2.0.0 and am still seeing the same result. Below is the
program I am using. I am running everything on a single server (CentOS 7,
24 cores, 32 GB RAM, 1 broker, 1 ZooKeeper, single hard drive). I understand
the single hard drive is less than ideal, but I still don't expect it to go
over 3 se
No.
The cleanup interval configures when old state that is no longer used
will be deleted. This does not imply a TTL feature; it is about tasks
that got assigned to a different KafkaStreams instance.
State only grows without bound if your program grows the state without
bound. For example, if
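(To illustrate the distinction with a sketch: state.cleanup.delay.ms only
controls when local state of tasks that are no longer assigned to this
instance gets wiped, while an aggregation over an ever-growing key space grows
its store forever. Topic and store names below are hypothetical.)

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class UnboundedStateExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // If every event carries a new key (e.g. a unique id), the count() store
        // gains one entry per key and never shrinks -- there is no TTL.
        final KStream<String, String> events = builder.stream("events");   // hypothetical topic
        events.groupByKey()
              .count()
              .toStream()
              .to("event-counts", Produced.with(Serdes.String(), Serdes.Long()));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "unbounded-state-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // Only removes local state of tasks that migrated away from this instance;
        // it does not expire records inside an active store.
        props.put(StreamsConfig.STATE_CLEANUP_DELAY_MS_CONFIG, 10 * 60 * 1000L);
        // new KafkaStreams(builder.build(), props).start();
    }
}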
Hello,
We are currently using GlobalKTables for interactive queries as well as for
lookups inside stream applications, but we have come across some
limitations/problems.
The main problem was that our deployments, including application startup, took
longer with every new global state store we added, which ca
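(For context, the kind of setup meant here is sketched below; topic and store
names are made up. Each global store is fully restored from its topic during
startup before the instance begins processing, which is where the extra
startup time per store comes from.)

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class GlobalStoreQueryExample {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical global table; normally it would also feed a
        // stream-to-GlobalKTable join somewhere in the topology.
        final GlobalKTable<String, String> customers = builder.globalTable(
                "customers",                                             // hypothetical topic
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("customers-store"));

        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "global-store-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        final KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();   // every global store is restored from its topic during startup

        // Interactive query against the global store, once the instance is RUNNING
        // (2.x-era API; newer versions use StoreQueryParameters instead):
        final ReadOnlyKeyValueStore<String, String> store =
                streams.store("customers-store", QueryableStoreTypes.keyValueStore());
        System.out.println(store.get("some-key"));
    }
}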
Matthias:
Thanks for your reply. With your answer, I have found the cause of my
problem.
There is nothing wrong with the KafkaProducer code. The problem is
with my use of KafkaConsumer. I am storing committed offsets outside
of Kafka, and I am counting the received consumer records to compute
the committed offsets
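In case it is useful to others on the thread: when offsets are stored outside
Kafka, the usual recommendation (per the KafkaConsumer javadoc) is to persist
the offset of the last processed record plus one and to seek to that stored
offset on partition assignment, rather than deriving the position from a count
of received records, since offsets are not guaranteed to be consecutive (e.g.
compacted topics, transactional markers). A rough sketch, with hypothetical
saveOffset/loadOffset helpers standing in for the external store:

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ExternalOffsetConsumer {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "external-offsets-demo");   // hypothetical group
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // offsets managed outside Kafka
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
                    // Persist the next offset to read before losing the partition.
                    partitions.forEach(tp -> saveOffset(tp, consumer.position(tp)));
                }
                @Override
                public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
                    // Resume from the externally stored position.
                    partitions.forEach(tp -> consumer.seek(tp, loadOffset(tp)));
                }
            });
            while (true) {
                final ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (final ConsumerRecord<String, String> record : records) {
                    process(record);
                    // Store the record's own offset + 1, not a running count of records.
                    saveOffset(new TopicPartition(record.topic(), record.partition()),
                               record.offset() + 1);
                }
            }
        }
    }

    static void process(final ConsumerRecord<String, String> record) { /* application logic */ }
    static void saveOffset(final TopicPartition tp, final long nextOffset) { /* hypothetical external store */ }
    static long loadOffset(final TopicPartition tp) { return 0L; /* hypothetical external store */ }
}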