Re: Kafka streams - runs out of memory

2018-08-25 Thread AshokKumar J
Hi Guozhang, thanks for the input. Yes, I confirmed that overriding the RocksDB config setter class (even with default parameters) alongside the Kafka Streams cache leads to unbounded memory usage. After removing the override, the application's memory usage stays consistent within 24 GB. Can
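
[Editor's note] For readers hitting the same issue, a minimal sketch of a RocksDBConfigSetter that explicitly caps RocksDB's per-store memory rather than relying on the library defaults. The class name and the sizes are illustrative placeholders, not tuned values, assuming a Kafka Streams 2.0-era API:

    import java.util.Map;
    import org.apache.kafka.streams.state.RocksDBConfigSetter;
    import org.rocksdb.BlockBasedTableConfig;
    import org.rocksdb.Options;

    // Illustrative only: bounds the block cache and memtable memory per store.
    public class BoundedMemoryConfigSetter implements RocksDBConfigSetter {
        @Override
        public void setConfig(final String storeName, final Options options,
                              final Map<String, Object> configs) {
            final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
            tableConfig.setBlockCacheSize(16 * 1024L * 1024L); // 16 MB block cache
            options.setTableFormatConfig(tableConfig);
            options.setWriteBufferSize(8 * 1024L * 1024L);     // 8 MB per memtable
            options.setMaxWriteBufferNumber(2);                // at most 2 memtables
        }
    }

Such a setter would be registered via the StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG property; note that every state store gets its own RocksDB instance, so per-store limits multiply with the store count.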

Re: kafka stream latency

2018-08-25 Thread Nan Xu
Maybe it is easier to use GitHub: https://github.com/angelfox123/kperf On Sat, Aug 25, 2018 at 8:43 PM Nan Xu wrote: > so I did upgrade to 2.0.0 and still seeing the same result. below is the > program I am using. I am running everything on a single server. (centos 7, > 24 core, 32G ram , 1 broker

Re: kafka stream latency

2018-08-25 Thread Nan Xu
So I did upgrade to 2.0.0 and am still seeing the same result. Below is the program I am using. I am running everything on a single server (CentOS 7, 24 cores, 32 GB RAM, 1 broker, 1 ZooKeeper, single hard drive). I understand a single hard drive is less than ideal, but I still don't expect it to go over 3 se
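
[Editor's note] For context, a minimal sketch of the kind of end-to-end latency probe being discussed, assuming the producer writes its send time (epoch milliseconds) as the record value; the topic names, application id, and bootstrap address are placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;

    public class LatencyProbe {
        public static void main(final String[] args) {
            final Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "latency-probe");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            final StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
                   // Assumes the value carries the producer's send time in epoch millis.
                   .peek((key, value) -> {
                       final long sentAt = Long.parseLong(value);
                       System.out.println("e2e ms: " + (System.currentTimeMillis() - sentAt));
                   })
                   .to("output");

            new KafkaStreams(builder.build(), props).start();
        }
    }

Measuring this way includes producer batching (linger.ms), broker flush, and consumer poll intervals, so any of those can dominate the observed latency on a single-disk box.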

Re: Regarding issue - https://lists.apache.org/thread.html/1f2ffc93483cbe71167fa47875c5ecda8dbcd5275d3d41b5af3220d9@%3Cusers.kafka.apache.org%3E

2018-08-25 Thread Matthias J. Sax
No. The cleanup interval configures when old state that is no longer used will be deleted. This does not imply a TTL feature. It's about tasks that got assigned to a different KafkaStreams instance. State would only grow unbounded if your program increases the state unbounded. For example, if
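
[Editor's note] To illustrate the point about unbounded state, a hedged sketch (2.0-era API; the topic name is a placeholder): an un-windowed count keeps one state-store row per distinct key forever, while a windowed count with a retention period lets old windows be purged:

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class StateGrowthExample {
        static Topology build() {
            final StreamsBuilder builder = new StreamsBuilder();
            final KStream<String, String> events = builder.stream("events");

            // Unbounded: one state-store row per distinct key, kept forever.
            events.groupByKey().count();

            // Bounded: windows older than the retention period get purged.
            events.groupByKey()
                  .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5))
                                         .until(TimeUnit.HOURS.toMillis(1)))
                  .count();

            return builder.build();
        }
    }

Neither variant is affected by the cleanup interval discussed above, which only removes local state for tasks that migrated to another instance.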

GlobalKTable/KTable initialization differences

2018-08-25 Thread Patrik Kleindl
Hello, we are currently using GlobalKTables for interactive queries as well as for lookups inside stream applications, but have come across some limitations/problems. The main problem was that our deployments, including application start, took longer with every new global state store we added, which ca
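
[Editor's note] For comparison, a hedged sketch of the two declarations (topic, store names, and class name are placeholders). A GlobalKTable is fully replicated, so every instance restores the entire topic at startup, which is why start-up time grows with each added global store; a regular KTable restores only the partitions assigned to the instance:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.GlobalKTable;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class LookupTables {
        static void declare(final StreamsBuilder builder) {
            // Fully replicated: restored in its entirety on every instance.
            final GlobalKTable<String, String> global = builder.globalTable(
                "global-lookup-topic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("global-store")
                    .withKeySerde(Serdes.String())
                    .withValueSerde(Serdes.String()));

            // Partitioned: each instance restores only its assigned shards.
            final KTable<String, String> partitioned = builder.table(
                "sharded-lookup-topic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("sharded-store")
                    .withKeySerde(Serdes.String())
                    .withValueSerde(Serdes.String()));
        }
    }

The trade-off is query locality: the GlobalKTable can serve any key from any instance, while KTable lookups may require routing to the instance owning the key's partition.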

Re: Is it possible to send a message more than once with transactional.id set?

2018-08-25 Thread jingguo yao
Matthias: Thanks for your reply. With your answer, I have found the cause of my problem. There is nothing wrong with the KafkaProducer code. The problem is with the use of KafkaConsumer. I am storing committed offsets outside of Kafka. I am counting the received consumer records to compute committ
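
[Editor's note] For anyone else storing offsets outside Kafka with a read_committed consumer: transaction markers (commit/abort control records) occupy offsets but are never delivered to the application, so counting received records undercounts the position. A hedged sketch of tracking positions from the record offsets instead; process() and saveToExternalStore() are hypothetical placeholders for the application's logic:

    import java.time.Duration;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class ExternalOffsetTracking {
        static void pollOnce(final KafkaConsumer<String, String> consumer) {
            final Map<TopicPartition, Long> positions = new HashMap<>();
            final ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (final ConsumerRecord<String, String> record : records) {
                process(record);
                // The offset to resume from is the last seen offset + 1; gaps
                // left by transaction markers and aborted records are then
                // skipped naturally on the next fetch.
                positions.put(new TopicPartition(record.topic(), record.partition()),
                              record.offset() + 1);
            }
            saveToExternalStore(positions);
        }

        // Hypothetical placeholders for the application's logic.
        static void process(final ConsumerRecord<String, String> record) { }
        static void saveToExternalStore(final Map<TopicPartition, Long> positions) { }
    }

On restart, the application would seek() each partition to the stored position rather than relying on Kafka's committed offsets.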