Cannot read messages using a specific group

2018-12-19 Thread Karim Lamouri
Hi, one of our brokers went down, and when it came back up the consumer of one topic couldn't read using the original consumer group. This doesn't output anything: bin/kafka-console-consumer --bootstrap-server server --topic ASSETS --group group_name. However, if I change the name of the group it
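A quick way to check whether the original group still has committed offsets is to ask the broker directly. A minimal sketch, assuming the Java AdminClient from Kafka 2.x; the bootstrap server, topic and group name are just the placeholders used in the post:

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class CheckGroupOffsets {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "server:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                Map<TopicPartition, OffsetAndMetadata> offsets =
                        admin.listConsumerGroupOffsets("group_name")              // placeholder group
                             .partitionsToOffsetAndMetadata()
                             .get();
                // No entries for the topic means the group has no committed offsets left,
                // so a console consumer with the default auto.offset.reset=latest would
                // sit at the log end and print nothing.
                offsets.forEach((tp, om) -> System.out.println(tp + " -> " + om.offset()));
            }
        }
    }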

Re: Cannot read messages using a specific group

2018-12-19 Thread M. Manna
What is your offset reset point (earliest/latest)? Just because you can read it by changing the consumer group doesn't mean you are reading the latest data; you may be reading from the beginning. On Wed, 19 Dec 2018 at 13:54, Karim Lamouri wrote: > Hi, > > One of our brokers went down and when it came b
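For reference, a minimal consumer sketch (not from the thread) showing where the reset policy applies; broker, topic and group names are placeholders. The policy is only used when the group has no committed offset for a partition, which is why a renamed group's behaviour depends entirely on it:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ResetPolicyDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "server:9092");  // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "new_group_name");        // placeholder
            // Used only when the group has no committed offset:
            // "earliest" replays from the beginning, "latest" waits for new data.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("ASSETS"));        // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.offset() + ": " + r.value());
                }
            }
        }
    }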

KTable.suppress(Suppressed.untilWindowCloses) does not suppress some non-final results when the kafka streams process is restarted

2018-12-19 Thread Peter Levart
Hello, I'm trying to use Kafka Streams to aggregate some time series data using 1-second tumbling time windows. The data is ordered approximately by timestamp, with some "jitter" which I'm limiting at the input with a custom TimestampExtractor that moves events into the future if they come in to
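A sketch of the kind of extractor described, assuming it clamps late timestamps forward to the latest one seen; this is an illustration of the idea, not Peter's actual code:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class MonotonicTimestampExtractor implements TimestampExtractor {
        private long maxSeen = Long.MIN_VALUE;

        @Override
        public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
            long ts = record.timestamp();
            if (ts >= maxSeen) {
                maxSeen = ts;
                return ts;
            }
            // A late ("jittered") event: move it into the future, to the newest
            // timestamp seen so far, so it still falls into an open window.
            return maxSeen;
        }
    }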

Re: KTable.suppress(Suppressed.untilWindowCloses) does not suppress some non-final results when the kafka streams process is restarted

2018-12-19 Thread Peter Levart
I see the list processor managed to smash my beautifully formatted HTML message. For that reason I'm re-sending the sample code snippet in plain text mode... Here's a sample Kafka Streams processor: KStream input = builder .stream( inpu
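Since the original snippet was mangled, here is a rough reconstruction of the pattern under discussion: 1-second tumbling windows whose results are suppressed until the window closes. Topic names, serdes, grace period and the aggregation itself are assumptions, not the original code:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class WindowedSuppressExample {
        public static Topology build() {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, Long> input = builder.stream(
                    "input-topic", Consumed.with(Serdes.String(), Serdes.Long()));

            input
                .groupByKey(Grouped.with(Serdes.String(), Serdes.Long()))
                .windowedBy(TimeWindows.of(Duration.ofSeconds(1)).grace(Duration.ofSeconds(10)))
                .aggregate(
                        () -> 0L,
                        (key, value, agg) -> agg + value,
                        Materialized.with(Serdes.String(), Serdes.Long()))
                // Emit each window's result only once, after window end + grace.
                .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                .toStream()
                .map((windowedKey, value) -> KeyValue.pair(windowedKey.key(), value))
                .to("output-topic", Produced.with(Serdes.String(), Serdes.Long()));

            return builder.build();
        }
    }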

High end-to-end latency with processing.guarantee=exactly_once

2018-12-19 Thread Dmitry Minkovsky
I have a process that spans several Kafka Streams applications. With the Streams commit interval and producer linger both set to 5 ms, this process takes ~250 ms when exactly-once delivery is disabled. With exactly-once enabled, the same process takes anywhere from 800-1200 ms. In Enabling Exactly-O
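Roughly the configuration being compared, as an illustration rather than the poster's actual setup; application id and broker are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class EosLatencyConfig {
        static Properties streamsProps(boolean exactlyOnce) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 5);               // 5 ms, as in the post
            props.put(StreamsConfig.producerPrefix(ProducerConfig.LINGER_MS_CONFIG), 5);
            // Toggling the guarantee is the only difference between the two timings.
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
                      exactlyOnce ? StreamsConfig.EXACTLY_ONCE : StreamsConfig.AT_LEAST_ONCE);
            return props;
        }
    }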

Re: High end-to-end latency with processing.guarantee=exactly_once

2018-12-19 Thread meigong.wang
Which version are you using? This bug (https://issues.apache.org/jira/browse/KAFKA-7190) may increase the latency of your application; try reducing retry.backoff.ms, whose default value is 100 ms. 王美功 Original message From: Dmitry Minkovsky dminkov...@gmail.com To: users us...@kafka.apache.org Sent: 2018-12
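The suggested change, sketched as a Streams-level override of the embedded producer's backoff; whether the producer is the right client to target here is an assumption based on the JIRA:

    // Fragment: add to an existing StreamsConfig Properties object ("props").
    // Default retry.backoff.ms is 100 ms, so each retry triggered by
    // UNKNOWN_PRODUCER_ID (KAFKA-7190) can add roughly that much latency.
    props.put(StreamsConfig.producerPrefix(
            org.apache.kafka.clients.CommonClientConfigs.RETRY_BACKOFF_MS_CONFIG), 5);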

Re: High end-to-end latency with processing.guarantee=exactly_once

2018-12-19 Thread Dmitry Minkovsky
Hello 王美功, I am using 2.1.0. And I think you hit the nail on the head, because my application is low-throughput and I am seeing UNKNOWN_PRODUCER_ID all the time with exactly-once enabled. I've googled this before but couldn't identify the cause. Thank you! Setting retry.backoff.ms to 5 brought the

Re: High end-to-end latency with processing.guarantee=exactly_once

2018-12-19 Thread Dmitry Minkovsky
Also, I have read through that issue and KIP-360 to the extent my knowledge allows, and I don't understand why I get this error constantly when exactly-once is enabled. The KIP says: > Idempotent/transactional semantics depend on the broker retaining state for each active producer id (e.g. epoch and

Re: maven dependency problems

2018-12-19 Thread big data
Solved this problem! Because of the Spring Boot dependency in the parent POM, it pulls in lots of components... On 2018/12/18 10:54 AM, big data wrote: > When I remove/comment the parent dependency like > in Module A's pom.xml, it seems OK, and streaming Kafka depends on > kafka_2.11:0.8.2.1. > > The Modula
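For anyone hitting the same thing: rather than removing the Spring Boot parent, its managed Kafka client version can usually be overridden from the child POM. A sketch, assuming a Spring Boot parent that exposes the kafka.version dependency-management property; the version number is only an example:

    <!-- In the module's pom.xml, keeping the Spring Boot parent -->
    <properties>
        <!-- Example version; use whatever client version your code actually needs -->
        <kafka.version>2.1.0</kafka.version>
    </properties>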