Re: offset reset on unavailability

2020-01-29 Thread Puneet Saha
unsubscribe

On Wed, Jan 29, 2020 at 6:31 PM Sergey Shelukhin wrote:

> Hi.
> We've run into a situation where the Kafka cluster was unstable, but some
> brokers were still up and responding.
> Some of the consumers restarted at that time and were unable to retrieve
> their committed offset.
> We run with auto.offset.reset=earliest by default, for bootstrap; after
> some time, these consumers reset their committed offset to earliest and
> started reprocessing a large number of events.
>
> We are using Confluent.Kafka client.
> Is that an expected behavior?
> Is there an option to reset the offset only on a positive acknowledgment
> that no offset is stored for this consumer?
> We'd like cases where the offset cannot be retrieved due to a transient
> condition to result in retries, or at least a failure.
>
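One way to get the fail-instead-of-reset behavior asked about above is to set auto.offset.reset=none: with that value the Java consumer throws NoOffsetForPartitionException when no committed offset exists, rather than silently rewinding, and the Confluent.Kafka client exposes a corresponding AutoOffsetReset.Error setting. A minimal consumer-config sketch (the group name is a placeholder):

```
# Fail loudly instead of resetting when no committed offset is found.
auto.offset.reset=none
group.id=my-consumer-group        # placeholder group name
enable.auto.commit=false          # commit explicitly after processing
```

Note that none also raises for a genuinely new group with no committed offsets, so the application must seed initial positions itself (e.g. seek to a chosen offset on the raised exception) before relying on this.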


Re: Very long consumer rebalances

2018-08-10 Thread Puneet Saha
Please remove me from the list

On Fri, Jul 6, 2018 at 2:55 AM Shantanu Deshmukh wrote:

> Hello everyone,
>
> We are running a 3-broker Kafka 0.10.0.1 cluster. We have a Java app which
> spawns many consumer threads consuming from different topics. For every
> topic we have specified a different consumer group. Often, whenever this
> application is restarted, a consumer group on one or two topics takes more
> than 5 minutes to receive its partition assignment. Until then, consumers
> for that topic don't consume anything. If I go to a Kafka broker, run
> consumer-groups.sh, and describe that particular group, I see that it is
> rebalancing. Time-critical data is stored in that topic and we cannot
> tolerate such long delays. What can be the reason for such long rebalances?
>
> Here's our consumer config
>
>
> auto.commit.interval.ms = 3000
> auto.offset.reset = latest
> bootstrap.servers = [x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092]
> check.crcs = true
> client.id =
> connections.max.idle.ms = 54
> enable.auto.commit = true
> exclude.internal.topics = true
> fetch.max.bytes = 52428800
> fetch.max.wait.ms = 500
> fetch.min.bytes = 1
> group.id = otp-notifications-consumer
> heartbeat.interval.ms = 3000
> interceptor.classes = null
> key.deserializer = class
> org.apache.kafka.common.serialization.StringDeserializer
> max.partition.fetch.bytes = 1048576
> max.poll.interval.ms = 30
> max.poll.records = 50
> metadata.max.age.ms = 30
> metric.reporters = []
> metrics.num.samples = 2
> metrics.sample.window.ms = 3
> partition.assignment.strategy = [class
> org.apache.kafka.clients.consumer.RangeAssignor]
> receive.buffer.bytes = 65536
> reconnect.backoff.ms = 50
> request.timeout.ms = 305000
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.min.time.before.relogin = 6
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.ticket.renew.window.factor = 0.8
> sasl.mechanism = GSSAPI
> security.protocol = SSL
> send.buffer.bytes = 131072
> session.timeout.ms = 30
> ssl.cipher.suites = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.endpoint.identification.algorithm = null
> ssl.key.password = null
> ssl.keymanager.algorithm = SunX509
> ssl.keystore.location = null
> ssl.keystore.password = null
> ssl.keystore.type = JKS
> ssl.protocol = TLS
> ssl.provider = null
> ssl.secure.random.implementation = null
> ssl.trustmanager.algorithm = PKIX
> ssl.truststore.location = /x/x/client.truststore.jks
> ssl.truststore.password = [hidden]
> ssl.truststore.type = JKS
> value.deserializer = class
> org.apache.kafka.common.serialization.StringDeserializer
>
> Please help.
>
> *Thanks & Regards,*
> *Shantanu Deshmukh*
>
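Rebalance duration is usually governed by the timing settings in the config above: the group coordinator waits for every member to rejoin before completing a rebalance, so a member that is slow to call poll() can stall the whole group, and session.timeout.ms controls how long a silent member keeps blocking it. A sketch of the settings commonly tuned together (the numbers below are illustrative defaults, not a recommendation for this workload; note also that max.poll.interval.ms was only introduced in 0.10.1, so on a 0.10.0.1 broker the effective behavior depends on the client version):

```
# Illustrative values only; tune to your actual processing times.
session.timeout.ms=30000        # how long a silent member stays in the group
heartbeat.interval.ms=3000      # typically ~1/3 of session.timeout.ms or less
max.poll.interval.ms=300000     # upper bound on time between poll() calls;
                                # a rebalance can wait up to this long
max.poll.records=50             # keep per-poll work small so poll() stays frequent
```

If the rebalance consistently takes close to five minutes, the max.poll.interval.ms of 300000 ms in the posted config is a plausible match: a restarted member may not be considered gone until that interval expires.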