Metadata max idle vs age ms

2021-05-24 Thread Neeraj Vaidya
Hi All, (I have asked this on SO as well, but happy to paste the response there 
if I get an answer here, or vice versa).

I would like to know the impact of setting both of these options on the 
Producer API.

Can someone please let me know why the metadata has not expired in Scenario-1 
even though metadata.max.age.ms has elapsed?

Scenario-1

Set metadata.max.age.ms to 10 seconds.
Leave metadata.max.idle.ms at its default of 5 minutes.
Start a Kafka producer and publish some messages to a topic using this producer.
Shut down the Kafka cluster.
Allow more than 10 seconds to elapse. I am hoping that this would cause 
the producer to automatically trigger a request to update its metadata.
Now try to produce a message to the topic using KafkaProducer#send from the 
producer.
This operation returns immediately, without blocking.
I was expecting this call to block, since more than 10 seconds had elapsed and I 
expected the metadata to expire based on metadata.max.age.ms being set to 10 
seconds.

Scenario-2

However, if I do the following, then the KafkaProducer#send operation blocks:
Set both metadata.max.age.ms and metadata.max.idle.ms to 10 seconds.
Start a Kafka producer.
Publish some messages to a topic using this producer.
Shut down the Kafka cluster.
Allow more than 10 seconds to elapse.
Try to produce a message to the topic using KafkaProducer#send from the 
producer.
This operation blocks trying to fetch metadata.
I am not sure why it blocks now, but not in the first scenario.
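
For reference, a minimal sketch (in Java) of the producer setup used in both scenarios; the bootstrap address, topic name, and sleep duration are assumptions for illustration only:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MetadataAgeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Scenario-1: only metadata.max.age.ms is lowered; metadata.max.idle.ms keeps its 5-minute default.
        props.put("metadata.max.age.ms", "10000");
        // Scenario-2: uncomment the next line so both settings are 10 seconds.
        // props.put("metadata.max.idle.ms", "10000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "k", "before shutdown")).get();
            // Shut down the Kafka cluster here, then wait longer than metadata.max.age.ms.
            Thread.sleep(15_000);
            // Reported behaviour: returns quickly in Scenario-1, blocks fetching metadata in Scenario-2.
            producer.send(new ProducerRecord<>("test-topic", "k", "after shutdown"));
        }
    }
}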


Produce response is not sent in replica.lag.time.max.ms

2021-05-24 Thread 中川恵太
Hello Apache Kafka Community,
Could you please help me?

I'm testing Kafka.
- Environment
# of Kafka brokers: 5
Producer config:
  acks=all
  request.timeout.ms=47000
  delivery.timeout.ms=5
Broker config:
  replica.lag.time.max.ms=45000
  replica.fetch.wait.max.ms=500
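
A minimal sketch (in Java) of a producer matching the client configuration above; the bootstrap address and the delivery timeout value shown here are placeholders, and the topic name is taken from the log lines in the timeline below:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class AcksAllProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker3:9092"); // placeholder address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all");                  // leader waits for the full ISR before responding
        props.put("request.timeout.ms", "47000");
        props.put("delivery.timeout.ms", "50000"); // placeholder; use the value from the test environment

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // .get() blocks until the Produce response arrives or delivery.timeout.ms expires.
            producer.send(new ProducerRecord<>("a-snd-d00", "k", "v")).get();
        }
    }
}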

I dropped packets from broker#1 to the other brokers' port 9092.
This means broker#1 cannot send replication fetch requests to the others.
Then I produced data to broker#3.
I expected broker#3 to wait for fetch requests for 
replica.lag.time.max.ms (45 sec), then remove broker#1 from the ISR and send the 
Produce response.
But in fact, broker#3 removed broker#1 from the ISR and sent the response 57 seconds 
after receiving the Produce request. (57 sec is not a constant value; it varies each 
time.)

Is my expectation wrong?
Why isn't broker#1 removed from the ISR within replica.lag.time.max.ms?

# timeline
17:39:30  Drop packets from broker#1 to the others
17:39:30  Producer sent Produce request to broker#3. Broker#3 received it.
[2021-05-24 17:39:30,163] INFO [Log partition=a-snd-d00-87, dir=/kafka/data] 
Rolled new log segment at offset 136 in 1 ms. (kafka.log.Log)
17:40:17  Broker#3 sent Produce response with "Request Timed Out"
17:40:17  Producer sent Produce request to broker#3 (retry)
17:40:20  Producer timed out with delivery.timeout.ms
17:40:27  Broker#3 removed broker#1 from ISR, and sent Produce response with 
"No Error"
[2021-05-24 17:40:27,788] INFO [Partition a-snd-d00-87 broker=3] Shrinking ISR 
from 5,1,2,3,4 to 5,2,3,4. Leader: (highWatermark: 136, endOffset: 138). Out of 
sync replicas: (brokerId: 1, endOffset: 136). (kafka.cluster.Partition)

Best regards,
Keita


Re: How to reduce the latency to interact with a topic?

2021-05-24 Thread Shilin Wu
Summary of Configurations for Optimizing Latency
Producer

   - linger.ms=0 (default 0)
   - compression.type=none (default none, meaning no compression)
   - acks=1 (default 1)

Consumer

   - fetch.min.bytes=1 (default 1)
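
A minimal sketch (in Java) of a producer and consumer created with these settings; the bootstrap address and consumer group id are placeholders:

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class LowLatencyClients {
    static KafkaProducer<String, String> producer() {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // placeholder
        p.put("key.serializer", StringSerializer.class.getName());
        p.put("value.serializer", StringSerializer.class.getName());
        p.put("linger.ms", "0");           // send immediately instead of waiting to batch
        p.put("compression.type", "none"); // avoid compression overhead
        p.put("acks", "1");                // wait only for the leader's acknowledgement
        return new KafkaProducer<>(p);
    }

    static KafkaConsumer<String, String> consumer() {
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");  // placeholder
        c.put("group.id", "invalidation-listeners");   // placeholder group id
        c.put("key.deserializer", StringDeserializer.class.getName());
        c.put("value.deserializer", StringDeserializer.class.getName());
        c.put("fetch.min.bytes", "1"); // broker responds as soon as any data is available
        return new KafkaConsumer<>(c);
    }
}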

Wu Shilin
Solution Architect

On Mon, May 24, 2021 at 4:07 PM benitocm  wrote:

> Hi all,
>
> I am using a Kafka topic to handle invalidation events inside a system A
> that consists of different nodes. When a node of System A detects that a
> situation requiring invalidation happens, it produces an event to the
> invalidation topic. The rest of the nodes in System A consume that
> invalidation topic to be aware of those invalidations and process them.
> My question is: how can I configure the producers and consumers of that
> topic to minimize the time of that end-to-end scenario? I mean I am
> interested in reducing the time it takes for an event to be written
> into Kafka and the time it takes for a consumer to consume
> those events.
>
> Thanks in advance
>


why does kafka not use the rebalance strategy of rocketmq

2021-05-24 Thread 孙森
Hi,
Can someone tell me the reason why Kafka does not use the rebalance 
strategy of RocketMQ? When a RocketMQ consumer is rebalancing, it does not 
stop the consuming job, so it does not cause a stop-the-world pause. Without 
the coordination of a leader member, that rebalance is more lightweight.



How to reduce the latency to interact with a topic?

2021-05-24 Thread benitocm
Hi all,

I am using a Kafka topic to handle invalidation events inside a system A
that consists of different nodes. When a node of System A detects that a
situation requiring invalidation happens, it produces an event to the
invalidation topic. The rest of the nodes in System A consume that
invalidation topic to be aware of those invalidations and process them.
My question is: how can I configure the producers and consumers of that
topic to minimize the time of that end-to-end scenario? I mean I am
interested in reducing the time it takes for an event to be written
into Kafka and the time it takes for a consumer to consume
those events.
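
For illustration, a minimal sketch (in Java) of the publish/consume pattern described above; the topic name, key, and cache-eviction logic are placeholders:

import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class InvalidationBus {
    private static final String TOPIC = "invalidations"; // placeholder topic name

    // Called by the node that detects an invalidation condition.
    static void publish(KafkaProducer<String, String> producer, String cacheKey) {
        producer.send(new ProducerRecord<>(TOPIC, cacheKey, "invalidate"));
        producer.flush(); // push the record out right away rather than waiting for more
    }

    // Run by every node in System A to learn about invalidations.
    static void listen(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(Collections.singletonList(TOPIC));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
            for (ConsumerRecord<String, String> record : records) {
                // Apply the invalidation locally, e.g. evict the key from an in-memory cache.
                System.out.println("Invalidate key: " + record.key());
            }
        }
    }
}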

Thanks in advance