Please find my reply inline (marked in blue in the original):
On Thu, Aug 29, 2019 at 11:32 AM Lisheng Wang wrote:
> Hi
>
> About question 1, it doesn't matter how many consumers are in the same
> consumer group.
>
> So you mean the broker acting as the coordinator did not crash at all before?
>
We didn't see any shutdown …
Hi
None of your images can be shown; the mail server aggressively strips
attachments.
Best,
Lisheng
荣益丰 wrote on Thu, Aug 29, 2019 at 12:47 PM:
> Forwarded message
> From: "荣益丰"
> Sent: 2019-08-29 08:31:40
> To: users-subscr...@kafka.apache.org
> Subject: How to resolve Kafka poll blocking
Hi
About question 1, it doesn't matter how many consumers are in the same
consumer group.
So you mean the broker acting as the coordinator did not crash at all before?
May I know whether exactly one broker (the coordinator) is unavailable, or
many? If only one, you can try to transfer the leader of the _
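If the goal is to move leadership back to the preferred replica, a minimal
sketch using the Java AdminClient is below. It assumes brokers and clients on
2.4 or newer (where electLeaders was added; older clusters would use the
preferred-replica-election tool) and uses a hypothetical bootstrap address,
topic, and partition:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class TransferLeader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Hypothetical bootstrap address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Hypothetical partition; substitute the partition whose leader
            // you want moved back to its preferred (first-assigned) replica.
            TopicPartition tp = new TopicPartition("my-topic", 0);

            // Ask the controller to run a preferred-replica election for it.
            admin.electLeaders(ElectionType.PREFERRED, Collections.singleton(tp))
                 .partitions()
                 .get();
        }
    }
}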
Hi all,
Sorry for disturbing you guys. Though I don't think this is a proper place to
do this, I need your help, your vote, your holy vote, for us Chinese, for
conscience and justice, for a better world.
In its over 70 years of ruling China, the Chinese Communist Party has done
many horrible
Forwarded message
From: "荣益丰"
Sent: 2019-08-29 08:31:40
To: users-subscr...@kafka.apache.org
Subject: How to resolve Kafka poll blocking
Will and Garvit, you can use a load balancer with health checks for this
purpose.
Ryanne
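For the client side of that setup, a rough sketch is below: it probes each
cluster with an AdminClient metadata request (the same kind of check a load
balancer's health check would run) and picks the first cluster that answers.
The bootstrap addresses are hypothetical placeholders:

import java.util.Properties;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class ClusterHealthCheck {

    // A cluster counts as healthy if it answers a metadata request in time.
    static boolean isHealthy(String bootstrap) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "5000");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeCluster().nodes().get(5, TimeUnit.SECONDS);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical addresses for the primary and standby clusters.
        String primary = "primary-kafka:9092";
        String secondary = "secondary-kafka:9092";
        String active = isHealthy(primary) ? primary : secondary;
        System.out.println("bootstrap.servers -> " + active);
    }
}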
On Wed, Aug 28, 2019, 6:09 PM Will Weber wrote:
> Apologies for piggybacking on a thread, figured the discussion was pretty
> relevant to a thought I had kicking around my brain.
>
> In the event of complete failure or sustained loss of connectivity of the
> first cluster, could the secondary cluster act as a failover?
Hi,
We are facing the following issues with our Kafka cluster.
- Kafka version: 2.0.0
- Cluster configuration:
- Number of brokers: 14
- Per broker: 37 GB memory and 14 cores
- Topics: 40 - 50
- Partitions per topic: 32
- Replicas: 3
- Min In Sync Replica: 2
- __con
Hi Koushik
It seems something prevented the follower from replicating in time, so it was
kicked out of the ISR.
May I know if anything could lead to that issue, e.g. throughput so high that
replication cannot complete in time, or a record that cannot be replicated to
the follower, as so
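To watch for this from the client side, a small sketch (hypothetical topic and
bootstrap address) that compares each partition's ISR against its full replica
list:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class UnderReplicatedCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singletonList("my-topic"))
                    .all().get()
                    .get("my-topic");

            for (TopicPartitionInfo p : desc.partitions()) {
                // A follower that falls behind for more than
                // replica.lag.time.max.ms is dropped from the ISR.
                if (p.isr().size() < p.replicas().size()) {
                    System.out.printf("partition %d: isr=%s replicas=%s%n",
                            p.partition(), p.isr(), p.replicas());
                }
            }
        }
    }
}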
Apologies for piggybacking on a thread, figured the discussion was pretty
relevant to a thought I had kicking around my brain.
In the event of complete failure or sustained loss of connectivity of the
first cluster, could the secondary cluster act as a failover for a given
configuration?
Assuming
Hi All,
We had a topic partition (with 5 replicas) go offline when the leader of the
partition went down. Below is some analysis.
Kafka server: 1.1; relevant config: replica.fetch.wait.max.ms=500,
replica.fetch.min.bytes=5, replica.lag.time.max.ms=1
Topic partition (Test.Request
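To confirm whether a partition is actually offline (as opposed to merely
under-replicated), one can check for a null leader via the AdminClient. A
sketch below, with the truncated topic name above standing in as a
hypothetical example:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class OfflinePartitionCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singletonList("Test.Request"))
                    .all().get()
                    .get("Test.Request");

            for (TopicPartitionInfo p : desc.partitions()) {
                // leader() is null when no in-sync replica was eligible to
                // take over leadership, i.e. the partition is offline.
                if (p.leader() == null) {
                    System.out.printf("partition %d OFFLINE, isr=%s%n",
                            p.partition(), p.isr());
                }
            }
        }
    }
}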