Hi Dinesh
Maybe you can check
"kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=FetchFollower"
on all brokers to see if some brokers are lower than others?
I think if some followers are busy replicating, then that metric will be
lower, since maybe there are many records w
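If it helps, here is a rough sketch of polling that MBean from each broker over JMX. It assumes the brokers expose JMX (e.g. started with JMX_PORT=9999) and reads the "Mean" attribute; the port, host list, and attribute choice are assumptions:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteTimeCheck {
    public static void main(String[] args) throws Exception {
        // Pass broker hostnames as arguments; assumes JMX_PORT=9999 on each.
        for (String host : args) {
            JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":9999/jmxrmi");
            try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection conn = jmxc.getMBeanServerConnection();
                ObjectName mbean = new ObjectName(
                    "kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=FetchFollower");
                // The histogram also exposes Max, Count, percentiles, etc.
                Object mean = conn.getAttribute(mbean, "Mean");
                System.out.println(host + " FetchFollower RemoteTimeMs Mean=" + mean);
            }
        }
    }
}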
The first thing I would do is update to the latest Java 8 release. Just in
case you are hitting any G1GC bugs in such an old version.
Mark
On Thu, 22 Aug 2019, 07:17 Xiaobing Bu, wrote:
> It's not a network issue, since I had captured the network packets.
> When the GC does remark and class unloading,
It's not a network issue, since I had captured the network packets.
When the GC does remark and class unloading, we guess the Java application is in
stop-the-world mode and cannot send heartbeats to zookeeper, and after
some time the broker is disconnected from the cluster,
so other clients and brokers
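To line the pauses up with the session expirations, you could turn on GC pause logging and give the session more headroom. A sketch, assuming Java 8 flag names; values are illustrative:

# JVM options for the broker (Java 8 syntax), logging every stop-the-world pause
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/kafka/gc.log

# server.properties: tolerate longer pauses before the session expires (default is 6000)
zookeeper.session.timeout.ms=18000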
Hi,
Our Kafka __consumer_offsets cleanup.policy is set to delete, and retention.ms
is set to 86400000 (one day), because we found the default compact mode used
very large disk space, and we changed to delete mode many years ago.
Now we are trying to understand Kafka failover, and found some partitions under
__conus
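For reference, an override like that is applied roughly as follows (the host is a placeholder, and on newer brokers --bootstrap-server replaces --zookeeper):

kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type topics --entity-name __consumer_offsets \
  --add-config cleanup.policy=delete,retention.ms=86400000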
Hi
on 2019/8/22 13:48, Xiaobing Bu wrote:
[2019-08-07 02:30:04,802] WARN Attempting to send response via channel for
which there is no open connection, connection id 10.97.133.17:9092
-10.97.200.19:58674-52592642 (kafka.network.Processor)
It seems like a network issue.
Did you double check the n
Hi Lisheng,
Yes, it's RemoteTimeMs,
"kafka.network:type=RequestMetrics,name=RemoteTimeMs,request=Produce"
Sure, I'll try increasing the number of replica fetchers; the other
configuration is as suggested by the paper.
I was also wondering whether I can track which topic or which specific
follower is
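For anyone following along, the knob in question is a broker-side setting; the value below is only illustrative (the default is 1):

# server.properties
num.replica.fetchers=4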
Hi all,
Our Kafka cluster in the production environment has a problem: some brokers
time out connecting to zookeeper almost every day. There are four brokers, each
with a 10-core CPU and 8G of memory.
The following is the server.log; it says the broker cannot connect to
zookeeper. I had captured packets using t
Hi Dinesh
Just want to check whether the metric you mentioned is "RemoteTimeMs" or not?
If so, the meaning of "RemoteTimeMs" is the time the request is waiting on
a remote client for produce. A high value can imply a slow network
connection.
That explanation comes from "Optimizing Your Apache Kafka™ De
Hi,
We have a Kafka (version 2.0.0) cluster with multiple brokers, and many
producers with acks=all, or possibly acks=1 (which we don't control). There's
an increase in produce time from 10ms to ~150ms.
With JMX metrics we're able to see "remote" is taking more time, which I
figured is the followers.
1. Is ther
Hi,
I agree with Liam, the OOM looks to happen at produce time. I would
look into that.
But with your message size, I would recommend investigating implementing
claim checks (a sketch follows below). It will be much easier and reduce the avg message size.
-- Pere
On Thu, 22 Aug 2019, 01:12 Liam Clarke wrote:
> Hi
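For the archive, a minimal sketch of what a claim check can look like. BlobStore, the topic name, and the wiring are all made up for illustration; the point is that the large payload goes to external storage and only a small reference travels through Kafka:

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ClaimCheckProducer {
    // Stand-in for whatever external store you use (S3, HDFS, a database...).
    interface BlobStore { String put(byte[] payload); }

    public static void main(String[] args) {
        BlobStore blobStore = payload -> {
            String key = UUID.randomUUID().toString();
            // ... write payload to the external store under `key` ...
            return key;
        };
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String largeText = "..."; // the 16M document
            String ref = blobStore.put(largeText.getBytes(StandardCharsets.UTF_8));
            // Only the claim check (the reference) is sent through Kafka;
            // the consumer resolves it back against the blob store.
            producer.send(new ProducerRecord<>("large-docs", ref));
        }
    }
}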
Hi,
The error looks like a missing configuration value. Good sources of
examples of how to set up security can be found at
https://github.com/purbon/kafka-security-playbook or
https://docs.confluent.io/current/kafka/authentication_ssl.html.
I would verify them and see if you're using the same con
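In case it helps, the usual minimum broker-side set looks roughly like this (paths and passwords are placeholders):

# server.properties
listeners=SSL://:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit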
Thanks Matthias for the prompt response.
Now, just out of curiosity, how does that work? I thought it was not possible
to easily delete topic data...
On Wed, Aug 21, 2019 at 4:51 PM Matthias J. Sax
wrote:
> No need to worry about this.
>
> Kafka Streams used "purge data" calls, to actively delete d
Hi,
I have followed the steps to secure the brokers using SSL. I have signed
the server certificate using an internal CA. I have the keystore with the server
certificate, the private key, and the CA. Also, the truststore has only the CA.
Unfortunately I am unable to start the broker with the following server
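For comparison, the keystore/truststore steps with an internal CA usually look roughly like this (aliases and file names are placeholders, and the signing step depends on your CA):

keytool -keystore kafka.server.keystore.jks -alias localhost -genkey -keyalg RSA -validity 365
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-file
# ... have your internal CA sign cert-file, producing cert-signed ...
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert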
Hi Vic,
Your OOM is happening before any compression is applied. It's occurring
when the StringSerializer is converting the string to bytes. Looking deeper
into StringCoding.encode, it's first allocating a byte array to fit your
string, and this is where your OOM is occurring, line 300 of
String
Hi Sampath
Maybe you need to check the logs to see if you can find any clue; it's hard
to say why that happened.
Best,
Lisheng
On Thu, Aug 22, 2019 at 2:24 AM, sampath kumar wrote:
> Hi Lisheng,
>
> I guess the issue is not with message.max.bytes; the same messages are consumed
> after just running a rebalance.
>
> Regards,
> Sampat
No need to worry about this.
Kafka Streams uses "purge data" calls to actively delete data from
those topics after the records are processed. Hence, those topics won't
grow unbounded but are "truncated" on a regular basis.
-Matthias
On 8/21/19 11:38 AM, Murilo Tavares wrote:
> Hi
> I have a co
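For the curious, the "purge data" call corresponds to the admin deleteRecords API, which truncates a partition up to a given offset. A minimal sketch (topic name and offset are made up):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class PurgeExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition(
                "my-app-KSTREAM-AGGREGATE-0000000003-repartition", 0);
            // Everything below offset 12345 becomes eligible for deletion.
            admin.deleteRecords(Collections.singletonMap(tp, RecordsToDelete.beforeOffset(12345L)))
                 .all().get();
        }
    }
}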
+users mailing list
David,
I don't think I really understand your email. Are you saying that this can
already be achieved only using the READ ACL?
Thanks
Adam
On Wed, Aug 21, 2019 at 3:58 AM David Jacot wrote:
> Hello,
>
> It would be better to ask such question on the user mailing list.
>
Hi
I have a complex KafkaStreams topology, where I have a bunch of KTables
that I regroup (rekeying) and aggregate so I can join them.
I've noticed that the "-repartition" topics created by the groupBy
operations have a very long retention by default (Long.MAX_VALUE).
I'm a bit concerned about the
Hi Lisheng,
I guess the issue is not with message.max.bytes; the same messages are consumed
after just running a rebalance.
Regards,
Sampath
On Wed, Aug 21, 2019 at 7:57 PM Lisheng Wang
wrote:
> Hi Sampath
>
> the description of fetch.max.bytes is the following, from
> https://kafka.apache.org/documentatio
Hello, please share the config of your consumer and any Exception you see
in the logs.
Cheers!
On Wed, Aug 21, 2019 at 4:30 PM Shreesha Hebbar
wrote:
> >
> > Hi all,
> > Noticing some of the consumer fetches timing out (2-10 sec), even though
> > there is a steady rate of ingestio
>
> Hi all,
> Noticing some of the consumer fetches timing out (2-10 sec), even though
> there is a steady rate of ingestion on the topic.
> Any leads on how to triage such an issue? Also, for some reason a 60 sec
> poll is getting messages; is that normal?
>
Hi Sampath
the description of fetch.max.bytes is the following, from
https://kafka.apache.org/documentation/#consumerconfigs
The maximum amount of data the server should return for a fetch request.
Records are fetched in batches by the consumer, and if the first record
batch in the first non-empty par
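Concretely, the settings to cross-check between the two sides look roughly like this (values are illustrative):

# consumer side
fetch.max.bytes=52428800
max.partition.fetch.bytes=16777216
# broker side, for comparison
message.max.bytes=16777216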
I have to deal with large (16M) text messages in my Kafka system, so I
increased several message limit settings on the broker/producer/consumer side,
and now the system is able to get them through. I also tried to enable
compression in the producer:
"compression.type" = "gzip"
but to my surprise ended up
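One thing worth double-checking when compression comes into play: the size limits still apply to the batches the producer sends, so these are the settings usually involved (values are illustrative):

# producer side
compression.type=gzip
max.request.size=20971520
# broker / topic side
message.max.bytes=20971520
replica.fetch.max.bytes=20971520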
Thanks!
We'll try this method soon.
On Wed, Aug 21, 2019 at 3:38 PM, David Jacot wrote:
> Hello,
>
> Yes, that should be fine. If you move the data to a new machine and use the
> corresponding broker.id, it is basically the same broker but on a different
> vm.
>
> Best,
> David
>
> On Fri, Aug 16, 2019 at 10:03 AM
Lisheng,
The issue is not with fetch.max.bytes, as the same messages start processing
after restarting the consumer.
Regards,
Sampath
On Wed, Aug 21, 2019 at 4:30 PM Lisheng Wang
wrote:
> Hi Sampath
>
> Can you confirm that "fetch.max.bytes" on consumer is not smaller than
> "message.max.bytes" on broker?
Hi Sampath
Can you confirm that "fetch.max.bytes" on consumer is not smaller than
"message.max.bytes" on broker?
Maybe you need to check the consumer log to see if you can find any clue once
you enable it. If no error/exception is found on the consumer side, you may
need to change the log level to "debug" to get more d
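For reference, turning that on in the consumer application's log4j.properties looks roughly like this (logger names assume the Java client):

log4j.logger.org.apache.kafka.clients.consumer=DEBUG
log4j.logger.org.apache.kafka.clients.consumer.internals=DEBUG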
Hi Lisheng,
Thanks for the response.
Right now we have enabled info logging on the broker. However, logs are not
enabled for the consumer client; we will enable them.
Yes, when we manually stop and start the consumer in the affected microservice
instance, a rebalance triggers and consuming resumes.
And on the broker side con
May I know what log level you configured on the consumer and broker?
You say it will resume when a rebalance happens, so the consumer is alive; can
you see any heartbeat information in the consumer log?
Best,
Lisheng
On Wed, Aug 21, 2019 at 5:23 PM, sampath kumar wrote:
> Hi,
>
> Using Broker 5.3.0, new consumers(Cons
Hi,
Using Broker 5.3.0, new consumers (consumers managed by brokers). Brokers
are deployed in a Kubernetes environment.
Number of brokers: 3, with a 3-node Zookeeper setup.
For one of the topics, "inventory.request", we have replication factor 3,
with in-sync replicas configured as 2, and the partition count is 1024.
W
Thanks. Yes, you're right re (2), I'm new to this stuff.
I've currently got code which scans the GlobalKTable in a punctuator and
inverts it to another data structure, which I cache (a minute seems suitable
for my application), as the only "obvious" way of getting data out of a
GlobalKTable (ot
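For the archive, the scan-and-invert approach can be sketched like this; the store handle, types, and the per-minute refresh are assumptions (the store would come from context.getStateStore(...) in the processor's init, or from KafkaStreams#store for interactive queries):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class InvertedViewRefresher {
    private final ReadOnlyKeyValueStore<String, String> store; // GlobalKTable's store
    private final AtomicReference<Map<String, String>> cache = new AtomicReference<>();

    InvertedViewRefresher(ReadOnlyKeyValueStore<String, String> store) {
        this.store = store;
    }

    // Call this from a punctuator scheduled, say, once a minute.
    void refresh() {
        Map<String, String> inverted = new HashMap<>();
        try (KeyValueIterator<String, String> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, String> kv = it.next();
                inverted.put(kv.value, kv.key); // invert: value -> key
            }
        }
        cache.set(inverted);
    }

    Map<String, String> view() {
        return cache.get();
    }
}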
Hello,
Yes, that should be fine. If you move the data to a new machine and use the
corresponding broker.id, it is basically the same broker but on a different
vm.
Best,
David
On Fri, Aug 16, 2019 at 10:03 AM Xiangyuan LI
wrote:
> Hi,
> Now our Kafka cluster has 5 brokers whose broker.ids are 1,2,3,
Hello,
As of today, the producer is able to talk to only one Kafka cluster.
*bootstrap.servers* is to provide a list of brokers belonging to the same
cluster to ensure at least one is available.
In your case, you have to replicate the data from the first cluster to the
second cluster with mirror
Hello Emanuel,
It does not work because Kafka advertises the IP it is configured with (the
ones of the brokers) in the metadata response. I think you have two options:
1. You configure the default listener of each broker to advertise the IP
address of their proxy instance. It means that the inter-
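Option 1 in config terms looks roughly like this on each broker (listener name, host, and port are placeholders):

# server.properties on broker 1, pointing clients at its proxy instance
advertised.listeners=PLAINTEXT://proxy-for-broker-1.example.com:9092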
> So with
>> exactly_once, it must roll-back commit(s) to the state store in a failure
>> scenario?
Yes. Dirty writes into the stores are "cleaned up" if you enable
exactly-once processing semantics.
"commit" and never rolled back, as a commit indicates successful
processing :)
-Matthias
On 8/
Hi Eliza,
Kafka Streams, Spark Streaming, Flink, and Storm are all good. They also
all have their caveats. It's really hard to say that X is the best.
For example, Kafka Streams can't read from one Kafka cluster and write to
another, but Spark can.
But then Spark offers two flavours of str
Hi Alex,
if you are interested in understanding exactly-once a bit more in
detail, I recommend you watch the following Kafka Summit talk by
Matthias
https://www.confluent.io/kafka-summit-london18/dont-repeat-yourself-introducing-exactly-once-semantics-in-apache-kafka
Best,
Bruno
On Wed, Aug