Hi,
Clogging can happen if, as seems to be the case here, the requests are
network-bound.
Just to confirm your configuration, does your broker configuration look
like this?
"num.replica.fetchers": 4,
"replica.fetch.wait.max.ms": 500,
"num.recovery.threads.per.data.dir": 4,
Hi all,
I'm running a little test, with ZooKeeper, Kafka and the Schema
Registry all running locally. I'm using the new consumer, and the 2.0.0-SNAPSHOT
version of the registry, which has a decoder giving back instances of the
schema object.
It's all working fine, but I see a consistent delay
Great, thanks for the information! So it is definitely acks=all we want to go
for. Unfortunately we ran into a blocking issue in our production-like test
environment which we have not been able to find a solution for. So here it is;
ANY idea on how we could possibly find a solution is very
Hi!
Here are our settings for the properties requested:
num.network.threads=3
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
We don't set the following properties at all, so I assume they take the
defaults given in the documentation.
Hi,
This should help :)
During my benchmarks, I noticed that if a 5-node Kafka cluster running 1
topic is given a continuous injection of 50 GB in one shot (using a modified
producer performance script, which writes my custom data to Kafka), the
last replica can sometimes lag, and it used to catch
Hi, I am wondering why the increased replication factor is related to a
partition instead of the topic. Isn't that very hard to manage? Can anyone
help me clarify this?
On Nov 27, 2015 6:51 AM, "Dillian Murphey" wrote:
> Alright, thank you all. Appreciate it.
>
> Cheers
>
Hi,
Can someone please let me know the following:
1. Is it possible to specify the maximum length of a particular topic (in
terms of the number of messages) in Kafka?
2. Also, how does Kafka behave when a particular topic gets full?
3. Can the producer be blocked if a topic gets full?
Hi,
Of all the parameters, keeping num.replica.fetchers at 4 or higher can
help.
Please try it out and let us know if it worked.
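As a minimal sketch, the suggestion above would go into each broker's server.properties (the value 4 is a starting point to try, not a verified tuning):

```properties
# Number of fetcher threads used to replicate messages from leaders;
# increasing this can improve follower replication parallelism.
num.replica.fetchers=4
```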
Thanks,
Prabhjot
On Nov 28, 2015 4:59 PM, "Andreas Flinck"
wrote:
> Hi!
>
> Here are our settings for the properties
Hi,
I am new to Kafka. I went through some documents to understand Kafka. I
have some questions. Please help me understand.
1> Kafka storage:
What is the physical existence of a segment file?
What is meant by "flushing a segment file to disk"?
One of the documents mentioned
-- "A message is only
AFAIK there is no such notion as a maximum length of a topic, i.e. the offset
has no limit, except Long.MAX_VALUE I think, which should be enough for a
couple of lifetimes (about 9 * 10^18, i.e. nine quintillion).
What would be the purpose of that, besides being a nice foot-gun :)
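To put that bound in perspective, here is a back-of-the-envelope calculation (the 1 million messages/second rate is just an assumed workload, not a measured one):

```java
public class OffsetLifetime {
    public static void main(String[] args) {
        // Assume a sustained 1 million messages/second on a single partition.
        double messagesPerSecond = 1_000_000.0;
        // Seconds until the partition offset would reach Long.MAX_VALUE.
        double seconds = Long.MAX_VALUE / messagesPerSecond;
        double years = seconds / (365.25 * 24 * 3600);
        System.out.printf("~%.0f years until offset exhaustion%n", years);
    }
}
```

At that rate it works out to roughly 292,000 years, so offset exhaustion is not a practical concern.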
Marko Bonaći
The Kafka server has a data retention policy based on either time or total
log size (e.g. Kafka brokers will automatically delete the oldest data
segment if its data is older than xx milliseconds, or if the total log size
has exceeded yy MB, with the threshold values being configurable).
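For reference, the time- and size-based thresholds described above can be set per topic; a sketch with illustrative values (7 days and ~1 GB are examples, not recommendations):

```properties
# Delete segments whose data is older than 7 days...
retention.ms=604800000
# ...or once the partition log exceeds ~1 GB, whichever comes first.
retention.bytes=1073741824
```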
The producer clients
Let me explain my use case:
We have an ELK setup in which logstash-forwarders push logs from different
services to Logstash. Logstash then pushes them to Kafka. The Logstash
consumer then pulls them out of Kafka and indexes them into an
Elasticsearch cluster.
We are trying to ensure that no