Hello friends,
Request your expertise on this problem I'm facing
Thanks
On Sep 4, 2015 8:09 PM, "Prabhjot Bharaj" wrote:
> Hi,
>
> I am experiencing super slow throughput when using acks=-1
> Some further progress in continuation to the test in my previous email:-
>
> *Topic details -*
>
> Topi
Hey Shashank,
If you'd like to get started with the new consumer, I urge you to check out
trunk and take it for a spin. The API is still a little unstable, but I
doubt that changes from here on will be too dramatic. If you have any
questions or run into any issues, this mailing list is a great plac
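If it helps, this is a sketch of the consumer configuration I'd start from for the offset-management control mentioned below; the property names are my assumption from the current client docs, so double-check against trunk:

```properties
bootstrap.servers=localhost:9092
group.id=my-test-group
# Disable automatic offset commits so the application decides
# exactly when an offset counts as processed (via commitSync/commitAsync)
enable.auto.commit=false
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```

With auto-commit off, the poll loop commits offsets explicitly after each batch is processed, which is the main control the new API adds over the old high-level consumer.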
Hi
I am eager to start using the enhanced Consumer API, which provides better
control in terms of offset management etc. From reading through the forums,
I believe it is coming as part of the 0.8.3 release. However, there is no
tentative date for it.
Can you please give any hint on that? Also which
Jun's post is a good start, but I find it's easier to talk in terms of more
concrete reasons and guidance for having fewer or more partitions per topic.
Start with the number of brokers in the cluster. This is a good baseline
for the minimum number of partitions in a topic, as it will assure balan
No problem. Thanks for your advice. I think it would be fun to explore. I
only know how to program in Java, though. Hope it will work.
On Fri, Sep 4, 2015 at 2:03 PM, Helleren, Erik
wrote:
> I think the suggestion is to have partitions/brokers >= 1, so 32 should be
> enough.
>
> As for latency tes
I think the suggestion is to have partitions/brokers >= 1, so 32 should be
enough.
As for latency tests, there isn’t a lot of code needed. If
you just want to measure ack time, it’s around 100 lines. I will try to
push out some good latency testing code to github, but my company is
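As a rough idea of what that ack-time measurement looks like, here is a self-contained sketch of the timing logic. The Kafka producer is stubbed out with a fake async send so the snippet runs on its own; in real code you would call `producer.send(record, callback)` and complete the latch inside the callback. All names here are mine, not from Erik's code.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AckLatencySketch {

    /** Stand-in for an async producer send that acks after ~5 ms. */
    static void fakeSend(Runnable ackCallback) {
        new Thread(() -> {
            try {
                Thread.sleep(5);
            } catch (InterruptedException ignored) {
            }
            ackCallback.run();
        }).start();
    }

    /** Returns the producer-to-ack latency in nanoseconds. */
    static long measureAckLatency() throws InterruptedException {
        CountDownLatch acked = new CountDownLatch(1);
        long start = System.nanoTime();
        // Real producer: producer.send(record, (metadata, e) -> acked.countDown());
        fakeSend(acked::countDown);
        acked.await(1, TimeUnit.SECONDS);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        long latencyNs = measureAckLatency();
        System.out.println("ack latency: " + latencyNs / 1_000_000 + " ms");
    }
}
```

A full test would loop this over many records and feed the samples into a histogram rather than timing a single send.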
Thanks for your reply, Erik. I am running some more tests according to your
suggestions now and I will share my results here. Is it necessary to
use a fixed number of partitions (32 partitions, maybe) for my test?
I am testing 2-, 4-, 8-, 16- and 32-broker scenarios, all of them running
on ind
That is an excellent question! There are a bunch of ways to monitor
jitter and see when that is happening. Here are a few:
- You could slice the histogram every few seconds, save it out with a
timestamp, and then look at how they compare. This would be mostly
manual, or you can graph line chart
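To make the first suggestion concrete, this is one way the histogram-slicing idea could look in plain Java: keep a bucketed latency histogram, snapshot-and-reset it per interval, then compare slices to spot when jitter happens. The 1 ms bucket width and the class names are my assumptions, not anything from the list.

```java
import java.util.ArrayList;
import java.util.List;

public class LatencySlices {
    static final int BUCKETS = 1000;            // buckets for 0..999 ms latencies
    private long[] current = new long[BUCKETS]; // histogram for the current interval
    private final List<long[]> slices = new ArrayList<>();

    /** Count one latency observation into the current interval's histogram. */
    void record(long latencyMs) {
        int b = (int) Math.min(latencyMs, BUCKETS - 1);
        current[b]++;
    }

    /** Save the current histogram as a slice and start a fresh one. */
    void slice() {
        slices.add(current);
        current = new long[BUCKETS];
    }

    /** Max observed latency (ms) in saved slice i; -1 if the slice is empty. */
    long maxInSlice(int i) {
        long[] h = slices.get(i);
        for (int b = BUCKETS - 1; b >= 0; b--) {
            if (h[b] > 0) return b;
        }
        return -1;
    }

    public static void main(String[] args) {
        LatencySlices s = new LatencySlices();
        s.record(5); s.record(6); s.slice();   // a quiet interval
        s.record(5); s.record(58); s.slice();  // an interval with a jitter spike
        System.out.println(s.maxInSlice(0) + " " + s.maxInSlice(1)); // 6 58
    }
}
```

Comparing `maxInSlice` (or the 99.9th percentile) across slices over time is what turns the mostly manual comparison into a line chart.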
Thank you Erik! That's helpful!
But I also see jitter in the maximum latencies when running the
experiment.
The average end-to-acknowledgement latency from producer to broker is
around 5ms when using 92 producers and 4 brokers, and the 99.9th percentile
latency is 58ms, but the maximum latency
Can't read it. Sorry
On Fri, Sep 4, 2015 at 12:08 PM, Roman Shramkov
wrote:
> Её ай н Анны уйг
>
> sent from a mobile device, please excuse brevity and typos
>
>
> User Yuheng Du wrote
>
> According to the section 3.1 of the paper "Kafka: a Distributed Messaging
> System for L
Её ай н Анны уйг
sent from a mobile device, please excuse brevity and typos
User Yuheng Du wrote
According to the section 3.1 of the paper "Kafka: a Distributed Messaging
System for Log Processing":
"a message is only exposed to the consumers after it is flushed"?
Is it sti
Here is a good doc to describe how to choose the right number of partitions
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
On Fri, Sep 4, 2015 at 10:08 PM, Jörg Wagner wrote:
> Hello!
>
> Regarding the recommended amount of partitions I am a bit co
Well… not to be contrarian, but latency depends much more on the latency
between the producer and the broker that is the leader for the partition
you are publishing to. At least when your brokers are not saturated with
messages, and acks are set to 1. If acks are set to ALL, latency on a
non-sat
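For reference, the two acknowledgement modes being contrasted here map to one producer property (as in the 0.8.x-era producer config; check your client version's docs):

```properties
# acks=1: leader-only acknowledgement; latency is dominated by the
# producer-to-leader round trip
acks=1

# acks=-1 (a.k.a. "all"): the leader waits for the full in-sync replica
# set, so latency also includes the slowest in-sync follower
# acks=-1
```

That extra replica round trip is why the acks=-1 throughput complaint below is expected behavior to some degree, though "super slow" usually also points at a slow or lagging follower.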
Hi,
I am experiencing super slow throughput when using acks=-1.
Some further progress, continuing the test from my previous email:
*Topic details -*
Topic:temp PartitionCount:1 ReplicationFactor:3 Configs:
Topic: temp Partition: 0 Leader: 5 Replicas: 5,1,2 Isr: 5,2,1
*This is the command I
Hello!
Regarding the recommended number of partitions I am a bit confused.
Basically I got the impression that it's better to have lots of
partitions (see information from LinkedIn etc.). On the other hand, a lot
of performance benchmarks floating around show only a few partitions are
being us
When I use 32 partitions, the 4-broker latency becomes larger than the
8-broker latency.
So is it always true that using more brokers gives lower latency when the
number of partitions is at least the number of brokers?
Thanks.
On Thu, Sep 3, 2015 at 10:45 PM, Yuheng Du wrote:
> I am ru
According to the section 3.1 of the paper "Kafka: a Distributed Messaging
System for Log Processing":
"a message is only exposed to the consumers after it is flushed"?
Is it still true in current Kafka, i.e., is a message only
available to consumers after it is flushed to disk?
Thanks.
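As far as I know, this changed after the paper: since 0.8, a message is exposed to consumers once it is committed to the in-sync replicas (i.e., below the high watermark), not when it hits disk. Flushing itself is governed by broker settings, which by default leave it to the OS page cache:

```properties
# Broker defaults: explicit flushing effectively disabled,
# the OS page cache decides when data reaches disk
log.flush.interval.messages=9223372036854775807
# Consumer visibility is governed by replication (the high watermark),
# not by this flush interval
```

So durability in modern Kafka comes from replication across brokers rather than from a synchronous flush on the leader.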
It would seem (if the metrics registry is accurate) that replica
fetcher threads can persist after a leadership election, even when the
broker itself is elected leader.
This also seems to occur after a reassignment (as evident by the 5
different thread entries for the same partition in the regi