Hello,
In most Kafka deployments, the network bottleneck is hit before the disk
bottleneck, so you may want to check whether your network capacity has
been saturated.
Guozhang
On Thu, Oct 10, 2013 at 3:57 PM, Bruno D. Rodrigues
bruno.rodrig...@litux.org wrote:
My personal newbie experience, which is surely completely wrong and
misconfigured, got me up to 70 MB/sec, both with controlled 1K messages
(hence 70K msg/sec) and with more random data (test
Make sure the fetch batch size and the local consumer queue sizes are large
enough; setting them too low will limit your throughput to the
broker-client latency.
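As a rough back-of-the-envelope sketch (the latency and batch sizes below are my own assumed numbers, not from this thread): if each fetch returns at most B bytes and a broker round trip takes L seconds, throughput is capped near B / L, so small fetch batches leave you latency-bound no matter how fast the disks are.

```java
// Sketch: with small fetch batches, consumer throughput is capped by
// fetchSizeBytes / roundTripSeconds, regardless of disk or NIC speed.
// All numbers here are illustrative assumptions.
public class FetchThroughputSketch {
    public static void main(String[] args) {
        double roundTripSeconds = 0.005;            // assumed 5 ms broker-client latency
        double smallFetchBytes  = 64 * 1024;        // 64 KB fetch batch
        double largeFetchBytes  = 4.0 * 1024 * 1024; // 4 MB fetch batch

        // 64 KB / 5 ms ~= 12.5 MB/sec; 4 MB / 5 ms ~= 800 MB/sec
        System.out.printf("64 KB batches: ~%.1f MB/sec%n",
                smallFetchBytes / roundTripSeconds / (1024 * 1024));
        System.out.printf("4 MB batches:  ~%.1f MB/sec%n",
                largeFetchBytes / roundTripSeconds / (1024 * 1024));
    }
}
```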
This would be controlled using the following properties:
- fetch.message.max.bytes
- queued.max.message.chunks
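For illustration, the two consumer properties above could be set like this (the values are my own example numbers, not recommendations from this thread):

```java
import java.util.Properties;

// Example consumer config sketch: raise the fetch size and the number of
// buffered chunks so the consumer is less latency-bound. Values are
// illustrative assumptions, not defaults or recommendations.
public class ConsumerFetchConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("fetch.message.max.bytes", "2097152"); // 2 MB per fetch request
        props.put("queued.max.message.chunks", "10");    // chunks buffered per consumer
        return props;
    }
}
```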
On the producer side:
Properties props = new Properties();
props.put("batch.num.messages", "1000"); // 200
props.put("queue.buffering.max.messages", "2"); // 1
props.put("request.required.acks", "0");
props.put("producer.type", "async"); // sync
// return ++this.count % a_numPartitions; // just round-robin
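The commented-out line above is the partition-selection logic; as a standalone sketch it would look like this (the class and method names are mine, only the round-robin expression comes from the thread):

```java
// Minimal round-robin partition selection, based on the commented-out
// line above: each message goes to the next partition in turn.
public class RoundRobinSketch {
    private int count = 0;

    // Returns a partition id in [0, numPartitions), cycling through all of them.
    public int partition(int numPartitions) {
        return ++this.count % numPartitions;
    }
}
```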
I'm also curious to know what the limiting factor of Kafka write
throughput is.
I've never seen reports higher than 100 MB/sec, and obviously disks can
provide much more. In my own test with a single broker, single partition,
and a single replica:
bin/kafka-producer-perf-test.sh --topics perf --threads 10
On 10/10/2013, at 23:14, S Ahmed sahmed1...@gmail.com wrote:
Is anyone out there running a single-broker Kafka setup?
How about with only 8 GB RAM?
I'm looking at one of the better dedicated server providers, and an 8 GB
server is pretty much what I want to spend at the moment; would it make
sense going this route?
This same server would also potentially be