Kafka does not yet have a callback for the async producer, but one is
proposed for Kafka 0.9. You can find the proposal here -
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI
Thanks,
Neha
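The proposed producer API adds a completion callback to send(). As a rough illustration of the idea (not the actual proposed Java API; all names here are hypothetical), a minimal Python sketch:

```python
import threading
import queue

class AsyncProducer:
    """Illustrative async producer whose send() accepts a completion
    callback, invoked with (metadata, error) once the send finishes.
    This only sketches the callback idea; it talks to no real broker."""

    def __init__(self):
        self._queue = queue.Queue()
        self._offsets = {}  # pretend per-topic offset counter
        worker = threading.Thread(target=self._run, daemon=True)
        worker.start()

    def send(self, topic, message, callback):
        # Enqueue and return immediately; the callback fires later.
        self._queue.put((topic, message, callback))

    def _run(self):
        while True:
            topic, message, callback = self._queue.get()
            try:
                offset = self._offsets.get(topic, 0)
                self._offsets[topic] = offset + 1  # pretend the broker acked
                callback({"topic": topic, "offset": offset}, None)
            except Exception as e:
                callback(None, e)

results = []
done = threading.Event()
p = AsyncProducer()
p.send("events", b"hello", lambda md, err: (results.append((md, err)), done.set()))
done.wait(timeout=5)
print(results[0])  # ({'topic': 'events', 'offset': 0}, None)
```

The point is only that send() returns before the result is known, and success or failure is delivered through the callback.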
On Oct 7, 2013 4:52 AM, Bruno D. Rodrigues
Kafka port is documented in
http://kafka.apache.org/documentation.html#brokerconfigs
Thanks,
Jun
On Sat, Oct 5, 2013 at 12:05 PM, Jiang Jacky jiang0...@gmail.com wrote:
Hi, I tried to set up the host.name in server.properties, it doesn't work.
I believe it is the network security issue.
Hi everyone,
I wrapped the Kafka 0.8beta1 client in a JRuby class:
https://github.com/joekiller/jruby-kafka and then wrote an input for
logstash: https://github.com/joekiller/logstash/tree/kafka8
It's a little rough around the edges still but it does work fine.
I'll be enhancing both the
So the concept to keep in mind is that as long as we set the full Kafka
broker list on the producer and the ZooKeeper list on the consumers, then
from the producer's and consumer's perspective it should just work; the
code won't surface any information, and instead one should look at the logs?
What
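In Kafka 0.8 terms, that split looks roughly like this (host names and ports are placeholders):

```properties
# producer side: producers take the broker list directly
metadata.broker.list=broker1:9092,broker2:9092,broker3:9092

# consumer side: the high-level consumer takes the ZooKeeper ensemble
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```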
Jason,
As Neha said, what you said is possible, but may require a more careful
design. For example, what if the followers don't catch up with the leader
quickly? Do we want to wait forever or up to some configurable amount of
time? If we do the latter, we may still lose data during controlled
At LinkedIn, our message size can be 10s of KB. This is mostly because we
batch a set of messages and send them as a single compressed message.
Thanks,
Jun
On Mon, Oct 7, 2013 at 7:44 AM, S Ahmed sahmed1...@gmail.com wrote:
When people use message queues, the message size is usually pretty
I see, so one thing to consider is that if I have 20 KB messages, I
shouldn't batch too many together, as that will increase latency and the
memory usage footprint on the producer side of things.
On Mon, Oct 7, 2013 at 11:55 AM, Jun Rao jun...@gmail.com wrote:
At LinkedIn, our message size
Thanks for pointing this out. Updated the doc.
The reason for the change is the following. If the timeout is caused by a
problem at the broker, it's actually not very useful to set the timeout too
small. After timing out, the producer is likely to resend the data. This
adds more load to the
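One common way to avoid hammering a struggling broker with immediate resends is exponential backoff between retries; an illustrative sketch (this is not Kafka's actual retry code):

```python
import time

def send_with_backoff(send_fn, retries=3, base_delay=0.1):
    """Retry send_fn with exponentially growing delays so that resends
    after a broker-side timeout don't pile extra load onto the broker."""
    delay = base_delay
    for attempt in range(retries + 1):
        try:
            return send_fn()
        except TimeoutError:
            if attempt == retries:
                raise
            time.sleep(delay)
            delay *= 2  # back off: base, 2x, 4x, ...

# Simulate a broker that times out twice and then succeeds.
attempts = {"n": 0}
def flaky_send():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("broker busy")
    return "ack"

print(send_with_backoff(flaky_send, base_delay=0.01))  # ack
```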
The message size limit is imposed on the compressed message. To answer your
question about the effect of large messages - they cause memory pressure on
the Kafka brokers as well as on the consumer since we re-compress messages
on the broker and decompress messages on the consumer.
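Since the limit is applied to the compressed message wrapping the whole batch, a producer-side size check has to measure after compression; a Python sketch (gzip standing in for whichever codec is configured, and the limit value is only roughly the broker default):

```python
import gzip

MESSAGE_MAX_BYTES = 1_000_000  # stand-in for the broker's message.max.bytes

def compressed_batch_fits(messages, limit=MESSAGE_MAX_BYTES):
    """Return (fits, compressed_size). The limit applies to the single
    compressed message that wraps the whole batch, not to each message."""
    payload = b"".join(messages)
    compressed = gzip.compress(payload)
    return len(compressed) <= limit, len(compressed)

batch = [b"x" * 20_000 for _ in range(100)]  # 100 messages of 20 KB each
fits, size = compressed_batch_fits(batch)
print(fits, size)  # highly repetitive data compresses far below the limit
```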
I'm not so sure
The async producer's send() API is never supposed to block. If, for some
reason, the producer's queue is full and you try to send more messages, it
will drop those messages and raise a QueueFullException. You can configure
the message.send.max.retries config to retry sending the messages n
times,
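That non-blocking contract can be sketched with a bounded queue that raises instead of blocking (QueueFullException here is a stand-in defined for the sketch, not the client's actual class; this is an illustration, not Kafka's code):

```python
import queue

class QueueFullException(Exception):
    pass

class SketchAsyncProducer:
    def __init__(self, queue_size=2):
        # bounded buffer, analogous to the async producer's internal queue
        self._queue = queue.Queue(maxsize=queue_size)

    def send(self, message):
        try:
            self._queue.put_nowait(message)  # never blocks
        except queue.Full:
            # drop the message and surface the failure immediately
            raise QueueFullException("producer queue is full")

p = SketchAsyncProducer(queue_size=2)
p.send("m1")
p.send("m2")
try:
    p.send("m3")
except QueueFullException as e:
    print(e)  # producer queue is full
```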
the total message size of the batch should be less than
message.max.bytes or is that for each individual message?
The former is correct.
When you batch, I am assuming that the producer sends some sort of flag
that this is a batch, and then the broker will split those messages into
individual
Sure go ahead.
From: Neha Narkhede neha.narkh...@gmail.com
Sent: Monday, October 07, 2013 12:23 PM
To: users@kafka.apache.org
Subject: Re: introducing jruby-kafka (Kafka 0.8beta1) and Kafka logstash input
Hi Joe,
This is great and thanks for sharing. If
Offsets always begin at 0 for each partition and increase sequentially from
there. Offsets aren't unique within a topic. As old data is discarded the
first retained offset will not remain 0. The behavior of what is retained
is controlled by your retention settings.
In trunk there is a feature
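The offset behavior described above can be sketched with a toy partition log that trims by retention (the count-based policy here is made up for illustration):

```python
class PartitionLog:
    """Toy partition: offsets start at 0 and increase sequentially;
    retention discards old messages, so the earliest retained offset
    advances while offsets themselves never change or repeat."""

    def __init__(self, retention=3):
        self.retention = retention
        self.next_offset = 0
        self.messages = {}  # offset -> message

    def append(self, message):
        offset = self.next_offset
        self.messages[offset] = message
        self.next_offset += 1
        # enforce retention: drop the oldest messages beyond the limit
        while len(self.messages) > self.retention:
            del self.messages[min(self.messages)]
        return offset

log = PartitionLog(retention=3)
for msg in ["a", "b", "c", "d", "e"]:
    log.append(msg)

print(min(log.messages))  # 2 -> the first retained offset is no longer 0
print(max(log.messages))  # 4
```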
Thanks for the quick answer !
Francis
Sent from Samsung Mobile
Original message
From: Jay Kreps jay.kr...@gmail.com
Date: 10-07-2013 17:55 (GMT-05:00)
To: users@kafka.apache.org
Subject: Re: Offset question
Offsets always begin at 0 for each partition and increase
Hi, Everyone,
I made another pass of the remaining jiras that we plan to fix in the 0.8
final release.
Hi,
We are using a central standalone ZooKeeper for Kafka and HBase.
Due to a problem, we installed a new ZooKeeper on a different machine,
but we do not have the old metadata that Kafka requires available in
ZooKeeper, so we are not able to read previous topic messages.
How we can