Hi All,
I am working on using Kafka to build a highly scalable system. As I
understand and have seen, the Kafka broker has very impressive and scalable
file-handling mechanisms to provide guaranteed delivery. However, in one of
the scenarios I am facing a different challenge.
The scenario is
Oh... and at this point I'm talking about consumers that do no processing
and don't even produce any output. They simply send UDP packets to Graphite.
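For context, each of these consumers is doing little more than the following sketch: format a metric in Graphite's plaintext protocol and fire it off over UDP. The metric name and Graphite address are made up for illustration.

```python
import socket
import time

def graphite_line(name, value, ts):
    # Graphite's plaintext protocol: "<metric path> <value> <unix timestamp>\n"
    return "%s %s %d\n" % (name, value, ts)

def send_metric(sock, addr, name, value, ts=None):
    # Fire-and-forget UDP: no ack, no retry -- the consumer does no other work
    if ts is None:
        ts = int(time.time())
    sock.sendto(graphite_line(name, value, ts).encode("ascii"), addr)

# usage (address is hypothetical):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_metric(sock, ("graphite.example.com", 2003), "kafka.consumer.msgs", 1)
```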
On Mon, Apr 22, 2013 at 9:13 PM, Andrew Neilson wrote:
> Hmm it is highly unlikely that that is the culprit... There is lots of
> bandwidth avail
Hmm it is highly unlikely that that is the culprit... There is lots of
bandwidth available for me to use. I will definitely keep that in mind
though. I was working on this today and have some tidbits of additional
information and thoughts that you might be able to shed some light on:
- I mentio
Hundreds to a few thousand topics should be fine. Depending on the volume
of the data, you may need more brokers to support the load.
Thanks,
Jun
On Mon, Apr 22, 2013 at 8:12 PM, Alex Zuzin wrote:
> Hi all,
>
> I'm dealing with an application where the main semantic entity (equivalent
> to a
In 0.8, a producer can choose to receive an ack after all replicas have
received the message. In this mode, a published message won't be lost on
individual broker failure.
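For reference, a minimal sketch of what that mode looks like in a 0.8 producer config, using the 0.8 property name:

```
# request.required.acks=-1 asks the leader to wait until all in-sync
# replicas have received the message before acknowledging the request
request.required.acks=-1
```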
Thanks,
Jun
On Mon, Apr 22, 2013 at 12:03 PM, Yu, Libo wrote:
> Hi,
> I have read both the Kafka design document at
> htt
Chris,
Thanks. Once your high level producer example is ready, I will update the
0.8 quick start page (KAFKA-836) with the new link.
Jun
On Mon, Apr 22, 2013 at 10:40 AM, Chris Curtin wrote:
> Hi Jun,
>
> #1 and #2 are done, thanks for the code-review!
>
> I'll work on getting a High Level con
Hi Jason,
We are in the process of updating the documentation. Hoping to finish
it by this week. Stay tuned.
Thanks,
Neha
On Mon, Apr 22, 2013 at 2:12 PM, Jason Huang wrote:
> Hello,
>
> We've been playing around with kafka 0.8 for a few months now and decided
> to install kafka on a small clus
On 4/22/13 12:03 PM, "Yu, Libo" wrote:
>Hi,
>I have read both the Kafka design document at
>http://kafka.apache.org/design.html
>and the paper. And I have two questions about lost and duplicate message.
>
>"Without acknowledging the producer, there is no guarantee that every
>published
>message
Hello,
We've been playing around with kafka 0.8 for a few months now and decided
to install kafka on a small cluster for further testing. I tried to search
online but couldn't find any setup documentation for a kafka 0.8 cluster.
Does anyone know if such documents exist? If they don't exist, what
Hi,
I have read both the Kafka design document at
http://kafka.apache.org/design.html
and the paper. And I have two questions about lost and duplicate message.
"Without acknowledging the producer, there is no guarantee that every published
message is actually received by the broker." Is this stil
I'm interested in the same topic (similar use case).
What I think would be nice too (and this has been discussed a bit in the
past on this list) would be to have SSL support within the Kafka protocol.
ZooKeeper also doesn't support SSL, but at least now, in 0.8, producing
clients no longer reall
Wouldn't it make more sense to do something like an encrypted tunnel
between your core routers in each facility? Like IPsec on a GRE tunnel or
something.
This concept would need adjustment for those in the cloud, but when you want
to build an encrypted tunnel between a bunch of hosts and a bunch of
Unfortunately 'stunneling everything' is not really possible. Stunnel acts like
a proxy service, in the sense that the Stunnel client (on your log producer
or log consumer) has to be explicitly configured to connect to an exact
endpoint (i.e., kafka1.mydomain.com:1234) -- or multiple endpoints
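To make that concrete, here is a rough sketch of what a client-side stunnel config would have to look like, with one explicitly configured [service] section per broker endpoint. The hostnames, ports, and section names are made up; only kafka1.mydomain.com:1234 comes from the example above.

```
; stunnel client sketch: every broker needs its own fixed mapping,
; which is what makes "stunnel everything" awkward for a dynamic broker set
[kafka1]
client = yes
accept = 127.0.0.1:9093          ; local port the producer/consumer dials
connect = kafka1.mydomain.com:1234

[kafka2]
client = yes
accept = 127.0.0.1:9094
connect = kafka2.mydomain.com:1234
```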
I think you are right: even if you did put an ELB in front of Kafka, it
would only be used for getting the initial broker list, AFAIK. Producers and
consumers need to be able to talk to each broker directly, and consumers
also need to talk to ZooKeeper to store offsets.
Probably
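In 0.8 producer terms, that bootstrap-only role looks roughly like this (the hostnames are hypothetical):

```
# metadata.broker.list is only used to fetch initial topic metadata;
# after that the producer talks to each partition leader directly,
# so an ELB in front of these hosts only helps with that first lookup
metadata.broker.list=kafka1.example.com:9092,kafka2.example.com:9092
```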
Hi there... we're currently looking into using Kafka as a pipeline for passing
around log messages. We like its use of Zookeeper for coordination (as we
already make heavy use of Zookeeper at Nextdoor), but I'm running into one big
problem. Everything we do is a) in the cloud, b) secure, and c)
Hi Jun,
#1 and #2 are done, thanks for the code-review!
I'll work on getting a High Level consumer example this week. I don't have
one readily usable (we quickly found the lack of control over offsets
didn't meet our needs) but I can get something this week.
Congratulations on getting closer to
Chris,
Thanks for the wiki. We are getting close to releasing 0.8.0 beta and your
writeup is very helpful. The following are some comments for the 0.8
Producer wiki.
1. The following sentence is inaccurate. The producer will do random
assignment as long as the key in KeyedMessage is null. If a ke
It would be better if there were another configuration directive, e.g.
zk.chroot, for the chroot path. Currently it is not consistent, as we also
need to specify the port for each ZooKeeper host, isn't it?
Anyway, the doc could explain this situation better.
Thanks anyway!
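For anyone else hitting this: as far as I can tell, the chroot path is appended once, after the full host:port list, rather than per host (the hostnames below are hypothetical):

```
# chroot path goes once at the end of zk.connect, not after each host
zk.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
```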
On Sun, Apr 21, 2013 at 11:10 PM