If a F500 company wants commercial support for Kafka, who would they turn
to?
There seems to be a natural fit with real-time processing frameworks such as
Storm/Trident.
I am sure that someone in the community must have come across this issue.
Thanks
Milind
I don't know if anyone else has done that, or if there is any indication
against doing it, but I found adding the sbteclipse plugin to
project/plugins.sbt particularly easy, and it worked for me. I am only using
it to browse and edit the code; I am not running anything from Eclipse.
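For reference, wiring in sbteclipse is a one-line change to project/plugins.sbt; the version number below is illustrative (check the plugin's page for the current release, it is not stated in this thread):

```scala
// project/plugins.sbt — version is an assumption, not taken from this thread
addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.5.0")
```

Running `sbt eclipse` afterwards generates the .project and .classpath files that Eclipse can import as an existing project.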
This is just my opinion of course (who else's could it be? :-)) but I think
from an engineering point of view, one must spend one's time making the
Producer-Kafka connection solid, if it is mission-critical.
Kafka is all about getting messages to disk, and assuming your disks are
solid (and 0.8
Interesting topic.
How would buffering in RAM help in reality, though (just trying to work
through the scenario in my head):
the producer tries to connect to a broker, it fails, so it appends the
message to an in-memory store. If the broker is down for, say, 20 minutes and
then comes back online, won't
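The buffering idea under discussion can be sketched as a bounded in-memory queue that evicts the oldest entries when full, so a long outage does not exhaust RAM, at the cost of losing the evicted messages. This is a hypothetical illustration, not a Kafka API; the class and capacity are made up:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical bounded buffer for messages that could not be sent while the
// broker was down. Not part of Kafka; purely illustrates the trade-off.
public class RetryBuffer {
    private final Deque<String> buffer = new ArrayDeque<>();
    private final int capacity;

    public RetryBuffer(int capacity) {
        this.capacity = capacity;
    }

    // Append a failed message, evicting the oldest once the cap is reached.
    public void add(String msg) {
        if (buffer.size() == capacity) {
            buffer.pollFirst(); // drop oldest; data loss is the trade-off
        }
        buffer.addLast(msg);
    }

    // Drain everything for resending once the broker is reachable again.
    public List<String> drain() {
        List<String> out = new ArrayList<>(buffer);
        buffer.clear();
        return out;
    }

    public int size() {
        return buffer.size();
    }

    public static void main(String[] args) {
        RetryBuffer rb = new RetryBuffer(3);
        for (int i = 1; i <= 5; i++) rb.add("m" + i);
        System.out.println(rb.drain()); // prints [m3, m4, m5]
    }
}
```

The open question in the thread remains: on reconnect, the drained backlog still has to be replayed, which can itself overwhelm the broker or reorder messages relative to new traffic.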
Thanks for the reply. But when I did some more research, it seems like it's
using the same encoder for both. For example, if I provide serializer.class
explicitly, this serializer is used for both key and value. However, if I
don't specify any serializer, then it appears that Kafka defaults to
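The fallback behavior described above can be sketched like this. The property names match the 0.8-era producer config (serializer.class and key.serializer.class); the lookup logic is my simplification for illustration, not Kafka's actual source:

```java
import java.util.Properties;

// Simplified illustration of the described fallback: if key.serializer.class
// is absent, the key uses the same serializer.class as the value.
public class SerializerConfigDemo {
    static String keySerializer(Properties props) {
        return props.getProperty("key.serializer.class",
                props.getProperty("serializer.class"));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // No key.serializer.class set: key falls back to the value serializer.
        System.out.println(keySerializer(props)); // prints kafka.serializer.StringEncoder
    }
}
```

Setting key.serializer.class explicitly is the way to get distinct encoders for key and value.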
But it shouldn't almost never happen.
Obviously I mean it should almost never happen. Not shouldn't.
Philip
Hi
Is it possible to enable compression between the broker and the consumer?
We are thinking of developing this feature for Kafka 0.7, but first I would
like to check if there is something out there.
Our scenario is like this:
- the producer is a CPU-bound machine, so we want to keep the CPU
Kafka already supports end-to-end compression which means data
transfer between brokers and consumers is compressed. There are two
supported compression codecs - GZIP and Snappy. The latter is lighter
on CPU consumption. See this blog post for comparison -
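The producer-side setting Neha refers to looks roughly like this for the 0.7/0.8-era producer. The property names are from that era's configuration and worth double-checking against your exact version:

```java
import java.util.Properties;

// Producer compression config, 0.7/0.8-era property names (verify for your
// version). With end-to-end compression the broker stores the compressed
// batches as-is, so the consumer receives them compressed too.
public class CompressionConfigDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("compression.codec", "snappy"); // lighter on CPU than gzip
        // Optionally restrict compression to specific topics:
        // props.put("compressed.topics", "topicA,topicB");
        System.out.println(props.getProperty("compression.codec")); // prints snappy
    }
}
```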
Thanks for the reply, Neha, but that is end-to-end and I am looking
for broker-consumer compression.
So:
Producer - uncompressed - broker - compressed - consumer
Regards
Pablo
2013/4/12 Neha Narkhede neha.narkh...@gmail.com:
Kafka already supports end-to-end compression which means data
That is not available for performance reasons. The broker uses zero-copy
to transfer data from disk to the network on the consumer side. If we
post-processed data already written to disk before sending it to the
consumer, we would lose the performance advantage we get from zero copy.
Thanks,
Neha
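The zero-copy path Neha describes surfaces in Java as FileChannel.transferTo, which on Linux can map to sendfile so the kernel moves file bytes straight to the socket without a user-space copy. A minimal self-contained sketch (the target here is an in-memory channel only so the example runs anywhere; in the broker the target would be the consumer's socket channel):

```java
import java.io.ByteArrayOutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustration of the zero-copy transfer mechanism; not Kafka's actual code.
public class ZeroCopyDemo {
    // Transfer a file's bytes through a channel without reading them into
    // application buffers ourselves.
    static String transferToMemory(Path file) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        try (FileChannel src = FileChannel.open(file, StandardOpenOption.READ);
             WritableByteChannel dst = Channels.newChannel(sink)) {
            src.transferTo(0, src.size(), dst); // sendfile when dst is a socket
        }
        return sink.toString();
    }

    public static void main(String[] args) throws Exception {
        Path log = Files.createTempFile("segment", ".log");
        Files.write(log, "hello kafka".getBytes());
        System.out.println(transferToMemory(log)); // prints hello kafka
        Files.delete(log);
    }
}
```

Recompressing per consumer would force the broker to pull every byte back into user space, transform it, and write it out again, which is exactly the copy this path avoids.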
Do you use a VIP or zookeeper for producer side load balancing ? In
other words, what are the values you override for broker.list and
zk.connect in the producer config ?
Thanks,
Neha
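The two bootstrap styles Neha is asking about look like this with 0.7-era property names (verify against your Kafka version; the hostnames and broker ids are placeholders). They are alternatives, not used together:

```java
import java.util.Properties;

// Producer bootstrap alternatives, 0.7-era names: discover brokers via
// ZooKeeper, or list them statically. Hosts/ids below are placeholders.
public class ProducerBootstrapDemo {
    public static void main(String[] args) {
        Properties viaZk = new Properties();
        viaZk.put("zk.connect", "zk001:2181,zk002:2181,zk003:2181/kafka");

        Properties viaList = new Properties();
        viaList.put("broker.list", "1:broker1:9092,2:broker2:9092");

        System.out.println(viaZk.getProperty("zk.connect"));
        System.out.println(viaList.getProperty("broker.list"));
    }
}
```

With zk.connect, the producer only learns about a broker for a topic once that broker has registered a partition for it in ZooKeeper, which is relevant to the imbalance Tom describes below.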
On Fri, Apr 12, 2013 at 12:16 PM, Tom Brown tombrow...@gmail.com wrote:
We have recently set up a new Kafka
In the producer config, we use the zk connect string:
zk001,zk002,zk003/kafka.
Both brokers have registered themselves with zookeeper. Because only the
first broker has ever received any writes, only the first broker is
registered for the topic in question.
--Tom
On Fri, Apr 12, 2013 at 3:32
Hi all,
I posted an update on the post (
https://blog.liveramp.com/2013/04/08/kafka-0-8-producer-performance-2/) to
test the effect of disabling ack messages from brokers. It appears this
only makes a big difference (~2x improvement) when using synthetic log
messages, but only a modest 12%
Thanks,
Jun
On Fri, Apr 12, 2013 at 1:08 PM, Marc Labbe mrla...@gmail.com wrote:
I updated the Developer setup page. Let me know if it's not clear enough or
if I need to change anything.
On another note, since the IDEA plugin is already there, would it be
possible to add the sbteclipse