You might consider the approach we are using in Hermes (a pub-sub system
with an HTTP interface on top of Kafka):
http://hermes-pubsub.readthedocs.io/en/latest/configuration/buffer-persistence/
We use Chronicle Map to persist everything that goes into the Kafka producer
into a memory-mapped file. In case of proc
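The buffering idea above can be sketched with the standard-library `mmap` module as a simplified stand-in for Chronicle Map. The file name, buffer size, and length-prefixed record layout here are illustrative assumptions, not Hermes' actual format:

```python
import mmap
import os
import struct

BUF_PATH = "producer-buffer.dat"  # hypothetical path for the backing file
BUF_SIZE = 1 << 20                # 1 MiB, zero-filled on creation

# Create the backing file. Because the buffer is file-backed, records
# written before a crash can be replayed into Kafka on restart.
with open(BUF_PATH, "wb") as f:
    f.truncate(BUF_SIZE)

def append(mem, pos, payload: bytes) -> int:
    """Write one length-prefixed record at pos, return the next position."""
    mem[pos:pos + 4] = struct.pack(">I", len(payload))
    mem[pos + 4:pos + 4 + len(payload)] = payload
    return pos + 4 + len(payload)

def replay(mem):
    """Yield every buffered record until the zeroed tail of the buffer."""
    pos = 0
    while True:
        (length,) = struct.unpack(">I", mem[pos:pos + 4])
        if length == 0:  # untouched (zero-filled) region: no more records
            return
        yield bytes(mem[pos + 4:pos + 4 + length])
        pos += 4 + length

with open(BUF_PATH, "r+b") as f:
    mem = mmap.mmap(f.fileno(), BUF_SIZE)
    pos = append(mem, 0, b"event-1")
    pos = append(mem, pos, b"event-2")
    mem.flush()  # push dirty pages to disk
    print(list(replay(mem)))  # prints [b'event-1', b'event-2']
    mem.close()
os.remove(BUF_PATH)
```

A real implementation would also track which records were already acknowledged by the Kafka producer so replay only resends the unacked tail.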
I don't think you need to write it from scratch; the Hermes project
(http://hermes-pubsub.readthedocs.org/en/latest/) does this (and more). You
could probably use only the consumers module to change pull to push and push
messages from Kafka to other REST services. It has all the retry and send
rate auto adju
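The pull-to-push delivery with retries mentioned above can be sketched as follows. This is a toy illustration of the pattern, not Hermes code; the retry count, backoff factor, and the `post` callback are assumptions:

```python
import time

def deliver(message: bytes, post, max_retries: int = 5) -> bool:
    """Push one message pulled from Kafka to a subscriber's REST endpoint,
    retrying with exponential backoff on failure."""
    delay = 0.1
    for _ in range(max_retries):
        try:
            status = post(message)  # e.g. an HTTP POST returning a status code
            if 200 <= status < 300:
                return True
        except ConnectionError:
            pass  # treat network errors like a failed delivery attempt
        time.sleep(delay)
        delay *= 2  # back off before retrying
    return False  # caller can park the message in a retry queue

# Simulated subscriber that rejects the first two attempts, then accepts.
calls = {"n": 0}
def flaky_post(msg: bytes) -> int:
    calls["n"] += 1
    return 503 if calls["n"] < 3 else 201

print(deliver(b"event", flaky_post))  # prints True after two retries
```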
Hi,
I don't think it matters much which version of ZK you use (meaning
minor/patch versions). We have been using 3.4.6 for some time and it works
flawlessly.
BR,
Adam
2015-07-22 18:40 GMT+02:00 Prabhjot Bharaj :
> Hi,
>
> I've read on the Kafka documentation page that the zookeeper version
Hi Nicolas,
From my experience there are only two ways out:
1) wait for the retention time to pass, so the data gets deleted (this is
usually unacceptable)
2) trace the offset of the corrupt message on all affected subscriptions and
skip the message by overwriting the stored offset with offset+1
The problem is that when encounteri
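Option 2 can be illustrated with a toy consumer loop (a simulation of the idea, not a Kafka client; in practice the offset+1 would be written to the offset store, e.g. ZooKeeper, for each affected subscription):

```python
def consume(records, handle, committed: int) -> int:
    """Process (offset, payload) records in order; on a corrupt record,
    skip it by committing offset + 1 so it is never re-read."""
    for offset, payload in records:
        if offset < committed:
            continue  # already consumed in an earlier run
        try:
            handle(payload)
        except ValueError:
            committed = offset + 1  # overwrite the stored offset, skipping it
            continue
        committed = offset + 1
    return committed

def handle(payload):
    """Hypothetical handler that fails to deserialize corrupt payloads."""
    if payload is None:
        raise ValueError("corrupt message")

# The record at offset 2 is corrupt; the loop skips it and carries on.
records = [(0, b"a"), (1, b"b"), (2, None), (3, b"c")]
print(consume(records, handle, committed=0))  # prints 4
```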
can we integrate with Graphite to add alerting? Can you please
> explain in detail, and if you have any docs can you please provide them?
>
> Thanks & Regards,
> -Anandh Kumar
>
> On Fri, Jul 10, 2015 at 12:58 PM, Adam Dubiel
> wrote:
>
> > We are using kafka offset m
We are using Kafka Offset Monitor (http://quantifind.com/KafkaOffsetMonitor/),
which we recently integrated with Graphite to add alerting and better
graphing - this should be available in the newest version, which has not been
released yet. It works only with ZK offsets, though.
2015-07-10 9:24 GMT+02:00 Anandh Kumar
We faced a similar problem and ended up implementing a variant of golden
section search that reads messages using a simple consumer and checks the
timestamp (the timestamps are appended by our producer, though; they do not
come from any Kafka metadata) until it finds the message closest to the given date.
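The search can be sketched with plain bisection over the offset range (a simplification of the golden-section variant described above; it assumes producer-appended timestamps are non-decreasing with offset, and `read_timestamp` stands in for fetching one message with a simple consumer):

```python
def find_offset(read_timestamp, lo: int, hi: int, target: float) -> int:
    """Binary-search the offset range [lo, hi) for the first message whose
    timestamp is >= target. Each probe fetches exactly one message."""
    while lo < hi:
        mid = (lo + hi) // 2
        if read_timestamp(mid) < target:
            lo = mid + 1  # target lies strictly after mid
        else:
            hi = mid      # mid could be the answer; keep it in range
    return lo

# Toy log: offset i carries timestamp 100 + 10*i.
timestamps = [100 + 10 * i for i in range(50)]
print(find_offset(lambda o: timestamps[o], 0, len(timestamps), 333))  # prints 24
```

With N messages this needs O(log N) fetches instead of scanning the whole partition.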
Adam
There is also http://quantifind.com/KafkaOffsetMonitor/ , but there is no
monitoring support out of the box.
2015-06-30 13:12 GMT+02:00 noah :
> If you are committing offsets to Kafka, try Burrow:
> https://github.com/linkedin/Burrow
>
> On Tue, Jun 30, 2015 at 3:41 AM Shady Xu wrote:
>
> > Hi all,
Hi,
We have been solving this very problem in Hermes. You can see what we came
up with by examining the classes located here:
https://github.com/allegro/hermes/tree/master/hermes-consumers/src/main/java/pl/allegro/tech/hermes/consumers/consumer/offset
We are quite sure this gives us at-least-once guarant
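The usual at-least-once pattern behind offset management like this can be sketched as follows. This is a generic illustration of the technique (commit only the highest offset below which everything is processed), not the actual Hermes classes at the link above:

```python
class OffsetTracker:
    """Track out-of-order message acknowledgements and expose the highest
    offset that is safe to commit: everything below it has been processed,
    so a restart redelivers at most the in-flight messages (at-least-once)."""

    def __init__(self, start: int = 0):
        self.committed = start  # next offset the consumer would commit
        self.acked = set()      # processed offsets above the commit point

    def ack(self, offset: int):
        self.acked.add(offset)
        # Advance the commit point through any contiguous run of acks.
        while self.committed in self.acked:
            self.acked.remove(self.committed)
            self.committed += 1

tracker = OffsetTracker()
for offset in [1, 2, 0, 4, 3]:  # messages finish processing out of order
    tracker.ack(offset)
print(tracker.committed)  # prints 5: offsets 0..4 are all done
```

Committing `tracker.committed` instead of the latest delivered offset is what prevents losing a slow message that is still being processed when the consumer crashes.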
I just tried it out on my 0.8.2 cluster and it worked just fine - the ISR
grew, the replication factor changed, and data was physically moved to the
new brokers. Was there no output and no logs at all? I see things like
INFO Created log for partition [topicName,7] in /opt/kafka/ with
properties {.. some json}
in se
ine reconciliation to check any 201'd
> messages were successfully delivered to Kafka.
>
> Anyway, great work on this and thanks for sharing!
> On Mon, 25 May 2015 at 1:26 am Adam Dubiel wrote:
>
> > Hi Daniel,
> >
> > First of all, sorry for the late response, I enjoy
e reliability guarantees you're looking to offer and
> how they can be stronger than the ones provided by Kafka?
>
> Thanks, Daniel.
> On Tue, 19 May 2015 at 2:57 am Adam Dubiel wrote:
>
> > Hello everyone,
> >
> > I'm technical team lead of Hermes project. I
Hi Bill,
I don't know if this is exactly the same case (the last part, "when they get
the topic they apply locally", is a bit unclear), but we have a setup with
Kafka in DC A and consumers in both DC A and DC B. Actually, we also have
producers in A and B writing to Kafka in A, but we are trying to change th
Hello everyone,
I'm the technical team lead of the Hermes project. I will try to answer the
already posted questions, but feel free to ask me anything.
1) Can you comment on how this compares to Confluent's REST proxy?
We do not perceive Hermes as a mere proxy. While Confluent's product wants to
help services w