We are looking to adopt Kafka for our stream processing use case. One of
the issues seems to be that we use Scala 2.10.2; however, the 0.8 Beta1
release does not seem to support Scala 2.10.x. As a result, we are not able
to use the Kafka Scala client. I was wondering if we can use branch 0.8 which
It is possible to implement offset-preserving mirrors - however, that
would work only if there is one source cluster mirroring into target
cluster(s) (as opposed to mirroring multiple source clusters into one
target cluster). Anyway, as Jun said, right now you have to either read
from the tail or
This was discussed before -
http://mail-archives.apache.org/mod_mbox/kafka-users/201309.mbox/browser
Thanks,
Neha
On Sep 23, 2013 11:43 PM, Aniket Bhatnagar aniket.bhatna...@gmail.com
wrote:
We are looking to adopt Kafka for our stream processing use case. One of
the issues seems to be that
Agreed that it has been partially discussed in the thread "[jira] [Updated]
(KAFKA-1046) Added support for Scala 2.10 builds while maintaining
compatibility with 2.8.x", with the conclusion being that it's not a good
idea to apply the same patch to the 0.8-beta1-candidate branch. However, I am
more
Looking at the archive more closely, I understand the confusion now. It
seems that the question got sent twice by me. My apologies. Didn't intend
to spam the mailbox.
Thanks,
Aniket
On 24 September 2013 19:18, Aniket Bhatnagar aniket.bhatna...@gmail.com wrote:
Agreed that it has been partially
I'm wondering if a simple change could be to not log full stack traces for
simple things like "Connection refused", etc. It seems it would be fine to
just log the exception message in such cases.
Also, the log levels could be tuned, so that things logged as ERROR
indicate that all possible retries
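A minimal sketch of the distinction being proposed (the helper names here are hypothetical, not Kafka's actual logging code): expected connectivity failures get a one-line WARN with just the message, while anything unexpected keeps the full stack trace at ERROR.

```java
import java.net.ConnectException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietLogging {
    private static final Logger LOG = Logger.getLogger(QuietLogging.class.getName());

    // Routine connectivity errors are expected during broker restarts etc.
    static boolean isRoutine(Throwable t) {
        return t instanceof ConnectException;
    }

    // Log routine failures as a single WARN line; keep the full stack
    // trace only for unexpected exceptions.
    static void logSendFailure(Throwable t) {
        if (isRoutine(t)) {
            LOG.warning("send failed: " + t.getMessage());
        } else {
            LOG.log(Level.SEVERE, "send failed", t); // full stack trace
        }
    }

    public static void main(String[] args) {
        logSendFailure(new ConnectException("Connection refused"));
        logSendFailure(new IllegalStateException("unexpected state"));
    }
}
```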
This makes sense. Please file a JIRA where we can discuss a patch.
Thanks,
Neha
On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg j...@squareup.com wrote:
I'm wondering if a simple change could be to not log full stack traces for
simple things like Connection refused, etc. Seems it would be
Thanks Neha. Looks like this MBean was added recently. The version we are
running is from early June and it doesn't have this MBean.
Thanks,
Raja.
On Mon, Sep 23, 2013 at 9:15 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
On the consumer side, look for
Yes, that is correct. We added this and *MinFetchRate recently.
On Tue, Sep 24, 2013 at 9:10 AM, Rajasekar Elango rela...@salesforce.com wrote:
Thanks Neha. Looks like this MBean was added recently. The version we are
running is from early June and it doesn't have this MBean.
Thanks,
Raja.
It seems to work for me. Here is what I added to my server.config:
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
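In case it helps, the CSV reporter also takes an output directory and a polling interval. These property names are from the 0.8-era KafkaMetricsConfig as I understand it, so treat them (and the values) as assumptions to verify against your version:

```properties
# Directory where the reporter writes one CSV file per metric
# (relative to the broker's working directory unless absolute).
kafka.csv.metrics.dir=/tmp/kafka_metrics
# How often metrics are polled and appended, in seconds.
kafka.metrics.polling.interval.secs=10
```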
It does spit out those warnings that I mentioned, although the reason for
that is actually that we attempt to verify
filed: https://issues.apache.org/jira/browse/KAFKA-1066
On Tue, Sep 24, 2013 at 12:04 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
This makes sense. Please file a JIRA where we can discuss a patch.
Thanks,
Neha
On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg j...@squareup.com wrote:
I've read in the docs and papers that LinkedIn has an auditing system that
correlates message counts from tiers in their system using a time window of
10 minutes. The timestamp on the message is used to determine which window
the message falls into.
My question is how do you account for clock
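The bucketing step described above is straightforward: integer-divide the timestamp by the window size. A small sketch (a hypothetical helper, not LinkedIn's actual audit code) assigning a message timestamp to its 10-minute window:

```java
public class AuditWindow {
    static final long WINDOW_MS = 10 * 60 * 1000L; // 10-minute buckets

    // Start of the 10-minute window that a message's timestamp falls into.
    // Integer division floors the timestamp to the window boundary.
    static long windowStart(long timestampMs) {
        return (timestampMs / WINDOW_MS) * WINDOW_MS;
    }

    public static void main(String[] args) {
        long ts = 1380000123456L; // a millisecond epoch timestamp
        System.out.println(windowStart(ts)); // prints 1380000000000
    }
}
```

Counts keyed by `windowStart` can then be compared across tiers; clock skew between hosts shifts messages near a boundary into the adjacent bucket, which is presumably why the question about synchronization matters.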
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open files
The obvious fix is to bump up the number of open files, but I'm wondering if
there is a leak on the Kafka side and/or our
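One quick way to tell a leak from a limit that is simply too low is to watch whether the descriptor count keeps climbing over time. This Linux-only sketch counts a JVM's own open descriptors via /proc (for the broker process you would look at /proc/&lt;pid&gt;/fd or use lsof; nothing here is a Kafka API):

```java
import java.io.File;

public class OpenFdCount {
    // Count open file descriptors for this JVM by listing /proc/self/fd.
    // Returns -1 where /proc is unavailable (e.g. non-Linux systems).
    static int countOwnFds() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + countOwnFds());
    }
}
```

Sampling this periodically: a steadily growing count under constant load suggests a leak, while a stable count near the limit suggests the limit just needs raising.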
It is assumed that clocks are in sync; we use NTP and it mostly works.
-Jay
On Tue, Sep 24, 2013 at 5:12 PM, Tom Amon ta46...@gmail.com wrote:
I've read in the docs and papers that LinkedIn has an auditing system that
correlates message counts from tiers in their system using a time window
Are you using the java producer client?
Thanks,
Jun
On Tue, Sep 24, 2013 at 5:33 PM, Mark static.void@gmail.com wrote:
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in
acceptor
java.io.IOException: Too many open files