On 01.07.2013, at 17:45, Jun Rao wrote:
> Not sure that I fully understand your problem. Could you attach the
> exception that you saw?
Here is what I do and get:
I use the High Level Consumer and configured it with
auto.offset.reset = other
I get
kafka.common.InvalidConfigException: Wron
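For reference, in 0.8 the only accepted values for auto.offset.reset are "smallest" and "largest", which is why a value like "other" is rejected at startup. A minimal high-level consumer config sketch (host/group values are illustrative):

```properties
# Hedged sketch of a high-level consumer config (Kafka 0.8).
# auto.offset.reset accepts only "smallest" or "largest";
# anything else raises kafka.common.InvalidConfigException.
group.id=test-group
zookeeper.connect=localhost:2181
auto.offset.reset=smallest
```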
Hi Vadim,
It totally depends on your requirement. If you want all metrics in some
file then other option may be better otherwise the best way to monitor
kafka is through jmx.
Also, the list of metrics varies across Kafka versions. I am assuming that
you are using the recent Kafka release, 0.8.
So
Okay, I'll try the same.
I also want to know: in kafka-0.7 there is a single MBean,
SocketServerStats, which provides all Kafka metrics. But in kafka-0.8 there
are individual MBeans (like AllTopicsBytesInPerSec, AllTopicsBytesOutPerSec,
etc.) for getting each parameter like ByteInRate, ByteOutRate e
Hi,
[0] is an old wiki entry for getting Scala set up for development. After
all the huffing and puffing, I gave up on getting it loaded in IntelliJ
IDEA. However, I could get it set up with the Eclipse IDE.
Here is what I did:
- Downloaded the Scala IDE for Eclipse from [1]
- Checked out the codebase from git as
You may have to remove the existing file first.
Thanks,
Jun
On Mon, Jul 1, 2013 at 11:21 AM, Vadim Keylis wrote:
> I am getting this exception even though permissions are properly set and
> an empty file is created. Other metric files are created without problems.
>
> java.io.IOException: Unable to create
By the way, having an official contrib package with graphite, ganglia and
other well-known reporters would be awesome so that not everyone has to
write their own.
On Jul 1, 2013 10:27 PM, "Joel Koshy" wrote:
> Also, there are several key metrics on the broker and client side - we
> should compile
Also, there are several key metrics on the broker and client side - we
should compile a list and put it on a wiki. Can you file a jira for
this?
On Mon, Jul 1, 2013 at 1:26 PM, Joel Koshy wrote:
> The CSV reporter is probably not an ideal fit for production monitoring -
> we use it for getting stats o
The CSV reporter is probably not an ideal fit for production monitoring -
we use it for getting stats out of periodic system test runs.
For production monitoring you are probably better off reading off JMX
and feeding your monitoring system of choice. You can also write a
custom metrics reporter and addi
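Reading off JMX, as suggested above, can be sketched as follows. This is a hedged illustration only: it polls the local platform MBeanServer with a standard JVM bean so the sketch runs anywhere; against a real 0.8 broker you would obtain an MBeanServerConnection via JMXConnectorFactory and substitute a Kafka bean name.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPoll {
    // Read one attribute from an MBean by name. In production you would
    // connect remotely (JMXConnectorFactory) instead of using the local
    // platform server, which serves here as a stand-in.
    static Object readAttribute(String beanName, String attribute) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getAttribute(new ObjectName(beanName), attribute);
    }

    public static void main(String[] args) throws Exception {
        // Against a Kafka 0.8 broker the bean name would look roughly like
        // "kafka.server":type="BrokerTopicMetrics",name="AllTopicsBytesInPerSec"
        // (note the quotes added by metrics 2.2.0). We poll a standard JVM
        // bean so this sketch is runnable without a broker.
        Object uptime = readAttribute("java.lang:type=Runtime", "Uptime");
        System.out.println("Uptime(ms) = " + uptime);
    }
}
```

A monitoring loop would call readAttribute periodically and feed the values to the monitoring system of choice.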
There is "replica.lag.max.messages" (defaults to 4k) and
"replica.lag.time.max.ms" (defaults to 10s). If a replica is behind
the leader by either that many messages or hasn't sent any fetch
requests within the lag config, then it falls out of ISR.
However, as mentioned earlier it is best to avoid
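The two configs mentioned above would appear in the broker's server.properties like this (a sketch; the values shown are the stated defaults):

```properties
# A replica falls out of ISR if it is more than this many messages
# behind the leader...
replica.lag.max.messages=4000
# ...or if it has not sent a fetch request within this many milliseconds
replica.lag.time.max.ms=10000
```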
1. I think it is fine to do a one-off since it won't impact the APIs. It
would be *awesome* to get this working.
2. Let's sync up since I think we may be both working on the same page.
-Jay
On Mon, Jul 1, 2013 at 9:50 AM, Sriram Subramanian <
srsubraman...@linkedin.com> wrote:
> Also,
>
> 1. I
Thanks
On Mon, Jul 1, 2013 at 11:44 AM, David DeMaagd wrote:
> The danger of using a size based rollover (unless you set the size and
> log rollover to be fairly high) is that in case of problems, the actual
> cause of the problem might get rolled off the end by the time you get to
> it (kafka c
The danger of using a size based rollover (unless you set the size and
log rollover to be fairly high) is that in case of problems, the actual
cause of the problem might get rolled off the end by the time you get to
it (kafka can be very chatty in some kinds of failure cases). That is
probably the
Good morning. The log4j property file included with the distribution uses
daily log rotation. Is there any reason you chose daily rotation over
RollingFileAppender?
Thanks,
Vadim
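For comparison, a size-based rotation setup in log4j.properties might look like the following sketch (appender name, path, and sizes are illustrative, not the shipped defaults):

```properties
# Size-based rotation, vs. the DailyRollingFileAppender shipped
# in config/log4j.properties
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=logs/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
```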
I am getting this exception even though permissions are properly set and an
empty file is created. Other metric files are created without problems.
java.io.IOException: Unable to create
/home/kafka/metrics/NumDelayedRequests.csv
at
com.yammer.metrics.reporting.CsvReporter.createStreamForMetric(CsvRepor
Good morning. What is the best way to monitor kafka: through JMX or by
enabling kafka.csv.metrics.reporter.enabled?
What are the important metrics in JMX to watch for and/or graph?
What are the important metrics in csv files to watch for and/or graph?
Thanks,
Vadim
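For context, the CSV reporter is switched on through a handful of keys in the 0.8 server.properties; the values below are the shipped defaults as far as I recall:

```properties
# Polling interval for the metrics reporters, in seconds
kafka.metrics.polling.interval.secs=10
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
# Directory the per-metric .csv files are written to
kafka.csv.metrics.dir=/tmp/kafka_metrics
# Disabled by default; set to true to emit CSV files
kafka.csv.metrics.reporter.enabled=false
```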
Also,
1. I am trying to get the api stuff working but it is a little bit of work.
I need to make Kafka compile with Scala 2.10 first.
2. I have started a design page for kafka replication. The idea is that it
goes as a separate section under the current design page. I will update
the page today and
Yeah thanks for the feedback, that's helpful. Here was my thinking:
1. I think it just makes sense to have one design and implementation page
which describes the most recent release and lives at the top level. You could
imagine wanting to read older design pages but that seems a bit unlikely
mostly,
It seems that the mbean name that you used is wrong. The mbean names
registered by metrics 2.2.0 have quotes in them.
Thanks,
Jun
On Mon, Jul 1, 2013 at 4:31 AM, Hanish Bansal <
hanish.bansal.agar...@gmail.com> wrote:
> Hi
>
> I am getting various kafka parameters (like NumFetchRequests,
> Fet
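To illustrate the quoting Jun describes, a metrics-2.2.0-style ObjectName in Kafka 0.8 looks roughly like this (the exact type/name values may vary by version):

```text
"kafka.server":type="BrokerTopicMetrics",name="AllTopicsBytesInPerSec"
```

In kafka-0.7 the same counters lived under a single unquoted bean such as kafka:type=kafka.SocketServerStats, which is why lookups written for 0.7 names fail against 0.8.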
Not sure that I fully understand your problem. Could you attach the
exception that you saw?
Thanks,
Jun
On Mon, Jul 1, 2013 at 12:50 AM, Martin Eigenbrodt <
martineigenbr...@googlemail.com> wrote:
> Hi there,
>
> I am using kafka 0.8 (currently 0.8.0-beta1-candidate1) and I want my
> consumer
You need to wipe out both the ZK data and the Kafka data from 0.8, in order
to try 0.7.
Thanks,
Jun
On Sun, Jun 30, 2013 at 11:28 PM, Yavar Husain wrote:
> Kafka 0.8 works great. I am able to use CLI as well as write my own
> producers/consumers!
>
> Checking Zookeeper... and I see all the top
Thanks again. It seems the 2nd method is not doable.
The downside of the first method is that if the first data
center is down, the second one still lags behind and may
not have all the messages the first one has. We can let the
publisher publish to the two data centers at the same
time. But that may
Thanks for your comment, Jun. Your last sentence is really helpful.
Regards,
Libo
-----Original Message-----
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Sunday, June 30, 2013 11:52 PM
To: users@kafka.apache.org
Subject: Re: Is it possible to get latest offset from kafka server?
If you use M
Hi
I am getting various kafka parameters (like NumFetchRequests,
FetchRequestsPerSecond, ProduceRequestsPerSecond etc.) through JMX in case
of kafka-0.7.
In kafka-0.8 I am doing the same but am not able to get the parameters.
There are some changes in reporting metrics in kafka-0.8 as compared to
kafka
Hi there,
I am using kafka 0.8 (currently 0.8.0-beta1-candidate1) and I want my consumer
to fail if it can not reliably find out the last consumed offset from zookeeper.
According to https://kafka.apache.org/08/configuration.html:
> What t