According to the documentation, offsets are committed every 10 seconds by
default. Shouldn't that be frequent enough for JMX to be accurate?
auto.commit.interval.ms is the frequency at which the consumed offsets are
committed to ZooKeeper.
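For concreteness, a minimal sketch of the setting in question (the connection values and group id are placeholders, not from the thread):

```java
import java.util.Properties;

public class ConsumerCommitConfig {
    public static Properties build() {
        Properties props = new Properties();
        // Placeholder connection settings (illustrative only).
        props.put("zookeeper.connect", "localhost:2181");
        props.put("group.id", "example-group");
        // Enable periodic offset commits and control their frequency.
        props.put("auto.commit.enable", "true");
        // Commit consumed offsets to ZooKeeper every 10 seconds.
        props.put("auto.commit.interval.ms", "10000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.commit.interval.ms"));
    }
}
```

Lowering the interval makes committed offsets (and therefore offset-checker lag) track the consumer's actual position more closely, at the cost of more ZooKeeper writes.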
On Mon, Nov 16, 2015 at 3:31 PM, allen chan
wrote:
> So to m
Is it not possible to just manually include the packages in my Eclipse
project? Do you have to use a build tool?
Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData
-Original Message-
From: Ewen Chesl
Hi Adaryl,
First, it looks like you might be trying to use the old producer interface.
That interface is going to be deprecated in favor of the new producer
(under org.apache.kafka.clients.producer). I'd highly recommend using the
new producer interface instead.
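For reference, a sketch of what configuring the new producer looks like. The broker address and topic are placeholders, and the actual client calls appear only in comments since they require kafka-clients on the classpath:

```java
import java.util.Properties;

public class NewProducerSketch {
    // Configuration for the new producer (org.apache.kafka.clients.producer).
    // All values are illustrative placeholders.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath, usage would be roughly:
        //   Producer<String, String> producer = new KafkaProducer<>(build());
        //   producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        //   producer.close();
        System.out.println(build().getProperty("bootstrap.servers"));
    }
}
```

Note the new producer is configured with bootstrap.servers (broker addresses) rather than a ZooKeeper connection string, unlike the old kafka.javaapi.producer interface.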
Second, perhaps this repository of
Sorry, there was an out of date reference in the pom.xml, the version on
master should build fine now.
-Ewen
On Sat, Nov 14, 2015 at 1:54 PM, Venkatesh Rudraraju <
venkatengineer...@gmail.com> wrote:
> I tried building copycat-hdfs but it's not able to pull dependencies from
> Maven...
>
> error
Did you just use "./gradlew build" in the root directory?
Guozhang
On Mon, Nov 16, 2015 at 6:41 PM, hsy...@gmail.com wrote:
> What I actually want to do is build and install Kafka in my local Maven
> repository so I can include the new API in my dependencies. When the
> release is officially out
I'm somewhat new to java development and am studying how to write producers.
The sample code I'm looking at has the following import statements:
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
The thing is, he doesn't use any pac
What I actually want to do is build and install Kafka in my local Maven
repository so I can include the new API in my dependencies. When the
release is officially out, I can have my code ready with the official
Maven dependency
Thanks,
Siyuan
On Monday, November 16, 2015, Grant Henke wro
Hi Siyuan,
My guess is that you are trying to build from a subdirectory. I have a
minor patch available to fix this that has not been pulled in yet here:
https://github.com/apache/kafka/pull/509
In the mean time, if you need to build a subproject you can execute a
command like the following:
grad
Hi Siyuan,
1) The new consumer is single-threaded; it does not maintain any internal
threads the way the old high-level consumer does.
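The single-threaded usage pattern can be sketched as follows. The configuration values and topic are placeholders, and the poll loop itself is shown in comments because it requires kafka-clients on the classpath:

```java
import java.util.Properties;

public class NewConsumerSketch {
    // Configuration for the new consumer (org.apache.kafka.clients.consumer).
    // All values are illustrative placeholders.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // With kafka-clients on the classpath, the loop would be roughly:
        //   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(build());
        //   consumer.subscribe(Arrays.asList("my-topic"));
        //   while (true) {
        //       ConsumerRecords<String, String> records = consumer.poll(100);
        //       // All fetching happens here, on this same thread --
        //       // there are no internal fetcher threads.
        //   }
        System.out.println(build().getProperty("group.id"));
    }
}
```

Because all network I/O happens inside poll(), the application controls when fetching occurs, unlike the old high-level consumer's background fetcher threads.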
2) Each consumer will only maintain one TCP connection with each broker.
The only extra socket is the one with its coordinator. That is, if there are
three brokers S1, S2, S3, a
Siyuan,
Which command did you use to build?
Guozhang
On Mon, Nov 16, 2015 at 4:01 PM, hsy...@gmail.com wrote:
> I got a build error on both trunk and 0.9.0 branch
>
> > docs/producer_config.html (No such file or directory)
>
> Am I missing anything before building?
>
> Thanks,
> Siyuan
>
The new consumer API looks good. If I understand it correctly, you can use
it like the simple consumer or the high-level consumer. But I have a couple
of questions about its internal implementation.
First of all, does the consumer have any internal fetcher threads like the
high-level consumer?
When you assign multi
Hi,
Has anyone used Protobuf with the spark-cassandra connector? I am using
protobuf-3.0-beta with spark-1.4 and cassandra-connector-2.10. I keep
getting "Unable to find proto buffer class" in my code. I checked the version
of the protobuf jar and it is loaded with 3.0-beta in the classpath. Protobuf is
comin
I got a build error on both trunk and 0.9.0 branch
> docs/producer_config.html (No such file or directory)
Am I missing anything before building?
Thanks,
Siyuan
So to make JMX accurate, we need to tweak the frequency of commits? What
setting would that be?
On Mon, Nov 16, 2015 at 8:40 AM, Scott Reynolds
wrote:
> On Mon, Nov 16, 2015 at 8:27 AM, Abu-Obeid, Osama <
> osama.abu-ob...@morganstanley.com> wrote:
>
> > I can observe the same thing:
> >
> > - L
Yes I think so. We specifically upgraded the Kafka broker with a patch to
avoid the ZK client NPEs. Guess not all of them are fixed. The Kafka broker
becoming a zombie even if one ZK node is bad is especially terrible.
On Tuesday, November 17, 2015, Mahdi Ben Hamida wrote:
> Hello,
>
> See below
Hello,
See below for my original email. I was wondering if anybody has feedback
on the four questions I've asked. Should I go ahead and file this as a bug?
Thanks.
--
Mahdi.
On 11/12/15 2:37 PM, Mahdi Ben Hamida wrote:
Hi Everyone,
We are using kafka 0.8.2.1 and we noticed that kafka/zookeep
On Mon, Nov 16, 2015 at 8:27 AM, Abu-Obeid, Osama <
osama.abu-ob...@morganstanley.com> wrote:
> I can observe the same thing:
>
> - Lag values read through the Kafka consumer JMX is 0
>
This metric is based on the consumer's fetched position, which includes *uncommitted* offsets
- Lag values read through kafka-run-class.sh
> kafka.tools.ConsumerO
I can observe the same thing:
- Lag values read through the Kafka consumer JMX is 0
- Lag values read through kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
is on average 200K-400K
When the Lag value in the Kafka consumer JMX is high (for example 5M),
ConsumerOffsetChecker shows a matchi
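The two readings above are consistent with the tools measuring lag against different offsets: the consumer JMX metric tracks the fetched position, while ConsumerOffsetChecker reads committed offsets. A toy sketch of the arithmetic (all offset values are made up for illustration):

```java
public class ConsumerLag {
    // Lag against the *committed* offset: what ConsumerOffsetChecker reports.
    public static long committedLag(long logEndOffset, long committedOffset) {
        return logEndOffset - committedOffset;
    }

    // Lag against the *fetched* offset: what the consumer JMX metric tracks.
    // The fetched position can run far ahead of the committed offset.
    public static long fetchLag(long logEndOffset, long fetchedOffset) {
        return logEndOffset - fetchedOffset;
    }

    public static void main(String[] args) {
        long logEnd = 1_000_000L;
        long fetched = 1_000_000L;   // consumer has fetched everything
        long committed = 700_000L;   // but has only committed up to here
        System.out.println(fetchLag(logEnd, fetched));       // JMX-style lag: 0
        System.out.println(committedLag(logEnd, committed)); // checker-style lag: 300000
    }
}
```

So a JMX lag of 0 alongside a checker lag of 200K-400K simply means the consumer is keeping up with fetches but its commits trail behind by one commit interval's worth of messages.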
Folks, The SF Bay Big Data Ingest Meetup is interested in the conversation
around what the next generation of big data ingest looks like. I wanted to
extend an invitation to this group to contribute to this ongoing
conversation - we are planning an unconference after the scheduled panel
and would
Hello,
That answers my question. Thanks
Mojhaha
On Mon, Nov 16, 2015 at 7:55 PM, Grant Henke wrote:
> Hi Mojhaha,
>
> You will not have access to the actual Apache Kafka repo. Everyone
> contributes via their own fork, asking for the changes to be pulled into
> the Apache Kaf
Hi Mojhaha,
You will not have access to the actual Apache Kafka repo. Everyone
contributes via their own fork, asking for the changes to be pulled into
the Apache Kafka repo with a pull request. The guide linked earlier is a
great resource for the GitHub process.
Thanks,
Grant
On Sat, Nov 14, 2015
If the producer doesn't get a response and retries, but both produce requests
succeeded, you will get duplicates. Kafka does not have an idempotent
producer.
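A toy illustration of that failure mode, with an in-memory stand-in for the broker log (nothing here uses the real Kafka API):

```java
import java.util.ArrayList;
import java.util.List;

public class RetryDuplicateSketch {
    // Toy "broker" log: appends always succeed, but the ack can be lost.
    static final List<String> log = new ArrayList<>();

    // Returns true only if the ack made it back to the producer.
    static boolean send(String msg, boolean ackLost) {
        log.add(msg);        // broker commits the message either way
        return !ackLost;     // producer sees success only if the ack arrives
    }

    public static void main(String[] args) {
        String msg = "order-42";
        // First attempt: the broker commits, but the ack is lost in transit.
        if (!send(msg, true)) {
            // The producer assumes failure and retries,
            // so the same message lands in the log twice.
            send(msg, false);
        }
        System.out.println(log.size()); // 2: the message appears twice
    }
}
```

This is the classic at-least-once trade-off: retries protect against message loss but allow duplicates, so consumers that need exactly-once semantics have to deduplicate downstream.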
On Fri, Nov 13, 2015 at 4:35 AM, Prabhjot Bharaj
wrote:
> Hi Gwen,
>
> If the producer can't get a response but the message got committed, because of
>
Hi,
I am currently using Kafka to generate metrics and look at them in Graphite.
However, I am having trouble accessing the beans for the consumer and producer
domains. Below is a list of the domains I have available (gathered by exposing
JMX on a port and using http://wiki.cyclopsgroup.org/jmxterm/ ).
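As a debugging starting point, a small stdlib-only sketch that lists the MBean domains visible in a JVM. Run inside the client process (or adapt it for a remote JMX connection, e.g. the one jmxterm uses) to check whether the kafka.consumer / kafka.producer domains are actually registered:

```java
import java.lang.management.ManagementFactory;
import java.util.Arrays;
import javax.management.MBeanServer;

public class ListJmxDomains {
    public static void main(String[] args) {
        // The platform MBean server holds every MBean registered in this JVM,
        // including any Kafka client metrics beans, if they exist.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        String[] domains = server.getDomains();
        Arrays.sort(domains);
        for (String d : domains) {
            System.out.println(d);
        }
    }
}
```

If the kafka.consumer / kafka.producer domains are missing from the output, the beans were never registered in that JVM (for example, the metrics live in a different process than the one whose JMX port is exposed).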