The new KafkaConsumer is not yet released. It is planned for the 0.9.0 release.
On 2/13/15, Jayesh Thakrar wrote:
> Hi,
> I am trying to write a consumer using the KafkaConsumer class
> from
> https://github.com/apache/kafka/blob/0.8.2/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsum
Hi,
I am trying to write a consumer using the KafkaConsumer class from
https://github.com/apache/kafka/blob/0.8.2/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java.
My code is pretty simple, with the snippet shown below. However, what I am
seeing is that I am not seeing any
Thanks for the explanation. Is there a way that I can wipe out the offset
stored in Kafka so that the checker can continue to work again?
On Fri, Feb 13, 2015 at 1:31 PM, Jiangjie Qin
wrote:
> I think this is the offset checker bug.
> The offset checker will
> 1. first check if the offset exists
This is a serious issue, we'll take a look.
-Jay
On Thu, Feb 12, 2015 at 3:19 PM, Solon Gordon wrote:
> I saw a very similar jump in CPU usage when I tried upgrading from 0.8.1.1
> to 0.8.2.0 today in a test environment. The Kafka cluster there is two
> m1.larges handling 2,000 partitions acros
I think this is the offset checker bug.
The offset checker will
1. first check if the offset exists in offset topic on broker or not.
2. If it is on broker then it will just return that offset.
3. Otherwise it goes to zookeeper.
So the problem you saw was actually following this logic.
After dual
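The fallback order described above can be sketched as follows. This is a toy sketch; the class, method, and parameter names are mine, not the offset checker's actual internals:

```java
// Rough sketch of the offset checker's lookup order described above.
// Names here are illustrative, not the real tool's API.
public class OffsetCheckerSketch {

    // 1. Check whether an offset exists in the offset topic on the broker.
    // 2. If so, return it.
    // 3. Otherwise fall back to the offset stored in ZooKeeper.
    static long resolveOffset(Long brokerOffset, long zookeeperOffset) {
        return (brokerOffset != null) ? brokerOffset : zookeeperOffset;
    }

    public static void main(String[] args) {
        System.out.println(resolveOffset(42L, 7L));   // broker offset wins: 42
        System.out.println(resolveOffset(null, 7L));  // falls back to ZooKeeper: 7
    }
}
```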
I used the one shipped with 0.8.2. It is pretty straightforward to
reproduce the issue.
Here are the steps to reproduce:
1. I have a consumer using high level consumer API with initial settings
offsets.storage=kafka and dual.commit.enabled=false.
2. After consuming messages for a while shutdown th
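For reference, the initial settings from step 1 written out as high-level consumer properties; the property keys are from the message above, while the class and method names are illustrative:

```java
import java.util.Properties;

// The initial consumer settings from step 1 of the reproduction steps.
public class ReproConfig {
    static Properties initialSettings() {
        Properties props = new Properties();
        props.put("offsets.storage", "kafka");     // commit offsets to Kafka
        props.put("dual.commit.enabled", "false"); // do not also commit to ZooKeeper
        return props;
    }

    public static void main(String[] args) {
        System.out.println(initialSettings().getProperty("offsets.storage")); // kafka
    }
}
```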
That is weird. Are you by any chance running an older version of the
offset checker? Is this straightforward to reproduce?
On Fri, Feb 13, 2015 at 09:57:31AM +0800, tao xiao wrote:
> Joel,
>
> No, the metric was not increasing. It was 0 all the time.
>
> On Fri, Feb 13, 2015 at 12:18 AM, Joel Ko
It would.
The way we see things is that if retrying has a chance of success, we will
retry.
Duplicates are basically unavoidable, because the producer can't always
know if the leader received the message or not.
So we expect the consumers to de-duplicate messages.
Gwen
On Thu, Feb 12, 2015 at 7:
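A minimal sketch of the consumer-side de-duplication Gwen describes, assuming the producer embeds some stable id in each message; the id scheme and class name here are hypothetical:

```java
import java.util.HashSet;
import java.util.Set;

// Consumer-side de-duplication sketch: remember ids we've already processed
// and skip repeats.  How you derive a stable id (e.g. a key the producer
// embeds in each message) is up to your application.
public class DedupSketch {
    private final Set<String> seen = new HashSet<String>();

    // Returns true only the first time a given id is observed.
    boolean shouldProcess(String messageId) {
        return seen.add(messageId);
    }

    public static void main(String[] args) {
        DedupSketch dedup = new DedupSketch();
        System.out.println(dedup.shouldProcess("msg-1")); // true
        System.out.println(dedup.shouldProcess("msg-1")); // false (duplicate)
    }
}
```

In practice the seen-set needs bounding (e.g. per-partition high-water marks rather than an unbounded set), but the principle is the same.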
Joel,
No, the metric was not increasing. It was 0 all the time.
On Fri, Feb 13, 2015 at 12:18 AM, Joel Koshy wrote:
> Actually I meant to say check that is not increasing.
>
> On Thu, Feb 12, 2015 at 08:15:01AM -0800, Joel Koshy wrote:
> > Possibly a bug - can you also look at the MaxLag mbean
I've checked on other projects: Hadoop, HBase, and Solr all dropped
their JDK6 support in the last few months.
The process they used in Hadoop was:
1. Announce that a specific release is the last one with JDK6 support (2.6
for Hadoop)
2. Allowed contributors from companies that plan to stay on
I saw a very similar jump in CPU usage when I tried upgrading from 0.8.1.1
to 0.8.2.0 today in a test environment. The Kafka cluster there is two
m1.larges handling 2,000 partitions across 32 topics. CPU usage rose from
40% into the 150%–190% range, and load average from under 1 to over 4.
Downgrad
Jun,
Pardon the radio silence. I booted up a new broker, created a topic with
three (3) partitions and replication factor one (1) and used the
*kafka-producer-perf-test.sh
*script to generate load (using messages of roughly the same size as ours).
There was a slight increase in CPU usage (~5-10%)
Hi Kafka Camus Users,
I have a quick question on the capabilities of Camus consumption.
We have a scenario wherein we want to consume data from two
independent Broker/Zk Clusters on two different Data Centers.
Both of these Broker clusters have the same named topic.
Our Camus Job running on a
Hi Team,
I need some help in solving my current issue related to
"kafka-leadership-rebalance".
I have 2 brokers. I have deployed 2 topics with 2 partitions and 2 replicas
each in the following format. I made use of kafka-reassignment.sh for the same:
Topic partition Leader Fol
Hmm,
I think the problem is in
>'typeNames' => ['name'],
It should be 'typeNames' => ['topic'],
>
> On 12 Feb 2015, at 19:04, Evgeniy Shishkin wrote:
>
> Yeah, i want this, but for 0.8.2 =\
> Specifically, how to discover per topic BrokerTopicMetrics
>
> Code
>{
>
I think the problem is that google was giving people that obsolete wiki
page rather than the real quickstart. I deleted the wiki page and linked
the quickstart on the main page.
-Jay
On Thu, Feb 12, 2015 at 8:30 AM, Mark Reddy wrote:
> Hi Saravana,
>
> Since 0.8.1 Kafka uses Gradle, previous to
Sorry... I probably didn't word that well. When I say each message, that's
assuming you're going out to poll for new messages on the topic. I.e. if you
have a latency tolerance of 1 second, then you'd never want to go more than
1 second without polling for new messages.
On Thu, Feb 12, 2015 at 12:
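That rule can be sketched as a simple loop; fetchNewMessages() and the other names below are stand-ins, not a real consumer API:

```java
// Sketch of the rule above: with a latency tolerance of 1 second, never let
// more than 1 second pass between polls.
public class PollEveryTolerance {

    // How long we may still sleep before the tolerance window expires.
    static long remainingWaitMs(long toleranceMs, long elapsedSincePollMs) {
        return Math.max(0, toleranceMs - elapsedSincePollMs);
    }

    static int fetchNewMessages() {
        return 0; // stand-in: would fetch and return new messages
    }

    public static void main(String[] args) throws InterruptedException {
        final long toleranceMs = 1000;
        long lastPoll = System.currentTimeMillis();
        for (int i = 0; i < 2; i++) {
            long elapsed = System.currentTimeMillis() - lastPoll;
            // Sleep only the remainder of the tolerance window, never past it.
            Thread.sleep(remainingWaitMs(toleranceMs, elapsed));
            fetchNewMessages();
            lastPoll = System.currentTimeMillis();
        }
    }
}
```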
Thanks.
I was under the impression that you can't process the data as each record
comes into the topic? That Kafka doesn't support pushing messages to a
consumer like a traditional message queue?
On 12 February 2015 at 11:06, David McNelis
wrote:
> I'm going to go a bit in reverse for your ques
Let's have a KIP for removing JDK 6 support (on another thread), as it would
make for a good discussion and something we should be able to come to
closure on as a dependency for this and other patches that would favor it
happening.
~ Joe Stein
- - - - - - - - - - - - - - - - -
http://www.stealth.
Thanks for the review Gwen. I'll keep in mind about java 6 support.
-Harsha
On Wed, Feb 11, 2015, at 03:28 PM, Gwen Shapira wrote:
> Looks good. Thanks for working on this.
>
> One note, the Channel implementation from SSL only works on Java7 and up.
> Since we are still supporting Java 6, I'm w
Hi Saravana,
Since 0.8.1 Kafka uses Gradle; previous to that SBT was used. Here is the
JIRA which drove the change:
https://issues.apache.org/jira/browse/KAFKA-1171
The official docs for the latest release (0.8.2) can be found here:
http://kafka.apache.org/documentation.html
Regards,
Mark
On
Kafka migrated to Gradle. Please follow the README instructions.
On 2/12/15, madhavan kumar wrote:
> dear all,
> i am new to kafka. And when i try to set up kafka source code on my
> lappie, github's readme points to gradle whereas kafka Quick start
> Documentation talks about scala build tool sbt
Dear all,
I am new to Kafka. When I try to set up the Kafka source code on my
lappie, GitHub's readme points to Gradle whereas the Kafka Quick Start
documentation talks about the Scala build tool sbt @
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.8+Quick+Start
which one is kafka using to com
Actually, I meant to say: check that it is not increasing.
On Thu, Feb 12, 2015 at 08:15:01AM -0800, Joel Koshy wrote:
> Possibly a bug - can you also look at the MaxLag mbean in the consumer
> to verify that the maxlag is zero?
>
> On Thu, Feb 12, 2015 at 11:24:42PM +0800, tao xiao wrote:
> > Hi Joel
Possibly a bug - can you also look at the MaxLag mbean in the consumer
to verify that the maxlag is zero?
On Thu, Feb 12, 2015 at 11:24:42PM +0800, tao xiao wrote:
> Hi Joel,
>
> When I set dual.commit.enabled=true the count value of both metrics got
> increased. After I set offsets.storage=zooke
Yeah, i want this, but for 0.8.2 =\
Specifically, how to discover per topic BrokerTopicMetrics
Code
{
  'name' =>
    'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*',
  'resultAlias' =>
    'kafka.server.BrokerTopicMetrics.TopicMessagesInPerSec',
  '
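Combining the object above with the typeNames fix suggested in this thread ('topic' rather than 'name'), a plausible complete stanza for the puppet-jmxtrans module might look like the following. Treat it as a guess to adapt, not a verified config:

```
{
  'name'        => 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec,topic=*',
  'resultAlias' => 'kafka.server.BrokerTopicMetrics.TopicMessagesInPerSec',
  'typeNames'   => ['topic'],
},
```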
Hm, I’m not sure how long ago the Kafka jmxtrans example was last updated in
the puppet-jmxtrans module.
Wikimedia is currently using Kafka 0.8.1.1 with this puppet module’s jmxtrans
setup:
https://github.com/wikimedia/puppet-kafka/blob/master/manifests/server/jmxtrans.pp
> On Feb 12, 2015, at 10
Hi,
I'm experimenting with the following scenario:
- 3 brokers are running (0,1 and 2) -- Kafka version 0.8.2.0
- Continuously: restart broker number 0 by triggering controlled shutdown.
Sleep rand(10) seconds. repeat.
- Continuously: create 'simple-test-topic' (RF=2), write and read messages,
th
Hi Joel,
When I set dual.commit.enabled=true, the count values of both metrics
increased. After I set offsets.storage=zookeeper, only ZooKeeperCommitsPerSec
changed but not KafkaCommitsPerSec. I think this is expected, as Kafka
offset storage was turned off.
But when I looked up the consumer lag
Hello,
after upgrading to 0.8.2 with the reworked JMX beans, I am having problems
adapting the https://github.com/wikimedia/puppet-jmxtrans config.
Can anyone share their adjusted $jmx_kafka_objects for 0.8.2?
If a producer produces a request with RequiredAcks of -1 (wait for all
ISRs), and the broker returns a RequestTimedOut error, are the messages
still committed locally on the leader broker? The protocol states "we will
not terminate a local write" which implies to me that even when
RequestTimedOut i
I'm going to go a bit in reverse for your questions. We built a restful API
to push data to so that we could submit things from multiple sources that
aren't necessarily things that our team would maintain, as well as validate
that data before we send it off to a topic.
As for consumers... we expec
Thanks again David. So what kind of latencies are you experiencing with
this? If I wanted to act upon certain events in this and send out alarms
(email, sms etc), what kind of delays are you seeing by the time you're
able to process them?
It seems if you were to create an alarm topic, and dump ale
There are mbeans named KafkaCommitsPerSec and ZooKeeperCommitsPerSec -
can you look those up and see what they report?
On Thu, Feb 12, 2015 at 07:32:39PM +0800, tao xiao wrote:
> Hi team,
>
> I was trying to migrate my consumer offset from kafka to zookeeper.
>
> Here is the original settings of
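One way to look those meters up from inside the consumer JVM is a wildcard JMX query. This is a sketch using the standard javax.management API; the exact domain and key properties of the KafkaCommitsPerSec / ZooKeeperCommitsPerSec beans may differ by version, hence the broad pattern:

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;

// Sketch: list any registered mbeans whose name attribute ends in
// "CommitsPerSec", using a wildcard ObjectName query.
public class FindCommitMeters {
    static Set<ObjectName> commitMeters(MBeanServer server)
            throws MalformedObjectNameException {
        // '*' wildcards match any domain and any remaining key properties.
        return server.queryNames(new ObjectName("*:name=*CommitsPerSec,*"), null);
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        for (ObjectName name : commitMeters(server)) {
            System.out.println(name);
        }
    }
}
```

Run inside the consumer JVM (or adapt it for a remote JMX connection) and inspect each bean's Count attribute.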
Z. Z
Sent from the wilds on my BlackBerry smartphone.
Original Message
From: Gary Ogden
Sent: Thursday, February 12, 2015 8:23 AM
To: users@kafka.apache.org
Reply To: users@kafka.apache.org
Subject: Re: understanding partition key
Thanks David. Whether Kafka is the right choic
In our setup we deal with a similar situation (lots of time-series data
that we have to aggregate in a number of different ways).
Our approach is to push all of our data to a central producer stack, that
stack then submits data to different topics, depending on a set of
predetermined rules.
For a
Thanks David. Whether Kafka is the right choice is exactly what I'm trying
to determine. Everything I want to do with these events is time based.
Store them in the topic for 24 hours. Read from the topics and get data for
a time period (last hour, last 8 hours, etc). This reading from the topics
c
Gary,
That is certainly a valid use case. What Zijing was saying is that you can
only have 1 consumer per consumer application per partition.
I think that what it boils down to is how you want your information grouped
inside your timeframes. For example, if you want to have everything for a
spe
So it's not possible to have 1 topic with 1 partition and many consumers of
that topic?
My intention is to have a topic with many consumers, but each consumer
needs to be able to have access to all the messages in that topic.
On 11 February 2015 at 20:42, Zijing Guo wrote:
> Partition key is on
Hi team,
I was trying to migrate my consumer offsets from Kafka to ZooKeeper.
Here are the original settings of my consumer
props.put("offsets.storage", "kafka");
props.put("dual.commit.enabled", "false");
Here are the steps
1. set dual.commit.enabled=true
2. restart my consumer and monitor offse
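Step 1 above, expressed as a property change on the running configuration; the class and method names are illustrative:

```java
import java.util.Properties;

// Step 1 of the migration: turn on dual commit so offsets get written to
// both Kafka and ZooKeeper while the consumer keeps running.
public class DualCommitStep {
    static Properties enableDualCommit(Properties props) {
        props.put("dual.commit.enabled", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("offsets.storage", "kafka");
        props.put("dual.commit.enabled", "false");
        enableDualCommit(props);
        System.out.println(props.getProperty("dual.commit.enabled")); // true
    }
}
```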