Thanks Jun,
I already set the retention policy to 1 hour and a size limit of 10 MB, but it
didn't work; logs still pile up in the "logs/" folder. Maybe I am missing
something.
best,
/Shahab
On Thu, Dec 12, 2013 at 4:57 PM, Jun Rao wrote:
> Log deletion is controlled by a retention policy, which can be eit
Thanks for all your replies.
Jun, perhaps you are right. There seems to be no other choice but to use
more brokers, at least for the current version.
Jay, you've got what I mean. And thanks for sharing your approach, though
we are under different circumstances.
Magnus, as you said, my consumer p
Hi folks,
There are three brokers running 0.8-beta1 in our cluster currently. Assume all
the topics have six partitions.
I am going to add another three brokers to the cluster and upgrade all of them
to 0.8. My question is: after the cluster is up, will the partitions be evenly
distributed to all
No, the 6 partitions for each topic will remain on the original brokers. You
could either reassign some partitions from all topics to the new brokers or
you could add partitions to the new brokers for each topic. In 0.8.0 there
is now an add-partitions tool.
Cheers
Rob Turner.
On 13 December 2
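As a sketch of Rob's two options (0.8-era tool names; the topic name, partition count, and ZooKeeper address are made-up examples, and flags differ slightly between 0.8.0 and 0.8.1):

```
# Option 1: add partitions for a topic, placing them on the new brokers
bin/kafka-add-partitions.sh --topic mytopic --partition 3 \
  --zookeeper localhost:2181

# Option 2: reassign some existing partitions to the new brokers
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute
```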
Hi All,
we are testing performance of our kafka 0.8 release cluster:
- 5 kafka nodes with one zookeeper instance on each machine
- FreeBSD 9.2
- Nagle off (sysctl net.inet.tcp.delayed_ack=0)
- all kafka machines write a ZFS ZIL to a dedicated SSD
- 8 producers on 2 machines, writing to 30 topics,
If you are using ZK based consumers, they should handle broker restarts
automatically.
Thanks,
Jun
On Thu, Dec 12, 2013 at 11:36 PM, Yonghui Zhao wrote:
> Hi,
>
> In kafka 0.7.2, we use 2 brokers and 12 consumers.
>
> I have 2 questions:
>
> 1. seems if one broker is restarted then all consum
Log retention only kicks in on old log segments; the last (active) log
segment is never deleted. Are new log segments being rolled? Segment
rolling can also be controlled by size or time.
Thanks,
Jun
On Fri, Dec 13, 2013 at 3:01 AM, shahab wrote:
> Thanks Jun,
> I already set the reten
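For reference, a minimal sketch of the broker settings involved (0.8-era property names; the values are illustrative examples, not recommendations):

```
# server.properties
log.retention.hours=1         # time-based retention; applies to old segments only
log.retention.bytes=10485760  # size-based retention, per partition
log.segment.bytes=10485760    # roll a new segment once the active one reaches this size
log.roll.hours=1              # or roll by time, so retention has old segments to delete
```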
Florian,
Thanks for sharing the results. The response time with ack=-1 is going to
be larger than with ack=1, since it involves an extra network hop. So it's
normal for ack=1 to have twice the throughput of ack=-1, given the
same #producers and batch size.
The other thing you said is that pr
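The two cases being compared correspond to the 0.8 producer ack setting; a sketch (property name per the 0.8 producer configs):

```
# producer.properties
request.required.acks=1    # wait for the leader only: one network hop
#request.required.acks=-1  # wait for all in-sync replicas: extra hop, higher latency
```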
Hi ,
Can you let us know why we are getting this error when trying to spawn a
consumer?
ZookeeperConsumerConnector can create message streams at most once
Balaji
Shafaq,
That's pretty cool, have you already connected Kafka to spark RRD/DStream
or is that something you have to figure out?
-Steve
On Tue, Dec 10, 2013 at 7:10 PM, Shafaq wrote:
> Hi Steve,
>
> The first phase would be pretty simple, essentially hooking up the
> Kafka-DStream-consumer t
Thats already done.
On Dec 13, 2013 9:28 AM, "Steve Morin" wrote:
> Shafaq,
> That's pretty cool, have you already connected Kafka to spark
> RRD/DStream or is that something you have to figure out?
> -Steve
>
>
> On Tue, Dec 10, 2013 at 7:10 PM, Shafaq wrote:
>
>> Hi Steve,
>>
>> The first
Would you mind sharing your connection setup?
> On Dec 13, 2013, at 10:36, Shafaq wrote:
>
> Thats already done.
>
>> On Dec 13, 2013 9:28 AM, "Steve Morin" wrote:
>> Shafaq,
>> That's pretty cool, have you already connected Kafka to spark RRD/DStream
>> or is that something you have to fig
Partition movement is not an automatic operation in Kafka yet. You need to
use the partition reassignment tool:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
Also, that feature is stable in 0.8.1.
Thanks,
Neha
On Fri, Dec 13, 20
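A sketch of driving that tool in 0.8.1 (topic name and broker ids are made-up examples):

```
# reassign.json: move partition 0 of "mytopic" onto brokers 4 and 5
{"version": 1,
 "partitions": [{"topic": "mytopic", "partition": 0, "replicas": [4, 5]}]}

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json --execute
```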
On Fri, Dec 13, 2013 at 11:21 AM, Neha Narkhede wrote:
> Partition movement is not an automatic operation in Kafka yet. You need to
> use the partition reassignment tool -
>
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-6.ReassignPartitionsTool
> .
>
>
> Al
It is HEAD from trunk. So far, it seems stable at LinkedIn.
On Fri, Dec 13, 2013 at 12:07 PM, David Birdsong
wrote:
> On Fri, Dec 13, 2013 at 11:21 AM, Neha Narkhede wrote:
>
> > Partition movement is not an automatic operation in Kafka yet. You need
> to
> > use the partition reassignment too
Any idea on this error, guys?
-Original Message-
From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
Sent: Friday, December 13, 2013 9:32 AM
To: 'users@kafka.apache.org'
Subject: Unable to start consumers in Tomcat
Hi ,
Can you guys lets us know why are we getting this error when
Which version of kafka are you using?
On Fri, Dec 13, 2013 at 2:29 PM, Seshadri, Balaji
wrote:
> Any idea on this error guys ?.
>
> -Original Message-
> From: Seshadri, Balaji [mailto:balaji.sesha...@dish.com]
> Sent: Friday, December 13, 2013 9:32 AM
> To: 'users@kafka.apache.org'
> Sub
0.8
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Friday, December 13, 2013 3:33 PM
To: users@kafka.apache.org
Subject: Re: Unable to start consumers in Tomcat
Which version of kafka are you using?
On Fri, Dec 13, 2013 at 2:29 PM, Seshadri, Balaji
wrote
That error comes if you try to create message streams on an instantiated
consumer connector more than once.
Why are you using Tomcat for the consumers? Is it to see the results of
messages? If so, you need to isolate the consumer in some way so there is a
singleton (assuming one partition or if
We needed an HTTP interface to start our consumers using a REST interface for
management; that's why we chose Tomcat to run our consumers.
We create streams only once, when we initially start the consumer.
-Original Message-
From: Joe Stein [mailto:joe.st...@stealth.ly]
Sent: Friday, December 13, 20
We deploy our applications as war files in tomcat. Most of these
applications have a rest API but some also consume from topics. For the
apps running in tomcat we use spring and initialize our kafka consumer in
the @PostConstruct of a Spring bean, so this only happens once.
The bean itself which is
You should check your code to verify that it is only called once per
instantiated consumer connector. Here is where the exception is thrown:
https://github.com/apache/kafka/blob/0.8/core/src/main/scala/kafka/consumer/ZookeeperConsumerConnector.scala#L133
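A sketch of the constraint in question, using the 0.8 high-level consumer API (topic names and the props config are assumed):

```
import kafka.consumer.{Consumer, ConsumerConfig}

val connector = Consumer.create(new ConsumerConfig(props))

// The first call succeeds:
val streams = connector.createMessageStreams(Map("mytopic" -> 1))

// A second call on the SAME connector instance throws
// "ZookeeperConsumerConnector can create message streams at most once":
// connector.createMessageStreams(Map("othertopic" -> 1))

// To consume under another group (or create more streams), build a new
// connector from a separate ConsumerConfig instead.
```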
Can't we create message streams for the same topic but a different consumer group?
-Original Message-
From: Joe Stein [mailto:joe.st...@stealth.ly]
Sent: Friday, December 13, 2013 4:23 PM
To: users@kafka.apache.org
Subject: Re: Unable to start consumers in Tomcat
You should check your code to
You do this at the consumer connector level, not at the message stream level,
so one connector:
propsA.put("group.id", "groupA")
val configA = new ConsumerConfig(propsA)
val one = Consumer.create(configA)
and another:
propsB.put("group.id", "groupB")
val configB = new ConsumerConfig(propsB)
val two = Consumer.create(configB)
Hi all,
I'm running Kafka 0.7 and I'm having problems running the ConsumerOffsetChecker.
I have messages loaded in a topic called eventdata and I can consume the
messages fine but when I try to run ConsumerOffsetChecker I get the following
error:
[2013-12-13 12:24:48,034] INFO zookeeper state c
The tool expects the broker info string to be in the format host:port,
with only one ":" in the string.
Guozhang
On Fri, Dec 13, 2013 at 4:36 PM, Xuyen On wrote:
> Hi all,
>
> I'm running Kafka 0.7 and I'm having problems running the
> ConsumerOffsetChecker.
> I have messages loaded in a topi
The ZK format is for 0.7.2. Are you using 0.7.2 ConsumerOffsetChecker?
Thanks,
Jun
On Fri, Dec 13, 2013 at 4:36 PM, Xuyen On wrote:
> Hi all,
>
> I'm running Kafka 0.7 and I'm having problems running the
> ConsumerOffsetChecker.
> I have messages loaded in a topic called eventdata and I can c
I will have to give that a try as well. I have been having a real tough
time with the tool in 0.8.0. It fails frequently and I have to roll
restarts on my brokers to get partial changes to stick.
It should come with a warning! =)
On Fri, Dec 13, 2013 at 2:03 PM, Neha Narkhede wrote:
> It is H