Hi,
I want to launch Kafka on three machines. One option is to launch ZooKeeper
on all three machines first, and after that start a Kafka server on each
machine. Alternatively, on each machine I could start ZooKeeper followed by Kafka.
I believe the first way is the right way to go, but I want to confirm it.
Regards,
First launch the ZooKeeper cluster completely, then the Kafka
cluster.
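In script form, that ordering might look like this (install paths and config file names are assumptions; the first command runs on all three machines, and the second runs only once the ZooKeeper quorum is up):

```shell
# Step 1 (on each of the three machines): start ZooKeeper and let the quorum form.
/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper.properties &

# Step 2 (on each machine, only after all three ZooKeeper nodes are up):
# start the Kafka broker, whose config points at the full ZooKeeper ensemble.
/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties &
```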
Thanks,
Neha
On May 22, 2013 8:43 AM, Yu, Libo libo...@citi.com wrote:
Hi,
I'm currently trying to understand how Kafka (0.8) can scale with our usage
pattern and how to set up partitioning.
We want to route messages belonging to the same id to the same
queue, so its consumer will be able to consume all the messages for that id.
My questions:
- From my
Hi Tim,
On Wed, May 22, 2013 at 3:25 PM, Timothy Chen tnac...@gmail.com wrote:
- I see that Kafka's server.properties allows one to specify the number of
partitions it supports. However, when we want to scale, I wonder: if we add
partitions or brokers, will the same partitioner start distributing
the messages to different partitions?
And if it does, how can that same
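The concern here, that the key-to-partition mapping shifts when the partition count changes, can be illustrated with plain arithmetic (cksum stands in for Kafka's actual hash, so the numbers are illustrative only, not what Kafka would compute):

```shell
# The default partitioner is roughly hash(key) % num_partitions, so the same
# key can land on a different partition once the partition count changes.
key="user-42"
h=$(printf '%s' "$key" | cksum | cut -d' ' -f1)
p4=$(( h % 4 ))
p8=$(( h % 8 ))
echo "partition with 4 partitions: $p4"
echo "partition with 8 partitions: $p8"
```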
All,
I asked a number of questions of the group over the last week, and I'm happy to
report that I've had great success getting Kafka up and running in AWS. I am
using 3 EC2 instances, each of which is a M2 High-Memory Quadruple Extra Large
with 8 cores and 58.4 GiB of memory according to the
Thanks for sharing your experience with the community, Jason!
-Neha
On Wed, May 22, 2013 at 1:42 PM, Jason Weiss jason_we...@rapid7.com wrote:
Thanks,
Neha
On May 21, 2013 5:42 PM, Ross Black ross.w.bl...@gmail.com wrote:
Hi,
I am using Kafka 0.7.1, and using SyncProducer and SimpleConsumer with a
single broker service process.
I am occasionally seeing messages (from a *single* partition) being
processed out of order to what I
Hi Neha/Chris,
Thanks for the reply. So if I set a fixed number of partitions and just add
brokers to the broker pool, does it rebalance the load to the new brokers
(along with the data)?
Tim
On Wed, May 22, 2013 at 1:15 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
Hi Jason,
Thanks for the notes.
I'm curious whether you went with using local drives (ephemeral storage) or
EBS, and if with EBS then what IOPS.
Thanks,
-- Ken
On May 22, 2013, at 1:42pm, Jason Weiss wrote:
Ken,
Great question! I should have indicated I was using EBS, 500GB with 2000
provisioned IOPS.
Jason
From: Ken Krugler [kkrugler_li...@transpac.com]
Sent: Wednesday, May 22, 2013 17:23
To: users@kafka.apache.org
Subject: Re: Apache Kafka in AWS
Hi
Awesome write-up, Jason! Very helpful, as we are also looking to build a
Kafka environment in AWS. I am curious, are you using Kafka 0.7.2 or 0.8
in your tests? Did you have just one EBS volume per broker instance or
RAID 10 across EBS volumes per broker?
Thanks again for the great info!
Jonathan,
Using 0.7.2, with just a single EBS volume per broker instance - negative on
the RAID 10.
I would speculate that if we had used RAID 10 and gone with AWS's maximum
provisioned IOPS (5000??) we probably could have squeezed out some more events per second.
I have no doubt, BTW, that if we would have
You can run the ConsumerOffsetChecker tool that ships with Kafka.
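A sketch of invoking it (the ZooKeeper address, group, and topic names are assumptions):

```shell
# ConsumerOffsetChecker ships with the Kafka 0.8 distribution; it prints each
# partition's current consumer offset, log end offset, and lag for the group.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect localhost:2181 \
  --group my-consumer-group \
  --topic my-topic
```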
Thanks,
Neha
On Wed, May 22, 2013 at 2:02 PM, arathi maddula arathimadd...@gmail.com wrote:
Hi,
Could you tell me how to find the offset in a high level Java consumer ?
Thanks
Arathi
Hey Jason,
Quick question: what OpenJDK version did you have issues with? I'm running Kafka
on it now and it has been OK. Was it a crash only under load?
Thanks
SC
On Wed, May 22, 2013 at 1:42 PM, Jason Weiss jason_we...@rapid7.com wrote:
Not automatically as of today. You have to run the reassign-partitions tool
and explicitly move selected partitions to the new brokers. If you use this
tool, you can move partitions to the new broker without any downtime.
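A sketch of what running that tool might look like on Kafka 0.8 (the JSON file name, its contents, and the ZooKeeper address are assumptions, and the flag names varied across 0.8.x releases):

```shell
# partitions-to-move.json (an assumed example) names each partition and the
# broker ids that should hold its replicas after the move, e.g.:
#   {"version": 1,
#    "partitions": [{"topic": "my-topic", "partition": 0, "replicas": [3, 4]}]}
bin/kafka-reassign-partitions.sh \
  --zookeeper localhost:2181 \
  --reassignment-json-file partitions-to-move.json \
  --execute
```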
Thanks,
Neha
On Wed, May 22, 2013 at 2:20 PM, Timothy Chen
Thanks for the explanation.
Ross
On 23 May 2013 07:19, Neha Narkhede neha.narkh...@gmail.com wrote:
[ec2-user@ip-10-194-5-76 ~]$ java -version
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.11) (amazon-61.1.11.11.53.amzn1-x86_64)
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)
Yes, as soon as I put it under heavy load, it would buckle almost consistently.
I knew it
Thanks. FWIW this one has been fine so far
java version "1.7.0_13"
OpenJDK Runtime Environment (IcedTea7 2.3.6) (Ubuntu build 1.7.0_13-b20)
OpenJDK 64-Bit Server VM (build 23.7-b01, mixed mode)
though not running at the load in your tests.
On Wed, May 22, 2013 at 4:51 PM, Jason Weiss
Did you check that you were using all cores?
top was reporting over 750%
Jason
From: Ken Krugler [kkrugler_li...@transpac.com]
Sent: Wednesday, May 22, 2013 20:59
To: users@kafka.apache.org
Subject: Re: Apache Kafka in AWS
Hi Jason,
On May 22, 2013, at
Jason,
Thanks for sharing. This is very interesting. Normally, Kafka brokers don't
use much CPU. Is most of that 750% CPU actually used by the Kafka brokers?
Jun
On Wed, May 22, 2013 at 6:11 PM, Jason Weiss jason_we...@rapid7.com wrote:
Normally, I see 2-4 log segments deleted every hour in my brokers. I see
log lines like this:
2013-05-23 04:40:06,857 INFO [kafka-logcleaner-0] log.LogManager -
Deleting log segment 035434043157.kafka from redacted topic
However, it seems like if I restart the broker, a massive amount
It isn't uncommon that if a process has an open file handle on a file that has
been deleted, the space is not freed until the handle is closed. So restarting
the process that holds a handle on the file would also cause the space to be
freed.
You can troubleshoot that with lsof.
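For example (the log directory path is an assumption):

```shell
# lsof +L1 lists open files with a link count below 1, i.e. files that were
# deleted but are still held open and therefore still consuming disk space.
lsof +L1
# A mismatch between df and du on the log directory is another symptom:
df -h /var/kafka-logs
du -sh /var/kafka-logs
```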
So, does this indicate that Kafka (or the JVM itself) is not aggressively
closing file handles of deleted files? Is there a fix for this, or is
there likely nothing to be done? What happens if the disk fills up
with space held by phantom deleted files?
Jason
On Wed, May 22, 2013 at 9:50
Well, it sounds like files were deleted while Kafka still had them open. Or
something else opened them while Kafka deleted them. I haven't noticed this
on our systems but we haven't looked for it either.
Is anything outside of Kafka deleting or reading those files?
On May 23, 2013 1:17 AM, Jason
No, nothing outside of Kafka would look at those files.
I'm wondering if it's an OS-level thing too.
On Wed, May 22, 2013 at 10:25 PM, Jonathan Creasy jcre...@box.com wrote: