Hi Steven,
That doesn't work. In your proposal, the mirrormaker in one DC would copy
messages from topic A to topic A in the other DC. However, the other DC
runs a mirrormaker doing the same thing, creating a loop: messages will be
duplicated, triplicated, etc. in a never-ending loop.
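To illustrate (a sketch; the config file names are placeholders, the flags
are the standard kafka.tools.MirrorMaker options): each DC consumes topic A
from the other DC and produces to its local topic A, so every message keeps
crossing back and forth:

  # in DC1, mirroring from DC2
  bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config dc2-consumer.properties \
    --producer.config dc1-producer.properties --whitelist="A"

  # in DC2, mirroring from DC1
  bin/kafka-run-class.sh kafka.tools.MirrorMaker \
    --consumer.config dc1-consumer.properties \
    --producer.config dc2-producer.properties --whitelist="A"

A message produced to A in DC1 is copied to A in DC2, picked up there by the
other mirrormaker, copied back to A in DC1, and so on.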
Hi,
First of all, thank you for replying.
I am using 0.8.1.1.
I am hoping the new producer will solve this kind of problem.
Thanks,
Mungeol
On Wed, Oct 22, 2014 at 9:51 AM, Jun Rao jun...@gmail.com wrote:
Yes, what you did is correct. See details in
Does 0.8.2 include the new producer mentioned in the Kafka documentation?
If not, which version will include it?
Thanks,
Mungeol
On Wed, Oct 22, 2014 at 11:21 AM, Neha Narkhede neha.narkh...@gmail.com
wrote:
Shlomi,
As Jun mentioned, we are voting on a 0.8.2 beta release now. Are you
Hello Apache Kafka users,
Using Kafka 0.8.1.1 (single instance with a single ZK 3.4.6 running locally),
with auto topic creation disabled, in a test I have a topic created with
AdminUtils.createTopic (AdminUtils.topicExists returns true), but on send
requests KafkaProducer keeps throwing
Hello Stevo,
Your understanding of the configs is correct, and it is indeed weird that
the producer gets the exception after the topic is created. Could you use
the kafka-topics command to check whether the leaders exist?
kafka-topics.sh --zookeeper XXX --topic [topic-name] describe
Guozhang
On
Yes, 0.8.2 includes the new producer. 0.8.2 will have a lot of new features
which will take time to stabilize. If people want 0.8.1.2 for some critical
bug fixes, we can discuss the feasibility of doing the release.
On Wed, Oct 22, 2014 at 1:39 AM, Shlomi Hazan shl...@viber.com wrote:
Neil,
We fixed a bug related to the BadVersion problem in 0.8.1.1. Would you mind
repeating your test on 0.8.1.1 and if you can still reproduce this issue,
then send around the thread dump and attach the logs to KAFKA-1407?
Thanks,
Neha
On Tue, Oct 21, 2014 at 11:56 AM, Neil Harkins
The number of brokers doesn't really matter here, as far as I can tell,
because the question is about what a single broker can handle. The number
of partitions in the cluster is governed by the ability of the controller
to manage the list of partitions for the cluster, and the ability of each
Hi,
First of all, I am new to Kafka and more of a user than a developer. I will
try to clarify things as much as possible though.
We are using Kafka as a messaging system for our apps, and it works nicely
in our SaaS cluster.
I am trying to make the apps also work on a single node for demo purposes.
I
In fact there are many more than 4000 open files. Many of our brokers run
with 28,000+ open files (regular file handles, not network connections). In
our case, we're beefing up the disk performance as much as we can by
running in a RAID-10 configuration with 14 disks.
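(For anyone checking their own brokers: something like
lsof -p <broker pid> | wc -l, or counting the entries under /proc/<pid>/fd,
shows the current count, and ulimit -n for the broker user should be set
well above it.)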
-Todd
On Tue, Oct 21, 2014
Can you provide steps to reproduce this? I'm not sure I understand how you
run into this. It does look like a bug.
On Wed, Oct 22, 2014 at 9:55 AM, Ciprian Hacman ciprian.hac...@sematext.com
wrote:
Hi,
First of all, I am new to Kafka and more of a user than a developer. I will
try to
This can be reproduced with trunk:
1) start zookeeper
2) start the kafka broker
3) create a topic, or start a producer writing to a topic
4) stop zookeeper
5) stop the kafka broker (the broker shutdown goes into: WARN Session
0x14938d9dc010001 for server null, unexpected error, closing socket
connection and attempting
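In terms of concrete commands (a sketch using the standard scripts shipped
with the distribution; paths and the topic name are placeholders):

  bin/zookeeper-server-start.sh config/zookeeper.properties
  bin/kafka-server-start.sh config/server.properties
  bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 1 --partitions 1 --topic test
  # now stop zookeeper, then try to shut down the broker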
For the purpose of a unit/integration test for Spring XD, I am creating
several consumers (in the same group) in quick succession. With just 3
consumers, this triggers 2 rebalances that (on some machines) cannot
complete within the default 4*2000 ms (rebalance.max.retries *
rebalance.backoff.ms), so the test fails.
I have created a simple use case that
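(A hedged sketch of the consumer properties one might raise to work around
this; the names are from the 0.8.1 consumer configs, the values are only
illustrative:

  rebalance.max.retries=10
  rebalance.backoff.ms=3000
)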
Erik, I don't know whether mirrormaker can write to a different topic, but
that might be a useful feature request for mirrormaker.
On Wed, Oct 22, 2014 at 12:21 AM, Erik van Oosten
e.vanoos...@grons.nl.invalid wrote:
Hi Steven,
That doesn't work. In your proposal, the mirrormaker in one DC would
RAID-10?
Interesting choice for a system where the data is already replicated
between nodes. Is it to avoid the cost of large replication over the
network? How large are these disks?
On Wed, Oct 22, 2014 at 10:00 AM, Todd Palino tpal...@gmail.com wrote:
In fact there are many more than 4000 open
Thank you for the *very* quick replies Neha, Harsha. I opened a Jira for
this issue:
https://issues.apache.org/jira/browse/KAFKA-1724
Ciprian
On Wed, Oct 22, 2014 at 8:27 PM, Harsha
On 10/21/14 21:13, István wrote:
Hi Pete,
Yes, you are right, both nodes have all of the data. I was just wondering
what the scenario is for losing one node; in production it might not fly.
If this is for testing only, you are good.
Answering your question, I think retention policy
There are various costs when a broker fails, including leader election for
each partition, possible issues for in-flight messages, client rebalancing,
etc.
So even though replication provides partition redundancy, RAID 10 on each
broker is usually a good
Makes sense. Thanks :)
On Wed, Oct 22, 2014 at 11:10 AM, Jonathan Weeks
jonathanbwe...@gmail.com wrote:
There are various costs when a broker fails, including leader election for
each partition, possible issues for in-flight messages, and client
rebalancing
I can't find this property in the server.properties file. Is that the right
place to set this parameter?
On Tue, Oct 21, 2014 at 6:27 PM, Jun Rao jun...@gmail.com wrote:
Could you also set replica.fetch.wait.max.ms in the broker to something
much smaller?
Thanks,
Jun
On Tue, Oct 21, 2014 at 2:15
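(For reference, my understanding is that config/server.properties accepts
any valid broker property even if the shipped sample file does not list it,
so a line like the following should work; the value is only illustrative:

  replica.fetch.wait.max.ms=200
)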
kafka-topics.sh execution, from latest trunk:
~/git/oss/kafka [trunk|✔]
21:00 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
059915e6-56ef-4b8e-8e95-9f676313a01c --describe
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
Output on trunk is clean too, after clean build:
~/git/oss/kafka [trunk|✔]
22:00 $ bin/kafka-topics.sh --zookeeper 127.0.0.1:50194 --topic
059915e6-56ef-4b8e-8e95-9f676313a01c --describe
Error while executing topic command next on empty iterator
java.util.NoSuchElementException: next on empty
In my experience, RAID 10 doesn't really provide value in the presence of
replication. When a disk fails, the RAID resync process is so I/O intensive
that it renders the broker useless until it completes. When this happens,
you actually have to take the broker out of rotation and move the leaders
Neha,
Do you mean RAID 10 or RAID 5 or 6? With RAID 5 or 6, recovery is definitely
very painful, but less so with RAID 10.
We have been using the guidance here:
http://www.youtube.com/watch?v=19DvtEC0EbQ#t=190 (LinkedIn Site Reliability
Engineers state they run RAID 10 on all Kafka clusters
Yeah, Jonathan, I'm the LinkedIn SRE who said that :) And Neha, up until
recently, sat 8 feet from my desk. The data from the wiki page is off a
little bit as well (we're running 14 disks now, and 64 GB systems)
So to hit the first questions, RAID 10 gives higher read performance, and
also allows
Still have to understand what is going on, but when I set
kafka.utils.ZKStringSerializer as the ZkSerializer for the ZkClient used in
the AdminUtils calls, KafkaProducer could see the created topic...
The default ZkSerializer is
org.I0Itec.zkclient.serialize.SerializableSerializer.
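For reference, a minimal sketch of the working setup (Scala; the connect
string, timeouts and topic name are from my test, adjust as needed):

  import kafka.admin.AdminUtils
  import kafka.utils.ZKStringSerializer
  import org.I0Itec.zkclient.ZkClient

  // Without ZKStringSerializer, ZkClient falls back to Java serialization,
  // so the topic metadata written to ZK is not readable by the brokers.
  val zkClient = new ZkClient("localhost:2181", 10000, 10000, ZKStringSerializer)
  AdminUtils.createTopic(zkClient, "test-topic", 1, 1)
  assert(AdminUtils.topicExists(zkClient, "test-topic"))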
Kind regards,
Stevo Slavic.
On
I suppose it is also going to depend on:
a) How much spare I/O bandwidth the brokers have to support a rebuild while
serving ongoing requests. Our brokers have spare I/O capacity.
b) How many brokers are in the cluster and what the replication factor is,
e.g. if you have a larger
Hello Eric,
1) The rebalance failures are mainly due to ZK session timeouts; you could
try increasing your ZK session timeout value and see if that helps.
2) The new consumer in the 0.9 rewrite will resolve this problem by getting
rid of the ZK dependency and using a centralized coordinator for rebalance
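(A hedged example of the first suggestion; the property names are from the
0.8.1 consumer configs, the values are only illustrative:

  zookeeper.session.timeout.ms=30000
  zookeeper.connection.timeout.ms=30000
)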
Todd,
Thank you for the information.
With 28,000+ files and 14 disks, that means there are on average about
4,000 open files per pair of disks (which is treated as one single disk),
am I right? How do you manage to make all the write operations to these
4,000 open files sequential on the disk?