If auto.offset.reset is set to smallest, it does not mean the consumer
will always consume from the smallest offset. It means that if no previous
offset commit is found for this consumer group, then it will consume from
the smallest offset. So for mirror maker, you probably want to always use
the same consumer
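Applied to MirrorMaker, the behavior described above might look like the following consumer config fragment (a sketch using 0.8.x old-consumer property names; the group name is illustrative):

```properties
# consumer.properties handed to MirrorMaker via --consumer.config
# "smallest" only takes effect when no committed offset exists for the group
auto.offset.reset=smallest
# reuse the same group.id across runs so previously committed offsets are found
group.id=mirror-maker-group
```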
Hello,
I was wondering if there is any documented way to recover from a zookeeper
error while retaining Kafka data?
I am developing right now and do not have a redundant ZooKeeper node. I seem to
regularly get CRC errors that prevent ZooKeeper from starting. The
troubleshooting section of
Hi Tao,
Thanks a lot for finding the bug. We are actually rewriting the mirror
maker in KAFKA-1997 with a much simpler solution that uses the newly added
flush() call in the new Java producer.
Mirror maker in the current trunk is also missing one necessary
synchronization - the
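The flush()-based approach mentioned above can be sketched conceptually as follows; this is an illustration of the idea, not the actual MirrorMaker code, and all names here (mirror_batch, the record dicts, the callbacks) are made up for the example:

```python
# Conceptual sketch of a flush-then-commit mirroring loop
# (hypothetical names; not the real MirrorMaker implementation).
def mirror_batch(records, send, flush, commit):
    """Mirror a batch of consumed records, committing offsets only after
    every produced message has been acknowledged."""
    for r in records:
        send(r["value"])
    flush()  # block until all in-flight sends are acked
    # Safe to commit now: nothing in this batch can be lost on failure.
    commit(max(r["offset"] for r in records) + 1)
```

Committing only after flush() returns is what closes the window where offsets could be committed for messages the producer had not yet delivered.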
And I also observed that all the data is moving to one partition in the
destination cluster, even though I have multiple partitions for that topic
in both the source and destination clusters.
SunilKalva
On Sat, Mar 7, 2015 at 9:54 PM, sunil kalva sambarc...@gmail.com wrote:
I ran kafka mirroring tool after producing
This is one of the major issues that we have noted with using JBOD disk
layouts: there is no tool, analogous to partition reassignment, to move
partitions between disks.
Another is that the partition balance algorithm would need to be improved,
allowing for better selection of a mount point than
Xiao,
FileChannel.force is fsync on unix.
To force fsync on every message:
log.flush.interval.messages=1
You are looking at the time-based fsync, which, naturally, as you say, is
time-based.
-Jay
On Fri, Mar 6, 2015 at 11:35 PM, Xiao lixiao1...@gmail.com wrote:
Hi, Jay,
Thank you for your
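For reference, the two broker-side flush settings being contrasted above (both go in server.properties; the time value is illustrative):

```properties
# fsync after every single message (maximally durable, slow)
log.flush.interval.messages=1
# time-based alternative: fsync at most this often, in milliseconds
log.flush.interval.ms=1000
```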
Hi,
(sorry if duplicate, my first try was before I was subscribed to the list).
Using kafka 0.8.2.0. (fresh download). Started zookeeper with:
$ bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
The process is running:
$ ps ax | grep -i 'zookeeper' | grep -v grep | awk
I ran the Kafka mirroring tool after producing data in the source cluster,
and that data is not copied to the destination cluster. If I produce data
after running the tool, it is copied to the destination cluster. Am I
missing something?
--
SunilKalva
Actually I was going to report another bug that was caused exactly by the
UncheckedOffsets.removeOffset
issue (an offset is removed before it is added).
As the current project I am working on relies heavily on the
functionality MM offers, it would be good if you could put the fix into trunk
or give me some
Hello,
Note I am using the new 0.8.2 version of Kafka and so I'm using the new
KafkaProducer class.
I have a special type of message data that I need to push to every
partition in a topic. Can that be done with a custom partitioner that
implements Partitioner, when Partitioner expects you to return
Hey Alex,
Conceptually you aren't sending a single message to all partitions; for
the message to be available in all partitions you'd have to send a
separate copy for each partition.
You can fan out in the client like this though:
https://gist.github.com/anonymous/92bb8b788742e95ee2e8
Best,
Mike
On Sat, Mar 7, 2015
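The fan-out suggestion above reduces to sending one copy of the message per partition. A minimal sketch of that logic (the fan_out helper is made up for this example, and the commented producer calls assume a kafka-python-style API, not the Java client from the question):

```python
def fan_out(message, num_partitions):
    """Expand one message into one (partition, message) pair per partition."""
    return [(p, message) for p in range(num_partitions)]

# Against a real cluster this might look like (untested sketch):
# producer = KafkaProducer(bootstrap_servers="localhost:9092")
# n = len(producer.partitions_for("my-topic"))
# for partition, value in fan_out(b"control-message", n):
#     producer.send("my-topic", value=value, partition=partition)
```

Doing the fan-out in the client, rather than in a custom Partitioner, keeps the one-message-to-one-partition contract of the producer intact.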
I started with a clean cluster and started to push data. It still does the
rebalance at random intervals even though auto.leader.rebalance.enable is set
to false.
Thanks
Zakee
On Mar 6, 2015, at 3:51 PM, Jiangjie Qin j...@linkedin.com.INVALID wrote:
Yes, the rebalance should not happen in
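The broker property under discussion; disabling it in server.properties looks like this (0.8.x property name):

```properties
# disable automatic preferred-replica leader rebalancing
auto.leader.rebalance.enable=false
```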