Cheers!
I have a setup with 3 brokers with 27 log.dirs each. Setting up a
topic with 27 partitions and a replication factor of 2, I would have expected every
directory in log.dirs to hold at most one partition. This is not the case (the
setup has been the same from the beginning; e.g. no log.dirs were added).
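The arithmetic behind that expectation can be sketched as follows (a sketch assuming a perfectly balanced assignment; Kafka places each new partition in the local log.dir with the fewest partitions, so a perfectly even spread is not guaranteed):

```python
# Back-of-envelope check: how many partition replicas land on each broker,
# and whether 27 log.dirs can hold them at one per directory.
partitions = 27
replication_factor = 2
brokers = 3
log_dirs_per_broker = 27

replicas_total = partitions * replication_factor      # 54 replicas cluster-wide
replicas_per_broker = replicas_total // brokers       # 18 per broker if balanced
fits_one_per_dir = replicas_per_broker <= log_dirs_per_broker

print(replicas_total, replicas_per_broker, fits_one_per_dir)  # prints: 54 18 True
```

So with a balanced assignment 18 replicas would spread over 27 directories at one per directory, with 9 directories left empty.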
On 11 Jul 2016, at 13:00, Jörg Wagner <joerg.wagn...@1und1.de> wrote:
Hello!
We recently switched to Kafka 0.9.0.1 and currently I don't seem to be able to
figure out how to read the consumer offsets via cli. We are using the 0.9.0.1
new consumer and are storing the offsets in kafka.
Status:
kafka-consumer-offset-checker.sh is old and deprecated, and points to
using the command
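For what it's worth, with the 0.9 new consumer and offsets stored in Kafka, the replacement tool is kafka-consumer-groups.sh; a sketch (group name and broker address are placeholders):

```shell
# Describe a consumer group: shows topic, partition, current offset and lag.
# --new-consumer reads offsets from the __consumer_offsets topic instead of ZooKeeper.
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group
```

The same tool with --list enumerates the known groups.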
./kafka-topics.sh --delete --topic topic_billing --zookeeper localhost:2181
It says the topic is marked for deletion, but it does not actually delete the topic.
Thanks,
Snehalata
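A common cause of this (a configuration note, not a diagnosis of this particular cluster): deletion is only carried out if the brokers have it enabled, which was off by default in this era; otherwise the topic stays "marked for deletion" indefinitely.

```properties
# server.properties on every broker (requires a broker restart)
delete.topic.enable=true
```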
--
Mit freundlichem Gruß
Jörg Wagner
Systemadministrator
Search & Account Security
1&1 Mail & Media
I dug further into this and log.cleaner.enable is false by default in
0.8.2.x which pretty much explains all this. Fixing it now.
Thanks!
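For reference, the corresponding broker setting (in 0.8.2 the default is false, so the __consumer_offsets topic is never compacted until this is turned on):

```properties
# server.properties (restart brokers after changing)
log.cleaner.enable=true
```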
On 09.05.2016 08:17, Jörg Wagner wrote:
Thank you James.
The log cleaner log is empty and we didn't notice any issue there. The
offsets topic, however, is ~174 GB and just grows forever, which means
it will take a long time to read in the topic.
You can look in the log-cleaner.log debug log file to see if there are any error
messages there.
-James
On May 6, 2016, at 6:28 AM, Jörg Wagner <joerg.wagn...@1und1.de> wrote:
Forwarded Message
Subject: unknown (kafka) offsets after restart
Date: Fri, 6 May 2016 14:12:24 +0200
From: Jörg Wagner <joerg.wagn...@1und1.de>
Reply-To: users@kafka.apache.org
To: users@kafka.apache.org
We're using Kafka 0.8.2 and are puzzled by the offset behaviour when
they are stored in kafka topics.
Upon restart of the Kafka cluster (e.g. due to reconfiguration) it can
happen that the offsets are unknown, which stops consumers from
consuming since they no longer know their offset.
Hello!
Over half a year ago I noted that the MirrorMaker just stops after
hitting a corrupt message, since no error handling is built in.
Since some time has passed now I wanted to ask you: what do you use for
mirroring now? Which issues have you had?
Thanks
Jörg
(e.g. via SimpleConsumerShell). Your only other recourse is to iterate past the problem offset.
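A sketch of that workaround (topic, partition and offset values are placeholders; the idea is to start reading just past the corrupt offset):

```shell
# Read a partition starting one message past the bad offset.
bin/kafka-run-class.sh kafka.tools.SimpleConsumerShell \
  --broker-list localhost:9092 \
  --topic my-topic --partition 0 \
  --offset 12346 --no-wait-at-logend
```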
On Thu, Oct 1, 2015 at 1:22 AM, Jörg Wagner <joerg.wagn...@1und1.de> wrote:
Hey everyone,
I've been having some issues with corrupted messages and mirrormaker as
I wrote previously. Since there was no feedback, I want to ask a new
question:
Did you ever have corrupted messages in kafka? Did things break? How did
you recover or work around that?
Thanks
Jörg
Hey everyone!
One of my Mirrormakers is exiting with the following error:
[2015-09-18 11:27:35,591] FATAL [mirrormaker-consumer-0] Stream
unexpectedly exited. (kafka.tools.MirrorMaker$ConsumerThread)
kafka.message.InvalidMessageException: Message is corrupt (stored crc =
3256823012, computed
On Fri, Sep 4, 2015 at 8:11 AM, tao xiao <xiaotao...@gmail.com> wrote:
Here is a good doc to describe how to choose the right number of partitions
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
On Fri, Sep 4, 2015 at 10:08 PM, Jörg Wagner <j
Sorry, the messages are keyed.
On 07.09.2015 10:08, Jörg Wagner wrote:
Thank you very much for both replies.
@Tao
Thanks, I am aware of and have read that article. I am asking because
my experience is completely different :/. Every time we go beyond 400
partitions the cluster really starts
Hello!
Regarding the recommended amount of partitions I am a bit confused.
Basically I got the impression that it's better to have lots of
partitions (see information from linkedin etc). On the other hand, a lot
of performance benchmarks floating around show only a few partitions are
being
Hello!
I was looking into compression for some WAN mirroring and did some
tcpdumping.
Our producer is using snappy and I think I can see that in the network
traffic. While the outer part of each message is readable, the content
looks compressed to me with mostly nonprintable characters.
On 25.08.2015 15:18, Jörg Wagner wrote:
So okay, this is a little embarrassing, but the core of the issue was
that the max open files limit was not set correctly for Kafka. It was not an
oversight; a few things together caused the system configuration not
to be changed correctly, resulting
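For anyone hitting the same thing, a sketch of raising the open-file limit for the user running the broker (user name and limit are illustrative; the right mechanism depends on how the broker is started, e.g. a systemd unit would use LimitNOFILE instead):

```properties
# /etc/security/limits.conf (takes effect on the kafka user's next login)
kafka  soft  nofile  100000
kafka  hard  nofile  100000
```

Verify with `ulimit -n` as the broker user before restarting Kafka.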
Jörg
On 24.08.2015 10:31, Jörg Wagner wrote:
Thank you for your answers.
@Raja
No, it also seems to happen if we stop kafka completely clean.
@Gwen
I was testing the situation with num.replica.fetchers set higher. If
you say that was the right direction, I will try it again. What would
Rajasekar Elango wrote:
We are seeing same behavior in 5 broker cluster when losing one broker.
In our case, we are losing broker as well as kafka data dir.
Jörg Wagner,
Are you losing just broker or kafka data dir as well?
Gwen,
We have also observed that latency of messages arriving
Hey everyone,
here's my crosspost from irc.
Our setup:
3 Kafka 0.8.2 brokers with ZooKeeper, powerful hardware (20 cores, 27
log disks each). We use a handful of topics, but only one topic is
utilized heavily. It has a replication factor of 2 and 600 partitions.
Our issue:
If one kafka was