Hi,
Are you enabling log compaction on a topic with compressed messages?
If so, that might be the reason for the exception. Log compaction in
0.8.2.2 does not support compressed messages. This was fixed in 0.9.0.0
(KAFKA-1641, KAFKA-1374).
Check the mail thread below for possible corrective actions.
Hello,
Looks like we are hitting a leader election bug. I've stopped one broker
(104224873); on the other brokers I see the following:
WARN kafka.controller.ControllerChannelManager - [Channel manager on
controller 104224863]: Not sending request Name: StopReplicaRequest;
Version: 0; CorrelationId: 843100
No, I don't see that in controller.log. It looks like it's related to this
bug: https://issues.apache.org/jira/browse/KAFKA-2729; after I did a rolling
restart (of all brokers) it rebalanced. It wasn't rebalancing, though, when a
couple of brokers were missing from the ISR (but there were enough brokers to
rebalance).
I believe the 10% is measured at the broker level, not the topic
level. Do you see lines like:
[2016-04-27 22:52:47,854] TRACE [Controller 3]: leader imbalance ratio for
broker 5 is 0.978555 (kafka.controller.KafkaController)
[2016-04-27 22:52:47,855] TRACE [Controller 3]: leader imbalance ratio for
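Per my reading of the controller code, the per-broker ratio in those TRACE lines is the fraction of partitions for which the broker is the preferred (first assigned) replica but is not the current leader; rebalancing triggers when it exceeds the configured 10%. A minimal sketch of that computation (the pair encoding and names below are my own, not Kafka's):

```java
import java.util.List;

public class ImbalanceRatio {
    // Each int[] is {preferredLeader, currentLeader} for one partition.
    // Returns the fraction of partitions preferring `brokerId` whose
    // current leader is some other broker.
    static double leaderImbalanceRatio(List<int[]> partitions, int brokerId) {
        int preferredHere = 0, notLeading = 0;
        for (int[] p : partitions) {
            if (p[0] == brokerId) {
                preferredHere++;
                if (p[1] != brokerId) notLeading++;
            }
        }
        return preferredHere == 0 ? 0.0 : (double) notLeading / preferredHere;
    }
}
```

With this reading, a broker that is preferred leader for many partitions but actually leads almost none of them (e.g. right after it rejoins the cluster) shows a ratio near 1, matching the 0.978555 figure above.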
Bump
On Tue, Apr 26, 2016 at 10:33 AM, Kane Kim wrote:
> Hello,
>
> We have auto.leader.rebalance.enable = true; the other options are at their defaults
> (10% imbalance ratio and 300 seconds).
>
> We have a check that reports leadership imbalance:
>
> critical: Leadership out of balance for topic mp-auth.
I'm afraid it does. And I am not an owner of the cloud. It seems like a
ZooKeeper error, maybe? Does it need special configuration in such an
environment? I'll figure it out eventually, but any help would be greatly
appreciated.
On Apr 28, 2016 12:26 AM, "Oliver Pačut" wrote:
> I'm afraid it does.
Folks,
Is there a writeup, or are there pointers, on how connection management,
retries, failure handling, etc. work between Kafka producers/consumers
and a Kafka cluster?
Thanks...
_dd_
Hello,
We enabled log compaction on a few topics, as we want to permanently preserve
the latest version of messages published to specific topics. After enabling
compaction, the log cleaner thread dies with the same error for each topic we
tried it on. It looks like Kafka has a starting offset
client/producer : 0.9.0.1
server/broker : 0.9.0.0
> On Apr 26, 2016, at 10:05 PM, Sharma Vinay wrote:
>
> What versions of kafka and client API are you using now?
> On Apr 26, 2016 3:35 PM, "Fumo, Vincent"
> wrote:
>
>> I spoke too soon. It's back to doing the same thing, which is really odd.
>>
Hello,
Is there a recommendation for handling producer-side partitioning based on
a key with skew?
We want to partition on something like clientId. The problem is that this
key has a non-uniform distribution.
It's equally likely to see a key with 3k occurrences/day vs 100k/day vs
65 million/day.
Cardinality of
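One common way to handle this kind of skew (a sketch of the general key-salting technique, not something from this thread; the class and fanout parameter are invented for illustration) is to salt only the known-hot keys so their traffic spreads across several partitions, at the cost of consumers having to re-aggregate the salted sub-keys:

```java
import java.util.concurrent.ThreadLocalRandom;

public class SkewAwarePartitioner {
    // Hypothetical helper, not part of Kafka: spread a known-hot clientId
    // across `hotKeyFanout` salted sub-keys so one huge key does not pin
    // a single partition; normal keys stay deterministic.
    static int partitionFor(String clientId, boolean isHotKey,
                            int numPartitions, int hotKeyFanout) {
        String effectiveKey = clientId;
        if (isHotKey) {
            // downstream consumers must re-aggregate across the sub-keys
            effectiveKey = clientId + "-" + ThreadLocalRandom.current().nextInt(hotKeyFanout);
        }
        int hash = effectiveKey.hashCode() & 0x7fffffff; // non-negative hash
        return hash % numPartitions;
    }
}
```

The trade-off: ordering is only preserved per salted sub-key, so this fits aggregation-style consumers better than ones that need a strict per-client ordering.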
Hello,
I am trying to execute the WordCountDemo app. I produced text to the input
topic, but when I execute WordCountDemo, I get an error.
Please help me resolve the following:
ERROR Streams application error during processing in thread [StreamThread-1]:
(org.apache.kafka.streams.processor.internals
We monitor the log flush latency p95 on all our Kafka nodes, and
occasionally we see it creep up from its regular figure of under 15 ms to
above 150 ms.
Restarting the node usually doesn't help. It seems to fix itself over time,
but we are not quite sure about the underlying reason. It's bytes-in/se
Hello,
I'm wondering if fault-tolerant state management with Kafka Streams works
seamlessly if partitions are scaled up. My understanding is that this is
indeed a problem that stateful stream-processing frameworks need to solve,
and that:
with Samza, this is not a solved problem (though I also u
There will also be inter-broker replication traffic, and controller
communications (the controller runs on an elected broker in the
cluster). If you're using security features in Kafka 0.9, you may see
additional auth traffic between brokers.
That's all I can think of off the top of my head.
O
Hi Paolo,
Dig a bit through the mailing list archives; IIRC there's a "trick"
that lets you do long processing. Basically you pull in a big batch,
unsubscribe from all topics, do regular polls (which will just send the
heartbeat because you don't have active subscriptions), and then, when
done, re-subscribe.
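A sketch of that control flow, using a minimal stand-in interface rather than the real KafkaConsumer so the ordering is easy to see (with the real client these would be subscribe()/unsubscribe()/poll(); note that later client versions also offer pause()/resume() for this purpose):

```java
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-in for the consumer API; not the real Kafka interface.
interface BatchConsumer {
    List<String> poll(long timeoutMs);
    void subscribe(List<String> topics);
    void unsubscribe();
}

public class LongProcessingLoop {
    // The "trick" described above: pull a big batch, unsubscribe, keep
    // polling so heartbeats are sent during slow per-record processing,
    // then re-subscribe once the batch is done.
    static List<String> processBatch(BatchConsumer consumer, List<String> topics,
                                     Consumer<String> handler) {
        List<String> batch = consumer.poll(1000);
        consumer.unsubscribe();        // stop receiving new records
        for (String record : batch) {
            handler.accept(record);    // long-running work per record
            consumer.poll(100);        // heartbeat-only poll while unsubscribed
        }
        consumer.subscribe(topics);    // resume normal consumption
        return batch;
    }
}
```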
Hi,
we are thinking about setting up a Kafka cluster. I understand that the
Kafka nodes need to communicate with the ZooKeeper cluster on port 2181. Is
there communication between the broker nodes?
Thanks,
Christoph
Hi Phil,
This sounds great. Thanks for trying these settings. This probably means
something is wrong in my code or setup. I will check what is causing this
issue in my case.
I have a 3-broker, 1-ZooKeeper cluster, and my topic has 3 partitions with
replication factor 3.
Regards,
Vinay Sharma
Hello!
I'm using the Kafka 0.9.0 high-level consumer API. I have the following
questions:
1. Where are the consumer group and its corresponding clients kept?
2. Is the group kept in ZooKeeper?
3. If the group is kept in ZK, does the data structure remain the same as
described here:
https://cwiki.apache.org
Hi Vinay,
I tried this out as you suggested by setting metadata.max.age.ms = 4
(session.timeout.ms=3).
I then ran my consumer with a batch of 25 messages, where each message takes 4
seconds to process, and I call commitSync(offsets) after each message to ensure
the heartbeat keeps the cons
Hi!
You can set up your Kafka process to dump the heap in case of an OOM by
providing the flags (see
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/clopts001.html
):
-XX:HeapDumpPath=path
This option can be used to specify a location for the heap dump; see Th
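For completeness, a sketch of how those flags are usually passed to the broker JVM. KAFKA_OPTS is picked up by kafka-run-class.sh; the dump path here is just an example, and the trigger flag -XX:+HeapDumpOnOutOfMemoryError is what makes the JVM write the dump when the OOM occurs:

```shell
# Dump the heap automatically on OutOfMemoryError; point the path at a
# disk with enough free space to hold a full heap dump.
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/kafka/heapdump.hprof"
bin/kafka-server-start.sh config/server.properties
```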
Hi, Liquan!
Thank you for your response. It's much clearer now.
Regards,
Florin
On Sun, Apr 24, 2016 at 9:54 PM, Liquan Pei wrote:
> Hi Spico,
>
> The Kafka consumer is single-threaded, which means all operations, such as
> sending heartbeats, fetching records, and maintaining group membership, are done i
Have you tried getting the memory usage output using a tool like jmap and
seeing what's consuming the memory? Also, what are your heap sizes for
the process?
-Jaikiran
On Tuesday 19 April 2016 02:31 AM, McKoy, Nick wrote:
To follow up with my last email, I have been looking into
socket.receive.
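In case it helps, the usual jmap invocations for this kind of investigation (the pid and file path are placeholders):

```shell
# Histogram of live objects in the broker JVM (note: :live triggers a GC);
# find the pid with `jps -l` or `jcmd`.
jmap -histo:live <broker-pid> | head -n 20

# On-demand heap dump for offline analysis (e.g. with Eclipse MAT)
jmap -dump:live,format=b,file=/tmp/kafka-heap.hprof <broker-pid>
```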
We had this issue in 0.8.x, and at that time we did not investigate
it. Recently we upgraded to 0.9.0.1 and had a similar issue, which we
investigated and narrowed down to what's explained here:
http://mail-archives.apache.org/mod_mbox/kafka-users/201604.mbox/%3C571F23ED.7050405%40gmail.com%3E.
http://docs.confluent.io/2.0.0/connect/connect-hdfs/docs/index.html
On Wed, Apr 27, 2016 at 1:59 PM, Mudit Kumar wrote:
> Hi,
>
> I have a running Kafka setup with 3 brokers. Now I want all Kafka topics
> to be written to HDFS. My Hadoop cluster is already up and running.
> Any blog/doc for configurin
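The Confluent HDFS sink connector covered in the docs linked above is driven by a properties file; a minimal sketch (the topic name, NameNode address, and flush size below are illustrative assumptions, not values from this thread):

```properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=3
topics=my-topic
hdfs.url=hdfs://namenode:8020
flush.size=1000
```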
http://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect
On Wed, Apr 27, 2016 at 1:59 PM, Mudit Kumar wrote:
Hi,
I have a running Kafka setup with 3 brokers. Now I want all Kafka topics to
be written to HDFS. My Hadoop cluster is already up and running.
Any blog/doc for configuring the same?
Thanks,
Mudit