> I think it should answer your questions.
>
> On Wed, Nov 29, 2017 at 7:19 AM, Kane Kim wrote:
>
> > How does Kafka log compaction work?
> > Does it compact all of the log files periodically against new changes?
How does Kafka log compaction work?
Does it compact all of the log files periodically against new changes?
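For context on the question above: with cleanup.policy=compact, the log cleaner
periodically rewrites older, already-rolled segments so that only the most
recent value for each message key is kept; the active segment is never
compacted, and min.cleanable.dirty.ratio (0.5 by default) controls how much
uncompacted data has to accumulate before the cleaner runs. A minimal sketch of
creating a compacted topic with the Java AdminClient follows (the AdminClient
only appeared after 0.10, so on 0.10 the equivalent is kafka-topics.sh with
--config cleanup.policy=compact; the broker address, topic name, partition and
replica counts are placeholders):

    // Sketch only: create a compacted topic. Names and addresses are placeholders.
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateCompactedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder

            Map<String, String> configs = new HashMap<>();
            configs.put("cleanup.policy", "compact");
            // Fraction of "dirty" (not yet compacted) log that triggers cleaning; 0.5 is the default.
            configs.put("min.cleanable.dirty.ratio", "0.5");

            try (AdminClient admin = AdminClient.create(props)) {
                // Hypothetical topic: 6 partitions, replication factor 3.
                NewTopic topic = new NewTopic("user-profiles", 6, (short) 3).configs(configs);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }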
No compression, Kafka 0.10.

On Tue, Sep 13, 2016 at 3:09 PM, Kane Kim wrote:

> What is Kafka's overhead for storing messages on disk?
> I did some testing with replication factor=1, stored 100MB (messages under
> 2kb) and got 130MB disk usage. Would that be expected?
What is Kafka's overhead for storing messages on disk?
I did some testing with replication factor=1, stored 100MB (messages under
2kb) and got 130MB disk usage. Would that be expected?
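Regarding the 130MB figure above: a large share of it is plausibly per-record
framing in the pre-0.11 on-disk format (an 8-byte offset, a 4-byte size, a
4-byte CRC, magic and attributes bytes, an 8-byte timestamp and two 4-byte
length fields, roughly 34 bytes per record before the payload), plus the offset
index that is preallocated per segment (segment.index.bytes, 10MB by default).
A back-of-the-envelope sketch, assuming an average payload of about 120 bytes
since the post only gives an upper bound of 2kb:

    // Rough estimate only; the 120-byte average payload is an assumption.
    public class OverheadEstimate {
        public static void main(String[] args) {
            long payloadBytes = 100L * 1024 * 1024; // ~100MB of raw message payload
            int avgPayload = 120;                   // assumed average message size
            int framingPerRecord = 34;              // magic v1 framing, no key, no compression

            long records = payloadBytes / avgPayload;
            long logBytes = payloadBytes + records * framingPerRecord;

            System.out.printf("~%d records, ~%.1f MB in the log (%.0f%% framing overhead)%n",
                    records, logBytes / (1024.0 * 1024.0),
                    100.0 * (logBytes - payloadBytes) / payloadBytes);
            // Add the preallocated offset index (10MB per live segment by default)
            // and ~130MB of disk usage for 100MB of small messages is plausible.
        }
    }

With messages near the 2kb bound the same framing would be under 2%, so a ~30%
overhead mostly suggests the average message is far smaller than 2kb.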
So what could happen then? There is no broker registered in ZooKeeper, but
it's still a leader somehow.

On Mon, May 2, 2016 at 3:27 PM, Gwen Shapira wrote:

> That's a good version :)
>
> On Mon, May 2, 2016 at 11:04 AM, Kane Kim wrote:
> > We are running Zookeeper version: 3.4.6-1569965, built on 02/20/2014
> > 09:09 GMT, does it have any known issues?
Also, that broker is not registered in ZK, as we can verify with zk-shell, but
Kafka still thinks it's the leader for some partitions.
On Mon, May 2, 2016 at 11:04 AM, Kane Kim wrote:
> We are running Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09
> GMT, does it have any known issues?
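For reference, the zk-shell check mentioned above can also be scripted; a
minimal sketch with the plain ZooKeeper Java client follows (the connection
string, topic and partition number are placeholders). Comparing its output with
what the brokers themselves return from a metadata request shows whether their
cached view has diverged from what is registered in ZooKeeper:

    // Sketch only: list live broker registrations and the recorded leader
    // for one partition. ZooKeeper address, topic and partition are placeholders.
    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkLeaderCheck {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk1:2181", 10000, event -> { });

            // Live brokers register ephemeral znodes here; a broker missing from
            // this list should not be the leader of anything.
            System.out.println("registered brokers: "
                    + zk.getChildren("/brokers/ids", false));

            // Leader/ISR state for a single partition, as recorded in ZooKeeper.
            byte[] state = zk.getData("/brokers/topics/mp-auth/partitions/0/state",
                    false, null);
            System.out.println("partition state: "
                    + new String(state, StandardCharsets.UTF_8));

            zk.close();
        }
    }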
> ...s (and spontaneously de-registered brokers).
>
> On Fri, Apr 29, 2016 at 11:30 AM, Kane Kim wrote:
>
> > Any idea why it's happening? I'm sure a rolling restart would fix it. Is
> > it a bug?
> >
> > On Wed, Apr 27, 2016 at 5:42 PM, Kane Kim wrote:
Any idea why it's happening? I'm sure a rolling restart would fix it. Is it
a bug?

On Wed, Apr 27, 2016 at 5:42 PM, Kane Kim wrote:

> Hello,
>
> Looks like we are hitting a leader election bug. I've stopped one broker
> (104224873), and on the other brokers I see the following:
Hello,

Looks like we are hitting a leader election bug. I've stopped one broker
(104224873), and on the other brokers I see the following:

WARN kafka.controller.ControllerChannelManager - [Channel manager on
controller 104224863]: Not sending request Name: StopReplicaRequest;
Version: 0; CorrelationId: 843100
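When the controller keeps skipping StopReplica/LeaderAndIsr requests like this,
one thing worth checking is which broker currently holds the controller role. A
small sketch that reads the /controller znode follows (the ZooKeeper connection
string is a placeholder); deleting that znode forces a controller re-election,
which is a commonly used, lighter-weight alternative to a full rolling restart
when the controller's view of the cluster appears stale:

    // Sketch only: inspect (and optionally bounce) the active controller.
    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.ZooKeeper;

    public class ControllerCheck {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk1:2181", 10000, event -> { });

            // /controller holds JSON such as {"version":1,"brokerid":104224863,...}
            byte[] data = zk.getData("/controller", false, null);
            System.out.println("active controller: "
                    + new String(data, StandardCharsets.UTF_8));

            // Uncomment to force a controller re-election (use with care):
            // zk.delete("/controller", -1);

            zk.close();
        }
    }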
> ... in your controller log?
> rob
>
> > On Apr 27, 2016, at 3:46 PM, Kane Kim wrote:
> >
> > Bump
> >
> > On Tue, Apr 26, 2016 at 10:33 AM, Kane Kim wrote:
> >
> >> Hello,
> >>
> >> We have auto.leader.rebalance.enable = True, other options are by default
Bump
On Tue, Apr 26, 2016 at 10:33 AM, Kane Kim wrote:
> Hello,
>
> We have auto.leader.rebalance.enable = True, other options are by default
> (10% imbalance ratio and 300 seconds).
>
> We have a check that reports leadership imbalance:
>
> critical: Leadership out of balance for topic mp-auth.
Hello,

We have auto.leader.rebalance.enable = True, other options are by default
(10% imbalance ratio and 300 seconds).

We have a check that reports leadership imbalance:

critical: Leadership out of balance for topic mp-auth. Leader counts: {
"104224873"=>84, "104224876"=>22, "104224877"=>55, ... }
> ..., hence you may get messages
> 1,2,3,4 in one cluster and 1,3,4,2 in another. If you remember that your
> latest message processed in the first cluster is 2, when you fail over to
> the other cluster you may skip and miss messages 3 and 4.
>
> Guozhang
>
> On Fri, Mar 20,
> > ...pattern may be to deduplicate messages in Hadoop before
> > taking action on them.
> >
> > -Jon
> >
> > P.S. An option in the future might be
> > https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka
> >
> > On
Hello,

What's the best strategy for failover when using MirrorMaker to replicate
across datacenters? As I understand it, offsets in the two datacenters will
be different, so how should consumers be reconfigured to continue reading
from the point where they stopped, without data loss and/or duplication?
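One hedged approach, assuming the clients can be upgraded to 0.10.1 or later:
have the application track the timestamp of the last message it processed in
the primary datacenter, and on failover seek the consumer in the standby
datacenter to that timestamp with offsetsForTimes, accepting that a few
messages around the cut-over may be re-delivered (so downstream deduplication,
as suggested above, is still advisable). A minimal sketch; the cluster address,
topic, group id and the way the timestamp is persisted are placeholders:

    // Sketch only: resume consumption in the standby cluster from an
    // application-tracked timestamp rather than from offsets, which are not
    // comparable across mirrored clusters.
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.PartitionInfo;
    import org.apache.kafka.common.TopicPartition;

    public class FailoverByTimestamp {
        public static void main(String[] args) {
            long lastProcessedTs = 1466000000000L;  // placeholder: timestamp persisted by the app

            Properties props = new Properties();
            props.put("bootstrap.servers", "dc2-broker1:9092");   // standby cluster, placeholder
            props.put("key.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                    "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("group.id", "my-app");                      // placeholder

            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                List<TopicPartition> partitions = new ArrayList<>();
                for (PartitionInfo p : consumer.partitionsFor("my-topic")) {  // placeholder topic
                    partitions.add(new TopicPartition(p.topic(), p.partition()));
                }
                consumer.assign(partitions);

                // Find, per partition, the earliest offset whose timestamp is >= lastProcessedTs.
                Map<TopicPartition, Long> query = new HashMap<>();
                for (TopicPartition tp : partitions) {
                    query.put(tp, lastProcessedTs);
                }
                Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
                for (Map.Entry<TopicPartition, OffsetAndTimestamp> e : offsets.entrySet()) {
                    if (e.getValue() != null) {
                        consumer.seek(e.getKey(), e.getValue().offset());
                    }
                    // null means no message at or after that timestamp; the consumer
                    // then starts at its default position for that partition.
                }

                // poll() from here; expect some duplicates right around the cut-over.
            }
        }
    }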