Re: kafka compacted topic

2017-11-29 Thread Kane Kim
I think it should answer your questions.

On Wed, Nov 29, 2017 at 7:19 AM, Kane Kim wrote:
> How does kafka log compaction work?
> Does it compact all of the log files periodically against new changes?

kafka compacted topic

2017-11-28 Thread Kane Kim
How does kafka log compaction work? Does it compact all of the log files periodically against new changes?
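To the question above: compaction does not rewrite the whole log on every change. The log cleaner runs periodically in the background over the "dirty" portion of compacted segments (the active segment is never cleaned), and guarantees that at least the latest value for each message key is retained; a record with a null value acts as a tombstone that eventually removes the key. A minimal pure-Python sketch of those retention semantics (the record layout and cleaner internals are greatly simplified; names are illustrative):

```python
def compact(log):
    """Simulate Kafka log compaction semantics: for each key, keep only
    the latest record; a record whose value is None is a tombstone that
    eventually removes the key entirely."""
    latest = {}  # key -> (offset, value); later records overwrite earlier ones
    for offset, (key, value) in enumerate(log):
        latest[key] = (offset, value)
    # Rebuild the compacted log in offset order, dropping tombstoned keys.
    return [(off, key, val)
            for key, (off, val) in sorted(latest.items(), key=lambda kv: kv[1][0])
            if val is not None]

log = [("a", 1), ("b", 2), ("a", 3), ("c", 4), ("b", None)]
print(compact(log))  # -> [(2, 'a', 3), (3, 'c', 4)]
```

Note that offsets of surviving records are preserved; compaction removes records but never reorders or renumbers them.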

Re: kafka disk overhead per message

2016-09-13 Thread Kane Kim
No compression, kafka 0.10

On Tue, Sep 13, 2016 at 3:09 PM, Kane Kim wrote:
> What is kafka's overhead for storing messages on disk?
> I did some testing with replication factor=1, stored 100MB (messages under 2kb) and got 130MB disk usage. Would that be expected?

kafka disk overhead per message

2016-09-13 Thread Kane Kim
What is Kafka's per-message overhead when storing messages on disk? I did some testing with replication factor=1, stored 100MB of messages (each under 2kb), and got 130MB of disk usage. Would that be expected?
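For the Kafka 0.10 on-disk format (message format v1, uncompressed), each record carries a fixed overhead: an 8-byte offset and 4-byte size in the log entry, plus a 22-byte message header (CRC, magic, attributes, timestamp, and key/value length fields), i.e. roughly 34 bytes per record before key and value bytes. At 2kb per message that is under 2% overhead, so a 30% blow-up suggests the average message was much smaller than 2kb. A back-of-the-envelope sketch (the 34-byte figure assumes uncompressed v1 records with no key; index files add a little more):

```python
PER_RECORD_OVERHEAD = 34  # bytes: 8 offset + 4 size + 22 message header (v1, no key)

def disk_usage(num_records, avg_payload):
    """Approximate on-disk bytes for num_records messages of avg_payload
    bytes each, message format v1, uncompressed."""
    return num_records * (avg_payload + PER_RECORD_OVERHEAD)

payload_total = 100 * 1024 * 1024  # 100MB of message payload
avg = 113                          # bytes; illustrative average message size
n = payload_total // avg
print(disk_usage(n, avg) / payload_total)  # ~1.30, matching the 130MB observed
```

So the observed 130MB would be expected if the messages averaged on the order of a hundred bytes each.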

Re: leader election bug

2016-05-02 Thread Kane Kim
So what could happen then? There is no broker registered in zookeeper, but it's still a leader somehow.

On Mon, May 2, 2016 at 3:27 PM, Gwen Shapira wrote:
> Thats a good version :)
>
> On Mon, May 2, 2016 at 11:04 AM, Kane Kim wrote:
> > We are running Zookeeper version:

Re: leader election bug

2016-05-02 Thread Kane Kim
Also that broker is not registered in ZK, as we can check with zk-shell, but kafka still thinks it's a leader for some partitions.

On Mon, May 2, 2016 at 11:04 AM, Kane Kim wrote:
> We are running Zookeeper version: 3.4.6-1569965, built on 02/20/2014 09:09 GMT, does it have any kno

Re: leader election bug

2016-05-02 Thread Kane Kim
s (and spontaneously de-registered brokers).

On Fri, Apr 29, 2016 at 11:30 AM, Kane Kim wrote:
> Any idea why it's happening? I'm sure rolling restart would fix it. Is it a bug?
>
> On Wed, Apr 27, 2016 at 5:42 PM, Kane Kim wrote:

Re: leader election bug

2016-04-29 Thread Kane Kim
Any idea why it's happening? I'm sure a rolling restart would fix it. Is it a bug?

On Wed, Apr 27, 2016 at 5:42 PM, Kane Kim wrote:
> Hello,
>
> Looks like we are hitting a leader election bug. I've stopped one broker (104224873); on other broker

leader election bug

2016-04-27 Thread Kane Kim
Hello,

Looks like we are hitting a leader election bug. I've stopped one broker (104224873); on the other brokers I see the following:

WARN kafka.controller.ControllerChannelManager - [Channel manager on controller 104224863]: Not sending request Name: StopReplicaRequest; Version: 0; CorrelationId: 843100
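The invariant being violated in this thread is that every partition's current leader should be a broker that is still registered (live) in ZooKeeper; when a broker deregisters, the controller is supposed to elect a new leader from the remaining ISR. A toy consistency check for that invariant (hypothetical data shapes, not the controller's actual code):

```python
def find_stale_leaders(partition_state, live_brokers):
    """Return partitions whose current leader is not a live broker,
    mapped to the live ISR members a new leader could be chosen from.
    partition_state: {(topic, partition): {"leader": broker_id, "isr": [ids]}}
    live_brokers: set of broker ids currently registered in ZooKeeper."""
    stale = {}
    for tp, state in partition_state.items():
        if state["leader"] not in live_brokers:
            # A new leader should have been elected from the remaining ISR.
            stale[tp] = [b for b in state["isr"] if b in live_brokers]
    return stale

state = {
    ("mp-auth", 0): {"leader": 104224873, "isr": [104224873, 104224876]},
    ("mp-auth", 1): {"leader": 104224876, "isr": [104224876]},
}
live = {104224876, 104224877}  # 104224873 stopped and deregistered
print(find_stale_leaders(state, live))  # {('mp-auth', 0): [104224876]}
```

Any non-empty result corresponds to the situation described above: a dead broker that the cluster metadata still reports as leader.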

Re: auto leader rebalancing

2016-04-27 Thread Kane Kim
r) in your controller log?
rob

On Apr 27, 2016, at 3:46 PM, Kane Kim wrote:
> Bump
>
> On Tue, Apr 26, 2016 at 10:33 AM, Kane Kim wrote:
> > Hello,
> >
> > We have auto.leader.rebala

Re: auto leader rebalancing

2016-04-27 Thread Kane Kim
Bump

On Tue, Apr 26, 2016 at 10:33 AM, Kane Kim wrote:
> Hello,
>
> We have auto.leader.rebalance.enable = True, other options are by default (10% imbalance ratio and 300 seconds).
>
> We have a check that reports leadership imbalance:
>
> critical: Leadership out of b

auto leader rebalancing

2016-04-26 Thread Kane Kim
Hello,

We have auto.leader.rebalance.enable = True, other options are by default (10% imbalance ratio and 300 seconds).

We have a check that reports leadership imbalance:

critical: Leadership out of balance for topic mp-auth. Leader counts: { "104224873"=>84, "104224876"=>22, "104224877"=>55, "1
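One likely explanation for the mismatch: the automatic rebalance works per broker, not per topic. At each check interval (leader.imbalance.check.interval.seconds, default 300) the controller computes, for each broker, the fraction of partitions for which that broker is the preferred leader (first replica) but not the current leader, and triggers a preferred-leader election only when that fraction exceeds leader.imbalance.per.broker.percentage (default 10%). Uneven leader counts on one topic can therefore persist without tripping the rebalance. A sketch of that per-broker ratio (illustrative data structures):

```python
def imbalance_ratio(broker_id, assignments):
    """Fraction of partitions where broker_id is the preferred leader
    (first replica) but not the current leader.
    assignments: list of {"replicas": [ids], "leader": id}"""
    preferred = [a for a in assignments if a["replicas"][0] == broker_id]
    if not preferred:
        return 0.0
    not_leading = sum(1 for a in preferred if a["leader"] != broker_id)
    return not_leading / len(preferred)

assignments = [
    {"replicas": [1, 2], "leader": 2},  # broker 1 preferred but not leading
    {"replicas": [1, 3], "leader": 1},
    {"replicas": [2, 1], "leader": 2},
]
print(imbalance_ratio(1, assignments))  # 0.5 -> above the 10% default
```

If every broker is already leading most of the partitions it prefers, the controller considers the cluster balanced even when an individual topic's leadership is skewed.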

Re: kafka mirrormaker cross datacenter replication

2015-03-22 Thread Kane Kim
, hence you may get message 1,2,3,4 in one cluster and 1,3,4,2 in another. If you remember that your latest message processed in the first cluster is 2, when you fail over to the other cluster you may skip and miss message 3 and 4.

Guozhang

On Fri, Mar 20,
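The reordering pitfall Guozhang describes can be made concrete: naively resuming in the mirror just after the last id processed in the source can silently skip messages that MirrorMaker happened to write earlier in the mirror's offset order. A toy sketch (purely illustrative; assumes each message carries a unique application-level id):

```python
def naive_resume_offset(target_log, last_processed_id):
    """Naively resume in the mirror just after the last id processed in the
    source cluster. target_log: message ids in the mirror's offset order."""
    return target_log.index(last_processed_id) + 1

def skipped(source_processed, target_log, resume_at):
    """Messages never processed in the source yet skipped over in the mirror."""
    return [m for m in target_log[:resume_at] if m not in source_processed]

target = ["m1", "m3", "m4", "m2"]           # mirror reordered the source's m1,m2,m3,m4
resume = naive_resume_offset(target, "m2")  # source had processed m1, m2
print(skipped({"m1", "m2"}, target, resume))  # ['m3', 'm4'] -> silently lost
```

This is why the suggestions in the thread lean toward downstream deduplication rather than offset bookkeeping across clusters.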

Re: kafka mirrormaker cross datacenter replication

2015-03-20 Thread Kane Kim
pattern may be to deduplicate messages in Hadoop before taking action on them.

-Jon

P.S. An option in the future might be
https://cwiki.apache.org/confluence/display/KAFKA/Transactional+Messaging+in+Kafka

On

kafka mirrormaker cross datacenter replication

2015-03-19 Thread Kane Kim
Hello,

What's the best strategy for failover when using mirror-maker to replicate across datacenters? As I understand it, offsets in the two datacenters will be different, so how should consumers be reconfigured to continue reading from the same point where they stopped, without data loss and/or duplication?