This means broker 1 is the controller. The controller uses a generic
ZooKeeper-based leader election module, which is where this log4j message
comes from.
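
As a quick way to confirm this (the connect string and the exact output
below are only illustrative), you can read the /controller znode with the
zookeeper shell that ships with Kafka; its brokerid field names the broker
that currently holds the controller role:

  bin/zookeeper-shell.sh localhost:2181 get /controller
  {"version":1,"brokerid":1,"timestamp":"1378991454570"}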

Thanks,
Neha
On Sep 12, 2013 10:52 PM, "Lu Xuechao" <lux...@gmail.com> wrote:

> Thanks, Rao. I found that both log.dir and log.dirs worked.
>
> When I start up all my brokers, I see the log message below on the console
> of broker 1. What does it mean? Is broker 1 a partition leader or not?
>
> [2013-09-12 13:10:54,570] INFO New leader is 1
> (kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
>
>
> On Thu, Sep 12, 2013 at 1:15 PM, Jun Rao <jun...@gmail.com> wrote:
>
> > 1. You can put multiple directories, each on a separate volume, in
> > log.dirs.
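> >
> > For example (the mount points here are just illustrative), with 10 drives
> > mounted under /disk1 through /disk10 you would list one data directory per
> > drive in server.properties, and Kafka will spread new partitions across
> > those directories:
> >
> >   log.dirs=/disk1/kafka-logs,/disk2/kafka-logs,/disk3/kafka-logs
> >
> > (and so on, one comma-separated entry per drive).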
> >
> > 2. Yes, our replica assignment logic will try to spread the partitions
> > and the leaders evenly among the brokers.
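> >
> > As a concrete sketch (the topic name is made up, and the exact tool name
> > varies by release; newer builds use kafka-topics.sh, while older 0.8
> > builds ship an equivalent kafka-create-topic.sh), creating a 10-partition
> > topic with replication factor 2 on your 10-broker cluster and then
> > describing it should show each broker as the leader for about one
> > partition and a follower for one or more others:
> >
> >   bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic mytopic \
> >     --partitions 10 --replication-factor 2
> >   bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic mytopic
> >
> > The describe output lists, per partition, the Leader broker and the
> > Replicas (leader plus followers).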
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Wed, Sep 11, 2013 at 9:59 PM, Lu Xuechao <lux...@gmail.com> wrote:
> >
> > > Hi Team,
> > >
> > > I have some questions regarding Kafka partitions:
> > >
> > > 1. Based on my understanding, the partitions on the same broker contend
> > > for disk IO. Say if I have 10 hard drives, can I spread all the
> > > partitions evenly across those drives?
> > >
> > > 2. If I configure default.replication.factor=2, then for each partition
> > > there will be one leader and one follower, right? Say I have 10 nodes
> > > with a broker on each node and I partition a topic 10 ways; can each of
> > > the 10 broker instances be the leader for 1 partition and a follower for
> > > 1 or more partitions, based on the replication factor?
> > >
> > > thanks,
> > >
> >
>
