On Fri, Aug 28, 2020, at 19:36, Unmesh Joshi wrote:
> Hi Colin,
> 
> There were a few questions I had..

Hi Unmesh,

Thanks for the response.

>
> 1. Were my comments on the broker lease implementation (and corresponding
> prototype) appropriate and do we need to change the KIP
> description accordingly?
>

Can you repeat your questions about broker leases?

>
> 2. How will broker epochs be generated? I am assuming it can be the
> committed log offset (like zxid?)
>

There isn't any need to use a log offset.  We can just look at an in-memory 
hash table and see what the latest number is, and add one, to generate a new 
broker epoch.
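As a rough illustration of that idea, here is a hypothetical sketch (class and method names are mine, not from the KIP): the controller keeps each registered broker's current epoch in an in-memory map, and a new registration takes the highest epoch seen so far plus one.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not the actual KIP-631 implementation.
// The controller tracks broker epochs in an in-memory map and
// generates a new epoch by bumping the latest number it has seen.
class BrokerEpochs {
    private final Map<Integer, Long> epochs = new HashMap<>();

    // Register (or re-register) a broker: find the latest epoch
    // across all brokers, add one, and record it. Assumed to run on
    // the controller's single event-handling thread, so no locking
    // is shown here.
    long register(int brokerId) {
        long latest = epochs.values().stream()
            .mapToLong(Long::longValue)
            .max()
            .orElse(0L);
        long next = latest + 1;
        epochs.put(brokerId, next);
        return next;
    }
}
```

Note that this is why no log offset is needed: the counter is derived purely from controller state, and a broker that re-registers simply gets a fresh, strictly larger epoch.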

>
> 3. How will producer registration happen? I am assuming it should be
> similar to broker registration, with a similar way to generate producer id.
>

For the EOS stuff, we will need a few new RPCs to the controller.  I think we 
should do that in a follow-on KIP, though, since this one is already pretty big.

>
> 4. Because we expose Raft log to all the brokers, any de-duplication of the
> broker needs to happen before the requests are proposed to Raft. For this
> the controller needs to be single threaded, and should do validation
> against the in-process or pending requests and the final state. I read a
> mention of this, in the responses in this thread. Will it be useful to
> mention this in the KIP?
> 

I'm not sure what you mean by "de-duplication of the broker."  Can you give a 
little more context?

best,
Colin

>
> Thanks,
> Unmesh
> 
> On Sat, Aug 29, 2020 at 4:50 AM Colin McCabe <cmcc...@apache.org> wrote:
> 
> > Hi all,
> >
> > I'm thinking of calling a vote on KIP-631 on Monday.  Let me know if
> > there's any more comments I should address before I start the vote.
> >
> > cheers,
> > Colin
> >
> > On Tue, Aug 11, 2020, at 05:39, Unmesh Joshi wrote:
> > > >>Hi Unmesh,
> > > >>Thanks, I'll take a look.
> > > Thanks. I will be adding more to the prototype and will be happy to help
> > > and collaborate.
> > >
> > > Thanks,
> > > Unmesh
> > >
> > > On Tue, Aug 11, 2020 at 12:28 AM Colin McCabe <cmcc...@apache.org>
> > wrote:
> > >
> > > > Hi Jose,
> > > >
> > > > That's a good point that I hadn't considered.  It's probably worth
> > having
> > > > a separate leader change message, as you mentioned.
> > > >
> > > > Hi Unmesh,
> > > >
> > > > Thanks, I'll take a look.
> > > >
> > > > best,
> > > > Colin
> > > >
> > > >
> > > > On Fri, Aug 7, 2020, at 11:56, Jose Garcia Sancio wrote:
> > > > > Hi Unmesh,
> > > > >
> > > > > Very cool prototype!
> > > > >
> > > > > Hi Colin,
> > > > >
> > > > > The KIP proposes a record called IsrChange which includes the
> > > > > partition, topic, isr, leader and leader epoch. During normal
> > > > > operation ISR changes do not result in leader changes. Similarly,
> > > > > leader changes do not necessarily involve ISR changes. The controller
> > > > > implementation that uses ZK modeled them together because
> > > > > 1. All of this information is stored in one znode.
> > > > > 2. ZK's optimistic lock requires that you specify the new value
> > > > completely
> > > > > 3. The change to that znode was being performed by both the
> > controller
> > > > > and the leader.
> > > > >
> > > > > None of these reasons are true in KIP-500. Have we considered having
> > > > > two different records? For example
> > > > >
> > > > > 1. IsrChange record which includes topic, partition, isr
> > > > > 2. LeaderChange record which includes topic, partition, leader and
> > > > leader epoch.
> > > > >
> > > > > I suspect that making this change will also require changing the
> > > > > message AlterIsrRequest introduced in KIP-497: Add inter-broker API
> > to
> > > > > alter ISR.
> > > > >
> > > > > Thanks
> > > > > -Jose
> > > > >
> > > >
> > >
> >
>
