Bump. We've got four non-binding votes and one binding vote.

Ryanne

On Fri, Dec 13, 2019, 1:44 AM Tom Bentley <tbent...@redhat.com> wrote:

> +1 (non-binding)
>
> On Thu, Dec 12, 2019 at 6:33 PM Andrew Schofield <andrew_schofi...@live.com> wrote:
>
> > +1 (non-binding)
> >
> > On 12/12/2019, 14:20, "Mickael Maison" <mickael.mai...@gmail.com> wrote:
> >
> >     +1 (binding)
> >     Thanks for the KIP!
> >
> >     On Thu, Dec 5, 2019 at 12:56 AM Ryanne Dolan <ryannedo...@gmail.com> wrote:
> >     >
> >     > Bump. We've got 2 non-binding votes so far.
> >     >
> >     > > On Wed, Nov 13, 2019 at 3:32 PM Ning Zhang <ning2008w...@gmail.com> wrote:
> >     >
> >     > > My current plan is to implement this in "MirrorCheckpointTask"
> >     > >
> >     > > On 2019/11/02 03:30:11, Xu Jianhai <snow4yo...@gmail.com> wrote:
> >     > > > I think this KIP will implement the task as a SinkTask, right?
> >     > > >
> >     > > > On Sat, Nov 2, 2019 at 1:06 AM Ryanne Dolan <ryannedo...@gmail.com> wrote:
> >     > > >
> >     > > > > Hey y'all, Ning Zhang and I would like to start the vote for
> >     > > > > the following small KIP:
> >     > > > >
> >     > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-545%3A+support+automated+consumer+offset+sync+across+clusters+in+MM+2.0
> >     > > > >
> >     > > > > This is an elegant way to automatically write consumer group
> >     > > > > offsets to downstream clusters without breaking existing use
> >     > > > > cases. Currently, we rely on external tooling based on
> >     > > > > RemoteClusterUtils and the kafka-consumer-groups command to
> >     > > > > write offsets. This KIP bakes this functionality into MM2
> >     > > > > itself, reducing the effort required to fail over and fail
> >     > > > > back workloads between clusters.
> >     > > > >
> >     > > > > Thanks for the votes!
> >     > > > >
> >     > > > > Ryanne
> >     > > > >
> >     > > >
> >     > >
> >
> >
> >
>
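[Editor's note: for readers following the thread, the feature under vote is enabled through MM2 configuration rather than external tooling. A minimal mm2.properties sketch is below; the property names follow the KIP-545 draft and the cluster aliases (primary, backup) are placeholders, so check the released documentation for the final names.]

```properties
# Two-cluster MM2 setup; aliases are illustrative.
clusters = primary, backup
primary->backup.enabled = true

# KIP-545: let MirrorCheckpointTask periodically translate and write
# consumer group offsets to the downstream cluster automatically,
# instead of running RemoteClusterUtils-based tooling by hand.
primary->backup.sync.group.offsets.enabled = true

# How often translated offsets are synced (name per the KIP draft).
primary->backup.sync.group.offsets.interval.seconds = 60
```

With this enabled, a consumer group that fails over to the backup cluster can resume from its translated offsets without any manual offset-migration step.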
