Thanks Guozhang!

Best,
ShunKang

Guozhang Wang <wangg...@gmail.com> wrote on Fri, Sep 23, 2022 at 00:27:

> Could you start a separate VOTE email thread calling for votes?
>
> On Thu, Sep 22, 2022 at 9:19 AM ShunKang Lin <linshunkang....@gmail.com>
> wrote:
>
> > Hi Guozhang,
> >
> > Thanks for your help! By the way, what should I do next?
> >
> > Best,
> > ShunKang
> >
> > Guozhang Wang <wangg...@gmail.com> wrote on Thu, Sep 22, 2022 at 23:21:
> >
> > > Thanks ShunKang,
> > >
> > > I made a few nit edits on the Motivation section as well. LGTM now.
> > >
> > > On Thu, Sep 22, 2022 at 7:33 AM ShunKang Lin <linshunkang....@gmail.com>
> > > wrote:
> > >
> > > > Hi Guozhang,
> > > >
> > > > I've updated the "Motivation" section of the KIP, please take a look.
> > > >
> > > > Thanks.
> > > > ShunKang
> > > >
> > > > Guozhang Wang <wangg...@gmail.com> wrote on Wed, Sep 21, 2022 at 01:26:
> > > >
> > > > > In this case, could you update the KIP to clarify the allocation
> > > > > savings more clearly in the "Motivation" section? Also, you could
> > > > > mention that for user-customizable serdes, providing overrides of
> > > > > the overloaded function makes it possible to optimize memory
> > > > > allocations as well.
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Tue, Sep 20, 2022 at 10:24 AM Guozhang Wang <wangg...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > 1. Ack, thanks.
> > > > > > 2. Sounds good, thanks for clarifying.
> > > > > >
> > > > > > On Tue, Sep 20, 2022 at 9:50 AM ShunKang Lin <linshunkang....@gmail.com>
> > > > > > wrote:
> > > > > >
> > > > > >> Hi Guozhang,
> > > > > >>
> > > > > >> Thanks for your comments!
> > > > > >>
> > > > > >> 1. We can reduce memory allocation if the key/value types happen
> > > > > >> to be ByteBuffer or String.
> > > > > >> 2. I would like to add `default ByteBuffer serializeToByteBuffer(String
> > > > > >> topic, Headers headers, T data)` in Serializer to reduce memory copy
> > > > > >> in `KafkaProducer#doSend(ProducerRecord, Callback)`, but since this
> > > > > >> change is a bit big, I would prefer to submit a separate KIP for it.
> > > > > >>
> > > > > >> Thanks.
> > > > > >> ShunKang
> > > > > >>
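The `serializeToByteBuffer` idea floated above could look roughly like the sketch below. This is a minimal, self-contained stand-in for Kafka's `Serializer` (the `Headers` parameter is omitted, and the default-method behavior is an assumption, since the follow-up KIP had not been written):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Simplified stand-in for org.apache.kafka.common.serialization.Serializer
// (Headers parameter omitted for brevity).
interface Serializer<T> {
    byte[] serialize(String topic, T data);

    // Hypothetical default: wrap the byte[] result so existing serializers
    // keep working; implementations could override this to hand back a
    // buffer without the intermediate copy.
    default ByteBuffer serializeToByteBuffer(String topic, T data) {
        return ByteBuffer.wrap(serialize(topic, data));
    }
}

// Example serializer that relies on the default ByteBuffer path.
class SketchStringSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        return data.getBytes(StandardCharsets.UTF_8);
    }
}
```

The default keeps the change backward compatible: old serializers work unmodified, and only serde authors who care about allocations need to override the new method.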
> > > > > >> Guozhang Wang <wangg...@gmail.com> wrote on Tue, Sep 20, 2022 at 06:32:
> > > > > >>
> > > > > >> > Hello ShunKang,
> > > > > >> >
> > > > > >> > Thanks for filing the proposal, and sorry for the late reply!
> > > > > >> >
> > > > > >> > I looked over your KIP proposal and the PR. In general I agree
> > > > > >> > that adding an overloaded function with a `ByteBuffer` param is
> > > > > >> > beneficial, but I have a meta question regarding its impact on
> > > > > >> > the Kafka consumer: my understanding from your PR is that we can
> > > > > >> > only save memory allocations if the key/value types happen to be
> > > > > >> > ByteBuffer as well; otherwise we would still do the
> > > > > >> > `return deserialize(topic, headers, Utils.toArray(data));` from
> > > > > >> > the default impls, unless the user's customized deserializers are
> > > > > >> > augmented to handle ByteBuffer directly, right?
> > > > > >> >
> > > > > >> >
> > > > > >> > Guozhang
> > > > > >> >
> > > > > >> >
> > > > > >> >
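Guozhang's point about the default implementation can be illustrated with a minimal sketch (a stand-in for Kafka's `Deserializer`, with `Headers` omitted and the copy inlined in place of `Utils.toArray`): a deserializer that overrides the `ByteBuffer` overload decodes in place and skips the intermediate `byte[]` allocation, while one that does not falls back to the copying default.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Simplified stand-in for org.apache.kafka.common.serialization.Deserializer
// (Headers parameter omitted for brevity).
interface Deserializer<T> {
    T deserialize(String topic, byte[] data);

    // Mirrors the KIP's fallback: copy the buffer into a byte[]
    // (standing in for Utils.toArray) and delegate to the old method.
    default T deserialize(String topic, ByteBuffer data) {
        byte[] bytes = new byte[data.remaining()];
        data.duplicate().get(bytes);
        return deserialize(topic, bytes);
    }
}

// Overriding the ByteBuffer overload decodes directly, with no byte[] copy.
class SketchStringDeserializer implements Deserializer<String> {
    @Override
    public String deserialize(String topic, byte[] data) {
        return new String(data, StandardCharsets.UTF_8);
    }

    @Override
    public String deserialize(String topic, ByteBuffer data) {
        return StandardCharsets.UTF_8.decode(data.duplicate()).toString();
    }
}
```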
> > > > > >> > On Sun, Aug 21, 2022 at 9:56 AM ShunKang Lin <linshunkang....@gmail.com>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > Hi all,
> > > > > >> > >
> > > > > >> > > I'd like to start a discussion on KIP-863, "Reduce
> > > > > >> > > Fetcher#parseRecord() memory copy". This KIP can reduce Kafka
> > > > > >> > > Consumer memory allocation by nearly 50% when fetching records.
> > > > > >> > >
> > > > > >> > > Please check
> > > > > >> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152035
> > > > > >> > > and https://github.com/apache/kafka/pull/12545 for more details.
> > > > > >> > >
> > > > > >> > > Any feedback and comments are welcome.
> > > > > >> > >
> > > > > >> > > Thanks.
> > > > > >> > >
> > > > > >> >
> > > > > >> >
> > > > > >> > --
> > > > > >> > -- Guozhang
> > > > > >> >
> > > > > >>
> > > > > >
> > > > > >
> > > > > > --
> > > > > > -- Guozhang
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > > -- Guozhang
> > > > >
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>
>
> --
> -- Guozhang
>
