Thanks for the KIP Jorge,

As Bill pointed out already, we should be careful with adding new
overloads as this contradicts the work done via KIP-182.
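For readers who have not followed KIP-182: it collapsed overload explosions into single methods that take composable config objects such as `Produced` and `Materialized`. A toy sketch of that pattern (illustrative names and placeholder `String` fields standing in for Serde types; this is not the real `Produced` class):

```java
// Illustrative sketch of the KIP-182 style -- NOT the actual Kafka
// Streams API. Placeholder String fields stand in for real Serde types.
//
// Overload style that KIP-182 moved away from: one method per
// optional-argument combination, e.g.
//   stream.to(topic);
//   stream.to(topic, keySerde, valueSerde);
//   stream.to(topic, partitioner, keySerde, valueSerde);
//
// Config-object style: a single method signature takes an options
// object, so new optional parameters never add new overloads.
class ProducedSketch<K, V> {
    String keySerde;
    String valueSerde;

    // Factory mirroring the fluent Produced.with(...) idiom.
    static <K, V> ProducedSketch<K, V> with(String keySerde, String valueSerde) {
        ProducedSketch<K, V> p = new ProducedSketch<>();
        p.keySerde = keySerde;
        p.valueSerde = valueSerde;
        return p;
    }

    // Additional options chain onto the config object instead of
    // spawning further method overloads.
    ProducedSketch<K, V> withValueSerde(String valueSerde) {
        this.valueSerde = valueSerde;
        return this;
    }
}
```

Any header-related option added in this style would extend the config object rather than the operator's method list.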

This KIP also seems to be related to KIP-149 and KIP-159. Are you aware
of them? Both have quite long DISCUSS threads, but it might be worth
browsing through them.

A few further questions:

 - Why do you want to add the headers to `KeyValue`? I wonder whether we
should instead treat headers as optional metadata and add them to
`RecordContext`, next to timestamp, offset, etc.

 - You only include stateless single-record transformations at the DSL
level. Do you suggest that all other operators just drop headers on the
floor?

 - Why do you only want to put headers into the in-memory and caching
stores but not into the RocksDB store? What do you mean by "pass
through"? IMHO, all stores should behave the same at the DSL level.
   -> If we do store the headers in the state stores, what is the
upgrade path?

 - Why do we need to store record headers in state in the first place,
if we exclude stateful operators at the DSL level?
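The first question above can be made concrete with a minimal sketch (plain Java, NOT the actual Kafka Streams types; class and field names here are illustrative only) contrasting the two possible placements for headers:

```java
import java.util.Map;

// Hypothetical sketch, not the real Kafka Streams API: should headers
// live in the data pair itself, or next to the other per-record
// metadata such as timestamp and offset?

// Option A: headers become part of the key-value pair, so every
// operator signature that touches KeyValue is affected (hence the new
// overloads proposed in the KIP).
class KeyValueWithHeaders<K, V> {
    final K key;
    final V value;
    final Map<String, byte[]> headers;

    KeyValueWithHeaders(K key, V value, Map<String, byte[]> headers) {
        this.key = key;
        this.value = value;
        this.headers = headers;
    }
}

// Option B: headers sit beside timestamp/offset in a context object,
// leaving KeyValue and all existing operator signatures unchanged.
class RecordContextSketch {
    final long timestamp;
    final long offset;
    final Map<String, byte[]> headers;   // optional metadata, like timestamp

    RecordContextSketch(long timestamp, long offset, Map<String, byte[]> headers) {
        this.timestamp = timestamp;
        this.offset = offset;
        this.headers = headers;
    }
}
```

Under Option B, code that needs headers would read them from the record context, the same way it already reads the timestamp or offset.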


What is the motivation for the "border lines" you chose?


-Matthias


On 12/21/17 8:18 AM, Bill Bejeck wrote:
> Jorge,
> 
> Thanks for the KIP, I know this is a feature others in the community have
> been interested in getting into Kafka Streams.
> 
> I took a quick pass over it, and I have one initial question.
> 
> We recently reduced overloads with KIP-182, and in this KIP we are
> increasing them again.
> 
> I can see from the KIP why they are necessary, but I'm wondering if there
> is something else we can do to cut down on the overloads introduced.  I
> don't have any sound suggestions ATM, so I'll have to think about it some
> more, but I wanted to put the thought out there.
> 
> Thanks,
> Bill
> 
> On Thu, Dec 21, 2017 at 9:06 AM, Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
> 
>> Hi all,
>>
>> I have created a KIP to add Record Headers support to Kafka Streams API:
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-244%3A+Add+Record+Header+support+to+Kafka+Streams
>>
>>
>> The main goal is to be able to use headers to filter, map and process
>> records as streams. Stateful processing (joins, windows) is not
>> considered.
>>
>> Proposed changes/Draft:
>> https://github.com/apache/kafka/compare/trunk...jeqo:streams-headers
>>
>> Feedback and suggestions are more than welcome.
>>
>> Cheers,
>>
>> Jorge.
>>
> 
