Please see comments inline.

On Mon, Nov 7, 2016 at 9:32 AM, Michael Pearce <michael.pea...@ig.com>
wrote:

> Hi Roger,
>
> Thanks for the support.
>

Thanks for leading the discussion on this.  It's an important topic.


>
> I think the key thing is to have a common key space to make an ecosystem;
> there does have to be some level of contract for people to play nicely.
>


Agreed.  There doesn't yet seem to be agreement on whether the broker needs
to understand the metadata structure or whether it's a client-level
concept.  We could define a common spec on top of the existing Kafka
protocol and require clients to implement it if they want to be
metadata-compliant.  That would have the advantage of keeping the broker
and core protocol simpler (would require no changes).  The reason I'm in
favor of making the broker aware of metadata is that it would allow a
smooth migration as clients begin using the new metadata structure.
Serializing metadata to byte[] in the protocol makes sense to me because I
don't see any reason that the broker needs to spend CPU time parsing and
validating individual headers that it doesn't care about.  Nor should the
base wire protocol and on-disk format need to commit to a particular header
structure (when it's only needed at a higher level).

I'm not necessarily opposed to defining a key-value structure in core Kafka
but don't see a strong reason to do it there when it could be done at the
client layer (while still enabling a common metadata model across the
ecosystem).  Without a strong reason, it makes sense to keep things simpler
and more efficient for the brokers (byte arrays for keys, metadata, and
values).
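
To make that broker-side picture concrete, here is a minimal sketch of the
broker's-eye view, assuming the record format gained a third opaque field.
The class and field names below are mine for illustration only, not anything
from the KIP:

// Hypothetical sketch: the broker stores and forwards `metadata` without
// parsing it; only clients (or a client-level spec) give it structure.
public final class RecordWithMetadata {
    private final byte[] key;       // opaque to the broker, as today
    private final byte[] metadata;  // new: opaque header bytes
    private final byte[] value;     // opaque to the broker, as today

    public RecordWithMetadata(byte[] key, byte[] metadata, byte[] value) {
        this.key = key;
        this.metadata = metadata;
        this.value = value;
    }

    public byte[] key()      { return key; }
    public byte[] metadata() { return metadata; }
    public byte[] value()    { return value; }
}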


> Having Map<String, byte[]>, or, as currently proposed in the KIP, a
> numerical key space of Map<int, byte[]>, is the level of contract that
> most people would expect.
>

Yes, this seems good to me too.  I'm in favor of prescribing something like
this in client APIs.
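
Just to illustrate what I mean (the names here are made up, not from the
KIP), the client-level contract could be as small as this:

import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical client-level header model: string keys mapped to raw bytes.
// The broker never sees this structure, only its serialized form.
public final class MessageHeaders {
    private final Map<String, byte[]> headers = new LinkedHashMap<>();

    public MessageHeaders put(String key, byte[] value) {
        headers.put(key, value);
        return this;
    }

    public byte[] get(String key) {
        return headers.get(key);
    }

    // Example: an infrastructure-level plugin tagging a record it produced.
    public static MessageHeaders example() {
        return new MessageHeaders()
                .put("trace-id", "abc123".getBytes(StandardCharsets.UTF_8))
                .put("largeMessage", "1/20".getBytes(StandardCharsets.UTF_8));
    }
}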


>
> I think the example someone made in a previous comment, linking to the AWS
> blog and the API they implemented, is a good one: originally they didn't
> have a header space but now they do, where keys are uniform but the value
> can be a string, an int, anything.
>
> Having a custom MetadataSerializer is something we had played with, but we
> discounted the idea: if you want everyone in the ecosystem to work the same
> way, having this be customizable as well makes it a bit harder. Think about
> making the whole message record custom-serializable; this would make it
> fairly tricky (though not impossible) to make work nicely. Having the value
> customizable is, we thought, a reasonable tradeoff of flexibility against
> the contract of interaction between different parties.
>
> Is there a particular case or benefit of having serialization customizable
> that you have in mind?
>

I guess this depends on whether we decide to encode individual headers in
the protocol or not.  If so, then custom serialization does not make
sense.  If metadata is a byte array, then it does.  The main reason for
allowing custom metadata serialization is that there already exist a lot of
good serialization solutions (Protobuf, Avro, HPACK, etc.). Kafka clients
could ship with a default header serializer, and perhaps 80% of users would
just use it (as you said).  As long as the client API is key-value,
everything should interoperate as expected, even with custom serialization
and without any application changes, provided you configure your custom
metadata serde in all your clients (the brokers wouldn't care).
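
Purely as a sketch of what I have in mind (the interface and config names are
hypothetical, nothing that's been agreed in the KIP), the pluggable piece
would just be the serde boundary:

import java.util.Map;

// Hypothetical pluggable metadata serde. A default implementation could use a
// simple length-prefixed binary encoding of the header map; anyone preferring
// Protobuf, Avro, etc. could drop in their own implementation via client
// configuration. Brokers only ever see the resulting byte[].
public interface MetadataSerde {
    byte[] serialize(Map<String, byte[]> headers);

    Map<String, byte[]> deserialize(byte[] metadata);
}

As long as producers and consumers are configured with the same
implementation (say, via a hypothetical metadata.serde.class property),
applications keep programming against the same key-value API no matter which
serde is in use.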



>
> Saying this, it is obviously something that could be implemented if there
> is a need. If we did go down this avenue, I think a default serializer
> implementation should exist so that, per the 80:20 rule, people can just
> have the broker and clients use the default behavior.
>
> Cheers
> Mike
>
> On 11/6/16, 5:25 PM, "radai" <radai.rosenbl...@gmail.com> wrote:
>
>     making header _key_ serialization configurable potentially undermines
>     the broad usefulness of the feature (any point along the path must be
>     able to read the header keys. the values may be whatever and require
>     more intimate knowledge of the code that produced specific headers, but
>     keys should be universally readable).
>
>     it would also make it hard to write really portable plugins - say i
>     wrote a large message splitter/combiner - if i rely on key
>     "largeMessage" and values of the form "1/20", someone who uses
>     (contrived example) Map<Byte[], Double> wouldn't be able to re-use my
>     code.
>
>     not the end of the world within an organization, but problematic if you
>     want to enable an ecosystem
>
>     On Thu, Nov 3, 2016 at 2:04 PM, Roger Hoover <roger.hoo...@gmail.com>
>     wrote:
>
>     >  As others have laid out, I see strong reasons for a common message
>     > metadata structure for the Kafka ecosystem.  In particular, I've seen
>     > that even within a single organization, infrastructure teams often
>     > own the message metadata while application teams own the
>     > application-level data format.  Allowing metadata and content to have
>     > different structure and evolve separately is very helpful for this.
>     > Also, I think there's a lot of value to having a common metadata
>     > structure shared across the Kafka ecosystem so that tools which
>     > leverage metadata can more easily be shared across organizations and
>     > integrated together.
>     >
>     > The question is, where does the metadata structure belong?  Here's my
>     > take:
>     >
>     > We change the Kafka wire and on-disk format from a (key, value) model
>     > to a (key, metadata, value) model where all three are byte arrays
>     > from the broker's point of view.  The primary reason for this is that
>     > it provides a backward compatible migration path forward.  Producers
>     > can start populating metadata fields before all consumers understand
>     > the metadata structure.  For people who already have custom envelope
>     > structures, they can populate their existing structure and the new
>     > structure for a while as they make the transition.
>     >
>     > We could stop there and let the clients plug in a KeySerializer,
>     > MetadataSerializer, and ValueSerializer, but I think it would also be
>     > useful to have a default MetadataSerializer that implements a
>     > key-value model similar to AMQP or HTTP headers.  Or we could go even
>     > further and prescribe a Map<String, byte[]> or Map<String, String>
>     > data model for headers in the clients (while still allowing custom
>     > serialization of the header data model).
>     >
>     > I think this would address Radai's concerns:
>     > 1. All client code would not need to be updated to know about the
>     > container.
>     > 2. Middleware-friendly clients would have a standard header data
>     > model to work with.
>     > 3. A KIP is required both because of broker changes and because of
>     > client API changes.
>     >
>     > Cheers,
>     >
>     > Roger
>     >
>     >
>     > On Wed, Nov 2, 2016 at 4:38 PM, radai <radai.rosenbl...@gmail.com>
>     > wrote:
>     >
>     > > my biggest issues with a "standard" wrapper format:
>     > >
>     > > 1. _ALL_ client _CODE_ (as opposed to kafka lib version) must be
>     > > updated to know about the container, because any old naive code
>     > > trying to directly deserialize its own payload would keel over and
>     > > die (it needs to know to deserialize a container, and then dig in
>     > > there for its payload).
>     > > 2. in order to write middleware-friendly clients that utilize such
>     > > a container one would basically have to write their own
>     > > producer/consumer API on top of the open source kafka one.
>     > > 3. if you were going to go with a wrapper format you really don't
>     > > need to bother with a kip (just open source your own client stack
>     > > from #2 above so others could stop re-inventing it)
>     > >
>     > > On Wed, Nov 2, 2016 at 4:25 PM, James Cheng <wushuja...@gmail.com>
>     > > wrote:
>     > >
>     > > > How exactly would this work? Or maybe that's out of scope for
>     > > > this email.
>     > >
>     >
>
>
