Also, if log.cleaner.enable is true in your broker config, that enables the 
log-compaction retention strategy.

   Then, for topics with the per-topic "cleanup.policy=compact" config 
parameter set, Kafka will scan the topic periodically, nuking old versions of 
the data with the same key.
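
   For reference, a minimal sketch of the config involved (names as I 
remember them; double-check the docs for your Kafka version):

```
# broker config (server.properties) -- enables the log cleaner
log.cleaner.enable=true

# per-topic config -- switches that topic to compaction
cleanup.policy=compact
```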

   I seem to remember there's some trickiness here: you're not absolutely 
guaranteed to have just one message with a given key, only that you'll 
always have at least one with that key.  I think that depends a bit on how 
big the segments are, how often the cleaner runs, and that sort of thing.  
The idea is that you can keep long-term persistent data in a topic without 
it growing out of control.
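
   A toy model of that behavior (my own illustration, not Kafka's actual 
implementation): the cleaner rewrites closed segments, keeping only the 
newest record per key, while the active segment is left alone -- which is 
why a key can still appear more than once.

```python
# Toy model of log compaction: compact all closed segments, keeping the
# newest (key, value) per key; the active (last) segment is untouched.
def compact(segments):
    if len(segments) <= 1:
        return segments
    closed = [rec for seg in segments[:-1] for rec in seg]
    latest = {}                      # key -> last value seen in closed segments
    for key, value in closed:
        latest[key] = value
    # One compacted segment plus the untouched active segment.
    return [list(latest.items()), segments[-1]]

segments = [
    [("a", 1), ("b", 1)],            # closed segment
    [("a", 2), ("b", 2)],            # closed segment
    [("a", 3)],                      # active segment (not compacted)
]
compacted = compact(segments)
# "a" survives twice (once in the compacted portion, once in the active
# segment): at least one record per key, but not necessarily exactly one.
all_records = [rec for seg in compacted for rec in seg]
```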

   But in any case, that's another thing that the keys can be useful for.

   It's been six months or so since I tried that so the details are a bit 
fuzzy, but it's something like that, at least.

        -Steve

On Fri, Dec 19, 2014 at 01:04:36PM -0800, Rajiv Kurian wrote:
> Thanks, didn't know that.
> 
> On Fri, Dec 19, 2014 at 10:39 AM, Jiangjie Qin <j...@linkedin.com.invalid>
> wrote:
> >
> > Hi Rajiv,
> >
> > You can send messages without keys. Just provide null for key.
> >
> > Jiangjie (Becket) Qin
> >
> >
> > On 12/19/14, 10:14 AM, "Rajiv Kurian" <ra...@signalfuse.com> wrote:
> >
> > >Hi all,
> > >
> > >I was wondering what why every ProducerRecord sent requires a serialized
> > >key. I am using kafka, to send opaque bytes and I am ending up creating
> > >garbage keys because I don't really have a good one.
> > >
> > >Thanks,
> > >Rajiv
> >
> >
