Hi, Grant,

The append API does indeed seem a bit odd. The compression type is a producer
level config, so instead of passing it in on every append, we should probably
just pass it in once when the RecordAccumulator is created. Could you file
a jira to track this?
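
Roughly, what I have in mind is something like the following. This is just a
sketch with made-up names to show the shape of the change, not the actual
RecordAccumulator code:

    // Sketch only: compression becomes a constructor-level setting taken from
    // the producer config, so append() no longer needs a CompressionType argument.
    import java.util.ArrayList;
    import java.util.List;

    enum CompressionType { NONE, GZIP, SNAPPY }

    final class RecordAccumulatorSketch {
        private final CompressionType compression;   // fixed once at creation
        private final List<String> batches = new ArrayList<>();

        RecordAccumulatorSketch(CompressionType compression) {
            this.compression = compression;
        }

        // No per-append compression parameter; every new batch picks up the
        // accumulator-level compression type.
        void append(String topicPartition, byte[] key, byte[] value) {
            batches.add("batch(" + topicPartition + ", " + compression + ")");
        }

        public static void main(String[] args) {
            RecordAccumulatorSketch acc =
                new RecordAccumulatorSketch(CompressionType.SNAPPY);
            acc.append("test-0", null, "hello".getBytes());
            acc.append("test-0", null, "world".getBytes());
            System.out.println(acc.batches);   // all batches share one compression type
        }
    }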

Thanks,

Jun

On Mon, Mar 23, 2015 at 7:16 PM, Grant Henke <ghe...@cloudera.com> wrote:

> I am reading over the new producer code in an effort to understand the
> implementation more thoroughly and have some questions/feedback.
>
> Currently, the append method in
> org.apache.kafka.clients.producer.internals.RecordAccumulator accepts the
> compressionType on a per-record basis. It looks like the code would only
> work on a per-batch basis, because the CompressionType is only used when
> creating a new RecordBatch. My understanding is that this should support
> setting it per batch at most, though I may have misread it. Is there a case
> where setting it per record would make sense? The current signature is:
>
>     public RecordAppendResult append(TopicPartition tp, byte[] key,
>                                      byte[] value, CompressionType compression,
>                                      Callback callback) throws InterruptedException;
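>
> To illustrate, the flow I am describing looks roughly like this (heavily
> simplified, not the actual code):
>
>     // Heavily simplified sketch of the behavior above, not the real
>     // RecordAccumulator: the compression argument is only consulted when a
>     // new RecordBatch is created, so per-record values passed for appends
>     // that land in an existing batch are effectively ignored.
>     import java.util.ArrayDeque;
>     import java.util.Deque;
>
>     enum CompressionType { NONE, GZIP, SNAPPY }
>
>     class BatchingSketch {
>         static class Batch {
>             final CompressionType compression;           // fixed for the whole batch
>             int records = 0;
>             Batch(CompressionType c) { this.compression = c; }
>             boolean tryAppend() { return records++ < 16; } // pretend 16 records fit
>         }
>
>         private final Deque<Batch> batches = new ArrayDeque<>();
>
>         void append(byte[] value, CompressionType compression) {
>             Batch last = batches.peekLast();
>             if (last != null && last.tryAppend())
>                 return;                                  // 'compression' ignored here
>             Batch fresh = new Batch(compression);        // only place it is used
>             fresh.tryAppend();
>             batches.addLast(fresh);
>         }
>     }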
>
> Why does the org.apache.kafka.common.serialization.Serializer interface
> require a topic? Is there a use case where serialization would change based
> on the topic?
>
>    public byte[] serialize(String topic, T data);
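>
> Just to make the question concrete, a purely hypothetical serializer that
> actually used the topic might look something like this (not from the Kafka
> code base):
>
>     // Hypothetical example: prefix each payload with a per-topic format
>     // version byte, which is only possible because serialize() receives the
>     // topic name.
>     import java.nio.charset.StandardCharsets;
>     import java.util.Map;
>
>     interface Serializer<T> {
>         byte[] serialize(String topic, T data);
>     }
>
>     class TopicVersionedStringSerializer implements Serializer<String> {
>         private final Map<String, Byte> versionByTopic;
>
>         TopicVersionedStringSerializer(Map<String, Byte> versionByTopic) {
>             this.versionByTopic = versionByTopic;
>         }
>
>         @Override
>         public byte[] serialize(String topic, String data) {
>             byte version = versionByTopic.getOrDefault(topic, (byte) 0);
>             byte[] payload = data.getBytes(StandardCharsets.UTF_8);
>             byte[] out = new byte[payload.length + 1];
>             out[0] = version;                      // per-topic header byte
>             System.arraycopy(payload, 0, out, 1, payload.length);
>             return out;
>         }
>     }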
>
> Thank you,
> Grant
>
> --
> Grant Henke
> Solutions Consultant | Cloudera
> ghe...@cloudera.com | 920-980-8979
> twitter.com/ghenke <http://twitter.com/gchenke> |
> linkedin.com/in/granthenke
>
