Interesting use case; I'd like to hear more. Are you assigning one
partition per key incrementally? How does your consumer know which
partition holds which key?

I don't think there is a way to manually invalidate the cached metadata
through the public producer API (I could be wrong), but the longest you
should have to wait is whatever is configured for metadata.max.age.ms.

metadata.max.age.ms is the period of time in milliseconds after which we
force a refresh of metadata even if we haven't seen any partition
leadership changes to proactively discover any new brokers or partitions.
http://kafka.apache.org/documentation.html#newproducerconfigs
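
A lower value bounds how long the producer's cached metadata can lag
behind a partition count change. A minimal sketch with the new Java
producer (the broker address, serializers, and the 5 second refresh are
assumptions for a local dev setup like yours, not recommendations):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;

public class LowMetadataAgeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Default is 300000 (5 minutes); lowering it shortens the window in
        // which a newly added partition is invisible to this producer.
        props.put("metadata.max.age.ms", "5000");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}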
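
On the workaround you describe below: instead of polling ZooKeeper via
AdminUtils, you could poll the producer's own metadata, since that cache
is what actually rejects the send. A rough sketch (the class and method
names are mine; partitionsFor only reflects the new partition after a
metadata refresh, so this pays off mainly when combined with a lowered
metadata.max.age.ms):

import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;

public class PartitionWait {
    // Block until the producer's metadata shows at least expectedCount
    // partitions for topic, or the timeout elapses. partitionsFor() returns
    // the producer's current (possibly cached) view of the topic, which is
    // refreshed at most every metadata.max.age.ms.
    public static boolean waitForPartitionCount(KafkaProducer<?, ?> producer,
                                                String topic,
                                                int expectedCount,
                                                long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (producer.partitionsFor(topic).size() >= expectedCount) {
                return true;
            }
            TimeUnit.MILLISECONDS.sleep(200);
        }
        return false;
    }
}

If it times out, you can fail the send rather than publish to a partition
the producer does not know about yet.
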
On Fri, Aug 7, 2015 at 12:34 PM, Gelinas, Chiara <cgeli...@illumina.com>
wrote:

> Hi All,
>
> We are looking to dynamically create partitions when we see a new piece
> of data that requires logical partitioning (partitioning from a logical
> perspective rather than solely for load-based reasons). I noticed that
> when I create a partition via AdminUtils.addPartition and then send the
> message (within milliseconds, since it all happens in a single thread of
> execution), I get the following error:
>
> Invalid partition given with record: 3 is not in the range [0...3].
>
> Basically, the producer can’t see the new partition. When I set a
> breakpoint just before the send (which effectively pauses the thread),
> all is well, and the message is pushed to the new partition with no
> issues.
>
> I am running just one ZooKeeper node and one Kafka broker (no replicas) –
> this is all local in my dev environment.
>
> Is this normal behavior or is there possibly some issue with how we are
> using addPartition? Also, once we have replicas in a more realistic
> production environment, should we expect this lag to increase?
>
> The only workaround I can envision for this is to have the thread check
> the partition count via AdminUtils and only move on when the partition
> count comes back as expected.
>
> Thanks,
> Chiara


-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
