Hello,

We found a serious bug while testing flush() calls in the new producer,
which is summarized in KAFKA-2042.

In a nutshell: when the producer starts up it refreshes its metadata with an
empty topic list, and hence gets metadata for all topics. When a message is
later sent to one of those topics, the topic is not added to the metadata's
topic list, since its metadata is already available. If messages for that
topic are still sitting in the accumulator when a metadata refresh is
triggered for a newly seen topic, the refresh requests only that single
topic, and the cached metadata for every other topic is lost. Under usual
scenarios the stuck messages would be pushed out once another send() on the
same topic re-adds it to the topic list, but since flush() is a blocking
call, it greatly increases the chance of this issue surfacing as messages
blocked forever inside flush(). A minimal sketch of the scenario follows.
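Here is a rough repro sketch of the race as described above, assuming a
broker on localhost:9092 where topic-a already exists at startup and topic-b
is created afterwards; the topic names, linger.ms value, and broker address
are illustrative only, and the actual outcome depends on timing:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class FlushHangRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // Large linger so records stay in the accumulator long enough
        // for the racing metadata refresh to happen.
        props.put("linger.ms", "60000");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // topic-a already exists at startup, so its metadata came back
        // with the initial empty-topic-list refresh and the topic is never
        // added to the metadata's topic list.
        producer.send(new ProducerRecord<>("topic-a", "k", "v"));

        // Sending to a topic the producer has not seen triggers a metadata
        // refresh for just that one topic, dropping topic-a's metadata.
        producer.send(new ProducerRecord<>("topic-b", "k", "v"));

        // With topic-a's metadata gone, its batches cannot be drained from
        // the accumulator, and this blocking call never returns.
        producer.flush();

        producer.close();
    }
}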

I am writing to ask whether people think this problem is severe enough to
require another bug-fix release.

-- Guozhang
