Hey Frank,
I think I spotted the issue and submitted a patch. Here's a link to the
JIRA: https://issues.apache.org/jira/browse/KAFKA-5456. I expect we'll get
the fix into 0.11.0.0. Thanks for finding this!
-Jason
On Thu, Jun 15, 2017 at 11:53 PM, Frank Lyaruu wrote:
Yes, compression was on (lz4). Key and value sizes fluctuate: keys are small,
under 10 bytes; values vary too, but nothing crazy, up to about 100 kB.
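For readers following along, a producer configured the way Frank describes might look roughly like the sketch below. `compression.type` and `max.request.size` are real Kafka producer config keys; the broker address and exact numbers are illustrative assumptions, not Frank's actual settings:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Build a Properties object mirroring the setup described above.
    // The broker address and numeric values are illustrative only.
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("compression.type", "lz4");             // as reported above
        // Values run up to ~100 kB, so the 1 MB default max.request.size
        // leaves plenty of headroom.
        props.put("max.request.size", "1048576");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("compression.type"));
    }
}
```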
I did some stepping through the code and at some point I saw a branch that
used a different path depending on protocol version (something
Finally, was compression enabled when you hit this exception? If so, which
compression algorithm was enabled?
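The branch Frank mentions, choosing a write path by protocol or message-format version, is a common pattern in clients that must stay compatible with older brokers. Below is a purely hypothetical sketch of the idea; it is not the actual MemoryRecordsBuilder code, and the names and format strings are made up:

```java
// Hypothetical illustration of message-format ("magic") version branching.
// This is NOT Kafka's actual MemoryRecordsBuilder code.
public class FormatBranchSketch {
    static String encode(byte magic, String key, String value) {
        if (magic >= 2) {
            // Newer record format: a real client would write headers,
            // varint-encoded lengths, etc.
            return "v2|" + key + "|" + value;
        }
        // Legacy path for older brokers.
        return "legacy|" + key + "|" + value;
    }

    public static void main(String[] args) {
        System.out.println(encode((byte) 2, "k", "v"));
        System.out.println(encode((byte) 0, "k", "v"));
    }
}
```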
On Thu, Jun 15, 2017 at 5:04 PM, Apurva Mehta wrote:
Frank: it would be even better if you could share the key and value which
was causing this problem. Maybe share it on the JIRA:
https://issues.apache.org/jira/browse/KAFKA-5456 ?
Thanks,
Apurva
On Thu, Jun 15, 2017 at 4:07 PM, Apurva Mehta wrote:
Hi Frank,
What is the value of `batch.size` in your producer? What is the size of
the key and value you are trying to write?
Thanks,
Apurva
On Thu, Jun 15, 2017 at 2:28 AM, Frank Lyaruu wrote:
It seems to happen when using a Streams 0.11.1 snapshot against a 0.10.2
(release) broker; the problem disappeared after I upgraded the broker.
On Thu, Jun 15, 2017 at 11:28 AM, Frank Lyaruu wrote:
Hey people, I see an error I haven't seen before. It is in a low-level-API
based Streams application. I started it once and it ran fine; then I did
a graceful shutdown, and since then I always see this error on startup.
I'm using yesterday's trunk.
It seems that the MemoryRecordsBuilder