[ https://issues.apache.org/jira/browse/KAFKA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vasilis Tsanis updated KAFKA-7384:
----------------------------------
    Description: 
Hello

After upgrading the Kafka brokers from 0.10.2.1 to 1.1.0, the Kafka clients (0.10.2.1 and 0.10.0.1) throw the errors shown below. This looks like a compatibility problem for the older clients, although that should not happen according to [doc 1|https://docs.confluent.io/current/installation/upgrade.html#preparation], [doc 2|https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version] and this [thread|https://lists.apache.org/thread.html/9bc87a2c683d13fda27f01a635dba822520113cfd8fb50f3a3e82fcf@%3Cusers.kafka.apache.org%3E].
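
For reference, the preparation steps in doc 1 boil down to pinning two broker properties before and during the rolling upgrade. A minimal sketch of the relevant server.properties lines, assuming the 0.10.2.1 starting point above (values would differ for another starting version; this is not our actual config):

{noformat}
# server.properties (sketch)
# Keep both pinned to the old version during the rolling upgrade.
# Leaving log.message.format.version at 0.10.2 also means the broker
# keeps storing messages in a format the 0.10.x clients can read
# without down-conversion.
inter.broker.protocol.version=0.10.2
log.message.format.version=0.10.2
{noformat}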

Can someone please help with this issue? Does this mean that I have to upgrade 
all kafka-clients to 1.1.0?
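
In case it helps with triage: the KIP-35 mechanism from doc 2 lets a client discover what the broker supports, and newer Kafka distributions ship a tool that prints it. A command sketch (localhost:9092 is a placeholder for one of the upgraded brokers):

{noformat}
# Prints the API versions the broker advertises (the KIP-35 mechanism).
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092
{noformat}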

 

(Please also check the attached log; some additional unknown compression type 
ids occur there as well.)
 
{noformat}
java.lang.IllegalArgumentException: Unknown compression type id: 4
        at org.apache.kafka.common.record.CompressionType.forId(CompressionType.java:46)
        at org.apache.kafka.common.record.Record.compressionType(Record.java:260)
        at org.apache.kafka.common.record.LogEntry.isCompressed(LogEntry.java:89)
        at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:70)
        at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
        at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
        at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
        at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
        at org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


------------------- Another kind of exception due to the same reason

java.lang.IndexOutOfBoundsException: null
        at java.nio.Buffer.checkIndex(Buffer.java:546)
        at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:365)
        at org.apache.kafka.common.utils.Utils.sizeDelimited(Utils.java:784)
        at org.apache.kafka.common.record.Record.value(Record.java:268)
        at org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:149)
        at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:79)
        at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
        at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
        at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
        at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
        at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
        at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
        at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
        at org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
{noformat}
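
For anyone triaging this: the 0.10.x clients only define compression type ids 0 through 3, so the first trace is exactly what their decoder does with an id it has never seen, presumably because the record data reaching the old client is in a format it cannot parse correctly. A simplified Java sketch of that lookup, modeled on the 0.10.x org.apache.kafka.common.record.CompressionType (the real enum carries extra fields; this is only the check behind the exception):

{noformat}
// Simplified sketch of the 0.10.x-era lookup behind the first stack trace.
// Only ids 0-3 were defined, so any other value throws, which is how a
// misparsed record surfaces as "Unknown compression type id: 4".
public enum CompressionType {
    NONE(0), GZIP(1), SNAPPY(2), LZ4(3);

    public final int id;

    CompressionType(int id) {
        this.id = id;
    }

    public static CompressionType forId(int id) {
        switch (id) {
            case 0: return NONE;
            case 1: return GZIP;
            case 2: return SNAPPY;
            case 3: return LZ4;
            default:
                throw new IllegalArgumentException("Unknown compression type id: " + id);
        }
    }
}
{noformat}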

> Compatibility issues between Kafka Brokers 1.1.0 and older kafka clients
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-7384
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7384
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 1.1.0
>            Reporter: Vasilis Tsanis
>            Priority: Blocker
>         Attachments: logs2.txt
>


