[jira] [Commented] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16226119#comment-16226119
 ] 

Sandro Simas commented on KAFKA-6147:
-

OK, I found the problem. My process was starting a second producer through the 
logback-kafka-appender library. That producer had bootstrap.servers pointing to 
localhost:9092, but no broker exists at that address. After fixing that 
configuration, everything started working without any errors. I still think this 
error message is very confusing.
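
For anyone hitting the same symptom, here is a minimal sketch (not taken from this 
ticket; the broker address, topic name, and class name are placeholders) of the 
underlying point: every producer created in the process, including the one that 
logback-kafka-appender creates from its own configuration, needs bootstrap.servers 
pointing at a real, reachable broker. The stray producer here was left at 
localhost:9092 where no broker was running.
{code:java}
// Minimal sketch, not from this ticket: broker address, topic, and class name
// are placeholders. Every KafkaProducer in the JVM needs bootstrap.servers
// pointing at a reachable broker, including the one created by the logging
// appender; otherwise its I/O thread keeps logging errors.
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReachableBootstrapExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Point at the real broker instead of the default localhost:9092.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-host:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes the producer and stops its I/O thread.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}
{code}
In this ticket the misconfigured producer was the one created by the logging 
appender, so the fix was the same change applied to the appender's producer 
configuration rather than to application code.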

> Error reading field 'api_versions': Error reading field 'max_version': 
> java.nio.BufferUnderflowException
> 
>
> Key: KAFKA-6147
> URL: https://issues.apache.org/jira/browse/KAFKA-6147
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Sandro Simas
>Priority: Minor
>
> I'm getting the following error on my kafka client 0.11.0.1:
> {code:java}
> [2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught 
> error in kafka producer I/O thread: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading field 'max_version': 
> java.nio.BufferUnderflowException
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
> at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
> at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Another similar error:
> {code:java}
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading array of size 65546, only 10 bytes available
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
> at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
> at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
> previous data. These errors appear suddenly, even without producing messages.
> Although this error occurs, I can still produce messages without any problems 
> afterwards. Could this be a network issue? I changed the server version to 
> 0.10.2 and 0.10.1, but the problem persists.





[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Priority: Minor  (was: Major)

> Error reading field 'api_versions': Error reading field 'max_version': 
> java.nio.BufferUnderflowException
> 
>
> Key: KAFKA-6147
> URL: https://issues.apache.org/jira/browse/KAFKA-6147
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.11.0.0
>Reporter: Sandro Simas
>Priority: Minor
>
> I'm getting the following error on my kafka client 0.11.0.1:
> {code:java}
> [2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught 
> error in kafka producer I/O thread: 
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading field 'max_version': 
> java.nio.BufferUnderflowException
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
> at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
> at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> Another similar error:
> {code:java}
> org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
> 'api_versions': Error reading array of size 65546, only 10 bytes available
> at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
> at 
> org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
> at 
> org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
> at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
> at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
> at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
> previous data. These errors appear suddenly, even without producing messages.
> Although this error occurs, I can still produce messages without any problems 
> afterwards. Could this be a network issue? I changed the server version to 
> 0.10.2 and 0.10.1, but the problem persists.





[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Description: 
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards. Could this be a network issue? I changed the server version to 
0.10.2 and 0.10.1, but the problem persists.

  was:
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards. Could this be a network issue? I changed the server version to 
0.10.2 and 0.10.1, but the problem persists.

[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Description: 
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards. Could this be a network issue? I changed the server version to 
0.10.2 and 0.10.1, but the problem persists.

  was:
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
Downgrading the servers and clients to version 0.10.1.1 works fine.

[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Description: 
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
Downgrading the servers and clients to version 0.10.1.1 works fine.

  was:
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
After downgrading to 0.10, everything works fine.



[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Description: 
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
After downgrading to 0.10, everything works fine.

  was:
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
After downgrading to 0.10, everything works fine.



[jira] [Updated] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandro Simas updated KAFKA-6147:

Description: 
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.
After downgrading to 0.10, everything works fine.

  was:
I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.




[jira] [Created] (KAFKA-6147) Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException

2017-10-30 Thread Sandro Simas (JIRA)
Sandro Simas created KAFKA-6147:
---

 Summary: Error reading field 'api_versions': Error reading field 
'max_version': java.nio.BufferUnderflowException
 Key: KAFKA-6147
 URL: https://issues.apache.org/jira/browse/KAFKA-6147
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Sandro Simas


I'm getting the following error on my kafka client 0.11.0.1:
{code:java}
[2017-10-30 19:18:30] ERROR o.a.k.c.producer.internals.Sender - Uncaught error 
in kafka producer I/O thread: 
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading field 'max_version': 
java.nio.BufferUnderflowException
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

Another similar error:
{code:java}
org.apache.kafka.common.protocol.types.SchemaException: Error reading field 
'api_versions': Error reading array of size 65546, only 10 bytes available
at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
at 
org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
at 
org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
at 
org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
at 
org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
at 
org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
at java.lang.Thread.run(Thread.java:745)
{code}

The server is also 0.11.0.1, and I'm running Kafka and ZooKeeper without any 
previous data. These errors appear suddenly, even without producing messages.
Although this error occurs, I can still produce messages without any problems 
afterwards.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)