[jira] [Updated] (KAFKA-9613) CorruptRecordException: Found record size 0 smaller than minimum record overhead

2021-10-06 Thread Nandini (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nandini updated KAFKA-9613:
---
Summary: CorruptRecordException: Found record size 0 smaller than minimum 
record overhead  (was: orruptRecordException: Found record size 0 smaller than 
minimum record overhead)

> CorruptRecordException: Found record size 0 smaller than minimum record 
> overhead
> 
>
> Key: KAFKA-9613
> URL: https://issues.apache.org/jira/browse/KAFKA-9613
> Project: Kafka
>  Issue Type: Bug
>Reporter: Amit Khandelwal
>Priority: Major
>
> 20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0] 
> Error processing fetch with max size 1048576 from consumer on partition 
> SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1, 
> maxBytes=1048576, currentLeaderEpoch=Optional.empty) 
> (kafka.server.ReplicaManager)
> 20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException: 
> Found record size 0 smaller than minimum record overhead (14) in file 
> /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/.log.
> 20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager 
> brokerId=0] Removed 0 expired offsets in 1 milliseconds. 
> (kafka.coordinator.group.GroupMetadataManager)
> 20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]: 
> Member 
> _011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f
>  in group y_011 has failed, removing it from the group 
> (kafka.coordinator.group.GroupCoordinator)
>  
> [https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
>  
>  
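The error above means a record's 4-byte size prefix in the segment file read back as 0, below the 14-byte minimum overhead. The sketch below is not Kafka's actual log-reading code; it only assumes the basic on-disk entry layout (an 8-byte offset followed by a 4-byte size), and the file and offset values are illustrative:

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class SegmentScan {
    static final int LOG_OVERHEAD = 12;        // 8-byte offset + 4-byte size prefix per entry
    static final int MIN_RECORD_OVERHEAD = 14; // smallest legal record, per the error message

    /** Returns the file position of the first entry whose size field is below
     *  the minimum record overhead, or -1 if every size field looks sane. */
    static long firstCorruptEntry(Path segment) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(Files.newInputStream(segment)))) {
            long pos = 0;
            while (in.available() >= LOG_OVERHEAD) {
                in.readLong();           // entry offset (not needed for this check)
                int size = in.readInt(); // declared size of the record that follows
                if (size < MIN_RECORD_OVERHEAD) return pos;
                in.skipBytes(size);      // jump over the record payload
                pos += LOG_OVERHEAD + size;
            }
            return -1;
        }
    }

    public static void main(String[] args) throws IOException {
        // Synthetic segment with a single entry whose size field is 0.
        Path f = Files.createTempFile("segment", ".log");
        try (DataOutputStream out = new DataOutputStream(Files.newOutputStream(f))) {
            out.writeLong(211886L); // entry offset (value taken from the log above)
            out.writeInt(0);        // corrupt: size 0 < minimum overhead 14
        }
        System.out.println(firstCorruptEntry(f)); // prints 0
        Files.delete(f);
    }
}
```

Running it against a segment built this way reports position 0, mirroring the broker's complaint about a zero-size record.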



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13220) Group or Group Coordinator logs are misleading when using assign(partitions) method in Consumer API.

2021-08-20 Thread nandini (Jira)
nandini created KAFKA-13220:
---

 Summary: Group or Group Coordinator logs are misleading when using 
assign(partitions) method in Consumer API.
 Key: KAFKA-13220
 URL: https://issues.apache.org/jira/browse/KAFKA-13220
 Project: Kafka
  Issue Type: Bug
  Components: clients
Reporter: nandini


The Kafka console consumer does not allow using a group and a partition together:
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L395

Hence, manual topic assignment through the assign() method does not use the 
consumer's group management functionality:
https://kafka.apache.org/26/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#assign-java.util.Collection-

But when using the API and implementing your own Kafka consumer, you can still 
specify group.id and call consumer.assign(partitions);
This gives the impression that the client is using the group (however, it is not 
listed as an active consumer in the kafka-consumer-groups describe output for that 
topic).
Even if you do not specify a group.id, logs from the GroupCoordinator are printed.
This is misleading.

Code -
{code:java}
Properties props = new Properties();
props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class.getName());
props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "mygroup");

KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(props);
List<TopicPartition> partitions = new ArrayList<>();
partitions.add(new TopicPartition("mytopic", 0));
consumer.assign(partitions);
{code}
Logs -

-- when group.id is specified
{quote}[main] INFO org.apache.kafka.clients.consumer.internals.Fetcher - 
[Consumer clientId=consumer-1, groupId=mygroup] Resetting offset for partition 
mytopic-0 to offset 0.
[main] INFO com.cloudera.kafka.SimpleConsumer - Key: null, Value: [B@7bc1a03d
[main] INFO com.cloudera.kafka.SimpleConsumer - Partition: 0, Offset:0
{quote}
-- when group.id is not specified
{quote}[main] INFO 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer 
clientId=consumer-1, groupId=] Discovered group coordinator hostname:9092 (id: 
2147483647 rack: null)
[main] INFO org.apache.kafka.clients.consumer.internals.Fetcher - [Consumer 
clientId=consumer-1, groupId=] Resetting offset for partition mytopic-0 to 
offset 0.
{quote}





[jira] [Commented] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-11-09 Thread nandini (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17228975#comment-17228975
 ] 

nandini commented on KAFKA-7908:


The bug relates to - https://issues.apache.org/jira/browse/KAFKA-4340

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.3.0, 2.4.0
>
>
> When configuring retention.ms for a topic, the following warning is printed:
> _retention.ms for topic X is set to 180. It is smaller than 
> message.timestamp.difference.max.ms's value 9223372036854775807. This may 
> result in frequent log rolling. (kafka.log.Log)_
>  
> message.timestamp.difference.max.ms has not been configured explicitly, so it 
> has the default value of 9223372036854775807. I haven't seen it mentioned 
> anywhere that this parameter also needs to be configured when retention.ms is 
> configured. Moreover, the default values of these parameters already satisfy 
> retention.ms < message.timestamp.difference.max.ms. So what is the purpose of 
> this warning in this case?
> The warning is generated from this code 
> (core/src/main/scala/kafka/log/Log.scala):
> {code:scala}
> def updateConfig(updatedKeys: Set[String], newConfig: LogConfig): Unit = {
>   if ((updatedKeys.contains(LogConfig.RetentionMsProp)
>       || updatedKeys.contains(LogConfig.MessageTimestampDifferenceMaxMsProp))
>       && topicPartition.partition == 0  // generate warnings only for one partition of each topic
>       && newConfig.retentionMs < newConfig.messageTimestampDifferenceMaxMs)
>     warn(s"${LogConfig.RetentionMsProp} for topic ${topicPartition.topic} is set to ${newConfig.retentionMs}. It is smaller than " +
>       s"${LogConfig.MessageTimestampDifferenceMaxMsProp}'s value ${newConfig.messageTimestampDifferenceMaxMs}. " +
>       s"This may result in frequent log rolling.")
>   this.config = newConfig
> }
> {code}
>  
> Shouldn't the || operand in the if condition be replaced with &&?
>  
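To make the questioned operand concrete, the guard can be modelled as plain booleans (this is a simplified model, not Kafka's actual code; the retention value is illustrative). With ||, updating retention.ms alone fires the warning because message.timestamp.difference.max.ms defaults to Long.MAX_VALUE; with &&, it would fire only when both keys are updated in the same call:

```java
import java.util.Set;

public class RetentionWarnModel {
    // Mirrors the shape of the guard in Log.updateConfig; "orOperand" toggles || vs &&.
    static boolean warns(Set<String> updatedKeys, long retentionMs,
                         long maxTimestampDiffMs, boolean orOperand) {
        boolean keysMatch = orOperand
            ? updatedKeys.contains("retention.ms")
              || updatedKeys.contains("message.timestamp.difference.max.ms")
            : updatedKeys.contains("retention.ms")
              && updatedKeys.contains("message.timestamp.difference.max.ms");
        return keysMatch && retentionMs < maxTimestampDiffMs;
    }

    public static void main(String[] args) {
        long defaultMax = Long.MAX_VALUE; // 9223372036854775807, the default
        Set<String> onlyRetention = Set.of("retention.ms");
        // Current code (||): the warning fires when only retention.ms is set.
        System.out.println(warns(onlyRetention, 1_800_000L, defaultMax, true));  // true
        // With &&: no warning unless both configs are updated together.
        System.out.println(warns(onlyRetention, 1_800_000L, defaultMax, false)); // false
    }
}
```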





[jira] [Commented] (KAFKA-7908) retention.ms and message.timestamp.difference.max.ms are tied

2020-11-09 Thread nandini (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17228514#comment-17228514
 ] 

nandini commented on KAFKA-7908:


This applies to older versions too. Just found this in 0.11.0. The *Affects 
Version/s* field needs to be updated. 

> retention.ms and message.timestamp.difference.max.ms are tied
> -
>
> Key: KAFKA-7908
> URL: https://issues.apache.org/jira/browse/KAFKA-7908
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
>Reporter: Ciprian Pascu
>Priority: Minor
> Fix For: 2.3.0, 2.4.0
>
>





[jira] [Commented] (KAFKA-9296) Correlation id for response () does not match request ()

2020-10-07 Thread nandini (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-9296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209432#comment-17209432
 ] 

nandini commented on KAFKA-9296:


https://issues.apache.org/jira/browse/KAFKA-4669 - This issue was reopened but 
there has been no update.  

> Correlation id for response () does not match request ()
> 
>
> Key: KAFKA-9296
> URL: https://issues.apache.org/jira/browse/KAFKA-9296
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, producer 
>Affects Versions: 0.11.0.2
> Environment: Flink on  k8s
>Reporter: Enhon Bryant
>Priority: Blocker
>  Labels: kafka, producer
>
> The Kafka client and broker I use are both version 0.11.0.2. I use Kafka's 
> producer to write data to the broker and encountered the following exception:
> 2019-12-12 18:12:46,821 ERROR 
> org.apache.kafka.clients.producer.internals.Sender - Uncaught error in kafka 
> producer I/O thread: 
> java.lang.IllegalStateException: Correlation id for response (11715816) does 
> not match request (11715804), request header: 
> \{api_key=0,api_version=3,correlation_id=11715804,client_id=producer-3}
>  at org.apache.kafka.clients.NetworkClient.correlate(NetworkClient.java:752)
>  at 
> org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:561)
>  at 
> org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
>  at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
>  at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:224)
>  at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:162)
>  at java.lang.Thread.run(Thread.java:748)
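The exception comes from the client matching each response against the correlation id of the oldest in-flight request on that connection. A simplified model of that bookkeeping (not the actual NetworkClient; ids are illustrative) shows how a lost or reordered response produces exactly this error:

```java
import java.util.ArrayDeque;

public class CorrelationModel {
    private final ArrayDeque<Integer> inFlight = new ArrayDeque<>();
    private int nextId = 0;

    // Sending a request records its correlation id, assigned sequentially.
    int send() {
        inFlight.addLast(nextId);
        return nextId++;
    }

    // Responses must arrive in request order; a mismatch is fatal to the I/O thread.
    void receive(int responseCorrelationId) {
        int expected = inFlight.pollFirst();
        if (responseCorrelationId != expected) {
            throw new IllegalStateException("Correlation id for response ("
                + responseCorrelationId + ") does not match request (" + expected + ")");
        }
    }

    public static void main(String[] args) {
        CorrelationModel c = new CorrelationModel();
        for (int i = 0; i < 3; i++) c.send(); // in-flight ids: 0, 1, 2
        c.receive(0);                         // ok: matches the oldest request
        try {
            c.receive(2);                     // the response for id 1 never arrived
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

In the real client a mismatch like this usually points at something corrupting or reordering the TCP stream between producer and broker, which is why the reporter's environment details matter.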





[jira] [Commented] (KAFKA-6221) ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic creation

2020-09-13 Thread nandini (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-6221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195157#comment-17195157
 ] 

nandini commented on KAFKA-6221:


https://issues.apache.org/jira/browse/KAFKA-8264

https://issues.apache.org/jira/browse/KAFKA-7965

https://issues.apache.org/jira/browse/KAFKA-8087

https://issues.apache.org/jira/browse/KAFKA-9181

https://issues.apache.org/jira/browse/KAFKA-7976

 

And many other related jiras. Do they address the same issue?

> ReplicaFetcherThread throws UnknownTopicOrPartitionException on topic 
> creation 
> ---
>
> Key: KAFKA-6221
> URL: https://issues.apache.org/jira/browse/KAFKA-6221
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.1.1, 0.10.2.0, 0.10.2.1, 0.11.0.1, 1.0.0
> Environment: RHEL 7
>Reporter: Alex Dunayevsky
>Priority: Minor
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This issue appeared to happen frequently on 0.10.2.0. 
> On 0.10.2.1 and 1.0.0 it is much harder to reproduce. 
> We'll focus on reproducing it on 0.10.2.1 and 1.0.0.
> *TOPOLOGY:* 
>   3 brokers, 1 zk.
> *REPRODUCING STRATEGY:* 
> Create a few dozen topics (say, 40) one by one, each with replication factor 
> 2. The number of partitions generally does not matter but, for easier 
> reproduction, should not be too small (around 30 or so). 
> *CREATE 40 TOPICS:*
> {code:java} for i in {1..40}; do bin/kafka-topics.sh --create --topic 
> "topic${i}_p28_r2" --partitions 28 --replication-factor 2 --zookeeper :2165; 
> done {code}
> *ERRORS*
> {code:java}
> *BROKER 1*
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,853] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,27] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,9] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,3] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,15] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:00,854] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [topic1_p28_r2,21] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> *BROKER 2*
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> [2017-11-15 16:46:36,408] ERROR [ReplicaFetcherThread-0-3], Error for 
> partition [topic20_p28_r2,12] to broker 
> 3:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host