[jira] [Updated] (KAFKA-6930) Update KafkaZkClient debug log

2018-05-23 Thread darion yaphet (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

darion yaphet updated KAFKA-6930:
-
Attachment: snapshot.png

> Update KafkaZkClient debug log
> --
>
> Key: KAFKA-6930
> URL: https://issues.apache.org/jira/browse/KAFKA-6930
> Project: Kafka
>  Issue Type: Improvement
>  Components: core, zkclient
>Affects Versions: 1.1.0
>Reporter: darion yaphet
>Priority: Trivial
> Attachments: [KAFKA-6930]_Update_KafkaZkClient_debug_log.patch, 
> snapshot.png
>
>
> Currently, KafkaZkClient prints data: Array[Byte] directly in its debug 
> log; we should print the data as a String instead. 
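The requested change is small: interpolating a raw Array[Byte] into a log message renders only the array's default toString() (something like "[B@1b6d3586"), so the data should be decoded first. A minimal Java sketch of the idea, with a hypothetical helper name (not Kafka's actual code):

```java
import java.nio.charset.StandardCharsets;

public class ZkDataLogging {
    // Decodes ZooKeeper node data for human-readable debug output instead of
    // relying on byte[]'s default toString() ("[B@<hashcode>").
    static String dataAsString(byte[] data) {
        return data == null ? "null" : new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] data = "{\"controller_epoch\":1}".getBytes(StandardCharsets.UTF_8);
        System.out.println("before: " + data);               // default toString, unreadable
        System.out.println("after:  " + dataAsString(data)); // the actual payload
    }
}
```

Note that this assumes the stored bytes are valid UTF-8, which holds for the JSON payloads Kafka keeps in ZooKeeper.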



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6762) log-cleaner thread terminates due to java.lang.IllegalStateException

2018-05-23 Thread Uwe Eisele (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16486975#comment-16486975
 ] 

Uwe Eisele commented on KAFKA-6762:
---

We are experiencing the same problem. The log cleaner threads of the brokers 
responsible for the __consumer_offsets-9 partition died because of an 
IllegalStateException.
{code:java}
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Beginning cleaning of log 
__consumer_offsets-9. (kafka.log.LogCleaner)
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Building offset map for 
__consumer_offsets-9... (kafka.log.LogCleaner)
[2018-05-22T10:00:27.451+02:00] INFO Cleaner 0: Building offset map for log 
__consumer_offsets-9 for 8 segments in offset range [95646530, 96708067). 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Offset map for log 
__consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning log 
__consumer_offsets-9 (cleaning prior to Wed May 16 16:38:40 CEST 2018, 
discarding tombstones prior to Sun May 06 18:36:02 CEST 2018)... 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning segment 0 in log 
__consumer_offsets-9 (largest timestamp Fri May 04 18:01:11 CEST 2018) into 0, 
discarding deletes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.614+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 262144 bytes to 524288 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.620+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 524288 bytes to 112 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.639+02:00] ERROR [kafka-log-cleaner-thread-0]: Error due 
to (kafka.log.LogCleaner)
java.lang.IllegalStateException: This log contains a message larger than 
maximum allowable size of 112.
 at kafka.log.Cleaner.growBuffers(LogCleaner.scala:675)
 at kafka.log.Cleaner.cleanInto(LogCleaner.scala:627)
 at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
 at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
 at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
 at kafka.log.Cleaner.clean(LogCleaner.scala:438)
 at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
 at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
 at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2018-05-22T10:00:29.640+02:00] INFO [kafka-log-cleaner-thread-0]: Stopped 
(kafka.log.LogCleaner)
{code}
We are running Kafka 1.1.0.
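The failure mode visible in the log above can be sketched with a small Java model of the cleaner's buffer-growth strategy (a hypothetical simplification loosely modeled on Cleaner.growBuffers; names and the cap value are assumptions, not Kafka's actual code): the cleaner doubles its I/O buffer up to the maximum allowed message size, and once a record still does not fit at that cap it throws IllegalStateException, killing the cleaner thread.

```java
public class BufferGrowthSketch {
    // Doubles the buffer up to maxMessageSize; fails once the cap is reached,
    // mirroring "This log contains a message larger than maximum allowable size of ...".
    static int growBuffers(int currentSize, int maxMessageSize) {
        if (currentSize >= maxMessageSize) {
            throw new IllegalStateException(
                "This log contains a message larger than maximum allowable size of "
                + maxMessageSize + ".");
        }
        return Math.min(2 * currentSize, maxMessageSize);
    }

    public static void main(String[] args) {
        int max = 1_000_012;  // hypothetical stand-in for the configured cap
        int size = 262144;    // initial I/O buffer, as in the log above
        size = growBuffers(size, max);   // doubles to 524288
        size = growBuffers(size, max);   // clamped to the cap
        try {
            growBuffers(size, max);      // record still too large: cleaner gives up
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```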

> log-cleaner thread terminates due to java.lang.IllegalStateException
> 
>
> Key: KAFKA-6762
> URL: https://issues.apache.org/jira/browse/KAFKA-6762
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.0.0
> Environment: os: GNU/Linux 
> arch: x86_64 
> Kernel: 4.9.77 
> jvm: OpenJDK 1.8.0
>Reporter: Ricardo Bartolome
>Priority: Major
>
> We are experiencing some problems with the Kafka log-cleaner thread on Kafka 
> 1.0.0. We plan to update this cluster to 1.1.0 next week in order 
> to fix KAFKA-6683, but until then we can only confirm that it happens in 
> 1.0.0.
> log-cleaner thread crashes after a while with the following error:
> {code:java}
> [2018-03-28 11:14:40,199] INFO Cleaner 0: Beginning cleaning of log 
> __consumer_offsets-31. (kafka.log.LogCleaner)
> [2018-03-28 11:14:40,199] INFO Cleaner 0: Building offset map for 
> __consumer_offsets-31... (kafka.log.LogCleaner)
> [2018-03-28 11:14:40,218] INFO Cleaner 0: Building offset map for log 
> __consumer_offsets-31 for 16 segments in offset range [1612869, 14282934). 
> (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,566] INFO Cleaner 0: Offset map for log 
> __consumer_offsets-31 complete. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,566] INFO Cleaner 0: Cleaning log __consumer_offsets-31 
> (cleaning prior to Tue Mar 27 09:25:09 GMT 2018, discarding tombstones prior 
> to Sat Feb 24 11:04:21 GMT 2018
> )... (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,567] INFO Cleaner 0: Cleaning segment 0 in log 
> __consumer_offsets-31 (largest timestamp Fri Feb 23 11:40:54 GMT 2018) into 
> 0, discarding deletes. (kafka.log.LogClea
> ner)
> [2018-03-28 11:14:58,570] INFO Cleaner 0: Growing cleaner I/O buffers from 
> 262144bytes to 524288 bytes. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,576] INFO Cleaner 0: Growing cleaner I/O buffers from 
> 524288bytes to 112 bytes. (kafka.log.LogCleaner)
> [2018-03-28 11:14:58,593] ERROR [kafka-log-cleaner-thread-0]: Error due to 
> (kafka.log.LogCleaner)
> java.lang.IllegalStateException: This log contains a message larger than 
> maximum allowable size of 112.
>   

[jira] [Commented] (KAFKA-4628) Support KTable/GlobalKTable Joins

2018-05-23 Thread Adam Bellemare (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487005#comment-16487005
 ] 

Adam Bellemare commented on KAFKA-4628:
---

[~guozhang]

Any updates on this? I have encountered several more instances of the exact 
same use-case that Dmitry did. Given that it is coming up on the one-year mark 
since the last update, it would be good to know if there are any plans for it. 
Thanks!

> Support KTable/GlobalKTable Joins
> -
>
> Key: KAFKA-4628
> URL: https://issues.apache.org/jira/browse/KAFKA-4628
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Damian Guy
>Priority: Major
>
> In KIP-99 we have added support for GlobalKTables, however we don't currently 
> support KTable/GlobalKTable joins as they require materializing a state store 
> for the join. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)



[jira] [Comment Edited] (KAFKA-6762) log-cleaner thread terminates due to java.lang.IllegalStateException

2018-05-23 Thread Uwe Eisele (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16486975#comment-16486975
 ] 

Uwe Eisele edited comment on KAFKA-6762 at 5/23/18 10:25 AM:
-

We are experiencing the same problem. The log cleaner threads of the brokers 
responsible for the ___consumer_offsets-9_ partition died because of an 
IllegalStateException.

We are running Kafka 1.1.0.

{code:java}
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Beginning cleaning of log 
__consumer_offsets-9. (kafka.log.LogCleaner)
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Building offset map for 
__consumer_offsets-9... (kafka.log.LogCleaner)
[2018-05-22T10:00:27.451+02:00] INFO Cleaner 0: Building offset map for log 
__consumer_offsets-9 for 8 segments in offset range [95646530, 96708067). 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Offset map for log 
__consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning log 
__consumer_offsets-9 (cleaning prior to Wed May 16 16:38:40 CEST 2018, 
discarding tombstones prior to Sun May 06 18:36:02 CEST 2018)... 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning segment 0 in log 
__consumer_offsets-9 (largest timestamp Fri May 04 18:01:11 CEST 2018) into 0, 
discarding deletes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.614+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 262144 bytes to 524288 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.620+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 524288 bytes to 112 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.639+02:00] ERROR [kafka-log-cleaner-thread-0]: Error due 
to (kafka.log.LogCleaner)
java.lang.IllegalStateException: This log contains a message larger than 
maximum allowable size of 112.
    at kafka.log.Cleaner.growBuffers(LogCleaner.scala:675)
    at kafka.log.Cleaner.cleanInto(LogCleaner.scala:627)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
    at kafka.log.Cleaner.clean(LogCleaner.scala:438)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2018-05-22T10:00:29.640+02:00] INFO [kafka-log-cleaner-thread-0]: Stopped 
(kafka.log.LogCleaner)
{code}

It looks like the ___consumer_offsets-9_ partition contains a message with a 
value larger than the _message.max.bytes_ limit of 112 bytes. However, we 
think that this should not be possible.


was (Author: ueisele):
We are experiencing the same problem. The log cleaner threads of brokers which 
are responsible for the __consumer_offsets-9 partition died because of a 
IllegalStateException.

We are running Kafka 1.1.0.

{code:java}
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Beginning cleaning of log 
__consumer_offsets-9. (kafka.log.LogCleaner)
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Building offset map for 
__consumer_offsets-9... (kafka.log.LogCleaner)
[2018-05-22T10:00:27.451+02:00] INFO Cleaner 0: Building offset map for log 
__consumer_offsets-9 for 8 segments in offset range [95646530, 96708067). 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Offset map for log 
__consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning log 
__consumer_offsets-9 (cleaning prior to Wed May 16 16:38:40 CEST 2018, 
discarding tombstones prior to Sun May 06 18:36:02 CEST 2018)... 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning segment 0 in log 
__consumer_offsets-9 (largest timestamp Fri May 04 18:01:11 CEST 2018) into 0, 
discarding deletes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.614+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 262144bytes to 524288 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.620+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 524288bytes to 112 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.639+02:00] ERROR [kafka-log-cleaner-thread-0]: Error due 
to (kafka.log.LogCleaner)
 java.lang.IllegalStateException: This log contains a message larger than 
maximum allowable size of 112.
    at kafka.log.Cleaner.growBuffers(LogCleaner.scala:675)
    at kafka.log.Cleaner.cleanInto(LogCleaner.scala:627)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
    at kafka.log.Cleaner.clean(LogCleaner.scala:438)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2018-05-22T10:00:29.640+02:00] INFO [kafka-log-cleaner-thread-0]: Stopped 
(kafka.log.LogCleaner)
{code}

[jira] [Created] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-05-23 Thread Franco Bonazza (JIRA)
Franco Bonazza created KAFKA-6933:
-

 Summary: Broker reports Corrupted index warnings apparently 
infinitely
 Key: KAFKA-6933
 URL: https://issues.apache.org/jira/browse/KAFKA-6933
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.0.1
Reporter: Franco Bonazza


I'm running into a situation where the server logs continuously show the 
following snippet:
{noformat}
[2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 for 
partition transaction_r10_updates-6 with message format version 2 
(kafka.log.Log)
[2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
for partition transaction_r10_updates-6 (kafka.log.ProducerStateManager)
[2018-05-23 10:58:56,593] INFO Completed load of log transaction_r10_updates-6 
with 74 log segments, log start offset 0 and log end offset 20601420 in 5823 ms 
(kafka.log.Log)
[2018-05-23 10:58:58,761] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) has 
non-zero size but the last offset is 20544956 which is no larger than the base 
offset 20544956.}. deleting 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex and 
rebuilding index... (kafka.log.Log)
[2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
[2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
transaction_r10_updates-15. (kafka.log.Log){noformat}
The setup is the following:

Broker is 1.0.1

There are mirrors from another cluster using the 0.10.2.1 client.

There are Kafka Streams applications and other custom consumers/producers 
using the 1.0.0 client.

 

While it is doing this, the JVM of the broker is up but does not respond, so 
it's impossible to produce, consume, or run any commands.

If I delete all the index files, the WARN turns into an ERROR. Recovery takes 
a long time (1 day last time I tried), but eventually the broker reaches a 
healthy state. Then I start the producers and things are still healthy, but 
when I start the consumers it quickly goes back into the original WARN loop, 
which seems infinite.

 

I couldn't find any references to this problem. It seems to be at least 
mis-reporting the issue, and perhaps it's not actually infinite? I let it loop 
over the WARN for over a day and it never moved past that; if there is 
something really wrong with the state, maybe it should be reported.

The log cleaner log showed a few "too many files open" errors when this 
originally happened, but ulimit has always been set to unlimited, so I'm not 
sure what that error means.
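For clarity, the invariant behind the WARN above can be sketched as follows; this is a hypothetical simplification of the broker's index sanity check, not the actual Kafka code:

```java
public class IndexSanityCheck {

    // A non-empty index whose last entry does not advance past the segment's
    // base offset is treated as corrupt, deleted, and rebuilt from the log.
    static boolean isCorrupt(long indexSizeBytes, long lastOffset, long baseOffset) {
        return indexSizeBytes > 0 && lastOffset <= baseOffset;
    }

    public static void main(String[] args) {
        // Matches the WARN above: non-zero size, lastOffset == baseOffset.
        System.out.println(isCorrupt(4096, 20544956L, 20544956L)); // true
        // A healthy index has entries beyond the base offset.
        System.out.println(isCorrupt(4096, 20545100L, 20544956L)); // false
    }
}
```

This is why an index whose last offset merely equals its base offset trips the check even though no data is necessarily lost: the rebuild path exists precisely for that case.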



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-4290) High CPU caused by timeout overflow in WorkerCoordinator

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487117#comment-16487117
 ] 

ASF GitHub Bot commented on KAFKA-4290:
---

GitHub user thisthat opened a pull request:

https://github.com/apache/activemq/pull/284

Avoid overflow errors with timestamps

Some comparisons with timestamp values are not safe. These comparisons can 
trigger errors that were found in other Apache projects, e.g. KAFKA-4290. 
I changed those comparisons according to what the Java documentation 
recommends, to help prevent such errors.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/thisthat/activemq following_java_rec

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/activemq/pull/284.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #284


commit 92a1926aa52de7d99142458ee74a6bb99cc5a963
Author: giliva 
Date:   2018-05-23T11:35:38Z

Avoid overflow errors - see KAFKA-4290




> High CPU caused by timeout overflow in WorkerCoordinator
> 
>
> Key: KAFKA-4290
> URL: https://issues.apache.org/jira/browse/KAFKA-4290
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
> Fix For: 0.10.1.0, 0.10.2.0
>
>
> The timeout passed to {{WorkerCoordinator.poll()}} can overflow if large 
> enough because we add it to the current time in order to calculate the call's 
> deadline. This shortcuts the poll loop and results in a very tight event loop 
> which can saturate a CPU. We hit this case out of the box because Connect 
> uses a default timeout of {{Long.MAX_VALUE}}.
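The quoted description can be illustrated with a small standalone sketch (hypothetical code, not the actual Connect implementation) of why adding a huge timeout to the current time breaks an absolute deadline check, and why the subtraction-based comparison recommended by the Java documentation survives overflow:

```java
public class DeadlineOverflow {
    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        long timeout = Long.MAX_VALUE;          // Connect's default poll timeout
        long deadline = now + timeout;          // overflows to a negative value

        // Unsafe absolute comparison: the overflowed deadline looks as if it
        // is already in the past, so the poll loop never blocks and spins hot.
        boolean expiredUnsafe = deadline <= now;

        // Safe relative comparison: two's-complement wraparound makes the
        // difference come out right even after the addition overflowed.
        boolean expiredSafe = deadline - now <= 0;

        System.out.println(expiredUnsafe + " " + expiredSafe); // prints "true false"
    }
}
```

The fix in such cases is to compare differences (`deadline - now <= 0`) rather than absolute values (`deadline <= now`), as the `System.nanoTime()` Javadoc advises.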



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (KAFKA-6762) log-cleaner thread terminates due to java.lang.IllegalStateException

2018-05-23 Thread Uwe Eisele (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16486975#comment-16486975
 ] 

Uwe Eisele edited comment on KAFKA-6762 at 5/23/18 12:02 PM:
-

We are experiencing the same problem. The log cleaner thread of the brokers 
which are responsible for the ___consumer_offsets-9_ partition stopped because 
of an IllegalStateException.

We are running Kafka 1.1.0.

{code:java}
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Beginning cleaning of log 
__consumer_offsets-9. (kafka.log.LogCleaner)
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Building offset map for 
__consumer_offsets-9... (kafka.log.LogCleaner)
[2018-05-22T10:00:27.451+02:00] INFO Cleaner 0: Building offset map for log 
__consumer_offsets-9 for 8 segments in offset range [95646530, 96708067). 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Offset map for log 
__consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning log 
__consumer_offsets-9 (cleaning prior to Wed May 16 16:38:40 CEST 2018, 
discarding tombstones prior to Sun May 06 18:36:02 CEST 2018)... 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning segment 0 in log 
__consumer_offsets-9 (largest timestamp Fri May 04 18:01:11 CEST 2018) into 0, 
discarding deletes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.614+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 262144bytes to 524288 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.620+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 524288bytes to 112 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.639+02:00] ERROR [kafka-log-cleaner-thread-0]: Error due 
to (kafka.log.LogCleaner)
 java.lang.IllegalStateException: This log contains a message larger than 
maximum allowable size of 112.
    at kafka.log.Cleaner.growBuffers(LogCleaner.scala:675)
    at kafka.log.Cleaner.cleanInto(LogCleaner.scala:627)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
    at kafka.log.Cleaner.clean(LogCleaner.scala:438)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2018-05-22T10:00:29.640+02:00] INFO [kafka-log-cleaner-thread-0]: Stopped 
(kafka.log.LogCleaner)
{code}

It looks like the ___consumer_offsets-9_ partition contains a message with a 
value that is larger than _message.max.bytes_ of 112 bytes. However, we 
think that this should not be possible.
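The failure mode in the stack trace above can be sketched as follows; this is a hypothetical simplification of the cleaner's buffer-doubling logic (the real `growBuffers` lives in `kafka.log.Cleaner`), with made-up sizes:

```java
public class BufferGrower {

    // Doubles the I/O buffer up to maxMessageSize; a record that claims to be
    // even larger triggers the IllegalStateException seen in the log above.
    static int grow(int currentSize, int neededSize, int maxMessageSize) {
        if (neededSize > maxMessageSize) {
            throw new IllegalStateException(
                "This log contains a message larger than maximum allowable size of "
                    + maxMessageSize);
        }
        int newSize = currentSize;
        while (newSize < neededSize) {
            newSize = Math.min(newSize * 2, maxMessageSize);
        }
        return newSize;
    }

    public static void main(String[] args) {
        // 262144 -> 524288, mirroring the "Growing cleaner I/O buffers" lines.
        System.out.println(grow(262144, 300000, 1000000));
        try {
            grow(524288, 2_000_000, 1_000_000); // record exceeds the cap
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Under this reading, the exception means the cleaner found a batch whose claimed size exceeds the configured maximum, which is why a message larger than _message.max.bytes_ in the partition would be surprising.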


was (Author: ueisele):
We are experiencing the same problem. The log cleaner thread of the brokers 
which are responsible for the ___consumer_offsets-9_ partition died because 
of an IllegalStateException.

We are running Kafka 1.1.0.

{code:java}
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Beginning cleaning of log 
__consumer_offsets-9. (kafka.log.LogCleaner)
[2018-05-22T10:00:27.431+02:00] INFO Cleaner 0: Building offset map for 
__consumer_offsets-9... (kafka.log.LogCleaner)
[2018-05-22T10:00:27.451+02:00] INFO Cleaner 0: Building offset map for log 
__consumer_offsets-9 for 8 segments in offset range [95646530, 96708067). 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Offset map for log 
__consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning log 
__consumer_offsets-9 (cleaning prior to Wed May 16 16:38:40 CEST 2018, 
discarding tombstones prior to Sun May 06 18:36:02 CEST 2018)... 
(kafka.log.LogCleaner)
[2018-05-22T10:00:29.604+02:00] INFO Cleaner 0: Cleaning segment 0 in log 
__consumer_offsets-9 (largest timestamp Fri May 04 18:01:11 CEST 2018) into 0, 
discarding deletes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.614+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 262144bytes to 524288 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.620+02:00] INFO Cleaner 0: Growing cleaner I/O buffers 
from 524288bytes to 112 bytes. (kafka.log.LogCleaner)
[2018-05-22T10:00:29.639+02:00] ERROR [kafka-log-cleaner-thread-0]: Error due 
to (kafka.log.LogCleaner)
 java.lang.IllegalStateException: This log contains a message larger than 
maximum allowable size of 112.
    at kafka.log.Cleaner.growBuffers(LogCleaner.scala:675)
    at kafka.log.Cleaner.cleanInto(LogCleaner.scala:627)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:518)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:462)
    at kafka.log.Cleaner$$anonfun$doClean$4.apply(LogCleaner.scala:461)
    at scala.collection.immutable.List.foreach(List.scala:392)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
    at kafka.log.Cleaner.clean(LogCleaner.scala:438)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
[2018-05-22T10:00:29.640+02:00] INFO [kafka-log-cleaner-thread-0]: Stopped 
(kafka.log.LogCleaner)
{code}

[jira] [Assigned] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer

2018-05-23 Thread Viktor Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi reassigned KAFKA-6923:
-

Assignee: Viktor Somogyi

> Consolidate ExtendedSerializer/Serializer and 
> ExtendedDeserializer/Deserializer
> ---
>
> Key: KAFKA-6923
> URL: https://issues.apache.org/jira/browse/KAFKA-6923
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ismael Juma
>Assignee: Viktor Somogyi
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> The Javadoc of ExtendedDeserializer states:
> {code}
>  * Prefer {@link Deserializer} if access to the headers is not required. Once 
> Kafka drops support for Java 7, the
>  * {@code deserialize()} method introduced by this interface will be added to 
> Deserializer with a default implementation
>  * so that backwards compatibility is maintained. This interface may be 
> deprecated once that happens.
> {code}
> Since we have dropped Java 7 support, we should figure out how to do this. 
> There are compatibility implications, so a KIP is needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6923) Consolidate ExtendedSerializer/Serializer and ExtendedDeserializer/Deserializer

2018-05-23 Thread Viktor Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487200#comment-16487200
 ] 

Viktor Somogyi commented on KAFKA-6923:
---

As far as I know, it is valid to override a default interface method with an 
abstract one, and leaving ExtendedDeserializer as is does just that.
There might be a better approach; I just saw this JIRA and came up with an 
idea :). I'll think about it a bit more and publish a KIP.
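The point about overriding a default method with an abstract one can be demonstrated with a minimal sketch; the interfaces below are simplified stand-ins, not the actual Kafka client API:

```java
public class DefaultOverride {

    interface Deserializer<T> {
        T deserialize(String topic, byte[] data);

        // Java 8 default: the header-aware variant falls back to the plain
        // one, so existing implementations stay source-compatible.
        default T deserialize(String topic, String headers, byte[] data) {
            return deserialize(topic, data);
        }
    }

    // Re-declaring the default method as abstract is legal and forces
    // implementers of this sub-interface to supply the header-aware variant.
    interface ExtendedDeserializer<T> extends Deserializer<T> {
        @Override
        T deserialize(String topic, String headers, byte[] data);
    }

    public static void main(String[] args) {
        Deserializer<Integer> plain = (topic, data) -> data.length;
        // The default method delegates to the two-argument lambda.
        System.out.println(plain.deserialize("t", "headers", new byte[3])); // 3
    }
}
```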

> Consolidate ExtendedSerializer/Serializer and 
> ExtendedDeserializer/Deserializer
> ---
>
> Key: KAFKA-6923
> URL: https://issues.apache.org/jira/browse/KAFKA-6923
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Reporter: Ismael Juma
>Assignee: Viktor Somogyi
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.1.0
>
>
> The Javadoc of ExtendedDeserializer states:
> {code}
>  * Prefer {@link Deserializer} if access to the headers is not required. Once 
> Kafka drops support for Java 7, the
>  * {@code deserialize()} method introduced by this interface will be added to 
> Deserializer with a default implementation
>  * so that backwards compatibility is maintained. This interface may be 
> deprecated once that happens.
> {code}
> Since we have dropped Java 7 support, we should figure out how to do this. 
> There are compatibility implications, so a KIP is needed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-05-23 Thread Franco Bonazza (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Franco Bonazza updated KAFKA-6933:
--
Description: 
I'm running into a situation where the server logs continuously show the 
following snippet:
{noformat}
[2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 for 
partition transaction_r10_updates-6 with message format version 2 
(kafka.log.Log)
[2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
for partition transaction_r10_updates-6 (kafka.log.ProducerStateManager)
[2018-05-23 10:58:56,593] INFO Completed load of log transaction_r10_updates-6 
with 74 log segments, log start offset 0 and log end offset 20601420 in 5823 ms 
(kafka.log.Log)
[2018-05-23 10:58:58,761] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) has 
non-zero size but the last offset is 20544956 which is no larger than the base 
offset 20544956.}. deleting 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex and 
rebuilding index... (kafka.log.Log)
[2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
[2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
transaction_r10_updates-15. (kafka.log.Log){noformat}
The setup is the following:

Broker is 1.0.1

There are mirrors from another cluster using the 0.10.2.1 client.

There are Kafka Streams applications and other custom consumers/producers 
using the 1.0.0 client.

 

While it is doing this, the JVM of the broker is up but does not respond, so 
it's impossible to produce, consume, or run any commands.

If I delete all the index files, the WARN turns into an ERROR. Recovery takes 
a long time (1 day last time I tried), but eventually the broker reaches a 
healthy state. Then I start the producers and things are still healthy, but 
when I start the consumers it quickly goes back into the original WARN loop, 
which seems infinite.

I couldn't find any references to this problem. It seems to be at least 
mis-reporting the issue, and perhaps it's not actually infinite? I let it loop 
over the WARN for over a day and it never moved past that; if there is 
something really wrong with the state, maybe it should be reported.

The log cleaner log showed a few "too many files open" errors when this 
originally happened, but ulimit has always been set to unlimited, so I'm not 
sure what that error means.

  was:
I'm running into a situation where the server logs show continuously the 
following snippet:
{noformat}
[2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 for 
partition transaction_r10_updates-6 with message format version 2 
(kafka.log.Log)
[2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
for partition transaction_r10_updates-6 (kafka.log.ProducerStateManager)
[2018-05-23 10:58:56,593] INFO Completed load of log transaction_r10_updates-6 
with 74 log segments, log start offset 0 and log end offset 20601420 in 5823 ms 
(kafka.log.Log)
[2018-05-23 10:58:58,761] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) has 
non-zero size but the last offset is 20544956 which is no larger than the base 
offset 20544956.}. deleting 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
/data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex and 
rebuilding index... (kafka.log.Log)
[2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
'/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
[2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
transaction_r10_updates-15. (kafka.log.Log){noformat}
The set up is the following,

Broker is 1.0.1

There are mirrors from another cluster using client 0.10.2.1 

There are kafka streams and other custom consumer / producers using 1.0.0 
client.

 

While is doing this the JVM of the broker is up but it doesn't respond so it's 
impossible to produce, consume or run any commands.

If I delete all the index files the WARN turns into an ERROR, which takes a 
long time (1 day last time I tried) 

[jira] [Created] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer

2018-05-23 Thread Sandor Murakozi (JIRA)
Sandor Murakozi created KAFKA-6934:
--

 Summary: Csv reporter doesn't work for ConsoleConsumer
 Key: KAFKA-6934
 URL: https://issues.apache.org/jira/browse/KAFKA-6934
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.0, 2.0.0
Reporter: Sandor Murakozi


Reproduction:

Start a broker listening to localhost:9092.

Start a console consumer with the following options:

{code}

kafka-console-consumer --topic test --bootstrap-server localhost:9092 
--csv-reporter-enabled --metrics-dir /tmp/metrics

{code}

The consumer consumes messages sent to the test topic and (re)creates the 
/tmp/metrics dir, but it does not produce any metric files in that dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer

2018-05-23 Thread Viktor Somogyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viktor Somogyi reassigned KAFKA-6934:
-

Assignee: Viktor Somogyi

> Csv reporter doesn't work for ConsoleConsumer
> -
>
> Key: KAFKA-6934
> URL: https://issues.apache.org/jira/browse/KAFKA-6934
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Sandor Murakozi
>Assignee: Viktor Somogyi
>Priority: Major
>
> Reproduction:
> Start a broker listening to localhost:9092.
> Start a console consumer with the following options:
> {code}
> kafka-console-consumer --topic test --bootstrap-server localhost:9092 
> --csv-reporter-enabled --metrics-dir /tmp/metrics
> {code}
> The consumer consumes messages sent to the test topic, it will (re)create the 
> /tmp/metrics dir, but it will not produce any metrics/files in that dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6934) Csv reporter doesn't work for ConsoleConsumer

2018-05-23 Thread Viktor Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487254#comment-16487254
 ] 

Viktor Somogyi commented on KAFKA-6934:
---

I'll pick this up if you don't mind.

> Csv reporter doesn't work for ConsoleConsumer
> -
>
> Key: KAFKA-6934
> URL: https://issues.apache.org/jira/browse/KAFKA-6934
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 2.0.0
>Reporter: Sandor Murakozi
>Priority: Major
>
> Reproduction:
> Start a broker listening to localhost:9092.
> Start a console consumer with the following options:
> {code}
> kafka-console-consumer --topic test --bootstrap-server localhost:9092 
> --csv-reporter-enabled --metrics-dir /tmp/metrics
> {code}
> The consumer consumes messages sent to the test topic, it will (re)create the 
> /tmp/metrics dir, but it will not produce any metrics/files in that dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6932) Avroconverter does not propagate subjectname stratergy

2018-05-23 Thread Masthan Shaik (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masthan Shaik updated KAFKA-6932:
-
Description: 
The customer is using 
io.confluent.kafka.serializers.subject.TopicRecordNameStrategy for 
AvroConverter, but the AvroConverter defaults to TopicNameStrategy.

It looks like the AvroConverter doesn't initialize or pass the strategy into 
the deserializer.

POST with input 
{"schema":"{\"type\":\"record\",\"name\":\"Channel\",\"namespace\":\"com.thing\",\"doc\":\"Channel
 Object\",\"fields\":[

{\"name\":\"channelId\",\"type\":\"string\",\"doc\":\"Unique channel id for a 
single time series\"}

]}"} to

in the above log it is using newgen.changelog.v1-key instead of the 
newgen.changelog.v1-
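For reference, a worker-config sketch of how the strategy is supposed to be set. The property names below are assumptions based on Confluent's Avro serializer configs and may not match every release, and whether AvroConverter actually forwards them to the deserializer is exactly what this issue reports:

```properties
# Hypothetical Connect worker config; verify key names against your
# Schema Registry / AvroConverter version.
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
value.converter.value.subject.name.strategy=io.confluent.kafka.serializers.subject.TopicRecordNameStrategy
```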

  was:
Thecustomer is using 
io.confluent.kafka.serializers.subject.TopicRecordNameStrategy for 
AvroConverter but the avroconverter is defaulting to TopicNameStratergy

It looks like the avroconverter doesn't initialize or pass the strategy into 
the deserializer.

POST with input 
{"schema":"{\"type\":\"record\",\"name\":\"ChannelKey\",\"namespace\":\"com.uptake\",\"doc\":\"Channel
 Object\",\"fields\":[

{\"name\":\"channelId\",\"type\":\"string\",\"doc\":\"Unique channel id for a 
single time series\"}

]}"} to

in the above log it is using replatform.changelog.v1-key instead of the 
replatform.changelog.v1-


> Avroconverter does not propagate subjectname stratergy
> --
>
> Key: KAFKA-6932
> URL: https://issues.apache.org/jira/browse/KAFKA-6932
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 1.1.0
>Reporter: Masthan Shaik
>Priority: Major
>
> Thecustomer is using 
> io.confluent.kafka.serializers.subject.TopicRecordNameStrategy for 
> AvroConverter but the avroconverter is defaulting to TopicNameStratergy
> It looks like the avroconverter doesn't initialize or pass the strategy into 
> the deserializer.
> POST with input 
> {"schema":"{\"type\":\"record\",\"name\":\"Channel\",\"namespace\":\"com.thing\",\"doc\":\"Channel
>  Object\",\"fields\":[
> {\"name\":\"channelId\",\"type\":\"string\",\"doc\":\"Unique channel id for a 
> single time series\"}
> ]}"} to
> in the above log it is using newgen.changelog.v1-key instead of the 
> newgen.changelog.v1-



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-05-23 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487333#comment-16487333
 ] 

Ismael Juma commented on KAFKA-6933:


Thanks for the report. Is this a production environment? Would you be able to 
test 1.1.0? There have been some improvements to the recovery logic in 1.1.0.

> Broker reports Corrupted index warnings apparently infinitely
> -
>
> Key: KAFKA-6933
> URL: https://issues.apache.org/jira/browse/KAFKA-6933
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Franco Bonazza
>Priority: Major
>
> I'm running into a situation where the server logs show continuously the 
> following snippet:
> {noformat}
> [2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 
> for partition transaction_r10_updates-6 with message format version 2 
> (kafka.log.Log)
> [2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
> for partition transaction_r10_updates-6 (kafka.log.ProducerStateManager)
> [2018-05-23 10:58:56,593] INFO Completed load of log 
> transaction_r10_updates-6 with 74 log segments, log start offset 0 and log 
> end offset 20601420 in 5823 ms (kafka.log.Log)
> [2018-05-23 10:58:58,761] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) 
> has non-zero size but the last offset is 20544956 which is no larger than the 
> base offset 20544956.}. deleting 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex 
> and rebuilding index... (kafka.log.Log)
> [2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
> for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
> [2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
> transaction_r10_updates-15. (kafka.log.Log){noformat}
> The set up is the following,
> Broker is 1.0.1
> There are mirrors from another cluster using client 0.10.2.1 
> There are kafka streams and other custom consumer / producers using 1.0.0 
> client.
>  
> While is doing this the JVM of the broker is up but it doesn't respond so 
> it's impossible to produce, consume or run any commands.
> If I delete all the index files the WARN turns into an ERROR, which takes a 
> long time (1 day last time I tried) but eventually it goes into a healthy 
> state, then I start the producers and things are still healthy, but when I 
> start the consumers it quickly goes into the original WARN loop, which seems 
> infinite.
>  
> I couldn't find any references to the problem, it seems to be at least 
> missreporting the issue, and perhaps it's not infinite? I let it loop over 
> the WARN for over a day and it never moved past that, and if there was 
> something really wrong with the state maybe it should be reported.
> The log cleaner log showed a few "too many files open" when it originally 
> happened but ulimit has always been set to unlimited so I'm not sure what 
> that error means.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6935) KIP-295 Add Streams Config for Optional Optimization

2018-05-23 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-6935:
--

 Summary: KIP-295 Add Streams Config for Optional Optimization
 Key: KAFKA-6935
 URL: https://issues.apache.org/jira/browse/KAFKA-6935
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 2.0.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6935) KIP-295 Add Streams Config for Optional Optimization

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487414#comment-16487414
 ] 

ASF GitHub Bot commented on KAFKA-6935:
---

bbejeck opened a new pull request #5071: KAFKA-6935: Add config for allowing 
optional optimization
URL: https://github.com/apache/kafka/pull/5071
 
 
   Adding configuration to `StreamsConfig` to make topology optimization 
optional.
   
   Added unit tests verifying the default value, setting a correct value, and 
failure on invalid values.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
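For context, the opt-in shape of such a config can be sketched as below; the key and values are assumptions taken from KIP-295 and should be verified against the released StreamsConfig:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class OptimizationConfigSketch {

    // Assumed names from KIP-295 (cf. StreamsConfig.TOPOLOGY_OPTIMIZATION).
    static final String TOPOLOGY_OPTIMIZATION = "topology.optimization";
    static final List<String> VALID_VALUES = Arrays.asList("all", "none");

    // Mirrors the "failure on invalid values" behavior the unit tests verify.
    static String validated(String value) {
        if (!VALID_VALUES.contains(value)) {
            throw new IllegalArgumentException("Invalid optimization config: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Optimization stays opt-in; leaving the key unset means "none".
        props.put(TOPOLOGY_OPTIMIZATION, validated("all"));
        System.out.println(props.getProperty(TOPOLOGY_OPTIMIZATION)); // all
    }
}
```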
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> KIP-295 Add Streams Config for Optional Optimization
> 
>
> Key: KAFKA-6935
> URL: https://issues.apache.org/jira/browse/KAFKA-6935
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6933) Broker reports Corrupted index warnings apparently infinitely

2018-05-23 Thread Franco Bonazza (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487489#comment-16487489
 ] 

Franco Bonazza commented on KAFKA-6933:
---

It's a "sandbox" environment that part of the company perceives as production; 
it would be quite painful to restore from scratch, and it is meant to roll 
over / expand into a production environment soon.

I was first going to try upgrading the consumers to 1.0.1 (as my current 
suspicion is that the 1.0.0 clients are to blame) and ultimately upgrade to 
1.1.0. I was just wondering whether there is any known issue at play, since 
I'm struggling to understand the root cause or what is consistently 
re-creating the problem.

I was also trying to avoid piling up issues by migrating the broker, which 
could add other problems, but I'm very close to just doing that.

> Broker reports Corrupted index warnings apparently infinitely
> -
>
> Key: KAFKA-6933
> URL: https://issues.apache.org/jira/browse/KAFKA-6933
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Franco Bonazza
>Priority: Major
>
> I'm running into a situation where the server logs show continuously the 
> following snippet:
> {noformat}
> [2018-05-23 10:58:56,590] INFO Loading producer state from offset 20601420 
> for partition transaction_r10_updates-6 with message format version 2 
> (kafka.log.Log)
> [2018-05-23 10:58:56,592] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-6/20601420.snapshot' 
> for partition transaction_r10_updates-6 (kafka.log.ProducerStateManager)
> [2018-05-23 10:58:56,593] INFO Completed load of log 
> transaction_r10_updates-6 with 74 log segments, log start offset 0 and log 
> end offset 20601420 in 5823 ms (kafka.log.Log)
> [2018-05-23 10:58:58,761] WARN Found a corrupted index file due to 
> requirement failed: Corrupt index found, index file 
> (/data/0/kafka-logs/transaction_r10_updates-15/20544956.index) 
> has non-zero size but the last offset is 20544956 which is no larger than the 
> base offset 20544956.}. deleting 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.timeindex, 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.index, and 
> /data/0/kafka-logs/transaction_r10_updates-15/20544956.txnindex 
> and rebuilding index... (kafka.log.Log)
> [2018-05-23 10:58:58,763] INFO Loading producer state from snapshot file 
> '/data/0/kafka-logs/transaction_r10_updates-15/20544956.snapshot' 
> for partition transaction_r10_updates-15 (kafka.log.ProducerStateManager)
> [2018-05-23 10:59:02,202] INFO Recovering unflushed segment 20544956 in log 
> transaction_r10_updates-15. (kafka.log.Log){noformat}
> The set up is the following,
> Broker is 1.0.1
> There are mirrors from another cluster using client 0.10.2.1 
> There are kafka streams and other custom consumer / producers using 1.0.0 
> client.
>  
> While it is doing this, the JVM of the broker is up but it doesn't respond, so 
> it's impossible to produce, consume or run any commands.
> If I delete all the index files the WARN turns into an ERROR, which takes a 
> long time (1 day last time I tried) but eventually it goes into a healthy 
> state, then I start the producers and things are still healthy, but when I 
> start the consumers it quickly goes into the original WARN loop, which seems 
> infinite.
>  
> I couldn't find any references to the problem. It seems to be at least 
> misreporting the issue, and perhaps it's not infinite? I let it loop over 
> the WARN for over a day and it never moved past that; if there was 
> something really wrong with the state, maybe it should be reported.
> The log cleaner log showed a few "too many open files" errors when it originally 
> happened, but ulimit has always been set to unlimited so I'm not sure what 
> that error means.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6776) Connect Rest Extension Plugin

2018-05-23 Thread Magesh kumar Nandakumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Magesh kumar Nandakumar updated KAFKA-6776:
---
Fix Version/s: 2.0.0

> Connect Rest Extension Plugin
> -
>
> Key: KAFKA-6776
> URL: https://issues.apache.org/jira/browse/KAFKA-6776
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect
>Reporter: Magesh kumar Nandakumar
>Assignee: Magesh kumar Nandakumar
>Priority: Major
> Fix For: 2.0.0
>
>
> This covers the connect Rest extension plugin covered at KIP-285  
> https://cwiki.apache.org/confluence/display/KAFKA/KIP+285+-+Connect+Rest+Extension+Plugin



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6356) UnknownTopicOrPartitionException & NotLeaderForPartitionException and log deletion happening with retention bytes kept at -1.

2018-05-23 Thread Manikumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6356.
--
Resolution: Not A Problem

Old data is discarded after the log retention period or when the log reaches the 
retention size. In this case, you may need to increase the retention period. 
The replication errors look normal. Please reopen if you think the issue still 
exists.

Post these kinds of queries to the us...@kafka.apache.org mailing list 
(http://kafka.apache.org/contact) for a quicker response.

> UnknownTopicOrPartitionException & NotLeaderForPartitionException and log 
> deletion happening with retention bytes kept at -1.
> -
>
> Key: KAFKA-6356
> URL: https://issues.apache.org/jira/browse/KAFKA-6356
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
> Environment: Cent OS 7.2,
> HDD : 2Tb,
> CPUs: 56 cores,
> RAM : 256GB
>Reporter: kaushik srinivas
>Priority: Major
> Attachments: configs.txt, stderr_b0, stderr_b1, stderr_b2, stdout_b0, 
> stdout_b1, stdout_b2, topic_description, topic_offsets
>
>
> Facing issues in kafka topic with partitions and replication factor of 3.
> Config used :
> No of partitions : 20
> replication factor : 3
> No of brokers : 3
> Memory for broker : 32GB
> Heap for broker : 12GB
> Producer is run to produce data for 20 partitions of a single topic.
> But observed that partitions for which the leader is one of the 
> broker(broker-1), the offsets are never incremented and also we see log file 
> with 0MB size in the broker disk.
> Seeing below error in the brokers :
> error 1:
> 2017-12-13 07:11:11,191] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [test2,5] to broker 
> 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This 
> server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
> error 2:
> [2017-12-11 12:19:41,599] ERROR [ReplicaFetcherThread-0-2], Error for 
> partition [test1,13] to broker 
> 2:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server 
> is not the leader for that topic-partition. 
> (kafka.server.ReplicaFetcherThread)
> Attaching,
> 1. error and std out files of all the brokers.
> 2. kafka config used.
> 3. offsets and topic description.
> Retention bytes was kept to -1 and retention period 96 hours.
> But still observing some of the log files deleting at the broker,
> from logs :
> [2017-12-11 12:20:20,586] INFO Deleting index 
> /var/lib/mesos/slave/slaves/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-S7/frameworks/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-0006/executors/ckafka__5f085d0c-e296-40f0-a686-8953dd14e4c6/runs/506a1ce7-23d1-45ea-bb7c-84e015405285/kafka-broker-data/broker-1/test1-12/.timeindex
>  (kafka.log.TimeIndex)
> [2017-12-11 12:20:20,587] INFO Deleted log for partition [test1,12] in 
> /var/lib/mesos/slave/slaves/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-S7/frameworks/7b319cf4-f06e-4a35-a6fe-fd4fcc0548e6-0006/executors/ckafka__5f085d0c-e296-40f0-a686-8953dd14e4c6/runs/506a1ce7-23d1-45ea-bb7c-84e015405285/kafka-broker-data/broker-1/test1-12.
>  (kafka.log.LogManager)
> We are expecting the logs to be never delete if retention bytes set to -1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6935) KIP-295 Add Streams Config for Optional Optimization

2018-05-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6935:
---
Labels: kip  (was: )

> KIP-295 Add Streams Config for Optional Optimization
> 
>
> Key: KAFKA-6935
> URL: https://issues.apache.org/jira/browse/KAFKA-6935
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: kip
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6935) KIP-295 Add Streams Config for Optional Optimization

2018-05-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6935?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6935:
---
Description: KIP-295: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-295%3A+Add+Streams+Configuration+Allowing+for+Optional+Topology+Optimization

> KIP-295 Add Streams Config for Optional Optimization
> 
>
> Key: KAFKA-6935
> URL: https://issues.apache.org/jira/browse/KAFKA-6935
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: kip
>
> KIP-295: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-295%3A+Add+Streams+Configuration+Allowing+for+Optional+Topology+Optimization



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6774) Improve default groupId behavior in consumer

2018-05-23 Thread Vahid Hashemian (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16487767#comment-16487767
 ] 

Vahid Hashemian commented on KAFKA-6774:


[~hachikuji] Yes that makes sense. I believe this has been noted in the KIP as 
well.

> Improve default groupId behavior in consumer
> 
>
> Key: KAFKA-6774
> URL: https://issues.apache.org/jira/browse/KAFKA-6774
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Assignee: Vahid Hashemian
>Priority: Major
>  Labels: needs-kip
>
> At the moment, the default groupId in the consumer is "". If you try to use 
> this to subscribe() to a topic, the broker will reject the group as invalid. 
> On the other hand, if you use it with assign(), then the user will be able to 
> fetch and commit offsets using the empty groupId. Probably 99% of the time, 
> this is not what the user expects. Instead you would probably expect that if 
> no groupId is provided, then no committed offsets will be fetched at all and 
> we'll just use the auto reset behavior if we don't have a current position.
> Here are two potential solutions (both requiring a KIP):
> 1. Change the default to null. We will preserve the current behavior for 
> subscribe(). When using assign(), we will not bother fetching committed 
> offsets for the null groupId, and any attempt to commit offsets will raise an 
> error. The user can still use the empty groupId, but they have to specify it 
> explicitly.
> 2. Keep the current default, but change the consumer to treat this value as 
> if it were null as described in option 1. The argument for this behavior is 
> that using the empty groupId to commit offsets is inherently a dangerous 
> practice and should not be permitted. We'd have to convince ourselves that 
> we're fine not needing to allow the empty groupId for backwards compatibility 
> though.
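The two options in the description boil down to a small decision rule. A minimal sketch of option 1 (null default), where the class and method names are illustrative assumptions, not Kafka APIs:

```java
// Illustrative sketch of "option 1": the default group.id becomes null, and a
// null group disables offset fetch/commit entirely. Names are assumptions.
final class GroupIdPolicy {
    // Committing or fetching offsets requires some groupId; the empty string
    // is still allowed, but only when the user sets it explicitly.
    static boolean canCommitOffsets(String groupId) {
        return groupId != null;
    }

    // subscribe() needs a non-empty groupId; the broker rejects "".
    static boolean canSubscribe(String groupId) {
        return groupId != null && !groupId.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(canCommitOffsets(null));   // false: no committed offsets by default
        System.out.println(canSubscribe("my-group")); // true
    }
}
```

Option 2 would apply the same rule, but with "" treated as if it were null.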



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6936) Scala API Wrapper for Streams uses default serializer for table aggregate

2018-05-23 Thread Daniel Heinrich (JIRA)
Daniel Heinrich created KAFKA-6936:
--

 Summary: Scala API Wrapper for Streams uses default serializer for 
table aggregate
 Key: KAFKA-6936
 URL: https://issues.apache.org/jira/browse/KAFKA-6936
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.0.0
Reporter: Daniel Heinrich


One of the goals of the Scala API is to not fall back on the configured default 
serializer, but to let the compiler provide serializers through implicits.

The aggregate method on KGroupedStream fails to achieve this goal.

Compared to the Java API, this behavior is very surprising, because no other 
stream operation falls back to the default serializer, and a developer assumes 
that the compiler checks for the correct serializer type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-23 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-6937:
--

 Summary: In-sync replica delayed during fetch if replica throttle 
is exceeded
 Key: KAFKA-6937
 URL: https://issues.apache.org/jira/browse/KAFKA-6937
 Project: Kafka
  Issue Type: Bug
Reporter: Jun Rao
Assignee: Jun Rao


When replication throttling is enabled, in-sync replica's traffic should never 
be throttled. However, in DelayedFetch.tryComplete(), we incorrectly delay the 
completion of an in-sync replica fetch request if replication throttling is 
engaged. 
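The intended behavior can be sketched as a single predicate (an illustrative sketch, not the actual DelayedFetch.tryComplete() code; all names here are assumptions):

```java
// Illustrative sketch of the fix direction: replication quota throttling
// should only delay fetches from out-of-sync followers catching up, never
// fetches from in-sync replicas. Names are assumptions, not Kafka internals.
final class FetchDelayPolicy {
    static boolean shouldThrottle(boolean fromFollower,
                                  boolean followerInSync,
                                  boolean quotaExceeded) {
        // In-sync replica traffic is exempt even when the quota is exceeded.
        return fromFollower && !followerInSync && quotaExceeded;
    }

    public static void main(String[] args) {
        // The bug: an in-sync follower's fetch was delayed when the quota was
        // exceeded, stalling acks=all producers and delaying consumers.
        System.out.println(shouldThrottle(true, true, true));  // false: never delay ISR
    }
}
```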



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488077#comment-16488077
 ] 

ASF GitHub Bot commented on KAFKA-6937:
---

junrao opened a new pull request #5074: KAFKA-6937: In-sync replica delayed 
during fetch if replica throttle …
URL: https://github.com/apache/kafka/pull/5074
 
 
   …is exceeded
   
   * Added a unit test.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> In-sync replica delayed during fetch if replica throttle is exceeded
> 
>
> Key: KAFKA-6937
> URL: https://issues.apache.org/jira/browse/KAFKA-6937
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
>
> When replication throttling is enabled, in-sync replica's traffic should 
> never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
> delay the completion of an in-sync replica fetch request if replication 
> throttling is engaged. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-23 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-6937:
---
Description: 
When replication throttling is enabled, in-sync replica's traffic should never 
be throttled. However, in DelayedFetch.tryComplete(), we incorrectly delay the 
completion of an in-sync replica fetch request if replication throttling is 
engaged. 

The impact is that the producer may see increased latency if acks = all. The 
delivery of the message to the consumer may also be delayed.

  was:When replication throttling is enabled, in-sync replica's traffic should 
never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
delay the completion of an in-sync replica fetch request if replication 
throttling is engaged. 


> In-sync replica delayed during fetch if replica throttle is exceeded
> 
>
> Key: KAFKA-6937
> URL: https://issues.apache.org/jira/browse/KAFKA-6937
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
>
> When replication throttling is enabled, in-sync replica's traffic should 
> never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
> delay the completion of an in-sync replica fetch request if replication 
> throttling is engaged. 
> The impact is that the producer may see increased latency if acks = all. The 
> delivery of the message to the consumer may also be delayed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-23 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-6937:
---
Affects Version/s: 0.11.0.1
   1.1.0
   1.0.1

> In-sync replica delayed during fetch if replica throttle is exceeded
> 
>
> Key: KAFKA-6937
> URL: https://issues.apache.org/jira/browse/KAFKA-6937
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.1, 1.1.0, 1.0.1
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
>
> When replication throttling is enabled, in-sync replica's traffic should 
> never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
> delay the completion of an in-sync replica fetch request if replication 
> throttling is engaged. 
> The impact is that the producer may see increased latency if acks = all. The 
> delivery of the message to the consumer may also be delayed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6938) Add documentation for accessing Headers on Kafka Streams Processor API

2018-05-23 Thread Jorge Quilcate (JIRA)
Jorge Quilcate created KAFKA-6938:
-

 Summary: Add documentation for accessing Headers on Kafka Streams 
Processor API
 Key: KAFKA-6938
 URL: https://issues.apache.org/jira/browse/KAFKA-6938
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Jorge Quilcate
Assignee: Jorge Quilcate


Document changes implemented on KIP-244.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6938) Add documentation for accessing Headers on Kafka Streams Processor API

2018-05-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6938:
---
Affects Version/s: 2.0.0

> Add documentation for accessing Headers on Kafka Streams Processor API
> --
>
> Key: KAFKA-6938
> URL: https://issues.apache.org/jira/browse/KAFKA-6938
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>Priority: Major
>
> Document changes implemented on KIP-244.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6938) Add documentation for accessing Headers on Kafka Streams Processor API

2018-05-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6938:
---
Component/s: documentation

> Add documentation for accessing Headers on Kafka Streams Processor API
> --
>
> Key: KAFKA-6938
> URL: https://issues.apache.org/jira/browse/KAFKA-6938
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 2.0.0
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>Priority: Major
> Fix For: 2.0.0
>
>
> Document changes implemented on KIP-244.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6938) Add documentation for accessing Headers on Kafka Streams Processor API

2018-05-23 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-6938:
---
Fix Version/s: 2.0.0

> Add documentation for accessing Headers on Kafka Streams Processor API
> --
>
> Key: KAFKA-6938
> URL: https://issues.apache.org/jira/browse/KAFKA-6938
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation, streams
>Affects Versions: 2.0.0
>Reporter: Jorge Quilcate
>Assignee: Jorge Quilcate
>Priority: Major
> Fix For: 2.0.0
>
>
> Document changes implemented on KIP-244.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6738) Kafka Connect handling of bad data

2018-05-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488234#comment-16488234
 ] 

ASF GitHub Bot commented on KAFKA-6738:
---

wicknicks closed pull request #5010: KAFKA-6738: (WIP) Error Handling in Connect
URL: https://github.com/apache/kafka/pull/5010
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/checkstyle/suppressions.xml b/checkstyle/suppressions.xml
index 64258bf7b07..4f31ce264c7 100644
--- a/checkstyle/suppressions.xml
+++ b/checkstyle/suppressions.xml
@@ -80,7 +80,7 @@
   
files="(KafkaConfigBackingStore|RequestResponseTest|WorkerSinkTaskTest).java"/>
 
 
+  files="(WorkerSourceTask|WorkerSinkTask).java"/>
 
  
props) {
@@ -139,6 +143,13 @@ public ConnectorConfig(Plugins plugins, ConfigDef 
configDef, Map
 );
 }
 
+/**
+ * @return properties to configure error handlers and reporters.
+ */
+public Map errorHandlerConfig() {
+return originalsWithPrefix(ERROR_HANDLING_CONFIG + ".");
+}
+
 @Override
 public Object get(String key) {
 return enrichedConfig.get(key);
@@ -166,6 +177,25 @@ public Object get(String key) {
 return transformations;
 }
 
+/**
+ * @return an ordered list of stages describing the transformations in 
this connector. The order is specified by
+ * {@link #TRANSFORMS_CONFIG}.
+ */
+public List transformationAsStages() {
+final List transformAliases = getList(TRANSFORMS_CONFIG);
+List stages = new ArrayList<>();
+for (String alias : transformAliases) {
+final String prefix = TRANSFORMS_CONFIG + "." + alias + ".";
+stages.add(Stage.newBuilder(StageType.TRANSFORMATION)
+.setExecutingClass(getClass(prefix + "type"))
+.setConfig(originalsWithPrefix(prefix))
+.build()
+);
+}
+
+return stages;
+}
+
 /**
  * Returns an enriched {@link ConfigDef} building upon the {@code 
ConfigDef}, using the current configuration specified in {@code props} as an 
input.
  * 
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
index e1d8b1f2e85..3d9baf3e541 100644
--- 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
+++ 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/TransformationChain.java
@@ -17,6 +17,9 @@
 package org.apache.kafka.connect.runtime;
 
 import org.apache.kafka.connect.connector.ConnectRecord;
+import org.apache.kafka.connect.runtime.errors.OperationExecutor;
+import org.apache.kafka.connect.runtime.errors.ProcessingContext;
+import org.apache.kafka.connect.runtime.errors.impl.NoopExecutor;
 import org.apache.kafka.connect.transforms.Transformation;
 
 import java.util.Collections;
@@ -32,10 +35,29 @@ public TransformationChain(List> 
transformations) {
 }
 
 public R apply(R record) {
+return apply(record, NoopExecutor.INSTANCE, null);
+}
+
+public R apply(R record, OperationExecutor operationExecutor, 
ProcessingContext processingContext) {
 if (transformations.isEmpty()) return record;
 
-for (Transformation transformation : transformations) {
-record = transformation.apply(record);
+for (final Transformation transformation : transformations) {
+final R current = record;
+
+// set the current record
+processingContext.setRecord(current);
+
+// execute the operation
+record = operationExecutor.execute(new 
OperationExecutor.Operation() {
+@Override
+public R apply() {
+return transformation.apply(current);
+}
+}, null, processingContext);
+
+// move to the next stage
+processingContext.next();
+
 if (record == null) break;
 }
 
@@ -62,7 +84,7 @@ public int hashCode() {
 }
 
 public static <R extends ConnectRecord<R>> TransformationChain<R> noOp() {
-return new TransformationChain<R>(Collections.<Transformation<R>>emptyList());
+return new TransformationChain<>(Collections.<Transformation<R>>emptyList());
 }
 
 }
diff --git 
a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java 
b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
index 1c6465855ff..dd560e28467 100644
--- a/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Worker.java
+++ b/connect/runtime/src/main/java/org/apache/kafka/connect/

[jira] [Created] (KAFKA-6939) Change the default of log.message.timestamp.difference.max.ms to 500 years

2018-05-23 Thread Badai Aqrandista (JIRA)
Badai Aqrandista created KAFKA-6939:
---

 Summary: Change the default of 
log.message.timestamp.difference.max.ms to 500 years
 Key: KAFKA-6939
 URL: https://issues.apache.org/jira/browse/KAFKA-6939
 Project: Kafka
  Issue Type: Improvement
Reporter: Badai Aqrandista


If a producer incorrectly provides its timestamp in microseconds (not 
milliseconds), the record is accepted by default and can cause the broker to 
roll segment files continuously. On a heavily used broker, this generates a lot 
of index files, which then causes the broker to hit `vm.max_map_count`.

So I'd like to suggest changing the default of 
log.message.timestamp.difference.max.ms to 15768000000000 (500 years * 365 days 
* 86400 seconds * 1000). This would reject microsecond timestamps from 
producers and still allow most historical data to be stored in Kafka.
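The arithmetic and its effect can be checked directly (illustrative values, not broker code):

```java
// Sketch of the proposed default (~500 years in milliseconds) and why it
// catches microsecond timestamps. Values and names are illustrative only.
final class TimestampCheck {
    static final long MAX_DIFF_MS = 500L * 365 * 86400 * 1000; // 15768000000000

    static boolean accepted(long recordTimestampMs, long brokerNowMs) {
        return Math.abs(recordTimestampMs - brokerNowMs) <= MAX_DIFF_MS;
    }

    public static void main(String[] args) {
        long nowMs = 1527000000000L;   // May 2018, in milliseconds
        long microTs = nowMs * 1000L;  // same instant mistakenly sent in microseconds
        System.out.println(accepted(nowMs, nowMs));   // true
        System.out.println(accepted(microTs, nowMs)); // false: ~1000x too far in the future
    }
}
```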




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-6937) In-sync replica delayed during fetch if replica throttle is exceeded

2018-05-23 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-6937:
---
Component/s: core

> In-sync replica delayed during fetch if replica throttle is exceeded
> 
>
> Key: KAFKA-6937
> URL: https://issues.apache.org/jira/browse/KAFKA-6937
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.11.0.1, 1.1.0, 1.0.1
>Reporter: Jun Rao
>Assignee: Jun Rao
>Priority: Major
>
> When replication throttling is enabled, in-sync replica's traffic should 
> never be throttled. However, in DelayedFetch.tryComplete(), we incorrectly 
> delay the completion of an in-sync replica fetch request if replication 
> throttling is engaged. 
> The impact is that the producer may see increased latency if acks = all. The 
> delivery of the message to the consumer may also be delayed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6936) Scala API Wrapper for Streams uses default serializer for table aggregate

2018-05-23 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16488381#comment-16488381
 ] 

Guozhang Wang commented on KAFKA-6936:
--

Is this PR going to resolve your issue? 
https://github.com/apache/kafka/pull/5066

> Scala API Wrapper for Streams uses default serializer for table aggregate
> -
>
> Key: KAFKA-6936
> URL: https://issues.apache.org/jira/browse/KAFKA-6936
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.0.0
>Reporter: Daniel Heinrich
>Priority: Major
>
> One of the goals of the Scala API is to not fall back on the configured 
> default serializer, but to let the compiler provide serializers through implicits.
> The aggregate method on KGroupedStream fails to achieve this goal.
> Compared to the Java API, this behavior is very surprising, because no other 
> stream operation falls back to the default serializer, and a developer assumes 
> that the compiler checks for the correct serializer type.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6940) Kafka Cluster and Zookeeper ensemble configuration with SASL authentication

2018-05-23 Thread Shashank Jain (JIRA)
Shashank Jain created KAFKA-6940:


 Summary: Kafka Cluster and Zookeeper ensemble configuration with 
SASL authentication
 Key: KAFKA-6940
 URL: https://issues.apache.org/jira/browse/KAFKA-6940
 Project: Kafka
  Issue Type: Task
  Components: consumer, KafkaConnect, log, producer , security, zkclient
Affects Versions: 0.11.0.2
 Environment: PRE Production
Reporter: Shashank Jain


Hi All, 
 
 
I have a working Kafka cluster and ZooKeeper ensemble, but after integrating 
SASL authentication I am facing the exceptions below:
 
 
Zookeeper:- 
 
 
2018-05-23 07:39:59,476 [myid:1] - INFO  [ProcessThread(sid:1 cport:-1):: ] - 
Got user-level KeeperException when processing sessionid:0x301cae0b3480002 
type:delete cxid:0x48 zxid:0x2004e txntype:-1 reqpath:n/a Error 
Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for 
/admin/preferred_replica_election
2018-05-23 07:40:39,240 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x200b4f13c190006 type:create cxid:0x20 zxid:0x20052 
txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists 
for /brokers
2018-05-23 07:40:39,240 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x200b4f13c190006 type:create cxid:0x21 zxid:0x20053 
txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = 
NodeExists for /brokers/ids
2018-05-23 07:41:00,864 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x301cae0b3480004 type:create cxid:0x20 zxid:0x20058 
txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists 
for /brokers
2018-05-23 07:41:00,864 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x301cae0b3480004 type:create cxid:0x21 zxid:0x20059 
txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = 
NodeExists for /brokers/ids
2018-05-23 07:41:28,456 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@487] - Processed session termination for 
sessionid: 0x200b4f13c190002
2018-05-23 07:41:29,563 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@487] - Processed session termination for 
sessionid: 0x301cae0b3480002
2018-05-23 07:41:29,569 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x200b4f13c190006 type:create cxid:0x2d zxid:0x2005f 
txntype:-1 reqpath:n/a Error Path:/controller Error:KeeperErrorCode = 
NodeExists for /controller
2018-05-23 07:41:29,679 [myid:1] - INFO  [ProcessThread(sid:1 
cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when 
processing sessionid:0x301cae0b3480004 type:delete cxid:0x4e zxid:0x20061 
txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election 
Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
 
 
Kafka:- 
 
[2018-05-23 09:06:31,969] ERROR [ReplicaFetcherThread-0-1]: Error for partition 
[23MAY,0] to broker 
1:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
 
 
 
ERROR [ReplicaFetcherThread-0-2]: Current offset 142474 for partition [23MAY,1] 
out of range; reset offset to 142478 (kafka.server.ReplicaFetcherThread)
 
 
ERROR [ReplicaFetcherThread-0-2]: Error for partition [23MAY,2] to broker 
2:org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is 
not the leader for that topic-partition. (kafka.server.ReplicaFetcherThread)
 
 
 
Below are my configuration:- 
 
 
Zookeeper:- 
 
 java.env
SERVER_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/conf/ZK_jaas.conf"
 
 
ZK_jaas.conf
Server
 
{ org.apache.zookeeper.server.auth.DigestLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
 };
 
QuorumServer {
       org.apache.zookeeper.server.auth.DigestLoginModule required
       user_test="test";
};
 
QuorumLearner {
       org.apache.zookeeper.server.auth.DigestLoginModule required
       username="test"
       password="test";
};
 
 
zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
 
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
 
# The number of ticks that can pass between
# sending a request and getting an acknowledgment
syncLimit=5
 
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
#dataDir=/zookeeper/data
dataDir=/zookeeper/zookeeper-3.4.12/data
 
#  dataLogDir === >     where you