[jira] [Commented] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-22 Thread Manikumar Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432252#comment-15432252
 ] 

Manikumar Reddy commented on KAFKA-4076:


You are storing the Kafka data logs in the /tmp folder. This is not a good practice,
since the OS may clean up the tmp space.
I suggest changing the Kafka log.dirs setting to some other valid directory.
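
For example, a minimal server.properties change (the path below is only
illustrative; use any persistent directory writable by the Kafka user):

    # server.properties -- keep data logs out of /tmp (example path)
    log.dirs=/var/lib/kafka/data

After creating the directory, restart the broker so it picks up the new location.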

> Kafka broker shuts down due to irrecoverable IO error
> -
>
> Key: KAFKA-4076
> URL: https://issues.apache.org/jira/browse/KAFKA-4076
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Anyun 
>
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-48'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
> at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
> at kafka.log.Log.roll(Log.scala:627)
> at kafka.log.Log.maybeRoll(Log.scala:602)
> at kafka.log.Log.append(Log.scala:357)
> ... 24 more





[jira] [Commented] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-22 Thread Anyun (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432235#comment-15432235
 ] 

Anyun  commented on KAFKA-4076:
---

The Kafka server sometimes shuts down, but only server.log shows the error given in the
Description.

> Kafka broker shuts down due to irrecoverable IO error
> -
>
> Key: KAFKA-4076
> URL: https://issues.apache.org/jira/browse/KAFKA-4076
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 0.9.0.0
>Reporter: Anyun 
>
> kafka.common.KafkaStorageException: I/O exception in append to log 
> '__consumer_offsets-48'
> at kafka.log.Log.append(Log.scala:318)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
> at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
> at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
> at 
> kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
> at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
> at 
> kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
> at 
> kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
> at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
> at java.lang.Thread.run(Thread.java:724)
> Caused by: java.io.FileNotFoundException: 
> /tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
> file or directory)
> at java.io.RandomAccessFile.open(Native Method)
> at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
> at 
> kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at 
> kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
> at kafka.log.Log.roll(Log.scala:627)
> at kafka.log.Log.maybeRoll(Log.scala:602)
> at kafka.log.Log.append(Log.scala:357)
> ... 24 more





[jira] [Created] (KAFKA-4076) Kafka broker shuts down due to irrecoverable IO error

2016-08-22 Thread Anyun (JIRA)
Anyun  created KAFKA-4076:
-

 Summary: Kafka broker shuts down due to irrecoverable IO error
 Key: KAFKA-4076
 URL: https://issues.apache.org/jira/browse/KAFKA-4076
 Project: Kafka
  Issue Type: Bug
  Components: log
Affects Versions: 0.9.0.0
Reporter: Anyun 



kafka.common.KafkaStorageException: I/O exception in append to log 
'__consumer_offsets-48'
at kafka.log.Log.append(Log.scala:318)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:442)
at kafka.cluster.Partition$$anonfun$9.apply(Partition.scala:428)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:268)
at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:428)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:401)
at 
kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:386)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.Map$Map1.foreach(Map.scala:109)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at 
kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:386)
at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:322)
at 
kafka.coordinator.GroupMetadataManager.store(GroupMetadataManager.scala:227)
at 
kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
at 
kafka.coordinator.GroupCoordinator$$anonfun$doSyncGroup$3.apply(GroupCoordinator.scala:312)
at scala.Option.foreach(Option.scala:236)
at 
kafka.coordinator.GroupCoordinator.doSyncGroup(GroupCoordinator.scala:312)
at 
kafka.coordinator.GroupCoordinator.handleSyncGroup(GroupCoordinator.scala:247)
at kafka.server.KafkaApis.handleSyncGroupRequest(KafkaApis.scala:819)
at kafka.server.KafkaApis.handle(KafkaApis.scala:82)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: 
/tmp/kafka-logs-new/__consumer_offsets-48/.index (No such 
file or directory)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:277)
at kafka.log.OffsetIndex$$anonfun$resize$1.apply(OffsetIndex.scala:276)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.log.OffsetIndex.resize(OffsetIndex.scala:276)
at 
kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply$mcV$sp(OffsetIndex.scala:265)
at 
kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
at 
kafka.log.OffsetIndex$$anonfun$trimToValidSize$1.apply(OffsetIndex.scala:265)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:264)
at kafka.log.Log.roll(Log.scala:627)
at kafka.log.Log.maybeRoll(Log.scala:602)
at kafka.log.Log.append(Log.scala:357)
... 24 more





Build failed in Jenkins: kafka-trunk-jdk8 #832

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3916; Check for disconnects properly before sending from the

--
[...truncated 10986 lines...]

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldReturnEmptyListIfStoreExistsButIsNotOfTypeValueStore PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed STARTED

org.apache.kafka.streams.state.internals.StreamThreadStateStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfWindowStoreClosed PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldNotGetValuesFromOtherStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldNotGetValuesFromOtherStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldReturnEmptyIteratorIfNoData STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldReturnEmptyIteratorIfNoData PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFindValueForKeyWhenMultiStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFindValueForKeyWhenMultiStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFetchValuesFromWindowStore STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyWindowStoreTest > 
shouldFetchValuesFromWindowStore PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnValueIfExists STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnValueIfExists PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldNotGetValuesFromOtherStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldNotGetValuesFromOtherStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldGetApproximateEntriesAcrossAllStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldGetApproximateEntriesAcrossAllStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnLongMaxValueOnOverflow STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnLongMaxValueOnOverflow PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnNullIfKeyDoesntExist STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldReturnNullIfKeyDoesntExist PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRange STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRange PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldFindValueForKeyWhenMultiStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldFindValueForKeyWhenMultiStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnAll STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnAll PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnGet STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnGet PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRangeAcrossMultipleKVStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportRangeAcrossMultipleKVStores PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnRange STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldThrowInvalidStoreExceptionIfNoStoresExistOnRange PASSED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultipleStores STARTED

org.apache.kafka.streams.state.internals.CompositeReadOnlyKeyValueStoreTest > 
shouldSupportAllAcrossMultip

Build failed in Jenkins: kafka-trunk-jdk7 #1492

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-3916; Check for disconnects properly before sending from the

--
[...truncated 12105 lines...]
org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > testRestore 
PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.RocksDBKeyValueStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testEvict 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > testSize 
PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutIfAbsent PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestoreWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testRestore PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRange PASSED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes STARTED

org.apache.kafka.streams.state.internals.InMemoryLRUCacheStoreTest > 
testPutGetRangeWithDefaultSerdes PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldThrowInvalidStoreExceptionIfNoStoreOfTypeFound PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindWindowStores PASSED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores STARTED

org.apache.kafka.streams.state.internals.WrappingStoreProviderTest > 
shouldFindKeyValueStores PASSED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization STARTED

org.apache.kafka.streams.state.internals.WindowStoreUtilsTest > 
testSerialization PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnKVStoreWhenIsWindowStore PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfKVStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnNullIfWindowStoreDoesntExist PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnWindowStoreWhenIsKVStore STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldNotReturnWindowStoreWhenIsKVStore PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnKVStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists STARTED

org.apache.kafka.streams.state.internals.QueryableStoreProviderTest > 
shouldReturnWindowStoreWhenItExists PASSED

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
STARTED

org.apache.kafka.streams.state.internals.OffsetCheckpointTest > testReadWrite 
PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch STARTED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetch PASSED

org.apache.kafka.streams.state.internals.RocksDBWindowStoreTest > 
testPutAndFetchBefore STARTED

org.apache.k

[jira] [Commented] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432136#comment-15432136
 ] 

ASF GitHub Bot commented on KAFKA-4073:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1773


> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)





[jira] [Resolved] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-4073.

Resolution: Fixed

Issue resolved by pull request 1773
[https://github.com/apache/kafka/pull/1773]

> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.0.2, 0.10.1.0
>
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)





[GitHub] kafka pull request #1773: KAFKA-4073: MirrorMaker should handle messages wit...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1773




[GitHub] kafka pull request #1736: MINOR: Refactor TopologyBuilder with ApplicationID...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1736




[GitHub] kafka pull request #1773: KAFKA-4073: MirrorMaker should handle messages wit...

2016-08-22 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1773

KAFKA-4073: MirrorMaker should handle messages without timestamp correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-4073-mirror-maker-timestamps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1773.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1773


commit a85742cd7f0e55ce1eb9e82e3528817e0dea7adc
Author: Ismael Juma 
Date:   2016-08-23T02:25:31Z

Pass `null` as timestamp if source timestamp is `NO_TIMESTAMP`
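
A rough Scala sketch of what that change implies for the MirrorMaker message
handler (illustrative only, not the actual patch): map the NO_TIMESTAMP
sentinel (-1) to null so the ProducerRecord constructor does not reject it and
the producer assigns its own timestamp instead.

    import org.apache.kafka.clients.consumer.ConsumerRecord
    import org.apache.kafka.clients.producer.ProducerRecord

    def toProducerRecord(record: ConsumerRecord[Array[Byte], Array[Byte]])
        : ProducerRecord[Array[Byte], Array[Byte]] = {
      // -1 means the source record carries no timestamp; pass null instead.
      val timestamp: java.lang.Long =
        if (record.timestamp == -1L) null else java.lang.Long.valueOf(record.timestamp)
      new ProducerRecord[Array[Byte], Array[Byte]](
        record.topic, null, timestamp, record.key, record.value)
    }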






[jira] [Commented] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432007#comment-15432007
 ] 

ASF GitHub Bot commented on KAFKA-4073:
---

GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/1773

KAFKA-4073: MirrorMaker should handle messages without timestamp correctly



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka kafka-4073-mirror-maker-timestamps

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1773.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1773


commit a85742cd7f0e55ce1eb9e82e3528817e0dea7adc
Author: Ismael Juma 
Date:   2016-08-23T02:25:31Z

Pass `null` as timestamp if source timestamp is `NO_TIMESTAMP`




> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)





[jira] [Resolved] (KAFKA-3916) Connection from controller to broker disconnects

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-3916.

   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1734
[https://github.com/apache/kafka/pull/1734]

> Connection from controller to broker disconnects
> 
>
> Key: KAFKA-3916
> URL: https://issues.apache.org/jira/browse/KAFKA-3916
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Dave Powell
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> We recently upgraded from 0.8.2.1 to 0.9.0.1. Since then, several times per 
> day, the controllers in our clusters have their connection to all brokers 
> disconnected, and then successfully reconnected a few hundred ms later. Each 
> time this occurs we see a brief spike in our 99th percentile produce and 
> consume times, reaching several hundred ms.
> Here is an example of what we're seeing in the controller.log:
> {code}
> [2016-06-28 14:15:35,416] WARN [Controller-151-to-broker-160-send-thread], 
> Controller 151 epoch 106 fails to send request {…} to broker Node(160, 
> broker.160.hostname, 9092). Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.io.IOException: Connection to 160 was disconnected before the response 
> was read
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:87)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:84)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:84)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:80)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:180)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> ... one each for all brokers (including the controller) ...
>  [2016-06-28 14:15:35,721] INFO [Controller-151-to-broker-160-send-thread], 
> Controller 151 connected to Node(160, broker.160.hostname, 9092) for sending 
> state change requests (kafka.controller.RequestSendThread)
> … one each for all brokers (including the controller) ...
> {code}





[jira] [Commented] (KAFKA-3916) Connection from controller to broker disconnects

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432004#comment-15432004
 ] 

ASF GitHub Bot commented on KAFKA-3916:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1734


> Connection from controller to broker disconnects
> 
>
> Key: KAFKA-3916
> URL: https://issues.apache.org/jira/browse/KAFKA-3916
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Dave Powell
>Assignee: Jason Gustafson
> Fix For: 0.10.1.0
>
>
> We recently upgraded from 0.8.2.1 to 0.9.0.1. Since then, several times per 
> day, the controllers in our clusters have their connection to all brokers 
> disconnected, and then successfully reconnected a few hundred ms later. Each 
> time this occurs we see a brief spike in our 99th percentile produce and 
> consume times, reaching several hundred ms.
> Here is an example of what we're seeing in the controller.log:
> {code}
> [2016-06-28 14:15:35,416] WARN [Controller-151-to-broker-160-send-thread], 
> Controller 151 epoch 106 fails to send request {…} to broker Node(160, 
> broker.160.hostname, 9092). Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.io.IOException: Connection to 160 was disconnected before the response 
> was read
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:87)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:84)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:84)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:80)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:180)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> ... one each for all brokers (including the controller) ...
>  [2016-06-28 14:15:35,721] INFO [Controller-151-to-broker-160-send-thread], 
> Controller 151 connected to Node(160, broker.160.hostname, 9092) for sending 
> state change requests (kafka.controller.RequestSendThread)
> … one each for all brokers (including the controller) ...
> {code}





[GitHub] kafka pull request #1734: KAFKA-3916: Check for disconnects properly before ...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1734




[jira] [Assigned] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-4073:
--

Assignee: Ismael Juma

> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
>Assignee: Ismael Juma
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)





[jira] [Updated] (KAFKA-3916) Connection from controller to broker disconnects

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-3916:
---
Assignee: Jason Gustafson

> Connection from controller to broker disconnects
> 
>
> Key: KAFKA-3916
> URL: https://issues.apache.org/jira/browse/KAFKA-3916
> Project: Kafka
>  Issue Type: Bug
>  Components: controller
>Affects Versions: 0.9.0.1
>Reporter: Dave Powell
>Assignee: Jason Gustafson
>
> We recently upgraded from 0.8.2.1 to 0.9.0.1. Since then, several times per 
> day, the controllers in our clusters have their connection to all brokers 
> disconnected, and then successfully reconnected a few hundred ms later. Each 
> time this occurs we see a brief spike in our 99th percentile produce and 
> consume times, reaching several hundred ms.
> Here is an example of what we're seeing in the controller.log:
> {code}
> [2016-06-28 14:15:35,416] WARN [Controller-151-to-broker-160-send-thread], 
> Controller 151 epoch 106 fails to send request {…} to broker Node(160, 
> broker.160.hostname, 9092). Reconnecting to broker. 
> (kafka.controller.RequestSendThread)
> java.io.IOException: Connection to 160 was disconnected before the response 
> was read
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:87)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:84)
> at scala.Option.foreach(Option.scala:236)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:84)
> at 
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:80)
> at 
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:129)
> at 
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at 
> kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:80)
> at 
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:180)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> ... one each for all brokers (including the controller) ...
>  [2016-06-28 14:15:35,721] INFO [Controller-151-to-broker-160-send-thread], 
> Controller 151 connected to Node(160, broker.160.hostname, 9092) for sending 
> state change requests (kafka.controller.RequestSendThread)
> … one each for all brokers (including the controller) ...
> {code}





Build failed in Jenkins: kafka-0.10.0-jdk7 #193

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] MINOR: add slf4jlog4j to streams example

--
[...truncated 5750 lines...]
org.apache.kafka.streams.kstream.internals.KTableImplTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableImplTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > testKTable 
PASSED

org.apache.kafka.streams.kstream.internals.KTableMapValuesTest > 
testValueGetter PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testNotSendingOldValue PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > 
testSkipNullOnMaterialization PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testKTable PASSED

org.apache.kafka.streams.kstream.internals.KTableFilterTest > testValueGetter 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamTransformValuesTest > 
testTransform PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > 
testToWithNullValueSerdeDoesntNPE PASSED

org.apache.kafka.streams.kstream.internals.KStreamImplTest > testNumProcesses 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapTest > testMap PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueWithProvidedSerde PASSED

org.apache.kafka.streams.kstream.internals.KeyValuePrinterProcessorTest > 
testPrintKeyValueDefaultSerde PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testOuterJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > 
testNotJoinable PASSED

org.apache.kafka.streams.kstream.internals.KStreamKTableLeftJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamMapValuesTest > 
testFlatMapValues PASSED

org.apache.kafka.streams.kstream.internals.KTableMapKeysTest > 
testMapKeysConvertingToStream PASSED

org.apache.kafka.streams.kstream.internals.KTableForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
testGroupedCountOccurences PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testLeftJoin PASSED

org.apache.kafka.streams.kstream.internals.KStreamKStreamLeftJoinTest > 
testWindowing PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > testJoin 
PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testNotSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KTableKTableJoinTest > 
testSendingOldValues PASSED

org.apache.kafka.streams.kstream.internals.KStreamForeachTest > testForeach 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > testAggBasic 
PASSED

org.apache.kafka.streams.kstream.internals.KTableAggregateTest > 
testAggRepartition PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilterNot 
PASSED

org.apache.kafka.streams.kstream.internals.KStreamFilterTest > testFilter PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testAggBasic PASSED

org.apache.kafka.streams.kstream.internals.KStreamWindowAggregateTest > 
testJoin PASSED

org.apache.kafka.streams.kstream.internals.WindowedStreamPartitionerTest > 
testCopartitioning PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetConsumerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > defaultSerdeShouldBeConfigured 
PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetProducerConfigs PASSED

org.apache.kafka.streams.StreamsConfigTest > 
shouldSupportMultipleBootstrapServers PASSED

org.apache.kafka.streams.StreamsConfigTest > testGetRestoreConsumerConfigs 
PASSED

org.apache.kafka.streams.processor.DefaultPartitionGrouperTest > testGrouping 
PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASS

Re: [DISCUSS] KIP-4 ACL Admin Schema

2016-08-22 Thread Jun Rao
Grant,

For your second comment: when propagating the ZK changes to the ACL cache, we
will also update the latest ZK version. So, if ACL requests are not issued
too quickly, the conditional updates to ZK should always be successful in
one try.

One potential benefit of serving the ACL request from any broker is for
resource managers such as Mesos. Mesos DCOS exposes a service endpoint to
external hosts. An external request by default can only be
issued to the service endpoint, which routes the request to an arbitrary
broker. Enabling an external request to go to a particular broker directly
(e.g., controller) requires more work.

Thanks,

Jun

On Thu, Aug 18, 2016 at 9:46 AM, Grant Henke  wrote:

> Thanks for the feedback. Below are some responses:
>
>
> > I don't have any problem with breaking things into 2 requests if it's
> > necessary or optimal. But can you explain why separate requests "vastly
> > simplifies the broker side implementation"? It doesn't seem like it
> should
> > be particularly complex to process each ACL change in order.
>
>
> You are right, it isn't super difficult to process ACL changes in order.
> The simplicity I was thinking about comes from not needing to group and
> re-order them like in the current implementation. It also removes the
> "action" enumerator which also simplifies things a bit. I am open to both
> ideas (single alter processing in order vs separate requests) as the trade
> offs aren't drastic.
>
> I am leaning towards separate requests for a few reasons:
>
>- The Admin API will likely submit separate requests for delete and add
>regardless
>   - I expect most admin apis will have a removeAcl/s and addAcl/s api
>   that fires a request immediately. I don't expect "batching", explicit or
>   implicit, to be all that common.
>- Independent concerns and capabilities
>   - Separating delete into its own request makes it easier to define
>   "delete all" type behavior.
>   - I think 2 simple requests may be easier to document and understand
>   by client developers than 1 more complicated one.
>- Matches the Authorizer interface
>   - Separate requests matches authorizer interface more closely and
>   still allows for actions on collections of ACLs instead of one
> call per ACL
>   via Authorizer.addAcls(acls: Set[Acl], resource: Resource) and
>   Authorizer.removeAcls(acls: Set[Acl], resource: Resource)
>
> Hmm, even if all ACL requests are directed to the controller, concurrency
> > is only guaranteed on the same connection. For requests coming from
> > different connections, the order that they get processed on the broker is
> > non-deterministic.
>
>
> I was thinking less about making sure the exact order of requests is
> handled correctly, since each client will likely get a response between
> each request before sending another. It's more about ensuring local
> state/cache is accurate and that there is no way 2 nodes can have different
> ACLs which they think are correct. Good implementations will handle this,
> but may take a performance hit.
>
> Perhaps I am overthinking it and the SimpleAclAuthorizer is the only one
> that would be affected by this (and likely rarely because volume of ACL
> write requests should not be high). The SimpleAclAuthorizer is eventually
> consistent between instances. It writes optimistically with the cached zk
> node version while writing a complete list of ACLs (not just adding
> removing single nodes for acls). Below is a concrete demonstration of the
> impact I am thinking about with the SimpleAclAuthorizer:
>
> If we force all writes to go through one instance, then follow-up writes for a
> resource (ignoring the first call, which warms the cache) would:
>
>1. Call addAcls
>2. Call updateResourceAcls combining the current cached acls and the new
>acls
>3. Write the result to Zookeeper via conditional write on the cached
>version
>4. Success 1 remote call
>
> If requests can go through any instance, then follow-up writes for a
> resource may:
>
>1. Call addAcls
>2. Call updateResourceAcls combining the current cached acls and the new
>acls
>   1. If no cached acls read from Zookeeper
>3. Write the result to Zookeeper via conditional write on the cached
>version
>   1. If the cached version is wrong due to a write on another
>   instance read from Zookeeper
>   2. Rebuild the final ACLs list
>   3. Repeat until the write is successful
>4. Success in 1 or 4 (or more) remote calls
>
> It looks like the Sentry implementation would not have this issue and the
> Ranger implementation doesn't support modifying ACLs anyway (must use the
> Ranger UI/API).
>
> I wanted to explain my original thoughts, but I am happy to remove the
> controller constraint, given that the SimpleAclAuthorizer appears to be the only
> implementation (of those I know of) that is affected.
>
> Thank you,
> Grant
>
>
>
>
>
>
> On Sat, Aug 13, 2016 at 5:41 PM, Ewen
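
A rough Scala sketch of the version-conditional update loop described in the
quoted steps above (the store trait and names below are hypothetical, not the
actual SimpleAclAuthorizer code): each writer merges its ACLs into the cached
set and retries whenever a concurrent write from another instance bumps the ZK
node version.

    case class VersionedAcls(acls: Set[String], zkVersion: Int)

    // Hypothetical read / conditional-write primitives over the ACL store.
    trait AclStore {
      def read(resource: String): VersionedAcls
      // Succeeds only if the stored version still equals expectedVersion.
      def writeIfVersion(resource: String, acls: Set[String], expectedVersion: Int): Boolean
    }

    def addAcls(store: AclStore, resource: String, newAcls: Set[String]): Unit = {
      var cached = store.read(resource)            // warm or refresh the cache
      var done = false
      while (!done) {
        val merged = cached.acls ++ newAcls        // rebuild the full ACL list
        done = store.writeIfVersion(resource, merged, cached.zkVersion)
        if (!done) cached = store.read(resource)   // lost the race; reload and retry
      }
    }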

[GitHub] kafka pull request #1772: MINOR: Update Kafka configuration documentation to...

2016-08-22 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/1772

MINOR: Update Kafka configuration documentation to use kafka-configs.…

…sh, instead of deprecated kafka-topics.sh --alter

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka MinorKafkaConfigsDoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1772.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1772


commit a8eb6a2e1420173bc129844b28fda7932d5f7d26
Author: Ashish Singh 
Date:   2016-08-23T00:22:44Z

MINOR: Update Kafka configuration documentation to use kafka-configs.sh, 
instead of deprecated kafka-topics.sh --alter






Build failed in Jenkins: kafka-trunk-jdk7 #1491

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] KAFKA-4066; Fix NPE in consumer due to multi-threaded updates

[ismael] MINOR: Fix typos in security section

[wangguoz] MINOR: improve Streams application reset tool to make sure 
application

[ismael] MINOR: add slf4jlog4j to streams example

[cshapi] KAFKA-4051: Use nanosecond clock for timers in broker

[wangguoz] KAFKA-4049: Fix transient failure in RegexSourceIntegrationTest

[ismael] KAFKA-4032; Uncaught exceptions when autocreating topics

--
[...truncated 1268 lines...]
kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
STARTED

kafka.server.LogRecoveryTest > testHWCheckpointWithFailuresSingleLogSegment 
PASSED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslPlaintextReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testValidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testErrorCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests STARTED

kafka.server.CreateTopicsRequestTest > testInvalidCreateTopicsRequests PASSED

kafka.server.CreateTopicsRequestTest > testNotController STARTED

kafka.server.CreateTopicsRequestTest > testNotController PASSED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK STARTED

kafka.server.ServerStartupTest > testBrokerStateRunningAfterZK PASSED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot STARTED

kafka.server.ServerStartupTest > testBrokerCreatesZKChroot PASSED

kafka.server.ServerStartupTest > testConflictBrokerRegistration STARTED

kafka.server.ServerStartupTest > testConflictBrokerRegistration PASSED

kafka.server.ServerStartupTest > testBrokerSelfAware STARTED

kafka.server.ServerStartupTest > testBrokerSelfAware PASSED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

kafka.server.ApiVersionsRequestTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest STARTED

kafka.server.ApiVersionsRequestTest > testApiVersionsRequest PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForSlowFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers STARTED

kafka.server.IsrExpirationTest > testIsrExpirationForStuckFollowers PASSED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade STARTED

kafka.server.IsrExpirationTest > testIsrExpirationIfNoFetchRequestMade PASSED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK STARTED

kafka.server.AdvertiseBrokerTest > testBrokerAdvertiseToZK PASSED

kafka.server.MetadataRequestTest > testReplicaDownResponse STARTED

kafka.server.MetadataRequestTest > testReplicaDownResponse PASSED

kafka.server.MetadataRequestTest > testRack STARTED

kafka.server.MetadataRequestTest > testRack PASSED

kafka.server.MetadataRequestTest > testIsInternal STARTED

kafka.server.MetadataRequestTest > testIsInternal PASSED

kafka.server.MetadataRequestTest > testControllerId STARTED

kafka.server.MetadataRequestTest > testControllerId PASSED

kafka.server.MetadataRequestTest > testAllTopicsRequest STARTED

kafka.server.MetadataRequestTest > testAllTopicsRequest PASSED

kafka.server.MetadataRequestTest > testNoTopicsRequest STARTED

kafka.server.MetadataRequestTest > testNoTopicsRequest PASSED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol STARTED

kafka.server.MetadataCacheTest > 
getTopicMetadataWithNonSupportedSecurityProtocol PASSED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataIsrNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadata STARTED

kafka.server.MetadataCacheTest > getTopicMetadata PASSED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable STARTED

kafka.server.MetadataCacheTest > getTopicMetadataReplicaNotAvailable PASSED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
STARTED

kafka.server.MetadataCacheTest > getTopicMetadataPartitionLeaderNotAvailable 
PASSED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
STARTED

kafka.server.MetadataCacheTest > getAliveBrokersShouldNotBeMutatedByUpdateCache 
PASSED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics STARTED

kafka.server.MetadataCacheTest > getTopicMetadataNonExistingTopics PASSED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread STARTED

kafka.server.SaslSslReplicaFetchTest > testReplicaFetcherThread PASSED

kafka.ser

[jira] [Commented] (KAFKA-2894) WorkerSinkTask doesn't handle rewinding offsets on rebalance

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431812#comment-15431812
 ] 

ASF GitHub Bot commented on KAFKA-2894:
---

GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/1771

KAFKA-2894: WorkerSinkTask should rewind offsets on rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-2894-rewind-offsets-on-rebalance

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1771


commit d14eff6084bafcf5014c8309703faafd96fe7071
Author: Konstantine Karantasis 
Date:   2016-08-22T23:30:27Z

KAFKA-2894: WorkerSinkTask should rewind offsets on rebalance




> WorkerSinkTask doesn't handle rewinding offsets on rebalance
> 
>
> Key: KAFKA-2894
> URL: https://issues.apache.org/jira/browse/KAFKA-2894
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.0
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>Priority: Blocker
> Fix For: 0.10.1.0
>
>
> rewind() is only invoked at the beginning of each poll(). This means that if 
> a rebalance occurs in the poll, it's feasible to get data that doesn't match 
> a request to change offsets during the rebalance. I think the consumer will 
> hold on to consumed data across the rebalance if it is reassigned the same 
> offset, so there may already be data ready to be delivered. Additionally we 
> may already have data in an incomplete messageBatch that should be discarded 
> when the rewind is requested.
> While connectors that care about this (i.e. ones that manage their own 
> offsets) can handle this correctly by tracking the offsets they're expecting 
> to see, it's a hassle, error-prone, and pretty unintuitive.
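For a connector that manages its own offsets, the workaround amounts to re-asserting
the expected offsets whenever partitions are (re)assigned. Below is a minimal,
hypothetical sketch using the public Connect SinkTask API; the class and field names
are illustrative and this is not the actual change in the linked pull request.

{code}
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

// Illustrative only: a sink task that tracks the offsets it has durably
// written and asks the framework to rewind to them after every rebalance.
public class OffsetTrackingSinkTask extends SinkTask {
    private final Map<TopicPartition, Long> storedOffsets = new HashMap<>();

    @Override
    public void start(Map<String, String> props) { }

    @Override
    public void open(Collection<TopicPartition> partitions) {
        // Called after a rebalance: re-assert the connector-managed offsets so
        // that data fetched before the rebalance is not delivered stale.
        Map<TopicPartition, Long> rewindTo = new HashMap<>();
        for (TopicPartition tp : partitions) {
            Long offset = storedOffsets.get(tp);
            if (offset != null)
                rewindTo.put(tp, offset);
        }
        if (!rewindTo.isEmpty())
            context.offset(rewindTo);
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            // Write the record to the external system, then remember the next
            // offset expected for its partition.
            storedOffsets.put(
                new TopicPartition(record.topic(), record.kafkaPartition()),
                record.kafkaOffset() + 1);
        }
    }

    @Override
    public void stop() { }

    @Override
    public String version() { return "0.0.0-example"; }
}
{code}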



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1771: KAFKA-2894: WorkerSinkTask should rewind offsets o...

2016-08-22 Thread kkonstantine
GitHub user kkonstantine opened a pull request:

https://github.com/apache/kafka/pull/1771

KAFKA-2894: WorkerSinkTask should rewind offsets on rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kkonstantine/kafka 
KAFKA-2894-rewind-offsets-on-rebalance

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1771.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1771


commit d14eff6084bafcf5014c8309703faafd96fe7071
Author: Konstantine Karantasis 
Date:   2016-08-22T23:30:27Z

KAFKA-2894: WorkerSinkTask should rewind offsets on rebalance




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #831

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[ismael] MINOR: Fix typos in security section

[wangguoz] MINOR: improve Streams application reset tool to make sure 
application

[ismael] MINOR: add slf4jlog4j to streams example

[cshapi] KAFKA-4051: Use nanosecond clock for timers in broker

[wangguoz] KAFKA-4049: Fix transient failure in RegexSourceIntegrationTest

[ismael] KAFKA-4032; Uncaught exceptions when autocreating topics

--
[...truncated 3395 lines...]

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata 
STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.PrimitiveApiTest > testMultiProduce STARTED

kafka.integration.PrimitiveApiTest > testMultiProduce PASSED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch STARTED

kafka.integration.PrimitiveApiTest > testDefaultEncoderProducerAndFetch PASSED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize 
STARTED

kafka.integration.PrimitiveApiTest > testFetchRequestCanProperlySerialize PASSED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests STARTED

kafka.integration.PrimitiveApiTest > testPipelinedProduceRequests PASSED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch STARTED

kafka.integration.PrimitiveApiTest > testProduceAndMultiFetch PASSED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression STARTED

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic STARTED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest STARTED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
STARTED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic STARTED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata STARTED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAutoCreateTopicWithInvalidReplication PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown STARTED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
STARTED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
STARTED

kafka.integration.AutoOf

Build failed in Jenkins: kafka-trunk-jdk7 #1490

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3894; log cleaner can partially clean a segment

--
[...truncated 5036 lines...]
kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithoutDescribe 
PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopi

[jira] [Updated] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4032:
---
Fix Version/s: 0.10.1.0

> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
> Fix For: 0.10.1.0
>
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.
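The general shape of the fix is to catch the new validation exceptions at the point
where auto-creation is attempted and turn them into an error code in the response.
A rough sketch of that pattern is below; it uses the public Java Errors/ApiException
classes purely for illustration, while the real handlers are Scala code in the broker
and the committed patch may differ.

{code}
import org.apache.kafka.common.errors.ApiException;
import org.apache.kafka.common.protocol.Errors;

// Illustrative pattern: map auto-creation failures (for example
// InvalidReplicationFactorException) to a response error code instead of
// letting them propagate to the network processor.
public class AutoCreateExample {

    interface TopicCreator {
        void create(String topic) throws ApiException;
    }

    static Errors tryAutoCreate(TopicCreator creator, String topic) {
        try {
            creator.create(topic);              // stands in for the broker's topic creation call
            return Errors.LEADER_NOT_AVAILABLE; // created, but no leader elected yet
        } catch (ApiException e) {              // e.g. InvalidReplicationFactorException
            return Errors.forException(e);
        }
    }
}
{code}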



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4032:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : kafka-trunk-jdk8 #830

2016-08-22 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-4032) Uncaught exceptions when autocreating topics

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431774#comment-15431774
 ] 

ASF GitHub Bot commented on KAFKA-4032:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1739


> Uncaught exceptions when autocreating topics
> 
>
> Key: KAFKA-4032
> URL: https://issues.apache.org/jira/browse/KAFKA-4032
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Reporter: Jason Gustafson
>Assignee: Grant Henke
>
> With the addition of the CreateTopics API in KIP-4, we have some new 
> exceptions which can be raised from {{AdminUtils.createTopic}}. For example, 
> it is possible to raise InvalidReplicationFactorException. Since we have not 
> yet removed the ability to create topics automatically, we need to make sure 
> these exceptions are caught and handled in both the TopicMetadata and 
> GroupCoordinator request handlers. Currently these exceptions are propagated 
> all the way to the processor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1739: KAFKA-4032: Uncaught exceptions when autocreating ...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1739


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-4075) Failure in SaslSslTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4075.

Resolution: Duplicate

Duplicate of KAFKA-3103

> Failure in SaslSslTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack
> --
>
> Key: KAFKA-4075
> URL: https://issues.apache.org/jira/browse/KAFKA-4075
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: newbie, test
>
> {code}
> kafka.integration.SaslSslTopicMetadataTest > 
> testIsrAfterBrokerShutDownAndJoinsBack FAILED
> java.lang.AssertionError: Topic metadata is not correctly updated for 
> broker kafka.server.KafkaServer@1c711b00.
> Expected ISR: List(BrokerEndPoint(0,localhost,48976), 
> BrokerEndPoint(1,localhost,41142))
> Actual ISR  : 
> {code}
> https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5201



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4049) Transient failure in RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431769#comment-15431769
 ] 

ASF GitHub Bot commented on KAFKA-4049:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1746


> Transient failure in 
> RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted
> --
>
> Key: KAFKA-4049
> URL: https://issues.apache.org/jira/browse/KAFKA-4049
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: test
> Fix For: 0.10.1.0
>
>
> There is a hidden assumption in this test case that the created 
> {{TEST-TOPIC-A}} and {{TEST-TOPIC-B}} are propagated to the streams client at 
> the same time, and stored as {{assignedTopicPartitions[0]}}. However, this is 
> not always true since these two topics may be added on the client side in two 
> consecutive metadata refreshes.
> The proposed fix includes the following:
> 1. In {{waitForCondition}}, do not trigger the {{conditionMet}} function again 
> after the while loop, but just remember the returned value from the last 
> call. This is safer, so that if the condition changes after the while loop it 
> will not be taken into account.
> 2. Do not remember a map of all the previously assigned partitions, but only 
> the most recent one. Also get rid of the final check after the streams client 
> is closed by just using {{equals}} in the condition to make sure that it is 
> exactly the same as the expected assignment.
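Point 1 can be illustrated with a small sketch of a wait helper that remembers the
last evaluation of the condition rather than probing it once more after the deadline;
the method below is a simplified stand-in for the real test utility, not the
committed code.

{code}
import java.util.function.BooleanSupplier;

// Simplified stand-in for a "wait for condition" test helper.
public class WaitUtil {
    public static void waitForCondition(BooleanSupplier condition,
                                        long maxWaitMs,
                                        String failureMessage) throws InterruptedException {
        long deadline = System.currentTimeMillis() + maxWaitMs;
        boolean met = condition.getAsBoolean();
        while (!met && System.currentTimeMillis() < deadline) {
            Thread.sleep(Math.min(100L, maxWaitMs));
            met = condition.getAsBoolean();   // remember the last evaluation...
        }
        if (!met)                             // ...and use it here, with no extra probe
            throw new AssertionError(failureMessage);
    }
}
{code}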



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-4049) Transient failure in RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted

2016-08-22 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-4049.
--
   Resolution: Fixed
Fix Version/s: 0.10.1.0

Issue resolved by pull request 1746
[https://github.com/apache/kafka/pull/1746]

> Transient failure in 
> RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenDeleted
> --
>
> Key: KAFKA-4049
> URL: https://issues.apache.org/jira/browse/KAFKA-4049
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
>  Labels: test
> Fix For: 0.10.1.0
>
>
> There is a hidden assumption in this test case that the created 
> {{TEST-TOPIC-A}} and {{TEST-TOPIC-B}} are propagated to the streams client at 
> the same time, and stored as {{assignedTopicPartitions[0]}}. However, this is 
> not always true since these two topics may be added on the client side in two 
> consecutive metadata refreshes.
> The proposed fix includes the following:
> 1. In {{waitForCondition}}, do not trigger the {{conditionMet}} function again 
> after the while loop, but just remember the returned value from the last 
> call. This is safer, so that if the condition changes after the while loop it 
> will not be taken into account.
> 2. Do not remember a map of all the previously assigned partitions, but only 
> the most recent one. Also get rid of the final check after the streams client 
> is closed by just using {{equals}} in the condition to make sure that it is 
> exactly the same as the expected assignment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1746: KAFKA-4049: Fix transient failure in RegexSourceIn...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1746


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-4075) Failure in SaslSslTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack

2016-08-22 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-4075:


 Summary: Failure in 
SaslSslTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack
 Key: KAFKA-4075
 URL: https://issues.apache.org/jira/browse/KAFKA-4075
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


{code}

kafka.integration.SaslSslTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack FAILED
java.lang.AssertionError: Topic metadata is not correctly updated for 
broker kafka.server.KafkaServer@1c711b00.
Expected ISR: List(BrokerEndPoint(0,localhost,48976), 
BrokerEndPoint(1,localhost,41142))
Actual ISR  : 

{code}

https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/5201



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1768: KAFKA-4051: Use nanosecond clock for timers in bro...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1768


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431752#comment-15431752
 ] 

ASF GitHub Bot commented on KAFKA-4051:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1768


> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> If a rebalance is performed after turning the OS clock back, then the Kafka 
> server enters a loop and the rebalance cannot be completed until the 
> system returns to the previous date/hour.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be the 
> owner of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME using the same group id; this will force 
> a rebalance.
> After these actions the Kafka server logs constantly display the messages 
> below, and after a while both consumers stop receiving messages. This 
> condition lasts at least as long as the clock was turned back (1 hour in this 
> example), and only after that time does Kafka come back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Because some systems may have NTP enabled, or an administrator option to 
> change the system clock (date/time), it is important to be able to do this 
> safely. Currently the only safe way is to follow these steps:
> 1- Shut down the Kafka server.
> 2- Change the date/time.
> 3- Start the Kafka server again.
> But this approach only works if the change is performed by the 
> administrator, not by NTP. Also, in many systems shutting down the Kafka 
> server might cause information to be lost.
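The fix that was merged for this issue ("Use nanosecond clock for timers in broker")
avoids the wall clock entirely when measuring elapsed time. The toy timer below shows
the underlying idea only; it assumes nothing about the actual broker internals.

{code}
// Toy illustration: a deadline based on System.nanoTime() keeps advancing
// even if the wall clock (System.currentTimeMillis()) is turned back, so
// delayed operations still expire on schedule.
public class MonotonicTimer {
    private final long deadlineNanos;

    public MonotonicTimer(long timeoutMs) {
        this.deadlineNanos = System.nanoTime() + timeoutMs * 1_000_000L;
    }

    public boolean expired() {
        // Compare via subtraction to stay correct across nanoTime overflow.
        return System.nanoTime() - deadlineNanos >= 0;
    }
}
{code}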



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-22 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-4051:

   Resolution: Fixed
Fix Version/s: 0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1768
[https://github.com/apache/kafka/pull/1768]

> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> If a rebalance is performed after turning the OS clock back, then the Kafka 
> server enters a loop and the rebalance cannot be completed until the 
> system returns to the previous date/hour.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be the 
> owner of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME using the same group id; this will force 
> a rebalance.
> After these actions the Kafka server logs constantly display the messages 
> below, and after a while both consumers stop receiving messages. This 
> condition lasts at least as long as the clock was turned back (1 hour in this 
> example), and only after that time does Kafka come back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Because some systems may have NTP enabled, or an administrator option to 
> change the system clock (date/time), it is important to be able to do this 
> safely. Currently the only safe way is to follow these steps:
> 1- Shut down the Kafka server.
> 2- Change the date/time.
> 3- Start the Kafka server again.
> But this approach only works if the change is performed by the 
> administrator, not by NTP. Also, in many systems shutting down the Kafka 
> server might cause information to be lost.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-73 - Replication Quotas

2016-08-22 Thread Jun Rao
Ben,

Thanks for the proposal. +1.

Just a few minor comments below.

1. We have a LeaderOverThrottledRate metric to indicate the amount of
throttling happening in the leader broker. It seems that we should have the
equivalent of that for the follower to indicate the amount of throttling in
the follower, if any.
2. Do we still need the PartitionBytesInRate metric? There is no reference
on how it's going to be used.
3. In the test plan, you mentioned "Then replicas should move at close to
(but no more than) (the quota dictated rate - the inbound rate)." It
seems that the replicas should always be moved at the quota rate
independent of whether there is incoming traffic from the producer?

Jun

On Fri, Aug 19, 2016 at 1:21 AM, Ben Stopford  wrote:

> I’d like to initiate the voting process for KIP-73:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-73+Replication+Quotas
>
> Ben


[GitHub] kafka pull request #1731: MINOR: add slf4jlog4j to streams example

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1731


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1764: MINOR: improve Streams application reset tool to m...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1764


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2016-08-22 Thread Ashish K Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431725#comment-15431725
 ] 

Ashish K Singh commented on KAFKA-2629:
---

[~bharatviswa] just posted a WIP patch. This adds a config and so will require 
a KIP. I will post a KIP in a day or two. Feel free to take a look at the 
posted PR and let me know if it works for you guys.

> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via a properties file or via a command line argument. Neither of these is a 
> recommended security practice.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.
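The core mechanism, running a configured executable and treating its output as the
password, can be sketched in a few lines. The class below is purely illustrative; the
config key and the exact contract (whitespace handling, exit codes) are up to the
forthcoming KIP.

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

// Illustrative helper: run a command and use its first line of output as the
// SSL password, instead of keeping the password itself in a properties file.
public class PasswordFromExecutable {
    public static String readPassword(String command) throws IOException, InterruptedException {
        Process process = new ProcessBuilder(command.split("\\s+")).start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String password = reader.readLine();
            if (process.waitFor() != 0 || password == null)
                throw new IOException("Password command failed: " + command);
            return password;
        }
    }
}
{code}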



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2629) Enable getting SSL password from an executable rather than passing plaintext password

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431717#comment-15431717
 ] 

ASF GitHub Bot commented on KAFKA-2629:
---

GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/1770

WIP: KAFKA-2629: Enable getting passwords from an executable rathe…

…r than passing plaintext password

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2629

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1770


commit 025e27eda77292279b6b9629e7bb69dda2cef5dd
Author: Ashish Singh 
Date:   2016-08-16T23:16:38Z

WIP: KAFKA-2629: Enable getting SSL password from an executable rather than 
passing plaintext password




> Enable getting SSL password from an executable rather than passing plaintext 
> password
> -
>
> Key: KAFKA-2629
> URL: https://issues.apache.org/jira/browse/KAFKA-2629
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> Currently there are a couple of options to pass SSL passwords to Kafka, i.e., 
> via a properties file or via a command line argument. Neither of these is a 
> recommended security practice.
> * A password on a command line is a no-no: it's trivial to see that password 
> just by using the 'ps' utility.
> * Putting a password into a file, and then passing the location to that file, 
> is the next best option. The access to the file will be governed by unix 
> access permissions which we all know and love. The downside is that the 
> password is still just sitting there in a file, and those who have access can 
> still see it trivially.
> * The most general, secure solution is to provide a layer of abstraction: 
> provide functionality to get the password from "somewhere else".  The most 
> flexible and generic way to do this is to simply call an executable which 
> returns the desired password. 
> ** The executable is again protected with normal file system privileges
> ** The simplest form, a script that looks like "echo 'my-password'", devolves 
> back to putting the password in a file
> ** A more interesting implementation could open up a local encrypted password 
> store and extract the password from it
> ** A maximally secure implementation could contact an external secret manager 
> with centralized control and audit functionality.
> ** In short: getting the password as the output of a script/executable is 
> maximally generic and enables both simple and complex use cases.
> This JIRA intends to add a config param to enable passing an executable to 
> Kafka for SSL passwords.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1770: WIP: KAFKA-2629: Enable getting passwords from an ...

2016-08-22 Thread SinghAsDev
GitHub user SinghAsDev opened a pull request:

https://github.com/apache/kafka/pull/1770

WIP: KAFKA-2629: Enable getting passwords from an executable rathe…

…r than passing plaintext password

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SinghAsDev/kafka KAFKA-2629

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1770.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1770


commit 025e27eda77292279b6b9629e7bb69dda2cef5dd
Author: Ashish Singh 
Date:   2016-08-16T23:16:38Z

WIP: KAFKA-2629: Enable getting SSL password from an executable rather than 
passing plaintext password




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request #1651: MINOR: Fix typos in security section

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1651


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-4066) NullPointerException in Kafka consumer due to unsafe access to findCoordinatorFuture

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4066:
---
Resolution: Fixed
  Reviewer: Ismael Juma
Status: Resolved  (was: Patch Available)

> NullPointerException in Kafka consumer due to unsafe access to 
> findCoordinatorFuture
> 
>
> Key: KAFKA-4066
> URL: https://issues.apache.org/jira/browse/KAFKA-4066
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> {quote}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:164)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:245)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:993)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:959)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}
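The title points at the root cause: a shared field holding the coordinator-lookup
future is read and cleared from more than one place, so it can become null between
the check and the use. The snippet below illustrates that race and the usual guard of
copying the field into a local variable once; the names are placeholders, not the
actual consumer internals.

{code}
import java.util.concurrent.CompletableFuture;

// Placeholder names; shows the check-then-use race and its common fix.
class CoordinatorLookupExample {
    private volatile CompletableFuture<Void> lookupFuture;

    void pollUnsafe() {
        if (lookupFuture != null)
            lookupFuture.join();      // may NPE: another thread can null the field here
    }

    void pollSafe() {
        CompletableFuture<Void> future = lookupFuture;  // read the field exactly once
        if (future != null)
            future.join();            // the local reference cannot be cleared underneath us
    }
}
{code}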



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4066) NullPointerException in Kafka consumer due to unsafe access to findCoordinatorFuture

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431652#comment-15431652
 ] 

ASF GitHub Bot commented on KAFKA-4066:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1763


> NullPointerException in Kafka consumer due to unsafe access to 
> findCoordinatorFuture
> 
>
> Key: KAFKA-4066
> URL: https://issues.apache.org/jira/browse/KAFKA-4066
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> {quote}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:164)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:245)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:993)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:959)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1763: KAFKA-4066: Fix NPE in consumer due to multi-threa...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1763


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk8 #829

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3163; Minor follow up for (KIP-33)

[junrao] KAFKA-3894; log cleaner can partially clean a segment

--
[...truncated 1853 lines...]

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic STARTED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
STARTED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
STARTED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread 
PASSED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime STARTED

kafka.api.PlaintextProducerSendTest > 
testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs STARTED

kafka.api.PlaintextConsumerTest > testMaxPollIntervalMs PASSED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription STARTED

kafka.api.PlaintextConsumerTest > testSubsequentPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit STARTED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic STARTED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance STARTED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testSeek STARTED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit STARTED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic STARTED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge STARTED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testListTopics STARTED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions STARTED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors STARTED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription STARTED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption STARTED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor STARTED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance STARTED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue STARTED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment STARTED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume STARTED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime STARTED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup STARTED

kafka.api.PlaintextConsumerTe

[jira] [Created] (KAFKA-4074) Deleting a topic can make it unavailable even if delete.topic.enable is false

2016-08-22 Thread Joel Koshy (JIRA)
Joel Koshy created KAFKA-4074:
-

 Summary: Deleting a topic can make it unavailable even if 
delete.topic.enable is false
 Key: KAFKA-4074
 URL: https://issues.apache.org/jira/browse/KAFKA-4074
 Project: Kafka
  Issue Type: Bug
  Components: controller
Reporter: Joel Koshy
 Fix For: 0.10.1.0


The {{delete.topic.enable}} configuration does not completely block the effects 
of topic deletion, since the controller may (indirectly) query the list of topics 
under the delete-topic znode.

To reproduce:
* Delete topic X
* Force a controller move (either by bouncing or removing the controller znode)
* The new controller will send out UpdateMetadataRequests with leader=-2 for 
the partitions of X
* Producers eventually stop producing to that topic

The reason for this is that when ControllerChannelManager adds 
UpdateMetadataRequests for brokers, we directly use the partitionsToBeDeleted 
field of the DeleteTopicManager (which is set to the partitions of the topics 
under the delete-topic znode on controller startup).

In order to get out of the situation you have to remove X from the znode and 
then force another controller move.
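As a hedged illustration of that manual recovery (the connect string and topic name
are placeholders, and this assumes the standard /admin/delete_topics and /controller
znodes), the raw ZooKeeper client can be used to drop the topic from the
delete-topics list and then trigger a controller re-election:

{code}
import org.apache.zookeeper.ZooKeeper;

// Illustrative recovery only; run against the cluster's ZooKeeper ensemble.
public class RecoverDeletedTopic {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        zk.delete("/admin/delete_topics/X", -1);  // X = the topic that was "deleted"
        zk.delete("/controller", -1);             // forces a new controller election
        zk.close();
    }
}
{code}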



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk7 #1489

2016-08-22 Thread Apache Jenkins Server
See 

Changes:

[junrao] KAFKA-3163; Minor follow up for (KIP-33)

--
[...truncated 4991 lines...]
kafka.api.PlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.PlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset STARTED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic STARTED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime STARTED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime STARTED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread STARTED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth STARTED

kafka.api.AuthorizerIntegrationTest > testDeleteWithWildCardAuth PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic STARTED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess STARTED

kafka.api.AuthorizerIntegrationTest > 
testSimpleConsumeWithOffsetLookupAndNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite STARTED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization STARTED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe STARTED

kafka.api.AuthorizerIntegrationTest > testUnauthorizedDeleteWithDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead STARTED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroup

[jira] [Updated] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma updated KAFKA-4073:
---
Fix Version/s: 0.10.0.2
   0.10.1.0

> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
> Fix For: 0.10.1.0, 0.10.0.2
>
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3937) Kafka Clients Leak Native Memory For Longer Than Needed With Compressed Messages

2016-08-22 Thread William Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431505#comment-15431505
 ] 

William Yu commented on KAFKA-3937:
---

[~ijuma] I've created a new PR based on your comments. Let me know what you 
think. thanks

> Kafka Clients Leak Native Memory For Longer Than Needed With Compressed 
> Messages
> 
>
> Key: KAFKA-3937
> URL: https://issues.apache.org/jira/browse/KAFKA-3937
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.10.0.0
> Environment: Linux, latest oracle java-8
>Reporter: Tom Crayford
>Priority: Minor
>
> In https://issues.apache.org/jira/browse/KAFKA-3933, we discovered that 
> brokers can crash when performing log recovery, as they leak native memory 
> whilst decompressing compressed segments, and that native memory isn't 
> cleaned up rapidly enough by garbage collection and finalizers. The work to 
> fix that in the brokers is taking place in 
> https://github.com/apache/kafka/pull/1598. As part of that PR, Ismael Juma 
> asked me to fix similar issues in the client. Rather than have one large PR 
> that fixes everything, I'd rather break this work up into separate things, so 
> I'm filing this JIRA to track the followup work. I should get to a PR on this 
> at some point relatively soon, once the other PR has landed.
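
A minimal sketch of the general pattern such a fix aims for: close the decompression stream eagerly instead of relying on GC and finalizers to reclaim native memory. This is illustrative only and not the actual client code path; the function name and buffer size are assumptions.

{code}
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.GZIPInputStream

// Illustrative only: release the decompressor's native buffers deterministically
// with close() rather than waiting for finalizers to run.
def decompress(payload: Array[Byte]): Array[Byte] = {
  val in = new GZIPInputStream(new ByteArrayInputStream(payload))
  try {
    val out = new ByteArrayOutputStream()
    val buf = new Array[Byte](8192)
    var n = in.read(buf)
    while (n != -1) {
      out.write(buf, 0, n)
      n = in.read(buf)
    }
    out.toByteArray
  } finally {
    in.close() // frees the native zlib memory immediately
  }
}
{code}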



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431460#comment-15431460
 ] 

Jun Rao commented on KAFKA-4073:


One way to fix this is to check whether the timestamp is present in the source 
message in MirrorMaker and, if not, use the constructor without a timestamp (the 
producer will then use the current timestamp).
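
A hedged sketch of that idea follows; the handler shape and the -1 sentinel for a missing timestamp are assumptions based on the stack trace below, not the final patch.

{code}
import java.util.Collections
import org.apache.kafka.clients.producer.ProducerRecord

// Illustrative handler logic: only pass the source timestamp through when it is present.
def mirror(topic: String, timestamp: Long, key: Array[Byte], value: Array[Byte]):
    java.util.List[ProducerRecord[Array[Byte], Array[Byte]]] = {
  val record =
    if (timestamp < 0)
      // No timestamp on the source message: let the producer fill in the current time.
      new ProducerRecord[Array[Byte], Array[Byte]](topic, key, value)
    else
      new ProducerRecord[Array[Byte], Array[Byte]](topic, null, java.lang.Long.valueOf(timestamp), key, value)
  Collections.singletonList(record)
}
{code}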


> MirrorMaker should handle mirroring messages w/o timestamp better
> -
>
> Key: KAFKA-4073
> URL: https://issues.apache.org/jira/browse/KAFKA-4073
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.1
>Reporter: Jun Rao
>
> Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
> source cluster, it will hit the following exception. This was introduced in 
> KAFKA-3787. So it only affects 0.10.0.1.
> [16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
> mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due 
> to
> java.lang.IllegalArgumentException: Invalid timestamp -1
> at 
> org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
> at 
> kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
> at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4073) MirrorMaker should handle mirroring messages w/o timestamp better

2016-08-22 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-4073:
--

 Summary: MirrorMaker should handle mirroring messages w/o 
timestamp better
 Key: KAFKA-4073
 URL: https://issues.apache.org/jira/browse/KAFKA-4073
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.10.0.1
Reporter: Jun Rao


Currently, if the 0.10.0.1 MirrorMaker reads a message w/o timestamp from the 
source cluster, it will hit the following exception. This was introduced in 
KAFKA-3787. So it only affects 0.10.0.1.

[16/08/2016:18:26:41 PDT] [FATAL] [kafka.tools.MirrorMaker$MirrorMakerThread 
mirrormaker-thread-1]: [mirrormaker-thread-1] Mirror maker thread failure due to
java.lang.IllegalArgumentException: Invalid timestamp -1
at 
org.apache.kafka.clients.producer.ProducerRecord.<init>(ProducerRecord.java:60)
at 
kafka.tools.MirrorMaker$defaultMirrorMakerMessageHandler$.handle(MirrorMaker.scala:678)
at kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:414)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-71 Enable log compaction and deletion to co-exist

2016-08-22 Thread Guozhang Wang
The update looks good to me. Thanks Damian.

Guozhang
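
For readers skimming the thread: a hedged sketch of what the updated topic configuration would look like under the comma-separated form discussed below (values are illustrative, not recommendations).

{code}
import java.util.Properties

// Topic-level overrides as proposed in the updated KIP-71: cleanup.policy takes a
// comma-separated list, so a topic can be compacted and still expire old segments.
val topicConfig = new Properties()
topicConfig.put("cleanup.policy", "compact,delete")
topicConfig.put("retention.ms", (7L * 24 * 60 * 60 * 1000).toString) // delete segments older than ~7 days
{code}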

On Fri, Aug 19, 2016 at 11:05 AM, Damian Guy  wrote:

> Thanks Jun.
>
> Everyone - I've updated the KIP to use a comma-separated list of
> cleanup.policies as suggested. I know we have already voted on this
> proposal, so are there any objections to this change?
>
> Thanks,
> Damian
>
> On Fri, 19 Aug 2016 at 18:38 Jun Rao  wrote:
>
> > Damian,
> >
> > Yes, using comma-separated policies does seem more extensible for the
> > future. If we want to adopt it, it's better to do it as part of this KIP.
> > Perhaps you can just update the KIP and ask this thread to see if there are
> > any objections to the change.
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Aug 19, 2016 at 10:01 AM, Damian Guy 
> wrote:
> >
> > > Hi Grant,
> > >
> > > I apologise - I seem to have skipped over Joel's email.
> > > It is not something we considered, but seems valid.
> > > I'm not sure if we should do it as part of this KIP or revisit it if/when
> > > we have more cleanup policies?
> > >
> > > Thanks,
> > > Damian
> > >
> > > On Fri, 19 Aug 2016 at 15:57 Grant Henke  wrote:
> > >
> > > > Thanks for this KIP Damian.
> > > >
> > > > I know this vote has passed and I think it is okay as is, but I had
> > > > similar thoughts as Joel about combining compaction policies.
> > > >
> > > > > I'm just wondering if it makes sense to allow specifying multiple
> > > > > comma-separated policies "compact,delete" as opposed to
> > > > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > > > > with more policies. The order could potentially indicate precedence.
> > > > >
> > > >
> > > > Out of curiosity was the option of supporting a list of policies rejected?
> > > > Is it something we may consider adding later but didn't want to include in
> > > > the scope of this KIP?
> > > >
> > > > Thanks,
> > > > Grant
> > > >
> > > >
> > > >
> > > > On Mon, Aug 15, 2016 at 7:25 PM, Joel Koshy 
> > wrote:
> > > >
> > > > > Thanks for the proposal. I'm +1 overall with a thought somewhat related
> > > > > to Jun's comments.
> > > > >
> > > > > While there may not yet be a sensible use case for it, it should be (in
> > > > > theory) legal to have compact_and_delete with size based retention as well.
> > > > > I'm just wondering if it makes sense to allow specifying multiple
> > > > > comma-separated policies "compact,delete" as opposed to
> > > > > "compact_and_delete" or "x_and_y_and_z" or "y_and_z" if we ever come up
> > > > > with more policies. The order could potentially indicate precedence.
> > > > > Anyway, it is just a thought - it may end up being very confusing for
> > > > > users.
> > > > >
> > > > > @Jason - I agree this could be used to handle offset expiration as well. We
> > > > > can discuss that separately though; and if we do that we would want to also
> > > > > deprecate the retention field in the commit requests.
> > > > >
> > > > > Joel
> > > > >
> > > > > On Mon, Aug 15, 2016 at 2:07 AM, Damian Guy 
> > > > wrote:
> > > > >
> > > > > > Thanks Jason.
> > > > > > The log retention.ms will be set to a value that is greater than the window
> > > > > > retention time. So as windows expire, they eventually get cleaned up by the
> > > > > > broker. It doesn't matter if old windows are around for some time beyond
> > > > > > their usefulness, more that they do eventually get removed and the log
> > > > > > doesn't grow indefinitely (as it does now).
> > > > > >
> > > > > > Damian
> > > > > >
> > > > > > On Fri, 12 Aug 2016 at 20:25 Jason Gustafson  >
> > > > wrote:
> > > > > >
> > > > > > > Hey Damian,
> > > > > > >
> > > > > > > That's true, but it would avoid the need to write tombstones for the
> > > > > > > expired offsets I guess. I'm actually not sure it's a great idea anyway.
> > > > > > > One thing we've talked about is not expiring any offsets as long as a group
> > > > > > > is alive, which would require some custom expiration logic. There's also
> > > > > > > the fact that we'd risk expiring group metadata which is stored in the same
> > > > > > > log. Having a builtin expiration mechanism may be more useful for the
> > > > > > > compacted topics we maintain in Connect, but I think there too we might
> > > > > > > need some custom logic. For example, expiring connector configs purely
> > > > > > > based on time doesn't seem like what we'd want.
> > > > > > >
> > > > > > > By the way, I wonder if you could describe the expected usage in a little
> > > > > > > more detail in the KIP for those of us who are not as familiar with Kafka
> > > > > > > Streams. Is the intention to have the log retain only the most recent
> > > > > > > window? In that case, would you always set the log retention ti

Re: Prevent broker from leading topic partitions

2016-08-22 Thread Alexey Ozeritsky
Hi

22.08.2016, 19:16, "Tom Crayford" :
> Hi,
>
> I don't think I understand *why* you need this. Kafka is by default a
> distributed HA system that balances data and leadership over nodes. Why do
> you need to change this?

In fact, Kafka does not balance anything. It uses a static partition distribution.
In most cases the first replica in the replica list is always the leader.

If a replica is under heavy load (for example, after a hard disk was replaced), 
it is a bad idea to do an automatic leader "rebalance".

>
> You could accomplish something like this with mirror maker, that may make
> more sense.
>
> Thanks
>
> Tom Crayford
>
> Heroku Kafka
>
> On Mon, Aug 22, 2016 at 4:05 PM, Jason Aliyetti 
> wrote:
>
>>  I have a use case that requires a 2 node deployment with a Kafka-backed
>>  service with the following constraints:
>>
>>  - All data must be persisted to node 1. If node 1 fails (regardless of the
>>  status of node 2), then the system must stop.
>>  - If node 2 is up, then it must stay in synch with node 1.
>>  - If node 2 fails, then service must not be disrupted, but as soon as it
>>  comes back up and rejoins ISR it must stay in synch.
>>
>>  The deployment is basically a primary node and a cold node with real time
>>  replication, but no failover to the cold node.
>>
>>  To achieve this I am considering adding a broker-level configuration option
>>  that would prevent a broker from becoming a leader for any topic partition
>>  it hosts - this would allow me to enforce that the cold node never take
>>  leadership for any topics. In conjunction with manipulating a topic's
>>  "min.insync.replicas" setting at runtime, I should be able to achieve the
>>  behavior desired (2 if both brokers up, 1 if the standby goes down).
>>
>>  I know this sounds like an edgy use case, but does this sound like a
>>  reasonable approach? Are there any valid use cases around such a broker or
>>  topic level configuration (i.e. does this sound like a feature that would
>>  make sense to open a KIP against)?


[jira] [Updated] (KAFKA-3894) LogCleaner thread crashes if not even one segment can fit in the offset map

2016-08-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3894:
---
Assignee: Tom Crayford

[~tcrayford-heroku], thanks for the patch. Filed a followup jira KAFKA-4072 to 
improve the memory usage in log cleaner.

> LogCleaner thread crashes if not even one segment can fit in the offset map
> ---
>
> Key: KAFKA-3894
> URL: https://issues.apache.org/jira/browse/KAFKA-3894
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.10.0.0
> Environment: Oracle JDK 8
> Ubuntu Precise
>Reporter: Tim Carey-Smith
>Assignee: Tom Crayford
>  Labels: compaction
> Fix For: 0.10.1.0
>
>
> The log-cleaner thread can crash if the number of keys in a topic grows to be 
> too large to fit into the dedupe buffer. 
> The result of this is a log line: 
> {quote}
> broker=0 pri=ERROR t=kafka-log-cleaner-thread-0 at=LogCleaner 
> \[kafka-log-cleaner-thread-0\], Error due to  
> java.lang.IllegalArgumentException: requirement failed: 9750860 messages in 
> segment MY_FAVORITE_TOPIC-2/47580165.log but offset map can fit 
> only 5033164. You can increase log.cleaner.dedupe.buffer.size or decrease 
> log.cleaner.threads
> {quote}
> As a result, the broker is left in a potentially dangerous situation where 
> cleaning of compacted topics is not running. 
> It is unclear if the broader strategy for the {{LogCleaner}} is the reason 
> for this upper bound, or if this is a value which must be tuned for each 
> specific use-case. 
> Of more immediate concern is the fact that the thread crash is not visible 
> via JMX or exposed as some form of service degradation. 
> Some short-term remediations we have made are:
> * increasing the size of the dedupe buffer
> * monitoring the log-cleaner threads inside the JVM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-4072) improving memory usage in LogCleaner

2016-08-22 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-4072:
--

 Summary: improving memory usage in LogCleaner
 Key: KAFKA-4072
 URL: https://issues.apache.org/jira/browse/KAFKA-4072
 Project: Kafka
  Issue Type: Improvement
Reporter: Jun Rao


This is a followup jira from KAFKA-3894.

We can potentially make the allocation of the dedup buffer more dynamic. We can 
start with something small like 100MB. If needed, we can grow the dedup buffer 
up to the configured size. This will allow us to set a larger default dedup 
buffer size (say 1GB). If there are not many keys, the broker won't be using 
that much memory. This will allow the default configuration to accommodate more 
keys.
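
For context, a hedged sketch of the existing broker-side settings that bound the dedupe buffer today; the error message in KAFKA-3894 points at the same knobs, and the values here are illustrative, not recommendations.

{code}
import java.util.Properties

// Current knobs: the total dedupe buffer memory is shared across cleaner threads,
// so fewer threads or a larger buffer lets more keys fit per cleaning pass.
val brokerProps = new Properties()
brokerProps.put("log.cleaner.dedupe.buffer.size", (1L * 1024 * 1024 * 1024).toString) // ~1GB total
brokerProps.put("log.cleaner.threads", "1")
{code}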



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3894) LogCleaner thread crashes if not even one segment can fit in the offset map

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431303#comment-15431303
 ] 

ASF GitHub Bot commented on KAFKA-3894:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1725


> LogCleaner thread crashes if not even one segment can fit in the offset map
> ---
>
> Key: KAFKA-3894
> URL: https://issues.apache.org/jira/browse/KAFKA-3894
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.10.0.0
> Environment: Oracle JDK 8
> Ubuntu Precise
>Reporter: Tim Carey-Smith
>  Labels: compaction
> Fix For: 0.10.1.0
>
>
> The log-cleaner thread can crash if the number of keys in a topic grows to be 
> too large to fit into the dedupe buffer. 
> The result of this is a log line: 
> {quote}
> broker=0 pri=ERROR t=kafka-log-cleaner-thread-0 at=LogCleaner 
> \[kafka-log-cleaner-thread-0\], Error due to  
> java.lang.IllegalArgumentException: requirement failed: 9750860 messages in 
> segment MY_FAVORITE_TOPIC-2/47580165.log but offset map can fit 
> only 5033164. You can increase log.cleaner.dedupe.buffer.size or decrease 
> log.cleaner.threads
> {quote}
> As a result, the broker is left in a potentially dangerous situation where 
> cleaning of compacted topics is not running. 
> It is unclear if the broader strategy for the {{LogCleaner}} is the reason 
> for this upper bound, or if this is a value which must be tuned for each 
> specific use-case. 
> Of more immediate concern is the fact that the thread crash is not visible 
> via JMX or exposed as some form of service degradation. 
> Some short-term remediations we have made are:
> * increasing the size of the dedupe buffer
> * monitoring the log-cleaner threads inside the JVM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1725: KAFKA-3894: log cleaner can partially clean a segm...

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1725


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (KAFKA-3894) LogCleaner thread crashes if not even one segment can fit in the offset map

2016-08-22 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-3894.

Resolution: Fixed

Issue resolved by pull request 1725
[https://github.com/apache/kafka/pull/1725]

> LogCleaner thread crashes if not even one segment can fit in the offset map
> ---
>
> Key: KAFKA-3894
> URL: https://issues.apache.org/jira/browse/KAFKA-3894
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.2.2, 0.9.0.1, 0.10.0.0
> Environment: Oracle JDK 8
> Ubuntu Precise
>Reporter: Tim Carey-Smith
>  Labels: compaction
> Fix For: 0.10.1.0
>
>
> The log-cleaner thread can crash if the number of keys in a topic grows to be 
> too large to fit into the dedupe buffer. 
> The result of this is a log line: 
> {quote}
> broker=0 pri=ERROR t=kafka-log-cleaner-thread-0 at=LogCleaner 
> \[kafka-log-cleaner-thread-0\], Error due to  
> java.lang.IllegalArgumentException: requirement failed: 9750860 messages in 
> segment MY_FAVORITE_TOPIC-2/47580165.log but offset map can fit 
> only 5033164. You can increase log.cleaner.dedupe.buffer.size or decrease 
> log.cleaner.threads
> {quote}
> As a result, the broker is left in a potentially dangerous situation where 
> cleaning of compacted topics is not running. 
> It is unclear if the broader strategy for the {{LogCleaner}} is the reason 
> for this upper bound, or if this is a value which must be tuned for each 
> specific use-case. 
> Of more immediate concern is the fact that the thread crash is not visible 
> via JMX or exposed as some form of service degradation. 
> Some short-term remediations we have made are:
> * increasing the size of the dedupe buffer
> * monitoring the log-cleaner threads inside the JVM



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3522) Consider adding version information into rocksDB storage format

2016-08-22 Thread Ishita Mandhan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431288#comment-15431288
 ] 

Ishita Mandhan commented on KAFKA-3522:
---

Just wanted to check in to see if now is a good time to bring this back up. I'm 
looking for a small feature in streams to work on, so if there's some other 
minor feature that you'd rather I divert my attention to at the moment, that's 
fine too :)

> Consider adding version information into rocksDB storage format
> ---
>
> Key: KAFKA-3522
> URL: https://issues.apache.org/jira/browse/KAFKA-3522
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Ishita Mandhan
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Kafka Streams does not introduce any modifications to the data format in the 
> underlying Kafka protocol, but it does use RocksDB for persistent state 
> storage, and currently its data format is fixed and hard-coded. We want to 
> consider the evolution path in the future when we change the data format, and 
> hence having some version info stored along with the storage file / directory 
> would be useful.
> And this information could even live outside the storage file; for example, we 
> can just use a small "version indicator" file in the rocksdb directory for 
> this purpose. Thoughts? [~enothereska] [~jkreps]
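
A hedged sketch of the "version indicator" file idea mentioned above; the path, file name, and format are assumptions, not a final design.

{code}
import java.nio.charset.StandardCharsets
import java.nio.file.{Files, Paths}

// Write a one-line format version next to the RocksDB data when the store is created...
val versionFile = Paths.get("/var/lib/kafka-streams/my-store/.rocksdb.version")
Files.write(versionFile, "1".getBytes(StandardCharsets.UTF_8))

// ...and read it back on open to decide whether a migration is needed.
val storedVersion = new String(Files.readAllBytes(versionFile), StandardCharsets.UTF_8).trim.toInt
{code}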



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431240#comment-15431240
 ] 

ASF GitHub Bot commented on KAFKA-3163:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1769


> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
> Attachments: 00113931.log, 00113931.timeindex
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1769: Minor follow up for KAFKA-3163 (KIP-33)

2016-08-22 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1769


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3163) KIP-33 - Add a time based log index to Kafka

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431151#comment-15431151
 ] 

ASF GitHub Bot commented on KAFKA-3163:
---

GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1769

Minor follow up for KAFKA-3163 (KIP-33)

@junrao Could you take a look when you get a chance? Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3163-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1769


commit 2895d0b9af99a29b64de4c3ef42bf9312f313ff9
Author: Jiangjie Qin 
Date:   2016-08-22T16:39:25Z

Minor follow up for KAFKA-3163 (KIP-33)




> KIP-33 - Add a time based log index to Kafka
> 
>
> Key: KAFKA-3163
> URL: https://issues.apache.org/jira/browse/KAFKA-3163
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Jiangjie Qin
>Assignee: Jiangjie Qin
> Fix For: 0.10.1.0
>
> Attachments: 00113931.log, 00113931.timeindex
>
>
> This ticket is associated with KIP-33.
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-33+-+Add+a+time+based+log+index



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1769: Minor follow up for KAFKA-3163 (KIP-33)

2016-08-22 Thread becketqin
GitHub user becketqin opened a pull request:

https://github.com/apache/kafka/pull/1769

Minor follow up for KAFKA-3163 (KIP-33)

@junrao Could you take a look when you get a chance? Thanks.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/becketqin/kafka KAFKA-3163-follow-up

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1769.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1769


commit 2895d0b9af99a29b64de4c3ef42bf9312f313ff9
Author: Jiangjie Qin 
Date:   2016-08-22T16:39:25Z

Minor follow up for KAFKA-3163 (KIP-33)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Prevent broker from leading topic partitions

2016-08-22 Thread Tom Crayford
Hi,

I don't think I understand *why* you need this. Kafka is by default a
distributed HA system that balances data and leadership over nodes. Why do
you need to change this?

You could accomplish something like this with mirror maker, that may make
more sense.

Thanks

Tom Crayford

Heroku Kafka

On Mon, Aug 22, 2016 at 4:05 PM, Jason Aliyetti 
wrote:

> I have a use case that requires a 2 node deployment with a Kafka-backed
> service with the following constraints:
>
> - All data must be persisted to node 1.  If node 1 fails (regardless of the
> status of node 2), then the system must stop.
> - If node 2 is up, then it must stay in synch with node 1.
> - If node 2 fails, then service must not be disrupted, but as soon as it
> comes back up and rejoins ISR it must stay in synch.
>
> The deployment is basically a primary node and a cold node with real time
> replication, but no failover to the cold node.
>
> To achieve this I am considering adding a broker-level configuration option
> that would prevent a broker from becoming a leader for any topic partition
> it hosts - this would allow me to enforce that the cold node never take
> leadership for any topics.  In conjunction with manipulating a topic's
> "min.insync.replicas" setting at runtime, I should be able to achieve the
> behavior desired (2 if both brokers up, 1 if the standby goes down).
>
>
> I know this sounds like an edgy use case, but does this sound like a
> reasonable approach?  Are there any valid use cases around such a broker or
> topic level configuration (i.e. does this sound like a feature that would
> make sense to open a KIP against)?
>


Prevent broker from leading topic partitions

2016-08-22 Thread Jason Aliyetti
I have a use case that requires a 2 node deployment with a Kafka-backed
service with the following constraints:

- All data must be persisted to node 1.  If node 1 fails (regardless of the
status of node 2), then the system must stop.
- If node 2 is up, then it must stay in synch with node 1.
- If node 2 fails, then service must not be disrupted, but as soon as it
comes back up and rejoins ISR it must stay in synch.

The deployment is basically a primary node and a cold node with real time
replication, but no failover to the cold node.

To achieve this I am considering adding a broker-level configuration option
that would prevent a broker from becoming a leader for any topic partition
it hosts - this would allow me to enforce that the cold node never take
leadership for any topics.  In conjunction with manipulating a topic's
"min.insync.replicas" setting at runtime, I should be able to achieve the
behavior desired (2 if both brokers up, 1 if the standby goes down).


I know this sounds like an edgy use case, but does this sound like a
reasonable approach?  Are there any valid use cases around such a broker or
topic level configuration (i.e. does this sound like a feature that would
make sense to open a KIP against)?
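
A hedged sketch of the settings the proposal would lean on, assuming a 2-broker cluster with replication factor 2; the "never become leader" broker option itself does not exist today and is exactly what the question is about.

{code}
import java.util.Properties

// Topic-level settings: require both replicas to acknowledge while node 2 is up,
// and never allow an out-of-sync replica to be elected leader.
val topicConfig = new Properties()
topicConfig.put("min.insync.replicas", "2") // would be lowered to 1 if the standby goes down
topicConfig.put("unclean.leader.election.enable", "false")

// Producer-side: fail writes when the ISR shrinks below min.insync.replicas.
val producerProps = new Properties()
producerProps.put("acks", "all")
{code}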


[jira] [Commented] (KAFKA-4071) Corruptted replication-offset-checkpoint leads to kafka server disfunctional

2016-08-22 Thread Alexey Ozeritskiy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431056#comment-15431056
 ] 

Alexey Ozeritskiy commented on KAFKA-4071:
--

Thanks [~zanezhang]. Could you attach replication-offset-checkpoint? It would be 
interesting to see its original binary content.
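
For readers hitting the same symptom, a hedged sketch of what a healthy replication-offset-checkpoint normally looks like and where a corrupted line blows up; the path and values are illustrative.

{code}
import scala.io.Source

// The file is plain text: a format version, an entry count, then one
// "topic partition offset" line per partition, e.g.
//   0
//   2
//   prodTopicDal09E 166 7123666
//   prodTopicDal09E 118 7128188
val lines = Source.fromFile("/data/kafka-logs/replication-offset-checkpoint").getLines().toList
val version = lines.head.trim.toInt // a garbled/binary first line fails here with NumberFormatException
{code}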

> Corruptted replication-offset-checkpoint leads to kafka server disfunctional
> 
>
> Key: KAFKA-4071
> URL: https://issues.apache.org/jira/browse/KAFKA-4071
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, offset manager
>Affects Versions: 0.9.0.1
> Environment: Red Hat Enterprise 6.7
>Reporter: Zane Zhang
>Priority: Critical
>
> For an unknown reason, [kafka data root]/replication-offset-checkpoint was 
> corrupted. First Kafka reported a NumberFormatException in kafka server.out. 
> And then it reported "error when handling request Name: FetchRequest; ... " 
> ERRORs repeatedly (ERROR details below). As a result, clients cannot read 
> from or write to Kafka on several partitions until 
> replication-offset-checkpoint was manually deleted.
> Can the Kafka broker handle this error and survive it?
> And what's the reason this file was corrupted? - Only one file was corrupted 
> and no noticeable disk failure was detected.
> ERROR [KafkaApi-7] error when handling request 
> java.lang.NumberFormatException: For input string: " N?-;   O"
>   at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:77)
>   at java.lang.Integer.parseInt(Integer.java:493)
>   at java.lang.Integer.parseInt(Integer.java:539)
>   at 
> scala.collection.immutable.StringLike$class.toInt(StringLike.scala:272)
>   at scala.collection.immutable.StringOps.toInt(StringOps.scala:30)
>   at kafka.server.OffsetCheckpoint.read(OffsetCheckpoint.scala:78)
>   at kafka.cluster.Partition.getOrCreateReplica(Partition.scala:93)
>   at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
>   at 
> kafka.cluster.Partition$$anonfun$4$$anonfun$apply$2.apply(Partition.scala:173)
>   at scala.collection.immutable.Set$Set2.foreach(Set.scala:111)
>   at kafka.cluster.Partition$$anonfun$4.apply(Partition.scala:173)
> ERROR [KafkaApi-7] error when handling request Name: FetchRequest; Version: 
> 1; CorrelationId: 0; ClientId: ReplicaFetcherThread-1-7; ReplicaId: 6; 
> MaxWait: 500 ms; MinBytes: 1 bytes; RequestInfo: [prodTopicDal09E,166] -> 
> PartitionFetchInfo(7123666,20971520),[prodTopicDal09E,118] -> 
> PartitionFetchInfo(7128188,20971520),[prodTopicDal09E,238] -> 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-22 Thread Gabriel Ibarra (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430981#comment-15430981
 ] 

Gabriel Ibarra commented on KAFKA-4051:
---

Hi, I'm sorry for the delay.
I agree with you guys, it is not a typical scenario; it is even difficult for 
me to think of reasons for a system administrator to change the date/time. But 
they can, and as Rajini Sivaram said, the impact is quite big if it does happen.

Added to that, our system runs in a VM with NTP, and we see some spurious 
changes to the system time (which is how we detected this issue). We are now 
analyzing why the time is changing, but we suspect that the system starts with 
the hardware clock and NTP then synchronizes the system clock using the time 
zone configured in the VM.

It is good news that this could be fixed with small changes. Great job, Rajini.

> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
>
> If a rebalance is performed after turning the OS clock back, then the kafka 
> server enters in a loop and the rebalance cannot be completed until the 
> system returns to the previous date/hour.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be owner 
> of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME  using the same group id, it will force 
> a rebalance.
> After these actions the kafka server logs constantly display the messages 
> below, and after a while both consumers do not receive more packages. This 
> condition lasts at least the time that the clock went back, for this example 
> 1 hour, and finally after this time kafka comes back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Due to the fact that some systems could have enabled NTP or an administrator 
> option to change the system clock (date/time) it's important to do it safely, 
> currently the only way to do it safely is following the next steps:
> 1-  Tear down the Kafka server.
> 2-  Change the date/time
> 3- Bring the Kafka server back up.
> But, this approach can be done only if the change was performed by the 
> administrator, not for NTP. Also in many systems turning down the Kafka 
> server might cause the INFORMATION TO BE LOST.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] KIP-74: Add FetchResponse size limit in bytes

2016-08-22 Thread Andrey L. Neporada
Thanks.
Request parameter renamed: response_max_bytes -> max_bytes.

Andrey.
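
For reference, a hedged sketch of the consumer settings as named after this rename: KIP-74's new response-level limit alongside the existing per-partition one. Values are illustrative.

{code}
import java.util.Properties

val consumerProps = new Properties()
consumerProps.put("fetch.max.bytes", (50 * 1024 * 1024).toString)           // cap on the whole fetch response (KIP-74)
consumerProps.put("max.partition.fetch.bytes", (1 * 1024 * 1024).toString)  // existing per-partition cap
{code}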

> On 19 Aug 2016, at 16:52, Ismael Juma  wrote:
> 
> Thanks for the KIP. +1 (binding) with the following suggestion:
> 
> Fetch Request (Version: 3) => replica_id max_wait_time min_bytes
> response_max_bytes [topics]
>  replica_id => INT32
>  max_wait_time => INT32
>  min_bytes => INT32
>  response_max_bytes => INT32
>  topics => topic [partitions]
>topic => STRING
>partitions => partition fetch_offset max_bytes
>  partition => INT32
>  fetch_offset => INT64
>  max_bytes => INT32
> 
> 
> I think "response_max_bytes" should be called "max_bytes". That way
> it's consistent with "min_bytes" (which is also a response-level
> property).
> 
> I understand the desire to differentiate it from the "max_bytes"
> passed with each partition, but I think it's fine to rely on the
> context (containing struct) for that.
> 
> 
> Ismael
> 
> 
> 
> On Fri, Aug 19, 2016 at 1:47 PM, Tom Crayford  wrote:
> 
>> +1 (non binding)
>> 
>> On Fri, Aug 19, 2016 at 6:20 AM, Manikumar Reddy <
>> manikumar.re...@gmail.com>
>> wrote:
>> 
>>> +1 (non-binding)
>>> 
>>> This feature helps us control the memory footprint and allows the consumer to
>>> make progress when fetching large messages.
>>> 
>>> On Fri, Aug 19, 2016 at 10:32 AM, Gwen Shapira 
>> wrote:
>>> 
 +1 (binding)
 
 On Thu, Aug 18, 2016 at 1:47 PM, Andrey L. Neporada
  wrote:
> Hi all!
> I’ve modified KIP-74 a little bit (as requested by Jason Gustafson & Jun Rao):
> 1) provided more detailed explanation on memory usage (no functional changes)
> 2) renamed “fetch.response.max.bytes” -> “fetch.max.bytes”
> 
> Let’s continue voting in this thread.
> 
> Thanks!
> Andrey.
> 
>> On 17 Aug 2016, at 00:02, Jun Rao  wrote:
>> 
>> Andrey,
>> 
>> Thanks for the KIP. +1
>> 
>> Jun
>> 
>> On Tue, Aug 16, 2016 at 1:32 PM, Andrey L. Neporada <
>> anepor...@yandex-team.ru> wrote:
>> 
>>> Hi!
>>> 
>>> I would like to initiate the voting process for KIP-74:
>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> 74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
>>> 
>>> 
>>> Thanks,
>>> Andrey.
> 
 
 
 
 --
 Gwen Shapira
 Product Manager | Confluent
 650.450.2760 | @gwenshap
 Follow us: Twitter | blog
 
>>> 
>> 



[jira] [Updated] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-22 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4051:
--
Status: Patch Available  (was: Open)

Agree with [~onurkaraman] that this is an unusual scenario. But the fix is a 
small change that can be seen as one small step towards improving Kafka's 
resilience to wall-clock time changes. The PR avoids the need for a broker 
restart in this case, but perhaps [~ibarra] can provide more context to the 
problem scenario.

> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
>
> If a rebalance is performed after turning the OS clock back, then the kafka 
> server enters in a loop and the rebalance cannot be completed until the 
> system returns to the previous date/hour.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be owner 
> of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME  using the same group id, it will force 
> a rebalance.
> After these actions the kafka server logs constantly display the messages 
> below, and after a while both consumers do not receive more packages. This 
> condition lasts at least the time that the clock went back, for this example 
> 1 hour, and finally after this time kafka comes back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Due to the fact that some systems could have enabled NTP or an administrator 
> option to change the system clock (date/time) it's important to do it safely, 
> currently the only way to do it safely is following the next steps:
> 1-  Tear down the Kafka server.
> 2-  Change the date/time
> 3- Bring the Kafka server back up.
> But, this approach can be done only if the change was performed by the 
> administrator, not for NTP. Also in many systems turning down the Kafka 
> server might cause the INFORMATION TO BE LOST.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4019) LogCleaner should grow read/write buffer to max message size for the topic

2016-08-22 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4019:
--
Status: Patch Available  (was: Open)

> LogCleaner should grow read/write buffer to max message size for the topic
> --
>
> Key: KAFKA-4019
> URL: https://issues.apache.org/jira/browse/KAFKA-4019
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.10.0.0
>Reporter: Jun Rao
>Assignee: Rajini Sivaram
>
> Currently, the LogCleaner.growBuffers() only grows the buffer up to the 
> default max message size. However, since the max message size can be 
> customized at the topic level, LogCleaner should allow the buffer to grow up 
> to the max message allowed by the topic. Otherwise, the cleaner will get 
> stuck on a large message.
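
A hedged illustration of the mismatch described above: a topic-level override larger than the broker-wide default, which the cleaner's buffer growth currently does not take into account. Values are illustrative.

{code}
import java.util.Properties

// Broker-wide default cap on message size (~1MB).
val brokerProps = new Properties()
brokerProps.put("message.max.bytes", "1000012")

// Per-topic override (5MB): the log cleaner's read/write buffers should be allowed
// to grow to this value, not just to the broker default.
val topicConfig = new Properties()
topicConfig.put("max.message.bytes", (5 * 1024 * 1024).toString)
{code}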



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-4066) NullPointerException in Kafka consumer due to unsafe access to findCoordinatorFuture

2016-08-22 Thread Rajini Sivaram (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram updated KAFKA-4066:
--
Status: Patch Available  (was: Open)

> NullPointerException in Kafka consumer due to unsafe access to 
> findCoordinatorFuture
> 
>
> Key: KAFKA-4066
> URL: https://issues.apache.org/jira/browse/KAFKA-4066
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> {quote}
> java.lang.NullPointerException
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:164)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:245)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:993)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:959)
>   at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:100)
>   at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:120)
>   at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:75)
>   at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:50)
>   at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4051) Strange behavior during rebalance when turning the OS clock back

2016-08-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430479#comment-15430479
 ] 

ASF GitHub Bot commented on KAFKA-4051:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1768

KAFKA-4051: Use nanosecond clock for timers in broker

Use System.nanoseconds instead of System.currentTimeMillis in broker timer 
tasks to cope with changes to wall-clock time.
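
A minimal sketch of the idea behind the patch (the underlying JDK call is presumably System.nanoTime); names here are illustrative and not the actual broker code.

{code}
// Measure elapsed time with the monotonic nanosecond clock instead of wall-clock
// milliseconds, so stepping the OS clock back cannot stall timer-based expiry.
class ElapsedTimer(timeoutMs: Long) {
  private val startNs = System.nanoTime()
  def expired: Boolean = (System.nanoTime() - startNs) / 1000000L >= timeoutMs
}
{code}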

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4051

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1768


commit 366659f9a6a0212030349bc54e0c798e3d6c2122
Author: Rajini Sivaram 
Date:   2016-08-19T10:13:03Z

KAFKA-4051: Use nanosecond clock for timers in broker




> Strange behavior during rebalance when turning the OS clock back
> 
>
> Key: KAFKA-4051
> URL: https://issues.apache.org/jira/browse/KAFKA-4051
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.10.0.0
> Environment: OS: Ubuntu 14.04 - 64bits
>Reporter: Gabriel Ibarra
>Assignee: Rajini Sivaram
>
> If a rebalance is performed after turning the OS clock back, then the kafka 
> server enters in a loop and the rebalance cannot be completed until the 
> system returns to the previous date/hour.
> Steps to Reproduce:
> - Start a consumer for TOPIC_NAME with group id GROUP_NAME. It will be owner 
> of all the partitions.
> - Turn the system (OS) clock back. For instance 1 hour.
> - Start a new consumer for TOPIC_NAME  using the same group id, it will force 
> a rebalance.
> After these actions the kafka server logs constantly display the messages 
> below, and after a while both consumers do not receive more packages. This 
> condition lasts at least the time that the clock went back, for this example 
> 1 hour, and finally after this time kafka comes back to work.
> [2016-08-08 11:30:23,023] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 2 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,025] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,027] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 3 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,029] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 3 is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,032] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,033] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,034] INFO [GroupCoordinator 0]: Group GROUP generation 1 
> is dead and removed (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,043] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 0 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Stabilized group 
> GROUP_NAME generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,044] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group GROUP_NAME with old generation 1 (kafka.coordinator.GroupCoordinator)
> [2016-08-08 11:30:23,045] INFO [GroupCoordinator 0]: Group GROUP_NAME 
> generation 1 is dead and removed (kafka.coordinator.GroupCoordinator)
> Due to the fact that some systems could have enabled NTP or an administrator 
> option to change the system clock (date/time) it's important to do it safely, 
> currently the only way to do it safely is following the next steps:
> 1-  Tear down the Kafka server.
> 2-  Change the date/time
> 3- Bring the Kafka server back up.
> But, this approach can be done only if the change was performed by the 
> administrator, not for NTP. Also in many systems turning down the Kafka 
> server might cause the INFORMATION TO BE LOST.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request #1768: KAFKA-4051: Use nanosecond clock for timers in bro...

2016-08-22 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/1768

KAFKA-4051: Use nanosecond clock for timers in broker

Use System.nanoseconds instead of System.currentTimeMillis in broker timer 
tasks to cope with changes to wall-clock time.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-4051

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1768.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1768


commit 366659f9a6a0212030349bc54e0c798e3d6c2122
Author: Rajini Sivaram 
Date:   2016-08-19T10:13:03Z

KAFKA-4051: Use nanosecond clock for timers in broker




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---